Update FreeBSD SPL atomics

Sync up with the following changes from FreeBSD:

ZFS: add emulation of atomic_swap_64 and atomic_load_64

Some 32-bit platforms do not provide 64-bit atomic operations that ZFS
requires, either in userland or at all.  We emulate those operations
for those platforms using a mutex.  That is not entirely correct and
it's not very efficient.  Besides, the loads are plain loads, so torn
values are possible.

Nevertheless, the emulation seems to work for some definition of work.

This change adds atomic_swap_64, which is already used in ZFS code,
and atomic_load_64 that can be used to prevent torn reads.
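A portable sketch of that mutex-based emulation (a pthread mutex stands in for the kernel mtx here, and the `emul_` names are illustrative, not the FreeBSD symbols):

```c
#include <pthread.h>
#include <stdint.h>

/* One global lock serializes all emulated 64-bit operations. */
static pthread_mutex_t atomic_mtx = PTHREAD_MUTEX_INITIALIZER;

static uint64_t
emul_atomic_swap_64(volatile uint64_t *a, uint64_t value)
{
	uint64_t ret;

	pthread_mutex_lock(&atomic_mtx);
	ret = *a;		/* read old value under the lock */
	*a = value;		/* install the new value */
	pthread_mutex_unlock(&atomic_mtx);
	return (ret);
}

static uint64_t
emul_atomic_load_64(volatile uint64_t *a)
{
	uint64_t ret;

	pthread_mutex_lock(&atomic_mtx);
	ret = *a;		/* load under the lock, so never torn */
	pthread_mutex_unlock(&atomic_mtx);
	return (ret);
}
```

Loads done through the lock cannot tear; plain 64-bit loads done elsewhere still can, which is why atomic_load_64 is worth having.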

Authored by: avg <avg@FreeBSD.org>
FreeBSD-commit: freebsd/freebsd@3458e5d1e6

cleanup of illumos compatibility atomics

atomic_cas_32 is implemented using atomic_fcmpset_32 on all platforms.
Ditto for atomic_cas_64 and atomic_fcmpset_64 on platforms that have
it.  The only exception is sparc64 that provides MD atomic_cas_32 and
atomic_cas_64.
This is slightly inefficient as fcmpset reports whether the operation
updated the target and that information is not needed for cas.
Nevertheless, there is less code to maintain and to add for new
platforms.  Also, the operations are done inline now as opposed to
function calls before.
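The cas-on-top-of-fcmpset shape can be sketched as follows, with the GCC/Clang `__atomic_compare_exchange_n` builtin (strong variant) standing in for FreeBSD's atomic_fcmpset_32; the name `cas_32` is illustrative:

```c
#include <stdint.h>

/*
 * cas returns the old value; fcmpset returns success/failure and, on
 * failure, writes the observed value back into the expected slot.
 * That write-back is exactly what cas needs to return, so the success
 * flag can simply be discarded.
 */
static uint32_t
cas_32(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
	(void) __atomic_compare_exchange_n(target, &cmp, newval,
	    0 /* strong */, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return (cmp);	/* old value on failure, expected value on success */
}
```

This illustrates the inefficiency noted above: the success flag fcmpset computes is thrown away, but the implementation stays tiny and inline.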

atomic_add_64_nv is implemented using atomic_fetchadd_64 on platforms
that provide it.

casptr, cas32, atomic_or_8, atomic_or_8_nv are completely removed as
they have no users.

atomic_mtx that is used to emulate 64-bit atomics on platforms that
lack them is defined only on those platforms.

As a result, platform specific opensolaris_atomic.S files have lost
most of their code.  The only exception is i386 where the
compat+contrib code provides 64-bit atomics for userland use.  That
code assumes availability of cmpxchg8b instruction.  FreeBSD does not
have that assumption for i386 userland and does not provide 64-bit
atomics.  Hopefully, this can and will be fixed.

Authored by: avg <avg@FreeBSD.org>
FreeBSD-commit: freebsd/freebsd@e9642c209b

emulate illumos membar_producer with atomic_thread_fence_rel

membar_producer is supposed to be a store-store barrier.
Also, in the code that FreeBSD has ported from illumos membar_producer
is used only with regular stores to regular memory (with respect to
caching).

We do not have an MI primitive for the store-store barrier, so
atomic_thread_fence_rel is the closest we have as it provides
(load | store) -> store barrier.
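In portable C11 terms the mapping looks like this, with `atomic_thread_fence(memory_order_release)` playing the role of FreeBSD's atomic_thread_fence_rel; the producer/consumer pattern is the classic use of membar_producer, and the variable names are illustrative:

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * A release fence orders all earlier loads and stores before any later
 * store, which subsumes the store-store guarantee membar_producer()
 * promises.
 */
#define	membar_producer()	atomic_thread_fence(memory_order_release)

static uint64_t payload;	/* data published by the producer */
static volatile uint32_t ready;	/* flag the consumer polls */

static void
publish(uint64_t value)
{
	payload = value;
	membar_producer();	/* payload visible no later than ready */
	ready = 1;
}
```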

Previously, membar_producer was an empty function call on all 32-bit
arm platforms, 32-bit powerpc, riscv and all mips variants.  I think that it
was inadequate.
On other platforms, such as amd64, arm64, i386, powerpc64, sparc64,
membar_producer was implemented using stronger primitives than required
for a store-store barrier with respect to regular memory access.
For example, it used sfence on amd64 and a lock-prefixed nop on i386
(despite TSO).
On powerpc64 we now use recommended lwsync instead of eieio.
On sparc64 FreeBSD uses TSO mode.
On arm64/aarch64 we now use dmb sy instead of dmb ish.  Not sure if
this is an improvement, actually.

After this change we can drop opensolaris_atomic.S for aarch64, amd64,
powerpc64 and sparc64 as all required atomic operations have either
direct or light-weight mapping to FreeBSD native atomic operations.

Discussed with: kib
Authored by: avg <avg@FreeBSD.org>
FreeBSD-commit: freebsd/freebsd@50cdda62fc

fix up r353340, don't assume that fcmpset has strong semantics

fcmpset can have two kinds of semantics, weak and strong.
For practical purposes, strong semantics means that if fcmpset fails
then the reported current value is always different from the expected
value.  Weak semantics means that the reported current value may be the
same as the expected value even though fcmpset failed.  That's a
so-called "sporadic" failure.

I originally implemented atomic_cas expecting strong semantics, but
many platforms actually have weak one.
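The fix, roughly: retry while the failure looks sporadic, i.e. while the value fcmpset reports back still equals what we expected. A sketch, with the weak variant of the GCC/Clang builtin standing in for a weak atomic_fcmpset_32 (names illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Weak fcmpset may fail even when *target equals the expected value. */
static bool
fcmpset_weak_32(volatile uint32_t *target, uint32_t *cmp, uint32_t newval)
{
	return (__atomic_compare_exchange_n(target, cmp, newval,
	    1 /* weak */, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
}

static uint32_t
cas_32_weak(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
	uint32_t expected = cmp;

	do {
		if (fcmpset_weak_32(target, &cmp, newval))
			break;
	} while (cmp == expected);	/* loop only on sporadic failures */
	return (cmp);
}
```

A genuine failure reports a value different from `expected` and exits the loop; a sporadic one reports the expected value back and retries.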

Reported by:    pkubaj (not confirmed if same issue)
Discussed with: kib, mjg
Authored by: avg <avg@FreeBSD.org>
FreeBSD-commit: freebsd/freebsd@238787c74e

[PowerPC] [MIPS] Implement 32-bit kernel emulation of atomic64 operations

This is a lock-based emulation of 64-bit atomics for kernel use, split off
from an earlier patch by jhibbits.

This is needed to unblock future improvements that reduce the need for
locking on 64-bit platforms by using atomic updates.

The implementation allows for future integration with userland atomic64,
but as that implies going through sysarch for every use, the current
status quo of userland doing its own locking may be for the best.
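The shape of such a lock-based emulation, sketched portably (a pthread mutex stands in for the kernel lock; sharding locks by target address to cut contention is a plausible refinement not shown here):

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t atomic64_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Read-modify-write and return the new value, all under the lock. */
static uint64_t
emul_atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
{
	uint64_t newval;

	pthread_mutex_lock(&atomic64_mtx);
	newval = (*target += delta);
	pthread_mutex_unlock(&atomic64_mtx);
	return (newval);
}

/* Compare-and-swap under the same lock; returns the old value. */
static uint64_t
emul_atomic_cas_64(volatile uint64_t *target, uint64_t cmp, uint64_t newval)
{
	uint64_t oldval;

	pthread_mutex_lock(&atomic64_mtx);
	oldval = *target;
	if (oldval == cmp)
		*target = newval;
	pthread_mutex_unlock(&atomic64_mtx);
	return (oldval);
}
```

Because every 64-bit operation takes the same lock, the operations are atomic with respect to each other, though plain 64-bit loads done outside the lock can still tear.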

Submitted by:   jhibbits (original patch), kevans (mips bits)
Reviewed by:    jhibbits, jeff, kevans
Authored by: bdragon <bdragon@FreeBSD.org>
Differential Revision:  https://reviews.freebsd.org/D22976
FreeBSD-commit: freebsd/freebsd@db39dab3a8

Remove sparc64 kernel support

Remove all sparc64 specific files
Remove all sparc64 ifdefs
Remove indirect sparc64 ifdefs

Authored by: imp <imp@FreeBSD.org>
FreeBSD-commit: freebsd/freebsd@48b94864c5

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Ported-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Ryan Moeller <ryan@iXsystems.com>
Closes #10250
commit 639dfeb831 (parent 6ed4391da9)
Ryan Moeller, 2020-05-04 18:07:04 -04:00, committed by GitHub
2 changed files with 107 additions and 128 deletions


@@ -32,79 +32,30 @@
#include <sys/types.h>
#include <machine/atomic.h>
#define casptr(_a, _b, _c) \
atomic_cmpset_ptr((volatile uintptr_t *)(_a), \
(uintptr_t)(_b), \
(uintptr_t)(_c))
#define cas32 atomic_cmpset_32
#define atomic_sub_64 atomic_subtract_64
#if defined(__i386__) || defined(KLD_MODULE)
#if defined(__i386__) && (defined(_KERNEL) || defined(KLD_MODULE))
#define I386_HAVE_ATOMIC64
#endif
#if defined(__i386__) || defined(__amd64__) || defined(__arm__)
/* No spurious failures from fcmpset. */
#define STRONG_FCMPSET
#endif
#if !defined(__LP64__) && !defined(__mips_n32) && \
!defined(ARM_HAVE_ATOMIC64) && !defined(I386_HAVE_ATOMIC64)
!defined(ARM_HAVE_ATOMIC64) && !defined(I386_HAVE_ATOMIC64) && \
!defined(HAS_EMULATED_ATOMIC64)
extern void atomic_add_64(volatile uint64_t *target, int64_t delta);
extern void atomic_dec_64(volatile uint64_t *target);
#endif
#ifndef __sparc64__
#if defined(__LP64__) || defined(__mips_n32) || \
defined(ARM_HAVE_ATOMIC64) || defined(I386_HAVE_ATOMIC64)
#define membar_producer() wmb()
static __inline uint64_t
atomic_cas_64(volatile uint64_t *target, uint64_t cmp, uint64_t newval)
{
#ifdef __i386__
atomic_fcmpset_64(target, &cmp, newval);
#else
atomic_fcmpset_long(target, &cmp, newval);
#endif
return (cmp);
}
static __inline uint32_t
atomic_cas_32(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
atomic_fcmpset_int(target, &cmp, newval);
return (cmp);
}
static __inline uint64_t
atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
{
uint64_t prev;
prev = atomic_fetchadd_long(target, delta);
return (prev + delta);
}
#else
extern uint32_t atomic_cas_32(volatile uint32_t *target, uint32_t cmp,
uint32_t newval);
extern uint64_t atomic_swap_64(volatile uint64_t *a, uint64_t value);
extern uint64_t atomic_load_64(volatile uint64_t *a);
extern uint64_t atomic_add_64_nv(volatile uint64_t *target, int64_t delta);
extern uint64_t atomic_cas_64(volatile uint64_t *target, uint64_t cmp,
uint64_t newval);
extern void membar_producer(void);
#endif
#endif
extern uint8_t atomic_or_8_nv(volatile uint8_t *target, uint8_t value);
#if defined(__sparc64__) || defined(__powerpc__) || defined(__arm__) || \
defined(__mips__) || defined(__aarch64__) || defined(__riscv)
extern void atomic_or_8(volatile uint8_t *target, uint8_t value);
#else
static __inline void
atomic_or_8(volatile uint8_t *target, uint8_t value)
{
atomic_set_8(target, value);
}
#endif
#define membar_producer atomic_thread_fence_rel
static __inline uint32_t
atomic_add_32_nv(volatile uint32_t *target, int32_t delta)
@@ -112,33 +63,12 @@ atomic_add_32_nv(volatile uint32_t *target, int32_t delta)
return (atomic_fetchadd_32(target, delta) + delta);
}
static __inline uint32_t
atomic_add_int_nv(volatile uint32_t *target, int delta)
static __inline uint_t
atomic_add_int_nv(volatile uint_t *target, int delta)
{
return (atomic_add_32_nv(target, delta));
}
static __inline void
atomic_dec_32(volatile uint32_t *target)
{
atomic_subtract_32(target, 1);
}
static __inline uint32_t
atomic_dec_32_nv(volatile uint32_t *target)
{
return (atomic_fetchadd_32(target, -1) - 1);
}
#if defined(__LP64__) || defined(__mips_n32) || \
defined(ARM_HAVE_ATOMIC64) || defined(I386_HAVE_ATOMIC64)
static __inline void
atomic_dec_64(volatile uint64_t *target)
{
atomic_subtract_64(target, 1);
}
#endif
static __inline void
atomic_inc_32(volatile uint32_t *target)
{
@@ -151,6 +81,70 @@ atomic_inc_32_nv(volatile uint32_t *target)
return (atomic_add_32_nv(target, 1));
}
static __inline void
atomic_dec_32(volatile uint32_t *target)
{
atomic_subtract_32(target, 1);
}
static __inline uint32_t
atomic_dec_32_nv(volatile uint32_t *target)
{
return (atomic_add_32_nv(target, -1));
}
#ifndef __sparc64__
static inline uint32_t
atomic_cas_32(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
#ifdef STRONG_FCMPSET
(void) atomic_fcmpset_32(target, &cmp, newval);
#else
uint32_t expected = cmp;
do {
if (atomic_fcmpset_32(target, &cmp, newval))
break;
} while (cmp == expected);
#endif
return (cmp);
}
#endif
#if defined(__LP64__) || defined(__mips_n32) || \
defined(ARM_HAVE_ATOMIC64) || defined(I386_HAVE_ATOMIC64) || \
defined(HAS_EMULATED_ATOMIC64)
static __inline void
atomic_dec_64(volatile uint64_t *target)
{
atomic_subtract_64(target, 1);
}
static inline uint64_t
atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
{
return (atomic_fetchadd_64(target, delta) + delta);
}
#ifndef __sparc64__
static inline uint64_t
atomic_cas_64(volatile uint64_t *target, uint64_t cmp, uint64_t newval)
{
#ifdef STRONG_FCMPSET
(void) atomic_fcmpset_64(target, &cmp, newval);
#else
uint64_t expected = cmp;
do {
if (atomic_fcmpset_64(target, &cmp, newval))
break;
} while (cmp == expected);
#endif
return (cmp);
}
#endif
#endif
static __inline void
atomic_inc_64(volatile uint64_t *target)
{


@@ -32,6 +32,10 @@ __FBSDID("$FreeBSD$");
#include <sys/mutex.h>
#include <sys/atomic.h>
#if !defined(__LP64__) && !defined(__mips_n32) && \
!defined(ARM_HAVE_ATOMIC64) && !defined(I386_HAVE_ATOMIC64) && \
!defined(HAS_EMULATED_ATOMIC64)
#ifdef _KERNEL
#include <sys/kernel.h>
@@ -52,8 +56,6 @@ atomic_init(void)
}
#endif
#if !defined(__LP64__) && !defined(__mips_n32) && \
!defined(ARM_HAVE_ATOMIC64) && !defined(I386_HAVE_ATOMIC64)
void
atomic_add_64(volatile uint64_t *target, int64_t delta)
{
@@ -71,7 +73,29 @@ atomic_dec_64(volatile uint64_t *target)
*target -= 1;
mtx_unlock(&atomic_mtx);
}
#endif
uint64_t
atomic_swap_64(volatile uint64_t *a, uint64_t value)
{
uint64_t ret;
mtx_lock(&atomic_mtx);
ret = *a;
*a = value;
mtx_unlock(&atomic_mtx);
return (ret);
}
uint64_t
atomic_load_64(volatile uint64_t *a)
{
uint64_t ret;
mtx_lock(&atomic_mtx);
ret = *a;
mtx_unlock(&atomic_mtx);
return (ret);
}
uint64_t
atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
@@ -84,27 +108,6 @@ atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
return (newval);
}
#if defined(__powerpc__) || defined(__arm__) || defined(__mips__)
void
atomic_or_8(volatile uint8_t *target, uint8_t value)
{
mtx_lock(&atomic_mtx);
*target |= value;
mtx_unlock(&atomic_mtx);
}
#endif
uint8_t
atomic_or_8_nv(volatile uint8_t *target, uint8_t value)
{
uint8_t newval;
mtx_lock(&atomic_mtx);
newval = (*target |= value);
mtx_unlock(&atomic_mtx);
return (newval);
}
uint64_t
atomic_cas_64(volatile uint64_t *target, uint64_t cmp, uint64_t newval)
{
@@ -117,22 +120,4 @@ atomic_cas_64(volatile uint64_t *target, uint64_t cmp, uint64_t newval)
mtx_unlock(&atomic_mtx);
return (oldval);
}
uint32_t
atomic_cas_32(volatile uint32_t *target, uint32_t cmp, uint32_t newval)
{
uint32_t oldval;
mtx_lock(&atomic_mtx);
oldval = *target;
if (oldval == cmp)
*target = newval;
mtx_unlock(&atomic_mtx);
return (oldval);
}
void
membar_producer(void)
{
/* nothing */
}
#endif