| field | value | date |
|---|---|---|
| author | Paul Burton <[email protected]> | 2018-11-01 17:51:34 +0000 |
| committer | Trond Myklebust <[email protected]> | 2018-11-01 13:55:24 -0400 |
| commit | c3be6577d82a9f0163eb1e2c37a477414d12a209 (patch) | |
| tree | d6cee7e555354f3f81f10b35b284b93fecc68ade /net/sunrpc/auth_gss/gss_krb5_wrap.c | |
| parent | 86bbd7422ae6a33735df6846fd685e46686da714 (diff) | |
SUNRPC: Use atomic(64)_t for seq_send(64)
The seq_send & seq_send64 fields in struct krb5_ctx are used as
atomically incrementing counters. This is currently done with cmpxchg() &
cmpxchg64() loops, which amount to open-coded versions of
atomic_fetch_inc() & atomic64_fetch_inc().
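For reference, the open-coded pattern looks roughly like this (a sketch of the kind of helper this patch removes; the name and exact shape are illustrative):

```c
#include <linux/atomic.h>

/*
 * Sketch of a cmpxchg()-based fetch-and-increment, the pattern this
 * patch replaces. Retry until no other CPU has updated the counter
 * between our read and our cmpxchg().
 */
static u32 seq_send_fetch_and_inc(u32 *seq)
{
	u32 old, prev = READ_ONCE(*seq);

	do {
		old = prev;
		prev = cmpxchg(seq, old, old + 1);
	} while (prev != old);

	return old;	/* the value before the increment */
}
```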
Besides the duplication, using cmpxchg64() has another major drawback:
some 32-bit architectures don't provide it. As such, commit
571ed1fd2390 ("SUNRPC: Replace krb5_seq_lock with a lockless scheme")
resulted in build failures on those architectures.
Change seq_send to an atomic_t and seq_send64 to an atomic64_t, then
use the atomic(64)_* functions to manipulate the values. Via
CONFIG_GENERIC_ATOMIC64, the atomic64_t type & its associated functions
are provided even on architectures that lack native 64-bit atomic
memory access, using spinlocks to serialize access. This fixes the
build failures for architectures lacking cmpxchg64().
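Concretely, the conversion changes the field types and collapses each update site into a single atomic operation. A minimal sketch, assuming the fields sit in struct krb5_ctx as in the mainline tree (other fields elided):

```c
#include <linux/atomic.h>

struct krb5_ctx {
	/* ... other fields elided ... */
	atomic_t	seq_send;	/* was: u32 seq_send  + cmpxchg()   */
	atomic64_t	seq_send64;	/* was: u64 seq_send64 + cmpxchg64() */
};

/* Update sites become a plain atomic fetch-and-increment: */
static u32 next_seq_send(struct krb5_ctx *kctx)
{
	return atomic_fetch_inc(&kctx->seq_send);
}
```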
A potential alternative that was raised would be to provide cmpxchg64()
on the 32-bit architectures that currently lack it, using spinlocks.
However, this would give cmpxchg64() semantics slightly different from
the implementations on architectures with native 64-bit atomics: the
spinlock-based implementation would only work if *all* accesses to the
memory used with cmpxchg64() were performed via cmpxchg64(). That is
not currently a requirement for users of cmpxchg64(), and making it one
seems questionable. As such, avoiding cmpxchg64() outside of
architecture-specific code seems best, particularly in cases where
atomic64_t is a better fit anyway.
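To make the mixed-access hazard concrete, here is an illustrative (not real) race: with a spinlock-based cmpxchg64(), a plain store bypasses the lock, so an update can be lost in a way a native cmpxchg64() would detect:

```c
/* Illustrative only: a u64 updated both via cmpxchg64() and directly. */
static u64 shared;

static void cpu_a(void)
{
	/* Spinlock-based cmpxchg64(): lock, read 0, compare, write 1,
	 * unlock. A native implementation does all of this atomically. */
	cmpxchg64(&shared, 0, 1);
}

static void cpu_b(void)
{
	/* A plain store takes no lock, so it can land between cpu_a's
	 * locked read and locked write and be silently overwritten;
	 * with a native cmpxchg64() the compare would fail instead. */
	WRITE_ONCE(shared, 42);
}
```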
The CONFIG_GENERIC_ATOMIC64 implementation of the atomic64_* functions
also uses spinlocks and so faces the same issue, with the key
difference that the memory backing an atomic64_t is always supposed to
be accessed via the atomic64_* functions anyway, making the issue moot.
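For reference, the CONFIG_GENERIC_ATOMIC64 fallback works along these lines (a simplified sketch in the spirit of lib/atomic64.c, not its exact code): a small hash table of spinlocks indexed by the variable's address, so every atomic64_* operation on a given atomic64_t serializes on the same lock.

```c
#include <linux/spinlock.h>
#include <linux/cache.h>
#include <linux/atomic.h>

#define NR_LOCKS 16

/* One lock per hash bucket (boot-time raw_spin_lock_init() elided). */
static raw_spinlock_t atomic64_locks[NR_LOCKS];

/* Hash the atomic64_t's address to one of the spinlocks. */
static raw_spinlock_t *lock_addr(const atomic64_t *v)
{
	unsigned long addr = (unsigned long)v >> L1_CACHE_SHIFT;

	return &atomic64_locks[addr & (NR_LOCKS - 1)];
}

/* All atomic64_* ops follow this shape: take the per-address lock,
 * operate on v->counter, release the lock. */
static s64 sketch_atomic64_fetch_inc(atomic64_t *v)
{
	raw_spinlock_t *lock = lock_addr(v);
	unsigned long flags;
	s64 val;

	raw_spin_lock_irqsave(lock, flags);
	val = v->counter;
	v->counter = val + 1;
	raw_spin_unlock_irqrestore(lock, flags);

	return val;
}
```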
Signed-off-by: Paul Burton <[email protected]>
Fixes: 571ed1fd2390 ("SUNRPC: Replace krb5_seq_lock with a lockless scheme")
Cc: Trond Myklebust <[email protected]>
Cc: Anna Schumaker <[email protected]>
Cc: J. Bruce Fields <[email protected]>
Cc: Jeff Layton <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Trond Myklebust <[email protected]>
Diffstat (limited to 'net/sunrpc/auth_gss/gss_krb5_wrap.c')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | net/sunrpc/auth_gss/gss_krb5_wrap.c | 4 |

1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 41cb294cd071..6af6f119d9c1 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -228,7 +228,7 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
 
 	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
 
-	seq_send = gss_seq_send_fetch_and_inc(kctx);
+	seq_send = atomic_fetch_inc(&kctx->seq_send);
 
 	/* XXX would probably be more efficient to compute checksum
 	 * and encrypt at the same time: */
@@ -475,7 +475,7 @@ gss_wrap_kerberos_v2(struct krb5_ctx *kctx, u32 offset,
 	*be16ptr++ = 0;
 
 	be64ptr = (__be64 *)be16ptr;
-	*be64ptr = cpu_to_be64(gss_seq_send64_fetch_and_inc(kctx));
+	*be64ptr = cpu_to_be64(atomic64_fetch_inc(&kctx->seq_send64));
 
 	err = (*kctx->gk5e->encrypt_v2)(kctx, offset, buf, pages);
 	if (err)
```