author		Linus Torvalds <torvalds@linux-foundation.org>	2023-04-16 18:23:06 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2023-04-18 17:05:28 -0700
commit		427fda2c8a4977d9dbd9bc108bbe6e21ec84648d (patch)
tree		4bad251e89bd015dcb0314251cafaff3183d3181 /arch/x86/include/asm
parent		8c9b6a88b7e2f33c656cd667a081bfd4dc8f5005 (diff)
x86: improve on the non-rep 'copy_user' function
The old 'copy_user_generic_unrolled' function was oddly implemented for
largely historical reasons: it was based on the uncached copy case,
which has some other concerns.
For example, the __copy_user_nocache() function uses 'movnti' for the
destination stores, and those want the destination to be aligned. In
contrast, the regular copy function doesn't really care, and trying to
align things only complicates matters.
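To illustrate that alignment concern (a hypothetical user-space sketch,
not the kernel's __copy_user_nocache; the function name is made up), a
non-temporal copy built on the SSE2 _mm_stream_si64 intrinsic, which
compiles to 'movnti'. The 8-byte destination stores are assumed aligned,
and that assumption is exactly what the regular copy path does not share:

#include <immintrin.h>
#include <string.h>

/* Hypothetical sketch of a movnti-style uncached copy.
 * 'dst' is assumed 8-byte aligned for the non-temporal stores. */
static void stream_copy_aligned(void *dst, const void *src, size_t len)
{
	long long *d = dst;			/* assumed 8-byte aligned */
	const long long *s = src;

	for (; len >= 8; len -= 8)
		_mm_stream_si64(d++, *s++);	/* movnti: bypasses the cache */
	_mm_sfence();				/* order the non-temporal stores */
	memcpy(d, s, len);			/* tail bytes via ordinary stores */
}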
Also, like the clear_user function, the copy function had some odd
handling of the repeat counts, complicating the exception handling for
no really good reason. So as with clear_user, just write it to keep all
the byte counts in the %rcx register, exactly like the 'rep movs'
functionality that this replaces.
Unlike a real 'rep movs', we do allow this to trash a few temporary
registers, so that it doesn't have to unnecessarily save and restore
registers on the stack.
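For reference, a minimal sketch of the 'rep movs' register convention
being mirrored (the helper name is hypothetical, and this is plain
user-space inline asm, not the kernel wrapper): byte count in %rcx,
destination in %rdi, source in %rsi, with %rcx holding the uncopied
remainder on return:

static inline unsigned long
rep_movs_sketch(void *to, const void *from, unsigned long len)
{
	/* 'rep movsb' copies %rcx bytes from (%rsi) to (%rdi);
	 * all three registers are updated in place, so the value
	 * left in %rcx is the number of bytes not copied. */
	asm volatile("rep movsb"
		     : "+c" (len), "+D" (to), "+S" (from)
		     : : "memory");
	return len;	/* 0 means the whole range was copied */
}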
And like the clearing case, rename this to what it now clearly is:
'rep_movs_alternative', and make it one coherent function, so that it
shows up as such in profiles (instead of the odd split between
"copy_user_generic_unrolled" and "copy_user_short_string", the latter of
which was not about strings at all, and which was shared with the
uncached case).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/x86/include/asm')
-rw-r--r--	arch/x86/include/asm/uaccess_64.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index a0533e672496..435ca24c5e1d 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -18,7 +18,7 @@
 
 /* Handles exceptions in both to and from, but doesn't do access_ok */
 __must_check unsigned long
-copy_user_generic_unrolled(void *to, const void *from, unsigned len);
+rep_movs_alternative(void *to, const void *from, unsigned len);
 
 static __always_inline __must_check unsigned long
 copy_user_generic(void *to, const void *from, unsigned long len)
@@ -26,16 +26,16 @@ copy_user_generic(void *to, const void *from, unsigned long len)
 	stac();
 	/*
 	 * If CPU has FSRM feature, use 'rep movs'.
-	 * Otherwise, use copy_user_generic_unrolled.
+	 * Otherwise, use rep_movs_alternative.
 	 */
 	asm volatile(
 		"1:\n\t"
 		ALTERNATIVE("rep movsb",
-			    "call copy_user_generic_unrolled", ALT_NOT(X86_FEATURE_FSRM))
+			    "call rep_movs_alternative", ALT_NOT(X86_FEATURE_FSRM))
 		"2:\n"
 		_ASM_EXTABLE_UA(1b, 2b)
 		:"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
-		: : "memory", "rax", "rdx", "r8", "r9", "r10", "r11");
+		: : "memory", "rax", "r8", "r9", "r10", "r11");
 	clac();
 	return len;
 }
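For context, the callers of copy_user_generic() in this same header are
thin wrappers along these lines (a sketch of the era's surrounding code
from memory, not part of this diff):

static __always_inline __must_check unsigned long
raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
{
	return copy_user_generic(dst, (__force void *)src, size);
}

static __always_inline __must_check unsigned long
raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
{
	return copy_user_generic((__force void *)dst, src, size);
}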