author     Andrea Arcangeli <[email protected]>   2021-06-28 19:36:36 -0700
committer  Linus Torvalds <[email protected]>     2021-06-29 10:53:48 -0700
commit     292648ac5cf16ec1fce33e29e0f9e35da7de63f7
tree       0b3d1d4c5c8d8a0d8f70b27cfad4aacafeb964b2
parent     f39bd8534594535f6fd968ee7e05d6a70b74d1a9
mm: gup: allow FOLL_PIN to scale in SMP
has_pinned cannot be written by each pin-fast call or it won't scale in SMP.
Strictly speaking this isn't "false sharing" (it's more like "true
non-sharing", since every CPU writes the very same field), but it creates the
same SMP scalability bottleneck as false sharing.
To verify the improvement, the following test was run on a 40-CPU host with
an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (the kernel must be built with
CONFIG_GUP_TEST=y):
$ sudo chrt -f 1 ./gup_test -a -m 512 -j 40
which yields (average value across the 40 threads; lower is better):
Old kernel: 477729.97 (+- 3.79%)
New kernel:  89144.65 (+- 11.76%)
Under a similar setup with 256 CPUs, this commit increases the SMP
scalability of pin_user_pages_fast() executed by different threads of the
same process by more than 4000%.
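
A rough sense of the effect can be reproduced outside the kernel too. The
sketch below is a hypothetical user-space micro-benchmark (the file name
bench.c and all identifiers are made up; it is not gup_test): run each mode
with 40 threads and compare wall-clock times.

    /* Hypothetical micro-benchmark: unconditional atomic store vs
     * test-before-set under N threads.
     * Build: cc -O2 -pthread bench.c -o bench
     * Run:   time ./bench 40 0   (unconditional store)
     *        time ./bench 40 1   (test-before-set)                  */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ITERS 10000000L

    static atomic_int flag;     /* shared flag, like mm->has_pinned */
    static int checked;         /* mode: 0 = always store, 1 = test first */

    static void *worker(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++) {
            if (checked) {
                /* cacheline stays shared once the flag is set */
                if (!atomic_load_explicit(&flag, memory_order_relaxed))
                    atomic_store_explicit(&flag, 1, memory_order_relaxed);
            } else {
                /* every iteration takes the cacheline exclusive */
                atomic_store_explicit(&flag, 1, memory_order_relaxed);
            }
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int nthreads = argc > 1 ? atoi(argv[1]) : 4;
        checked = argc > 2 ? atoi(argv[2]) : 0;
        pthread_t tid[nthreads];

        for (int i = 0; i < nthreads; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < nthreads; i++)
            pthread_join(tid[i], NULL);
        printf("mode=%s done\n", checked ? "test-before-set" : "store-always");
        return 0;
    }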
[[email protected]: rewrite commit message, add parentheses against "(A & B)"]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Kirill Shutemov <[email protected]>
Cc: Kirill Tkhai <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--  mm/gup.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1320,7 +1320,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		BUG_ON(*locked != 1);
 	}
 
-	if (flags & FOLL_PIN)
+	if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
 		atomic_set(&mm->has_pinned, 1);
 
 	/*
@@ -2641,7 +2641,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 				       FOLL_FAST_ONLY)))
 		return -EINVAL;
 
-	if (gup_flags & FOLL_PIN)
+	if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
 		atomic_set(&current->mm->has_pinned, 1);
 
 	if (!(gup_flags & FOLL_FAST_ONLY))