author    Kefeng Wang <[email protected]>    2024-04-03 16:37:59 +0800
committer Andrew Morton <[email protected]>  2024-04-25 20:56:38 -0700
commit    6ea02ee489799317c6640ac014c49b1d1b7124c5 (patch)
tree      b372790fba7541e3ab35ff4de6c24ac2882b3d18 /arch/arm/mm/fault.c
parent    3931b871c4936c00c4e27c469056d8da47a3493f (diff)
arm64: mm: cleanup __do_page_fault()
Patch series "arch/mm/fault: accelerate pagefault when badaccess", v2.
With VMA lock-based page fault handling enabled, a bad access detected
under the per-VMA lock still falls back to mmap_lock-based handling,
which costs an unnecessary mmap_lock acquisition and a second VMA
lookup. A test with lmbench shows a 34% improvement on arm64 after
these changes:

  lat_sig -P 1 prot lat_sig    0.29194 -> 0.19198
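For context, a minimal sketch of the per-VMA-lock fast path this series
optimizes, assuming the usual arch fault-handler shape (the helpers
lock_vma_under_rcu(), vma_end_read() and the lock_mmap label follow the
upstream pattern, but the surrounding code is illustrative, not the
patched source):

    /*
     * Before the series: a permission failure (badaccess) under the
     * per-VMA lock drops the lock and retries under mmap_lock, redoing
     * the VMA lookup even though the fault can never succeed.
     */
    vma = lock_vma_under_rcu(mm, addr);
    if (!vma)
            goto lock_mmap;
    if (!(vma->vm_flags & vm_flags)) {
            vma_end_read(vma);
            goto lock_mmap;         /* unnecessary fallback */
    }
    fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);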
This patch (of 7):
__do_page_fault() does nothing but check vm_flags and call
handle_mm_fault(), and do_page_fault() is its only caller, so squash it
into do_page_fault() to clean up the code, as sketched below.
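A sketch of the shape of this cleanup, assuming the pre-patch arm64
code followed the common pattern (simplified, not a verbatim diff):

    /* Before: a one-check wrapper around handle_mm_fault(). */
    static vm_fault_t __do_page_fault(struct mm_struct *mm,
                                      struct vm_area_struct *vma,
                                      unsigned long addr, unsigned int mm_flags,
                                      unsigned long vm_flags, struct pt_regs *regs)
    {
            if (!(vma->vm_flags & vm_flags))
                    return VM_FAULT_BADACCESS;
            return handle_mm_fault(vma, addr, mm_flags, regs);
    }

    /* After: the check and the call sit directly in do_page_fault(). */
    if (!(vma->vm_flags & vm_flags))
            fault = VM_FAULT_BADACCESS;
    else
            fault = handle_mm_fault(vma, addr, mm_flags, regs);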
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kefeng Wang <[email protected]>
Reviewed-by: Suren Baghdasaryan <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Gerald Schaefer <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>