author     Muhammad Usama Anjum <[email protected]>    2023-08-21 19:15:15 +0500
committer  Andrew Morton <[email protected]>           2023-10-18 14:34:13 -0700
commit     12f6b01a0bcbeeab8cc9305673314adb3adf80f7 (patch)
tree       132285a46e19974d9c5863b9ce7bc76726cd6c25
parent     52526ca7fdb905a768a93f8faa418e9b988fc34b (diff)
fs/proc/task_mmu: add fast paths to get/clear PAGE_IS_WRITTEN flag
Add fast code paths that handle only the get and/or clear operation of
PAGE_IS_WRITTEN; this improves performance by 0-35%. The results of some
test cases are given below:
Test-case-1
t1 = (Get + WP) time
t2 = WP time

                       t1            t2
Without this patch:    140-170mcs    90-115mcs
With this patch:       110mcs        80mcs
Worst case diff:       35% faster    30% faster

Test-case-2
t3 = atomic Get and WP

                       t3
Without this patch:    120-140mcs
With this patch:       100-110mcs
Worst case diff:       21% faster
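
For context, the timed operations above correspond to PAGEMAP_SCAN ioctl calls roughly like the hypothetical sketch below. It assumes the UAPI introduced by the parent commit (struct pm_scan_arg, struct page_region, PAGE_IS_WRITTEN, PM_SCAN_WP_MATCHING from linux/fs.h), a file descriptor opened on /proc/<pid>/pagemap, and a range already registered for userfaultfd async write-protect; error handling is omitted:

/*
 * Hypothetical userspace sketch only; not part of this patch.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/fs.h>	/* PAGEMAP_SCAN, struct pm_scan_arg, struct page_region */

/* "Get + WP" (t1) / atomic "Get and WP" (t3): report written pages in
 * [start, end) and write-protect the matched pages in the same call. */
static long get_written_and_wp(int pagemap_fd, uint64_t start, uint64_t end,
			       struct page_region *vec, uint64_t vec_len)
{
	struct pm_scan_arg arg = {
		.size		= sizeof(arg),
		.flags		= PM_SCAN_WP_MATCHING,
		.start		= start,
		.end		= end,
		.vec		= (uint64_t)(uintptr_t)vec,
		.vec_len	= vec_len,
		.category_mask	= PAGE_IS_WRITTEN,	/* only written pages match */
		.return_mask	= PAGE_IS_WRITTEN,	/* exactly this mask hits the new fast path */
	};

	return ioctl(pagemap_fd, PAGEMAP_SCAN, &arg);
}

/* "WP" only (t2): no output vector is supplied, so the kernel can take
 * the exclusive write-protect fast path and skip output bookkeeping. */
static long wp_only(int pagemap_fd, uint64_t start, uint64_t end)
{
	struct pm_scan_arg arg = {
		.size		= sizeof(arg),
		.flags		= PM_SCAN_WP_MATCHING,
		.start		= start,
		.end		= end,
		/* .vec = 0, .vec_len = 0: nothing is reported back */
		.category_mask	= PAGE_IS_WRITTEN,
		.return_mask	= PAGE_IS_WRITTEN,
	};

	return ioctl(pagemap_fd, PAGEMAP_SCAN, &arg);
}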
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muhammad Usama Anjum <[email protected]>
Cc: Alex Sierra <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Gustavo A. R. Silva <[email protected]>
Cc: "Liam R. Howlett" <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Miroslaw <[email protected]>
Cc: Michał Mirosław <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Pasha Tatashin <[email protected]>
Cc: Paul Gofman <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Yun Zhou <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
-rw-r--r--   fs/proc/task_mmu.c   36
1 files changed, 36 insertions, 0 deletions
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index d4ef9a2bf95d..1d994505805b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2145,6 +2145,41 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 		return 0;
 	}
 
+	if (!p->vec_out) {
+		/* Fast path for performing exclusive WP */
+		for (addr = start; addr != end; pte++, addr += PAGE_SIZE) {
+			if (pte_uffd_wp(ptep_get(pte)))
+				continue;
+			make_uffd_wp_pte(vma, addr, pte);
+			if (!flush_end)
+				start = addr;
+			flush_end = addr + PAGE_SIZE;
+		}
+		goto flush_and_return;
+	}
+
+	if (!p->arg.category_anyof_mask && !p->arg.category_inverted &&
+	    p->arg.category_mask == PAGE_IS_WRITTEN &&
+	    p->arg.return_mask == PAGE_IS_WRITTEN) {
+		for (addr = start; addr < end; pte++, addr += PAGE_SIZE) {
+			unsigned long next = addr + PAGE_SIZE;
+
+			if (pte_uffd_wp(ptep_get(pte)))
+				continue;
+			ret = pagemap_scan_output(p->cur_vma_category | PAGE_IS_WRITTEN,
+						  p, addr, &next);
+			if (next == addr)
+				break;
+			if (~p->arg.flags & PM_SCAN_WP_MATCHING)
+				continue;
+			make_uffd_wp_pte(vma, addr, pte);
+			if (!flush_end)
+				start = addr;
+			flush_end = next;
+		}
+		goto flush_and_return;
+	}
+
 	for (addr = start; addr != end; pte++, addr += PAGE_SIZE) {
 		unsigned long categories = p->cur_vma_category |
 					   pagemap_page_category(p, vma, addr, ptep_get(pte));
@@ -2168,6 +2203,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 		flush_end = next;
 	}
 
+flush_and_return:
 	if (flush_end)
 		flush_tlb_range(vma, start, addr);
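
As the diff shows, both fast paths are taken only in narrow cases: the first when no output vector was supplied (p->vec_out is NULL), so the walk merely write-protects PTEs that are not already uffd-wp; the second when the caller asked for exactly PAGE_IS_WRITTEN (category_mask and return_mask both PAGE_IS_WRITTEN, with no inversion or any-of mask), so the per-page pagemap_page_category() computation can be skipped. Both paths funnel into the new flush_and_return label so a single flush_tlb_range() still covers every PTE that was write-protected; any other argument combination falls through to the original general-purpose loop unchanged.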