author		Matthew Wilcox (Oracle) <[email protected]>	2023-09-20 04:53:35 +0100
committer	Andrew Morton <[email protected]>	2023-09-29 17:20:45 -0700
commit		a501a0703044f00180d7697b32cacd7ff46d02d8 (patch)
tree		5f64bdf00b33bd11203e270d0b8d0ab2912d86dd
parent		7c3151585730b7095287be8162b846d31e6eee61 (diff)
mm: report success more often from filemap_map_folio_range()
Even though we had successfully mapped the relevant page, we would rarely
return success from filemap_map_folio_range(). That leads to falling back
from the VMA lock path to the mmap_lock path, which is a speed &
scalability issue. Found by inspection.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 617c28ecab22 ("filemap: batch PTE mappings")
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Yin Fengwei <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
-rw-r--r--	mm/filemap.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 4ea4387053e8..f0a15ce1bd1b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3503,7 +3503,7 @@ skip:
 		if (count) {
 			set_pte_range(vmf, folio, page, count, addr);
 			folio_ref_add(folio, count);
-			if (in_range(vmf->address, addr, count))
+			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
 
@@ -3517,7 +3517,7 @@ skip:
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 
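For readers unfamiliar with the helper: in_range(val, start, len) tests whether val lies in the half-open window [start, start + len). In filemap_map_folio_range(), count is a number of pages while addr and vmf->address are byte addresses, so passing the bare count made the window only count bytes wide and the faulting address almost never fell inside it. The sketch below is a minimal userspace illustration of that check; the in_range() stand-in and the addresses are made-up assumptions for the example, not taken from the kernel sources.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Local stand-in for the kernel's in_range(): is val within [start, start + len)? */
static bool in_range(unsigned long val, unsigned long start, unsigned long len)
{
	return (val - start) < len;
}

int main(void)
{
	/* Made-up example: the faulting address sits on the third page of a four-page batch. */
	unsigned long addr = 0x7f0000000000UL;	/* first byte covered by the batch */
	unsigned long count = 4;		/* pages just mapped in this batch */
	unsigned long fault = addr + 2 * PAGE_SIZE;

	/* Old check: len is a page count, so the window is only 4 bytes wide -> prints 0. */
	printf("count:             %d\n", in_range(fault, addr, count));
	/* Fixed check: the window is expressed in bytes -> prints 1. */
	printf("count * PAGE_SIZE: %d\n", in_range(fault, addr, count * PAGE_SIZE));
	return 0;
}

With the page-count window the check fails for any fault beyond the first few bytes of the batch, so the function returns 0 instead of VM_FAULT_NOPAGE and the caller falls back from the VMA lock path to the mmap_lock path, which is the slowdown the patch addresses.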