author | Arjun Roy <[email protected]> | 2020-04-10 14:33:01 -0700
---|---|---
committer | Linus Torvalds <[email protected]> | 2020-04-10 15:36:21 -0700
commit | 8cd3984d81d5fd5e18bccb12d7d228a114ec2508 |
tree | c5d9fe45555d382ce898e490c4b4ccf1561e7049 /include/linux/mm.h |
parent | c97078bd219cbe1a878b24bb4e61d312f19ece1f |
mm/memory.c: add vm_insert_pages()
Add the ability to insert multiple pages at once into a user VM with fewer
PTE spinlock operations.
The intention of this patch set is to reduce atomic ops for TCP zerocopy
receive, which normally hits the same spinlock multiple times
consecutively.
[[email protected]: pte_alloc() no longer takes the `addr' argument]
[[email protected]: add missing page_count() check to vm_insert_pages()]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: vm_insert_pages() checks if pte_index defined]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arjun Roy <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: David Miller <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Diffstat (limited to 'include/linux/mm.h')
-rw-r--r-- | include/linux/mm.h | 2
1 file changed, 2 insertions, 0 deletions
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2f938c5a9d8..ed896cedd4c4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2689,6 +2689,8 @@ struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
+int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
+			struct page **pages, unsigned long *num);
 int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
 				unsigned long num);
 int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
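
For context, a minimal sketch of how a caller might use the new helper to batch-map a page array into a userspace VMA instead of looping over vm_insert_page(). This is not the TCP zerocopy receive path from the patch set; the function name demo_mmap_batch, the page array, and the error handling are illustrative assumptions, and the in/out behaviour of 'num' (written back with the count of pages left unmapped) is assumed from the helper's description rather than shown in this hunk.

#include <linux/mm.h>
#include <linux/printk.h>

/*
 * Hypothetical sketch: map nr_pages driver-owned pages at vma->vm_start
 * with a single vm_insert_pages() call, which needs fewer PTE spinlock
 * operations than calling vm_insert_page() once per page.
 */
static int demo_mmap_batch(struct vm_area_struct *vma,
			   struct page **pages, unsigned long nr_pages)
{
	unsigned long num = nr_pages;	/* in: number of pages to map */
	int err;

	err = vm_insert_pages(vma, vma->vm_start, pages, &num);
	if (err)
		/* assumed: on failure, 'num' reports pages not yet mapped */
		pr_warn("demo_mmap_batch: %lu of %lu pages not mapped (err %d)\n",
			num, nr_pages, err);
	return err;
}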