author		Suren Baghdasaryan <[email protected]>	2023-07-05 18:13:59 -0700
committer	Andrew Morton <[email protected]>	2023-07-08 09:29:29 -0700
commit		2b4f3b4987b56365b981f44a7e843efa5b6619b9
tree		b7c7c6cc2843c9693589d838c918eed2a831c82d
parent		3fbff91afbf0148e937b8718ed865b073c587d9f
fork: lock VMAs of the parent process when forking
Patch series "Avoid memory corruption caused by per-VMA locks", v4.
A memory corruption was reported in [1] with bisection pointing to the
patch [2] enabling per-VMA locks for x86. Based on the reproducer
provided in [1] we suspect this is caused by the lack of VMA locking while
forking a child process.
Patch 1/2 in the series implements proper VMA locking during fork. I
tested the fix locally using the reproducer and was unable to reproduce
the memory corruption problem.
This fix can potentially regress some fork-heavy workloads. Kernel build
time did not show a noticeable regression on a 56-core machine, while a
stress test mapping 10000 VMAs and forking 5000 times in a tight loop
shows a ~7% regression. If such a fork-time regression is unacceptable,
disabling CONFIG_PER_VMA_LOCK should restore the previous performance.
Further optimizations are possible if this regression proves to be
problematic.
Patch 2/2 disables per-VMA locks until the fix is tested and verified.
This patch (of 2):
When forking a child process, the parent write-protects an anonymous page
and COW-shares it with the child being forked using copy_present_pte().
The parent's TLB is flushed right before we drop the parent's mmap_lock in
dup_mmap(). If we get a write fault in the parent before that TLB flush,
and we end up replacing that anonymous page in the parent process in
do_wp_page() (because it is COW-shared with the child), this might leave
stale writable TLB entries targeting the wrong (old) page. A similar issue
happened in the past with userfaultfd (see the flush_tlb_page() call inside
do_wp_page()).
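For illustration only (this is not the reproducer from [1]; the code below
is a made-up userspace sketch), the hazardous access pattern is one thread
of the parent taking write faults on an anonymous COW-shared page while
another thread forks, so the fault can be handled while dup_mmap() is still
copying the parent's page tables, i.e. before the parent's TLB flush:

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	static volatile char *page;
	static volatile int stop;

	static void *writer(void *arg)
	{
		/* Keep taking write faults on the anonymous page. */
		while (!stop)
			page[0]++;
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (page == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		pthread_create(&t, NULL, writer, NULL);

		/* fork() repeatedly while the writer thread keeps faulting. */
		for (int i = 0; i < 1000; i++) {
			pid_t pid = fork();

			if (pid == 0)
				_exit(0);
			if (pid > 0)
				waitpid(pid, NULL, 0);
		}

		stop = 1;
		pthread_join(t, NULL);
		return 0;
	}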
Lock the VMAs of the parent process when forking a child, which prevents
concurrent page faults during the fork operation and avoids this issue.
This fix can potentially regress some fork-heavy workloads. Kernel build
time did not show a noticeable regression on a 56-core machine, while a
stress test mapping 10000 VMAs and forking 5000 times in a tight loop
shows a ~7% regression. If such a fork-time regression is unacceptable,
disabling CONFIG_PER_VMA_LOCK should restore the previous performance.
Further optimizations are possible if this regression proves to be
problematic.
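For reference, a rough userspace approximation of such a stress test (a
sketch, not the exact benchmark behind the numbers above; NR_VMAS and
NR_FORKS simply mirror the figures quoted) could look like:

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <time.h>
	#include <unistd.h>

	#define NR_VMAS		10000
	#define NR_FORKS	5000

	int main(void)
	{
		struct timespec start, end;

		/*
		 * Alternate protections so adjacent anonymous mappings cannot
		 * be merged and each mmap() call ends up as a separate VMA.
		 */
		for (int i = 0; i < NR_VMAS; i++) {
			int prot = (i & 1) ? PROT_READ : PROT_READ | PROT_WRITE;

			if (mmap(NULL, 4096, prot, MAP_PRIVATE | MAP_ANONYMOUS,
				 -1, 0) == MAP_FAILED) {
				perror("mmap");
				return 1;
			}
		}

		clock_gettime(CLOCK_MONOTONIC, &start);
		for (int i = 0; i < NR_FORKS; i++) {
			pid_t pid = fork();

			if (pid == 0)
				_exit(0);
			if (pid < 0) {
				perror("fork");
				return 1;
			}
			waitpid(pid, NULL, 0);
		}
		clock_gettime(CLOCK_MONOTONIC, &end);

		printf("%d forks: %.3f s\n", NR_FORKS,
		       (end.tv_sec - start.tv_sec) +
		       (end.tv_nsec - start.tv_nsec) / 1e9);
		return 0;
	}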
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
Signed-off-by: Suren Baghdasaryan <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Reported-by: Jiri Slaby <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Reported-by: Holger Hoffstätte <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Reported-by: Jacob Young <[email protected]>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
Reviewed-by: Liam R. Howlett <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Tested-by: Holger Hoffstätte <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
-rw-r--r--	kernel/fork.c	6
1 file changed, 6 insertions, 0 deletions
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..2ba918f83bde 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_set(&old_vmi, 0);
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*