author     Ingo Molnar <[email protected]>   2009-06-16 10:23:32 +0200
committer  Ingo Molnar <[email protected]>   2009-06-16 10:23:32 +0200
commit     5dfaf90f8052327c92fbe3c470a2e6634be296c0 (patch)
tree       d29c1191df48fcf1180a8509f93744d343e58d17
parent     507fa3a3d80365c595113a5ac3232309e3dbf5d8 (diff)
x86: mm: Read cr2 before prefetching the mmap_sem
Prefetch instructions can generate spurious faults on certain
models of older CPUs. The faults themselves cannot be stopped
and they can occur pretty much anywhere - so the way we handle
them is to detect certain patterns and ignore the fault.
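[ As a rough illustration of that pattern check: this is a simplified,
  hypothetical helper, not the kernel's actual is_prefetch() logic in
  arch/x86/mm/fault.c (which also walks instruction prefixes and copes
  with unreadable user RIPs). The detection boils down to decoding the
  opcode at the faulting instruction pointer and checking for the
  3DNow!/SSE prefetch opcodes:

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Simplified sketch: a fault is considered spurious if the
	 * faulting instruction is a prefetch.  PREFETCH/PREFETCHW is
	 * encoded as 0F 0D /r, PREFETCHNTA/T0/T1/T2 as 0F 18 /r.
	 */
	static bool looks_like_prefetch(const uint8_t *insn)
	{
		if (insn[0] != 0x0f)
			return false;
		return insn[1] == 0x0d || insn[1] == 0x18;
	}
]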
There is one small path of code where we must not take faults
though: the #PF handler execution leading up to the reading of
CR2 (the faulting address). If we take a fault there, we destroy
the CR2 value (it gets overwritten with the prefetch instruction's
address) and can then mishandle user-space or kernel-space page
faults.
It turns out that in current upstream we do exactly that:
prefetchw(&mm->mmap_sem);
/* Get the faulting address: */
address = read_cr2();
This is not good.
So reverse the order: first read CR2, then prefetch the lock
address. Reading CR2 is plenty fast (2 cycles), so delaying the
prefetch by this amount shouldn't be a big issue
performance-wise.
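[ For clarity, a minimal sketch of what the reordered entry of
  do_page_fault() looks like; names follow the 2009-era
  arch/x86/mm/fault.c, but the diff body is not reproduced on this
  page, so treat this as an approximation:

	dotraplinkage void __kprobes
	do_page_fault(struct pt_regs *regs, unsigned long error_code)
	{
		struct task_struct *tsk = current;
		struct mm_struct *mm = tsk->mm;
		unsigned long address;

		/* Get the faulting address first, while CR2 is intact: */
		address = read_cr2();

		/*
		 * Only now prefetch the mmap_sem cacheline; a spurious
		 * prefetch fault here can no longer clobber the CR2
		 * value we already saved.
		 */
		prefetchw(&mm->mmap_sem);

		/* ... rest of the fault handling ... */
	}
]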
[ And this might explain a mystery fault.c warning that sometimes
occurs on an old AMD/Sempron-based test-system I have - which
does have such prefetch problems. ]
Cc: Mathieu Desnoyers <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Vegard Nossum <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Hugh Dickins <[email protected]>
LKML-Reference: <20090616030522.GA22162@Krystal>
Signed-off-by: Ingo Molnar <[email protected]>