| author | Rik van Riel <[email protected]> | 2013-07-31 22:14:21 -0400 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2013-08-01 09:10:26 +0200 |
| commit | 8f898fbbe5ee5e20a77c4074472a1fd088dc47d1 (patch) | |
| tree | 7ef51401d7b98bddb1b59939deda8c3c6ad7dfa6 /tools/perf/scripts/python/compaction-times.py | |
| parent | 46591962cb5bfd2bfb0baf42497119c816503598 (diff) | |
sched/x86: Optimize switch_mm() for multi-threaded workloads
Dick Fowles, Don Zickus and Joe Mario have been working on
improvements to perf, and noticed heavy cache line contention
on the mm_cpumask while running linpack on a 60 core / 120 thread
system.
The cause turned out to be unnecessary atomic accesses to the
mm_cpumask. When in lazy TLB mode, the CPU is only removed from
the mm_cpumask if there is a TLB flush event.
Most of the time no such TLB flush happens, so the kernel skips
the TLB reload; in that case it can also skip the atomic
set-and-test on the bitmask.
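The underlying pattern can be sketched in plain C11 atomics: do a cheap
relaxed load first, and only fall back to the atomic read-modify-write when
the bit is actually clear. The `cpumask_t` layout and function names below
are a simplified, hypothetical stand-in, not the kernel's real cpumask
implementation:

```c
#include <stdatomic.h>

/* Hypothetical single-word CPU mask (up to 64 CPUs). */
typedef struct { _Atomic unsigned long bits; } cpumask_t;

/* Naive version: an unconditional atomic RMW. Every caller pulls the
 * cache line in exclusive state, even when the bit is already set,
 * which is what caused the contention described above. */
static void cpumask_set_cpu_always(int cpu, cpumask_t *mask)
{
    atomic_fetch_or(&mask->bits, 1UL << cpu);
}

/* Optimized version: a plain load keeps the cache line shared; the
 * expensive atomic RMW only runs when the bit is genuinely clear. */
static void cpumask_set_cpu_lazy(int cpu, cpumask_t *mask)
{
    if (!(atomic_load_explicit(&mask->bits, memory_order_relaxed)
          & (1UL << cpu)))
        atomic_fetch_or(&mask->bits, 1UL << cpu);
}
```

In the common case of a thread re-entering `switch_mm()` with its bit
already set, the lazy variant touches the line read-only and never forces
other CPUs to relinquish it.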
Here is a summary of Joe's test results:
* The __schedule function dropped from 24% of all program cycles down
to 5.5%.
* The cacheline contention/hotness for accesses to that bitmask went
from being the 1st/2nd hottest - down to the 84th hottest (0.3% of
all shared misses which is now quite cold)
* The average load latency for the bit-test-n-set instruction in
__schedule dropped from 10k-15k cycles down to an average of 600 cycles.
* The linpack program results improved from 133 GFlops to 144 GFlops.
Peak GFlops rose from 133 to 153.
Reported-by: Don Zickus <[email protected]>
Reported-by: Joe Mario <[email protected]>
Tested-by: Joe Mario <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Paul Turner <[email protected]>
Acked-by: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
[ Made the comments consistent around the modified code. ]
Signed-off-by: Ingo Molnar <[email protected]>