| author | Mateusz Guzik <[email protected]> | 2024-05-28 22:42:57 +0200 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2024-06-24 22:25:03 -0700 |
| commit | 51d821654be4286b005ad2b7dc8b973d5008a2ec (patch) | |
| tree | fcb7bf5cea0db77b0ac335acd4455ee41ee74da3 /tools/perf/scripts/python | |
| parent | 727759d748ed34cc8d3e1d215fbc1766010dee3d (diff) | |
percpu_counter: add a cmpxchg-based _add_batch variant
Interrupt disable/enable round trips are quite expensive on x86-64 compared to a mere cmpxchg (note: no lock prefix!), and percpu counters are used quite often.
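The diff body is not shown in this view. As a minimal sketch of the technique (illustrative, not the verbatim patch, and presumably gated on the architecture providing a cheap local cmpxchg): the fast path retries this_cpu_try_cmpxchg() on the local per-CPU counter and falls back to the existing irq-safe spinlock slow path only when the batch threshold would be crossed. Names follow the existing percpu_counter code; the exact shape of the committed function may differ:

/*
 * Sketch of a cmpxchg-based percpu_counter_add_batch() fast path.
 * Per-CPU deltas are s32; fbc->count is the s64 global sum.
 */
void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
        unsigned long flags;
        s32 count;

        count = this_cpu_read(*fbc->counters);
        do {
                if (unlikely(abs(count + amount) >= batch)) {
                        /* Slow path: fold the local delta into the global count. */
                        raw_spin_lock_irqsave(&fbc->lock, flags);
                        /* We may have migrated or been interrupted; re-read. */
                        count = __this_cpu_read(*fbc->counters);
                        fbc->count += count + amount;
                        __this_cpu_sub(*fbc->counters, count);
                        raw_spin_unlock_irqrestore(&fbc->lock, flags);
                        return;
                }
                /*
                 * try_cmpxchg updates 'count' on failure, so a retry sees
                 * whatever value an interrupt may have written meanwhile.
                 */
        } while (!this_cpu_try_cmpxchg(*fbc->counters, &count, count + amount));
}

On x86-64 this_cpu_try_cmpxchg() compiles down to a single gs-prefixed cmpxchg with no lock prefix, which is why it beats the local_irq_save()/local_irq_restore() pair it replaces.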
With this change I get a bump of about 1% ops/s for negative path lookups, measured with the following testcase plugged into will-it-scale:
#include <assert.h>
#include <fcntl.h>

/* will-it-scale testcase: each loop iteration performs one failed open(),
 * i.e. one negative path lookup. */
void testcase(unsigned long long *iterations, unsigned long nr)
{
        while (1) {
                int fd = open("/tmp/nonexistent", O_RDONLY);

                assert(fd == -1);
                (*iterations)++;
        }
}
The win would be higher if it were not for other slowdowns, but one has to start somewhere.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mateusz Guzik <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Dennis Zhou <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>