| field | value | date |
|---|---|---|
| author | Jason A. Donenfeld <[email protected]> | 2022-01-28 23:29:45 +0100 |
| committer | Jason A. Donenfeld <[email protected]> | 2022-02-21 16:48:06 +0100 |
| commit | 77760fd7f7ae3dfd03668204e708d1568d75447d | |
| tree | 6161e533a389f172e3135d68070f6bb525962428 | |
| parent | 5d58ea3a31cc98b9fa563f6921d3d043bf0103d1 | |
random: remove batched entropy locking
Rather than use spinlocks to protect batched entropy, we can instead
disable interrupts locally, since we're dealing with per-cpu data, and
manage resets with a basic generation counter. At the same time, we
can't quite do this on PREEMPT_RT, where we still want spinlocks-as-
mutexes semantics. So we use a local_lock_t, which provides the right
behavior for each. Because this is a per-cpu lock, that generation
counter is still doing the necessary CPU-to-CPU communication.
This should improve performance a bit. It will also fix the linked splat
that Jonathan received with PROVE_RAW_LOCK_NESTING=y enabled.
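
For illustration, here is a minimal sketch of the scheme described above, not the literal patch: each CPU owns a batch guarded by a local_lock_t (IRQs disabled on non-PREEMPT_RT, a per-cpu sleeping lock on PREEMPT_RT), and a single atomic generation counter lazily invalidates every CPU's batch on reseed. The struct layout and the fill_batch() helper are hypothetical stand-ins, not names from the actual commit.

```c
/*
 * Sketch only: per-cpu entropy batches protected by a local_lock_t and
 * invalidated via a shared generation counter, as described above.
 * fill_batch() is a hypothetical stand-in for the real refill path.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/types.h>

extern void fill_batch(u64 *buf, size_t nbytes);	/* hypothetical refill */

static atomic_t batch_generation;	/* bumped once per reseed */

struct batched_entropy {
	local_lock_t lock;	/* irqsave lock; per-cpu mutex on PREEMPT_RT */
	unsigned int position;
	int generation;
	u64 entropy[8];
};

static DEFINE_PER_CPU(struct batched_entropy, batched_u64) = {
	.lock = INIT_LOCAL_LOCK(batched_u64.lock),
	.position = UINT_MAX,	/* forces a refill on first use */
};

u64 get_batched_u64(void)
{
	struct batched_entropy *batch;
	unsigned long flags;
	int next_gen;
	u64 ret;

	/* No spinlock: on !PREEMPT_RT this just disables local IRQs. */
	local_lock_irqsave(&batched_u64.lock, flags);
	batch = raw_cpu_ptr(&batched_u64);

	/* Refill if drained, or if a reseed bumped the generation. */
	next_gen = atomic_read(&batch_generation);
	if (batch->position >= ARRAY_SIZE(batch->entropy) ||
	    next_gen != batch->generation) {
		fill_batch(batch->entropy, sizeof(batch->entropy));
		batch->position = 0;
		batch->generation = next_gen;
	}

	ret = batch->entropy[batch->position];
	batch->entropy[batch->position++] = 0;	/* never reuse output */
	local_unlock_irqrestore(&batched_u64.lock, flags);
	return ret;
}

/*
 * The reset side needs no per-cpu locking at all: a single atomic
 * increment is the CPU-to-CPU communication the message refers to.
 */
void invalidate_batched_entropy(void)
{
	atomic_inc(&batch_generation);
}
```

Note that on PREEMPT_RT, local_lock_irqsave() does not actually disable interrupts; it takes a per-cpu sleeping lock instead, which is exactly the spinlocks-as-mutexes semantics the message mentions.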
Reviewed-by: Sebastian Andrzej Siewior <[email protected]>
Reviewed-by: Dominik Brodowski <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Suggested-by: Andy Lutomirski <[email protected]>
Reported-by: Jonathan Neuschäfer <[email protected]>
Tested-by: Jonathan Neuschäfer <[email protected]>
Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/
Signed-off-by: Jason A. Donenfeld <[email protected]>