| author | Will Deacon <[email protected]> | 2018-04-26 11:34:18 +0100 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2018-04-27 09:48:47 +0200 |
| commit | b247be3fe89b6aba928bf80f4453d1c4ba8d2063 | |
| tree | 4e3ca1be09b40d6885eae9a2d2a432afb7cfeb30 | |
| parent | 6512276d97b160d90b53285bd06f7f201459a7e3 | |
locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
On x86, atomic_cond_read_relaxed() will busy-wait with a cpu_relax() loop,
so it is desirable to increase the number of times we spin on the qspinlock
lockword when it is found to be transitioning from pending to locked.
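For reference, the pending-to-locked wait that this bound applies to looks roughly like the following in the generic slowpath (kernel/locking/qspinlock.c); this is an illustrative sketch, not a verbatim quote of the tree:

```c
	/*
	 * Wait for an in-progress pending->locked hand-over with a bounded
	 * number of spins so that forward progress is still guaranteed.
	 */
	if (val == _Q_PENDING_VAL) {
		int cnt = _Q_PENDING_LOOPS;

		val = atomic_cond_read_relaxed(&lock->val,
					       (VAL != _Q_PENDING_VAL) || !cnt--);
	}
```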
According to Waiman Long:
| Ideally, the spinning times should be at least a few times the typical
| cacheline load time from memory which I think can be down to 100ns or
| so for each cacheline load with the newest systems or up to several
| hundreds ns for older systems.
which in his benchmarking corresponded to 512 iterations.
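Concretely (a sketch, not a verbatim hunk): the generic code defaults _Q_PENDING_LOOPS to 1, and this patch has the x86 qspinlock header raise the bound to (1 << 9), i.e. the 512 iterations mentioned above:

```c
/* Generic fallback in kernel/locking/qspinlock.c: */
#ifndef _Q_PENDING_LOOPS
#define _Q_PENDING_LOOPS	1
#endif

/* x86 override in arch/x86/include/asm/qspinlock.h: */
#define _Q_PENDING_LOOPS	(1 << 9)	/* 512 spins on the pending->locked transition */
```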
Suggested-by: Waiman Long <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>