| author | Peter Zijlstra <[email protected]> | 2013-11-19 16:13:38 +0100 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2014-01-13 13:47:36 +0100 |
| commit | 9ea4c380066fbe23fe0da7f4abfabc444f2467f4 | |
| tree | a6f3e8def29275f04b33cfb0defcefc9e0adaaf0 | |
| parent | c726099ec224be8078d91072207053ff9a1ad6fc | |
locking: Optimize lock_bh functions
Currently all _bh_ lock functions do two preempt_count operations:

    local_bh_disable();
    preempt_disable();

and for the unlock:

    preempt_enable_no_resched();
    local_bh_enable();
Since it's a waste of perfectly good cycles to modify the same variable
twice when you can do it in one go, use the new
__local_bh_{dis,en}able_ip() functions, which allow us to provide a
preempt_count value to add/subtract.
So define SOFTIRQ_LOCK_OFFSET as the offset a _bh_ lock needs to
add/subtract so that both updates happen in a single operation.
As a bonus it gets rid of the preempt_enable_no_resched() usage.
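As an illustration (a minimal sketch, not the actual patch), assuming
SOFTIRQ_LOCK_OFFSET combines the softirq-disable and preempt-disable
offsets, a _bh_ lock pair built on these helpers could look roughly
like the following. The sketch_spin_*() names are illustrative only,
and lockdep annotations and the contended slow path are left out:

    /* Sketch only -- kernel-internal context assumed. */
    #include <linux/bottom_half.h>
    #include <linux/spinlock.h>

    /* One offset covering both softirq-disable and preempt-disable. */
    #define SOFTIRQ_LOCK_OFFSET	(SOFTIRQ_DISABLE_OFFSET + PREEMPT_OFFSET)

    static inline void sketch_spin_lock_bh(raw_spinlock_t *lock)
    {
    	/* Single preempt_count update: disables softirqs and preemption. */
    	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
    	do_raw_spin_lock(lock);
    }

    static inline void sketch_spin_unlock_bh(raw_spinlock_t *lock)
    {
    	do_raw_spin_unlock(lock);
    	/*
    	 * Single preempt_count update; runs pending softirqs and, unlike
    	 * preempt_enable_no_resched(), reschedules if needed.
    	 */
    	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
    }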
This reduces 1000 iterations of:

    spin_lock_bh(&bh_lock);
    spin_unlock_bh(&bh_lock);
from 53596 cycles to 51995 cycles. I didn't do enough measurements to
say for absolutely sure that the result is significant, but the few
runs I did of each suggest that it is.
Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Arjan van de Ven <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>