path: root/fs/jbd/commit.c
author     Peter Zijlstra <[email protected]>   2013-11-19 16:13:38 +0100
committer  Ingo Molnar <[email protected]>        2014-01-13 17:39:11 +0100
commit     37089834528be3ef8cbf927e47c753b3e272a856 (patch)
tree       9cce66ba40e5c0684b3e4f4f354dd717aa1fef84 /fs/jbd/commit.c
parent     1774e9f3e5c8b38de3b3bc8bd0eacd280f655baf (diff)
sched, net: Fixup busy_loop_us_clock()
The only valid use of preempt_enable_no_resched() is if the very next
line is schedule() or if we know preemption cannot actually be enabled
by that statement due to known more preempt_count 'refs'.

This busy_poll stuff looks to be completely and utterly broken,
sched_clock() can return utter garbage with interrupts enabled (rare
but still) and it can drift unbounded between CPUs.

This means that if you get preempted/migrated and your new CPU is years
behind on the previous CPU we get to busy spin for a _very_ long time.

There is a _REASON_ sched_clock() warns about preemptability -
papering over it with a preempt_disable()/preempt_enable_no_resched()
is just terminal brain damage on so many levels.

Replace sched_clock() usage with local_clock() which has a bounded
drift between CPUs (<2 jiffies).

There is a further problem with the entire busy wait poll thing in
that the spin time is additive to the syscall timeout, not inclusive.

Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Arjan van de Ven <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Eliezer Tamir <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
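Note: the diff itself is not rendered on this page, since it is filtered
to fs/jbd/commit.c, a path this commit does not touch. As a rough sketch
of the change the message describes (assuming the helper is
busy_loop_us_clock() in include/net/busy_poll.h and that the >> 10 shift,
approximating nanoseconds as microseconds, comes from the surrounding
busy-poll code; this page confirms neither), the fix has this shape:

 static inline u64 busy_loop_us_clock(void)
 {
-	u64 t;
-
-	preempt_disable();
-	t = sched_clock();		/* can drift unbounded between CPUs */
-	preempt_enable_no_resched();	/* invalid use: next line is not schedule() */
-
-	return t >> 10;
+	/* local_clock() bounds cross-CPU drift to < 2 jiffies and is
+	 * safe to read with preemption enabled. */
+	return local_clock() >> 10;
 }

Because local_clock() is safe to call with preemption enabled, the
preempt_disable()/preempt_enable_no_resched() pairing criticized above
can simply be dropped rather than worked around.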
Diffstat (limited to 'fs/jbd/commit.c')
0 files changed, 0 insertions, 0 deletions