| author | Davidlohr Bueso <[email protected]> | 2016-01-24 18:23:43 -0800 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2016-02-29 10:02:42 +0100 |
| commit | 1329ce6fbbe4536592dfcfc8d64d61bfeb598fe6 | |
| tree | 9e62701e84453ac079b56e5a69861a7498c1182c | |
| parent | 32d62510f949d3c8e83b9b3b844a84446611661b | |
locking/mutex: Allow next waiter lockless wakeup
Make use of wake-queues and enable the wakeup to occur after releasing the
wait_lock. This is similar to what we do with the rtmutex top waiter:
it slightly shortens the critical region and allows other waiters to
acquire the wait_lock sooner. In low-contention cases it can also help
the recently woken waiter find the wait_lock available (fastpath)
when it continues execution.
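For context, the wake-queue mechanism this change relies on works roughly as sketched below. This is a minimal, illustrative sketch rather than the patched mutex code itself: it assumes the 4.5-era API (WAKE_Q(), wake_q_add() and wake_up_q() from <linux/sched.h>), and the foo_waiter structure and foo_wake_first() helper are hypothetical names used only for the example.

```c
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/list.h>

/* Hypothetical waiter bookkeeping, for illustration only. */
struct foo_waiter {
	struct list_head	list;
	struct task_struct	*task;
};

static void foo_wake_first(spinlock_t *lock, struct list_head *waiters)
{
	WAKE_Q(wake_q);		/* on-stack wake queue, initially empty */

	spin_lock(lock);
	if (!list_empty(waiters)) {
		struct foo_waiter *w = list_first_entry(waiters,
							struct foo_waiter, list);
		/*
		 * Queue the task instead of waking it here; wake_q_add()
		 * takes a reference on the task, so it cannot go away
		 * before wake_up_q() runs.
		 */
		wake_q_add(&wake_q, w->task);
	}
	spin_unlock(lock);

	/* The (comparatively expensive) wakeup happens outside the lock. */
	wake_up_q(&wake_q);
}
```

The patch below applies the same shape to the mutex unlock slowpath: the wait_lock is held only long enough to pick the next waiter, and the actual wakeup runs after the lock has been dropped.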
Reviewed-by: Waiman Long <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Ding Tianhong <[email protected]>
Cc: Jason Low <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Will Deacon <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Diffstat (limited to 'kernel/locking')
| -rw-r--r-- | kernel/locking/mutex.c | 5 |
1 file changed, 3 insertions, 2 deletions
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c219c40e..e364b424b019 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -716,6 +716,7 @@ static inline void
 __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
 	unsigned long flags;
+	WAKE_Q(wake_q);
 
 	/*
 	 * As a performance measurement, release the lock before doing other
@@ -743,11 +744,11 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 				   struct mutex_waiter, list);
 
 		debug_mutex_wake_waiter(lock, waiter);
-
-		wake_up_process(waiter->task);
+		wake_q_add(&wake_q, waiter->task);
 	}
 
 	spin_unlock_mutex(&lock->wait_lock, flags);
+	wake_up_q(&wake_q);
 }
 
 /*
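With the change applied, the unlock slowpath reads roughly as follows. This is a sketch reconstructed from the hunks above together with the surrounding 4.5-era code, not a verbatim copy; the unchanged middle of the function (releasing the lock count and taking the wait_lock) is elided.

```c
static inline void
__mutex_unlock_common_slowpath(struct mutex *lock, int nested)
{
	unsigned long flags;
	WAKE_Q(wake_q);				/* new: on-stack wake queue */

	/* ... unchanged: release the lock count, take the wait_lock ... */

	if (!list_empty(&lock->wait_list)) {
		/* get the first entry from the wait-list: */
		struct mutex_waiter *waiter =
				list_entry(lock->wait_list.next,
					   struct mutex_waiter, list);

		debug_mutex_wake_waiter(lock, waiter);
		wake_q_add(&wake_q, waiter->task);	/* was: wake_up_process() */
	}

	spin_unlock_mutex(&lock->wait_lock, flags);
	wake_up_q(&wake_q);			/* new: wake after dropping wait_lock */
}
```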