author | Minchan Kim <[email protected]> | 2022-01-21 22:14:13 -0800
committer | Linus Torvalds <[email protected]> | 2022-01-22 08:33:37 +0200
commit | b475d42d2c43321d8bea685f54916220cb76b511 (patch)
tree | 572947f5c54d50ae9a97b205feb39d1564e3f140 /drivers/pnp/isapnp/proc.c
parent | 4a57d6bbaecd28c8175dc5da013009e4158018c2 (diff)
zsmalloc: replace per zpage lock with pool->migrate_lock
zsmalloc has used a bit spin_lock in the zpage handle to keep the zpage
object alive during several operations.  However, this causes problems for
PREEMPT_RT and adds too much complexity.
This patch replaces the bit spin_lock with the pool->migrate_lock rwlock.
It makes the code simpler and lets zsmalloc work under PREEMPT_RT.
The drawback is that pool->migrate_lock has coarser granularity than the
per-zpage lock, so contention could be higher than before when both
IO-related operations (i.e., zs_malloc, zs_free, zs_[map|unmap]) and
compaction (page/zpage migration) run in parallel (note: migrate_lock is
an rwlock and the IO-related functions all take the read-side lock, so
there is no contention among them).  However, the write side is fast
enough (the dominant overhead is just the page copy), so it shouldn't
matter much.  If the lock granularity becomes a problem later, we could
introduce table locks hashed by handle.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim <[email protected]>
Acked-by: Sebastian Andrzej Siewior <[email protected]>
Tested-by: Sebastian Andrzej Siewior <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra (Intel) <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Diffstat (limited to 'drivers/pnp/isapnp/proc.c')
0 files changed, 0 insertions, 0 deletions
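
The following is a minimal, hypothetical C sketch of the locking pattern described in the commit message, not the actual patch: a pool-wide rwlock_t named migrate_lock, taken for read by the IO-related paths and for write by the migration/compaction path. The function names (zs_io_path, zs_migrate_path) and the struct layout are illustrative assumptions.

```c
/*
 * Sketch only: illustrates read-side vs. write-side use of
 * pool->migrate_lock as described above.  Struct layout and
 * function bodies are placeholders, not the real zsmalloc code.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct zs_pool {
	/* ... other pool state elided ... */
	rwlock_t migrate_lock;	/* keeps zpage objects stable against migration */
};

static void zs_pool_lock_init(struct zs_pool *pool)
{
	rwlock_init(&pool->migrate_lock);
}

/* IO-related path (malloc/free/map/unmap): read side, runs concurrently */
static void zs_io_path(struct zs_pool *pool, unsigned long handle)
{
	read_lock(&pool->migrate_lock);
	/* ... dereference the handle and touch the zpage object safely ... */
	read_unlock(&pool->migrate_lock);
}

/* Migration/compaction path: write side, excludes all IO paths */
static void zs_migrate_path(struct zs_pool *pool)
{
	write_lock(&pool->migrate_lock);
	/* ... copy page contents and fix up handles; the copy dominates ... */
	write_unlock(&pool->migrate_lock);
}
```

Because the readers never block each other, the coarser lock only shows up as contention while a migration (write-side) critical section is running, which is the trade-off the commit message accepts.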