author | Chengming Zhou <[email protected]> | 2024-01-28 13:28:49 +0000 |
---|---|---|
committer | Andrew Morton <[email protected]> | 2024-02-07 21:20:36 -0800 |
commit | 27d3969b47cc38810b3fd65d72231940e8671e6c (patch) | |
tree | 0bbd29258415e4f39bc0c80fe2256ff6582adb7e | |
parent | 79d72c68c58784a3e1cd2378669d51bfd0cb7498 (diff) | |
mm/zswap: don't return LRU_SKIP if we have dropped lru lock
LRU_SKIP can only be returned if we never dropped the lru lock; once the
lock has been dropped we need to return LRU_RETRY so the walk restarts
from the head of the lru list. Otherwise the iteration might continue
from a cursor position that was freed while the lock was dropped.
Actually we may need to introduce another status, LRU_STOP, to really
terminate the ongoing shrinking scan when we encounter a warm page that
is already in the swap cache. The current list_lru implementation has no
way to break out of __list_lru_walk_one early.
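A hedged sketch of what such an early break could look like in the walker's dispatch loop; LRU_STOP does not exist in list_lru at this point, so the status value and the case handling it are hypothetical, loosely modeled on the existing switch in __list_lru_walk_one():

```c
		ret = isolate(item, l, &nlru->lock, cb_arg);
		switch (ret) {
		case LRU_SKIP:
			break;		/* callback never dropped the lru lock */
		case LRU_RETRY:
			goto restart;	/* lock was dropped: saved cursor is stale */
		case LRU_STOP:		/* hypothetical new status */
			/*
			 * The callback found a warm page already in the swap
			 * cache: give up on this walk entirely instead of
			 * restarting it from the head.
			 */
			goto out;
		/* ... remaining cases as in the existing walker ... */
		}
```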
Link: https://lkml.kernel.org/r/[email protected]
Fixes: b5ba474f3f51 ("zswap: shrink zswap pool based on memory pressure")
Signed-off-by: Chengming Zhou <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: Nhat Pham <[email protected]>
Cc: Chris Li <[email protected]>
Cc: Yosry Ahmed <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
-rw-r--r-- | mm/zswap.c | 4 |
1 file changed, 1 insertion, 3 deletions
```diff
diff --git a/mm/zswap.c b/mm/zswap.c
index 0a94b197ed32..350dd2fc8159 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -895,10 +895,8 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		 * into the warmer region. We should terminate shrinking (if we're in the dynamic
 		 * shrinker context).
 		 */
-		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
-			ret = LRU_SKIP;
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache)
 			*encountered_page_in_swapcache = true;
-		}
 
 		goto put_unlock;
 	}
```
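After this change, when writeback fails because the page is already in the swap cache, shrink_memcg_cb() simply keeps the LRU_RETRY / LRU_REMOVED_RETRY status it already uses on its lock-dropping paths, so the walk restarts from the head of the lru list instead of continuing from a stale cursor.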