author    Charan Teja Kalla <[email protected]>    2023-12-14 04:58:41 +0000
committer Andrew Morton <[email protected]>    2023-12-20 13:46:19 -0800
commit    fc346d0a70a13d52fe1c4bc49516d83a42cd7c4c
tree      a7185749258900db9678c580aa8f316b64dfa0ad
parent    4249f13c11be8b8b7bf93204185e150c3bdc968d
mm: migrate high-order folios in swap cache correctly
Large folios occupy N consecutive entries in the swap cache instead of
using multi-index entries like the page cache. However, if a large folio
is re-added to the LRU list, it can be migrated. The migration code was
not aware of this difference between the swap cache and the page cache,
and assumed that a single xas_store() would be sufficient.
This leaves potentially many stale pointers to the now-migrated folio in
the swap cache, which can lead to almost arbitrary data corruption in the
future. This can also manifest as infinite loops with the RCU read lock
held.
[[email protected]: modifications to the changelog & tweaked the fix]
Fixes: 3417013e0d18 ("mm/migrate: Add folio_migrate_mapping()")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Charan Teja Kalla <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reported-by: Charan Teja Kalla <[email protected]>
Closes: https://lkml.kernel.org/r/[email protected]
Cc: David Hildenbrand <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>