author		Kefeng Wang <[email protected]>		2024-06-26 16:53:26 +0800
committer	Andrew Morton <[email protected]>		2024-07-06 11:53:19 -0700
commit		060913999d7a9e50c283fdb15253fc27974ddadc (patch)
tree		388afadc643274ab90a8153c82604c1a1b3f5f52
parent		528815392f873f0af8c6cdc279c89bd0154cbf6a (diff)
mm: migrate: support poisoned recover from migrate folio
Folio migration is widely used in the kernel: memory compaction, memory hotplug, soft offlining of pages, NUMA balancing, memory demotion/promotion, etc.  But if a poisoned source folio is accessed while migrating, the kernel panics.

There is a mechanism in the kernel to recover from uncorrectable memory errors, ARCH_HAS_COPY_MC, which is already used in other core-mm paths, e.g. CoW, khugepaged, coredump and ksm copy; see the copy_mc_to_{user,kernel} and copy_mc_{user_}highpage callers.

In order to support recovering from a poisoned folio copy during folio migration, make folio migration tolerant of memory failures and return an error instead.  Folio migration is never guaranteed to succeed anyway, so this avoids panics like the one below.

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110

Note that the folio copy is moved to the beginning of __migrate_folio().  This simplifies the error handling, since there is no turning back once folio_migrate_mapping() has succeeded.  The downside is that the folio is copied even when folio_migrate_mapping() subsequently fails; as an optimization, check that the source folio has no extra references before doing the copy.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kefeng Wang <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Benjamin LaHaise <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Cc: Jiaqi Yan <[email protected]>
Cc: Lance Yang <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vishal Moola (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
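For reference, the copy above relies on a machine-check-aware folio copy helper added earlier in this series.  A minimal sketch of such a helper, assuming copy_mc_highpage() returns non-zero (bytes not copied) when a source page trips an uncorrectable error; the name folio_mc_copy_sketch() and the exact loop shape are illustrative, not the in-tree implementation:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Copy a (possibly large) folio page by page, returning -EHWPOISON instead
 * of consuming the machine check when a source page is poisoned.
 */
static int folio_mc_copy_sketch(struct folio *dst, struct folio *src)
{
	long nr = folio_nr_pages(src);
	long i;

	for (i = 0; i < nr; i++) {
		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
			return -EHWPOISON;
		cond_resched();
	}

	return 0;
}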
 mm/migrate.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 13b032269242..9dd5eb846d38 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -681,16 +681,24 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 			   struct folio *src, void *src_private,
 			   enum migrate_mode mode)
 {
-	int rc;
+	int rc, expected_count = folio_expected_refs(mapping, src);
+
+	/* Check whether src does not have extra refs before we do more work */
+	if (folio_ref_count(src) != expected_count)
+		return -EAGAIN;
+
+	rc = folio_mc_copy(dst, src);
+	if (unlikely(rc))
+		return rc;
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
+	rc = __folio_migrate_mapping(mapping, dst, src, expected_count);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
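For context on how this error path is reached (not part of this diff): the generic migrate_folio() entry point funnels into __migrate_folio(), so a machine-check hit during the copy now comes back to migrate_pages() and its callers as an ordinary migration failure instead of a panic in copy_page().  Roughly, as a simplified sketch of the caller (see migrate_folio() in mm/migrate.c; the _sketch name is illustrative):

/* Simplified sketch of the generic caller of __migrate_folio(). */
static int migrate_folio_sketch(struct address_space *mapping, struct folio *dst,
				struct folio *src, enum migrate_mode mode)
{
	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
	return __migrate_folio(mapping, dst, src, NULL, mode);
}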