| author | Matthew Wilcox (Oracle) <[email protected]> | 2023-09-20 05:09:58 +0100 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2023-09-29 17:20:46 -0700 |
| commit | ce60f27bb62dfeb1bf827350520f34abc84e0933 | |
| tree | ba2bb32d49a6ed1df73a1f1e62392929b12c5e1f /arch/x86/include/asm/pgtable.h | |
| parent | a501a0703044f00180d7697b32cacd7ff46d02d8 | |
mm: abstract moving to the next PFN
In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
PROT_NONE VMAs, which means we cannot move from one PTE to the next by
adding 1 to the PFN field of the PTE. This results in the BUG reported at
[1].
Abstract advancing the PTE to the next PFN through a pte_next_pfn()
function/macro.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reported-by: [email protected]
Closes: https://lkml.kernel.org/r/[email protected] [1]
Reviewed-by: Yin Fengwei <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
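The message above describes pte_next_pfn() as the new way for set_ptes() to step from one PTE to the next. The diff on this page is filtered to arch/x86/include/asm/pgtable.h, so the generic side is not visible here; the sketch below illustrates the expected shape of that side, assuming a fallback in include/linux/pgtable.h that keeps the old add-one-to-the-PFN-field behaviour and a set_ptes() loop that calls it. This is an illustration based on the commit message and the Fixes: tag, not the verbatim generic hunk, and the set_ptes() body is simplified (lazy-MMU and page-table-check hooks omitted).

```c
/*
 * Sketch only: expected shape of the generic side in include/linux/pgtable.h.
 * Architectures that cannot simply add to the PFN field, such as x86 with PTE
 * inversion, override pte_next_pfn() and define the macro so this fallback is
 * compiled out.
 */
#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Old behaviour: step the PFN field of the raw PTE value by one. */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif

#ifndef set_ptes
/*
 * Simplified: map @nr consecutive pages starting at @pte.  Advancing through
 * the range now goes through pte_next_pfn() rather than open-coded PFN
 * arithmetic.
 */
static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		pte = pte_next_pfn(pte);
	}
}
#endif
```

The `#define pte_next_pfn pte_next_pfn` line in the x86 diff below is what makes such an `#ifndef` fallback compile out on that architecture.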
Diffstat (limited to 'arch/x86/include/asm/pgtable.h')
| -rw-r--r-- | arch/x86/include/asm/pgtable.h | 8 |
1 file changed, 8 insertions, 0 deletions
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d6ad98ca1288..e02b179ec659 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
 	return a.pte == b.pte;
 }
 
+static inline pte_t pte_next_pfn(pte_t pte)
+{
+	if (__pte_needs_invert(pte_val(pte)))
+		return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
+	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+}
+#define pte_next_pfn	pte_next_pfn
+
 static inline int pte_present(pte_t a)
 {
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
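The override above subtracts when __pte_needs_invert() reports that the PTE bits are stored inverted for the L1TF mitigation. The small standalone program below is a toy illustration of why that works: if the PFN field is stored bit-inverted, decrementing the raw value by one field step corresponds to incrementing the decoded PFN. The shift, field width, and encode/decode helpers here are simplified stand-ins, not the kernel's actual PTE layout.

```c
/*
 * Toy illustration (not kernel code) of stepping an "inverted" PFN field.
 * PFN_SHIFT_DEMO and the 40-bit field width are illustrative values only.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PFN_SHIFT_DEMO	12
#define PFN_MASK_DEMO	(((uint64_t)1 << 40) - 1)

/* Store the PFN bit-inverted, as the L1TF mitigation does for PROT_NONE. */
static uint64_t encode_inverted(uint64_t pfn)
{
	return (~pfn & PFN_MASK_DEMO) << PFN_SHIFT_DEMO;
}

/* Undo the inversion to recover the real PFN. */
static uint64_t decode_inverted(uint64_t raw)
{
	return ~(raw >> PFN_SHIFT_DEMO) & PFN_MASK_DEMO;
}

int main(void)
{
	uint64_t raw = encode_inverted(0x1234);

	/* Advance the PFN: subtract in the inverted representation. */
	raw -= (uint64_t)1 << PFN_SHIFT_DEMO;

	assert(decode_inverted(raw) == 0x1235);
	printf("next pfn = 0x%" PRIx64 "\n", decode_inverted(raw));
	return 0;
}
```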