author     Ard Biesheuvel <[email protected]>      2024-02-14 13:29:22 +0100
committer  Catalin Marinas <[email protected]>  2024-02-16 12:42:41 +0000
commit     0dd4f60a2c76938c2625f6c630c225699d97608b (patch)
tree       fce658dd0e32ba152364007bbbb32497c5f0afcd /arch/arm64/include/asm/tlb.h
parent     0383808e4d99ac31892655ae9dc93597eb6f1412 (diff)
arm64: mm: Add support for folding PUDs at runtime
In order to support LPA2 on 16k pages in a way that permits non-LPA2
systems to run the same kernel image, we have to be able to fall back to
at most 48 bits of virtual addressing.
Falling back to 48 bits would result in a level 0 with only 2 entries,
which is suboptimal in terms of TLB utilization. So instead, let's fall
back to 47 bits in that case. This means we need to be able to fold PUDs
dynamically, similar to how we fold P4Ds for 48 bit virtual addressing
on LPA2 with 4k pages.
Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
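For reference, the arithmetic behind the 47-bit choice: with a 16k granule the page offset is 14 bits and each table level resolves 11 bits (2048 eight-byte entries fill one 16k page), so three levels cover 14 + 3 x 11 = 47 bits, whereas a 48-bit VA would need a fourth level holding only 2^1 = 2 entries. The sketch below illustrates the runtime-folding pattern in self-contained userspace C; it assumes only the pgtable_l4_enabled() predicate visible in the diff further down, and every other name (p4d_sketch, pud_alloc_sketch, ...) is illustrative rather than kernel API.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch, not the kernel's implementation: a table level
 * that can be "folded" away at runtime, in the spirit of the series'
 * pgtable_l4_enabled() predicate. */

static bool l4_enabled;        /* decided once at boot, e.g. from CPU features */

static bool pgtable_l4_enabled_sketch(void)
{
	return l4_enabled;
}

/* Stand-ins for the kernel's page-table entry types. When the PUD level
 * is folded, the "PUD" is simply the P4D slot: no separate table page
 * backs it. */
typedef struct { unsigned long val; } p4d_sketch;
typedef struct { unsigned long val; } pud_sketch;

static pud_sketch *pud_alloc_sketch(p4d_sketch *p4dp)
{
	if (!pgtable_l4_enabled_sketch())
		return (pud_sketch *)p4dp;            /* folded: reuse the P4D slot */
	return calloc(2048, sizeof(pud_sketch));      /* real level: 2048 entries */
}

static void pud_free_sketch(pud_sketch *pudp)
{
	/* Mirrors the guard added in the diff below: a folded "PUD" lives
	 * inside the upper-level table and must not be freed on its own. */
	if (!pgtable_l4_enabled_sketch())
		return;
	free(pudp);
}

int main(void)
{
	p4d_sketch p4d = { 0 };

	l4_enabled = false;                   /* e.g. the 47-bit VA fallback */
	pud_sketch *pudp = pud_alloc_sketch(&p4d);
	printf("folded: pud aliases p4d? %d\n", (void *)pudp == (void *)&p4d);
	pud_free_sketch(pudp);                /* no-op: nothing was allocated */
	return 0;
}

The free side is the essential part of the pattern: allocation and teardown must agree on whether the level exists, which is exactly what the hunk below enforces for __pud_free_tlb().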
Diffstat (limited to 'arch/arm64/include/asm/tlb.h')
-rw-r--r--  arch/arm64/include/asm/tlb.h  3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 0150deb332af..a947c6e784ed 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -103,6 +103,9 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 {
 	struct ptdesc *ptdesc = virt_to_ptdesc(pudp);
 
+	if (!pgtable_l4_enabled())
+		return;
+
 	pagetable_pud_dtor(ptdesc);
 	tlb_remove_ptdesc(tlb, ptdesc);
 }
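The early return is what makes the fold safe on teardown: when pgtable_l4_enabled() is false there is no separately allocated PUD table, so the pud_t pointer passed to __pud_free_tlb() refers to storage owned by the level above it. Running the ptdesc destructor and queueing that page on the mmu_gather would free memory still in use as an upper-level table; skipping both turns PUD teardown into a no-op for the folded configuration.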