author     Will Deacon <will@kernel.org>    2019-08-13 16:26:54 +0100
committer  Will Deacon <will@kernel.org>    2019-08-14 13:04:46 +0100
commit     577c2b35283fbadcc9ce4b56304ccea3ec8a5ca1
tree       8cef1e56b90a19ce79989f0b606ae88e1397343e
parent     68dd8ef321626f14ae9ef2039b7a03c707149489
arm64: memory: Ensure address tag is masked in conversion macros
When converting a linear virtual address to a physical address, pfn or
struct page *, we must ensure that the tag bits are masked before the
calculation, otherwise we end up with corrupt pointers when running with
CONFIG_KASAN_SW_TAGS=y:
| Unable to handle kernel paging request at virtual address 0037fe0007580d08
| [0037fe0007580d08] address between user and kernel address ranges
Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
Signed-off-by: Will Deacon <will@kernel.org>