path: root/arch
2024-07-03  s390/checksum: add a KMSAN check  (Ilya Leoshkevich, 1 file, +2/-0)
Add a KMSAN check to the CKSM inline assembly, similar to how it was done for ASAN in commit e42ac7789df6 ("s390/checksum: always use cksm instruction"). Link: https://lkml.kernel.org/r/20240621113706.315500-26-iii@linux.ibm.com Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: <kasan-dev@googlegroups.com> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Steven Rostedt (Google) <rostedt@goodmis.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
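The patch body is not shown in this log, so here is a minimal sketch of the usual annotation pattern, under the assumption that kmsan_check_memory() is the check being added; the wrapper function below is invented for illustration and is not the actual s390 diff. KMSAN cannot look inside inline assembly, so the buffer consumed by CKSM has to be checked explicitly before the asm runs.

```c
#include <linux/kmsan-checks.h>
#include <linux/types.h>

/* Illustrative only: flag reads of uninitialized bytes before the asm runs. */
static inline __wsum cksm_checked_example(const void *buff, int len, __wsum sum)
{
	kmsan_check_memory(buff, len);
	/* ... the CKSM inline assembly consuming buff[0..len) would follow ... */
	return sum;
}
```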
2024-07-03  s390/boot: add the KMSAN runtime stub  (Ilya Leoshkevich, 2 files, +7/-0)
It should be possible to have inline functions in the s390 header files which call kmsan_unpoison_memory(). The problem is that these header files might be included by the decompressor, which does not contain the KMSAN runtime, causing linker errors.

Not compiling these calls if __SANITIZE_MEMORY__ is not defined - either by changing kmsan-checks.h or at the call sites - may cause unintended side effects, since calling these functions from uninstrumented code that is linked into the kernel is a valid use case. One might want to explicitly distinguish between the kernel and the decompressor, but checking for a decompressor-specific #define is quite heavy-handed and will have to be done at all call sites.

A more generic approach is to provide a dummy kmsan_unpoison_memory() definition. This produces some runtime overhead, but only when building with CONFIG_KMSAN. The benefit is that it does not disturb the existing KMSAN build logic and call sites don't need to be changed.

Link: https://lkml.kernel.org/r/20240621113706.315500-25-iii@linux.ibm.com Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: <kasan-dev@googlegroups.com> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Steven Rostedt (Google) <rostedt@goodmis.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
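As a rough illustration of the approach described above (the file placement and exact body are assumptions, not the actual patch), the stub can be an empty definition that is compiled only into the decompressor:

```c
#include <linux/kmsan-checks.h>

/*
 * Decompressor-only stub: headers shared with the kernel may call
 * kmsan_unpoison_memory(), but the decompressor has no KMSAN runtime,
 * so the call resolves to this no-op instead of causing a link error.
 */
void kmsan_unpoison_memory(const void *address, size_t size)
{
}
```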
2024-07-03  s390: use a larger stack for KMSAN  (Ilya Leoshkevich, 2 files, +2/-2)
Adjust the stack size for the KMSAN-enabled kernel like it was done for the KASAN-enabled one in commit 7fef92ccadd7 ("s390/kasan: double the stack size"). Both tools have similar requirements. Link: https://lkml.kernel.org/r/20240621113706.315500-24-iii@linux.ibm.com Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: <kasan-dev@googlegroups.com> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Steven Rostedt (Google) <rostedt@goodmis.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-03  s390/boot: turn off KMSAN  (Ilya Leoshkevich, 1 file, +2/-0)
All other sanitizers are disabled for boot as well. While at it, add a comment explaining why we need this. Link: https://lkml.kernel.org/r/20240621113706.315500-23-iii@linux.ibm.com Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: <kasan-dev@googlegroups.com> Cc: Marco Elver <elver@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Steven Rostedt (Google) <rostedt@goodmis.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-03  arch/x86: do not explicitly clear Reserved flag in free_pagetable  (Oscar Salvador, 1 file, +0/-2)
In free_pagetable() we use the non-atomic version for clearing the PageReserved bit from the page. free_pagetable() will either call free_reserved_page() or put_page_bootmem(), which will eventually end up calling free_reserved_page(), and in there we already clear the PageReserved flag. Link: https://lkml.kernel.org/r/20240527044523.29207-1-osalvador@suse.de Signed-off-by: Oscar Salvador <osalvador@suse.de> Acked-by: David Hildenbrand <david@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-03  mm: drop leftover comment references to pxx_huge()  (Peter Xu, 1 file, +2/-2)
pxx_huge() has been removed in recent commit 9636f055dae1 ("mm/treewide: remove pXd_huge()"), however there are still three comments referencing the API that got overlooked. Remove them. Link: https://lkml.kernel.org/r/20240527154855.528816-1-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> Reported-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-03  mm: remove page_mapping()  (Matthew Wilcox (Oracle), 4 files, +4/-4)
All callers are now converted, delete this compatibility wrapper. Also fix up some comments which referred to page_mapping. Link: https://lkml.kernel.org/r/20240423225552.4113447-7-willy@infradead.org Link: https://lkml.kernel.org/r/20240524181813.698813-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Eric Biggers <ebiggers@google.com> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-03  mm/mm_init: use node's number of cpus in deferred_page_init_max_threads  (Eric Chanudet, 1 file, +0/-12)
x86_64 already uses the node's number of CPUs as the maximum thread count. Make that the default for all archs setting DEFERRED_STRUCT_PAGE_INIT. This returns to the behavior prior to making the function arch-specific with commit ecd096506922 ("mm: make deferred init's max threads arch-specific").

Setting DEFERRED_STRUCT_PAGE_INIT and testing on a few arm64 platforms shows faster deferred_init_memmap completions:

|         | x13s        | SA8775p-ride | Ampere R137-P31 | Ampere HR330 |
|         | Metal, 32GB | VM, 36GB     | VM, 58GB        | Metal, 128GB |
|         | 8cpus       | 8cpus        | 8cpus           | 32cpus       |
|---------|-------------|--------------|-----------------|--------------|
| threads | ms (%)      | ms (%)       | ms (%)          | ms (%)       |
|---------|-------------|--------------|-----------------|--------------|
| 1       | 108 (0%)    | 72 (0%)      | 224 (0%)        | 324 (0%)     |
| cpus    | 24 (-77%)   | 36 (-50%)    | 40 (-82%)       | 56 (-82%)    |

Michael Ellerman reported:

: On a machine here (1TB, 40 cores, 4KB pages) the existing code gives:
:
: [ 0.500124] node 2 deferred pages initialised in 210ms
: [ 0.515790] node 3 deferred pages initialised in 230ms
: [ 0.516061] node 0 deferred pages initialised in 230ms
: [ 0.516522] node 7 deferred pages initialised in 230ms
: [ 0.516672] node 4 deferred pages initialised in 230ms
: [ 0.516798] node 6 deferred pages initialised in 230ms
: [ 0.517051] node 5 deferred pages initialised in 230ms
: [ 0.523887] node 1 deferred pages initialised in 240ms
:
: vs with the patch:
:
: [ 0.379613] node 0 deferred pages initialised in 90ms
: [ 0.380388] node 1 deferred pages initialised in 90ms
: [ 0.380540] node 4 deferred pages initialised in 100ms
: [ 0.390239] node 6 deferred pages initialised in 100ms
: [ 0.390249] node 2 deferred pages initialised in 100ms
: [ 0.390786] node 3 deferred pages initialised in 110ms
: [ 0.396721] node 5 deferred pages initialised in 110ms
: [ 0.397095] node 7 deferred pages initialised in 110ms
:
: Which is a nice speedup.

[echanude@redhat.com: v3] Link: https://lkml.kernel.org/r/20240528185455.643227-4-echanude@redhat.com Link: https://lkml.kernel.org/r/20240522203758.626932-4-echanude@redhat.com Signed-off-by: Eric Chanudet <echanude@redhat.com> Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
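For reference, a sketch of what the now-common helper looks like, assuming it mirrors the former x86-only version: use the CPUs local to the node being initialized, but never fewer than one thread.

```c
#include <linux/cpumask.h>
#include <linux/minmax.h>

/* Sketch: padata thread count for deferred struct page init on one node. */
static unsigned int __init
deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
	return max(cpumask_weight(node_cpumask), 1U);
}
```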
2024-07-03  mm: implement update_mmu_tlb() using update_mmu_tlb_range()  (Bang Li, 5 files, +0/-15)
Let's make update_mmu_tlb() simply a generic wrapper around update_mmu_tlb_range(). Only the latter can now be overridden by the architecture. We can now remove __HAVE_ARCH_UPDATE_MMU_TLB as well. Link: https://lkml.kernel.org/r/20240522061204.117421-3-libang.li@antgroup.com Signed-off-by: Bang Li <libang.li@antgroup.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Chris Zankel <chris@zankel.net> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Lance Yang <ioworker0@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
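A sketch of the resulting generic wrapper, per the description above (the surrounding pgtable header context is omitted here): the single-entry hook simply forwards to the ranged hook with a count of one, leaving only update_mmu_tlb_range() for architectures to override.

```c
/* Sketch: generic single-entry hook built on the ranged hook (header context omitted). */
static inline void update_mmu_tlb(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep)
{
	update_mmu_tlb_range(vma, address, ptep, 1);
}
```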
2024-07-03  mm: add update_mmu_tlb_range()  (Bang Li, 5 files, +15/-0)
Patch series "Add update_mmu_tlb_range() to simplify code", v4. This series of commits mainly adds the update_mmu_tlb_range() to batch update tlb in an address range and implement update_mmu_tlb() using update_mmu_tlb_range(). After commit 19eaf44954df ("mm: thp: support allocation of anonymous multi-size THP"), We may need to batch update tlb of a certain address range by calling update_mmu_tlb() in a loop. Using the update_mmu_tlb_range(), we can simplify the code and possibly reduce the execution of some unnecessary code in some architectures. This patch (of 3): Add update_mmu_tlb_range(), we can batch update tlb of an address range. Link: https://lkml.kernel.org/r/20240522061204.117421-1-libang.li@antgroup.com Link: https://lkml.kernel.org/r/20240522061204.117421-2-libang.li@antgroup.com Signed-off-by: Bang Li <libang.li@antgroup.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Chris Zankel <chris@zankel.net> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Lance Yang <ioworker0@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-06-30  x86-32: fix cmpxchg8b_emu build error with clang  (Linus Torvalds, 1 file, +5/-7)
The kernel test robot reported that clang no longer compiles the 32-bit x86 kernel in some configurations due to commit 95ece48165c1 ("locking/atomic/x86: Rewrite x86_32 arch_atomic64_{,fetch}_{and,or,xor}() functions"). The build fails with arch/x86/include/asm/cmpxchg_32.h:149:9: error: inline assembly requires more registers than available and the reason seems to be that not only does the cmpxchg8b instruction need four fixed registers (EDX:EAX and ECX:EBX), with the emulation fallback the inline asm also wants a fifth fixed register for the address (it uses %esi for that, but that's just a software convention with cmpxchg8b_emu). Avoiding using another pointer input to the asm (and just forcing it to use the "0(%esi)" addressing that we end up requiring for the sw fallback) seems to fix the issue. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202406230912.F6XFIyA6-lkp@intel.com/ Fixes: 95ece48165c1 ("locking/atomic/x86: Rewrite x86_32 arch_atomic64_{,fetch}_{and,or,xor}() functions") Link: https://lore.kernel.org/all/202406230912.F6XFIyA6-lkp@intel.com/ Suggested-by: Uros Bizjak <ubizjak@gmail.com> Reviewed-and-Tested-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-28  Merge tag 'riscv-for-linus-6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds, 4 files, +22/-15)
Pull RISC-V fixes from Palmer Dabbelt:

 - A fix for vector load/store instruction decoding, which could result in reserved vector element length encodings decoding as valid vector instructions.

 - Instruction patching now aggressively flushes the local instruction cache, to avoid situations where patching functions on the flush path results in torn instructions being fetched.

 - A fix to prevent the stack walker from showing up as part of traces.

* tag 'riscv-for-linus-6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: stacktrace: convert arch_stack_walk() to noinstr
  riscv: patch: Flush the icache right after patching to avoid illegal insns
  RISC-V: fix vector insn load/store width mask
2024-06-28  Merge tag 'hardening-v6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds, 3 files, +14/-19)
Pull hardening fixes from Kees Cook:

 - Remove invalid tty __counted_by annotation (Nathan Chancellor)

 - Add missing MODULE_DESCRIPTION()s for KUnit string tests (Jeff Johnson)

 - Remove non-functional per-arch kstack entropy filtering

* tag 'hardening-v6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  tty: mxser: Remove __counted_by from mxser_board.ports[]
  randomize_kstack: Remove non-functional per-arch entropy filtering
  string: kunit: add missing MODULE_DESCRIPTION() macros
2024-06-28  x86: stop playing stack games in profile_pc()  (Linus Torvalds, 1 file, +1/-19)
The 'profile_pc()' function is used for timer-based profiling, which isn't really all that relevant any more to begin with, but it also ends up making assumptions based on the stack layout that aren't necessarily valid. Basically, the code tries to account the time spent in spinlocks to the caller rather than the spinlock, and while I support that as a concept, it's not worth the code complexity or the KASAN warnings when no serious profiling is done using timers anyway these days. And the code really does depend on stack layout that is only true in the simplest of cases. We've lost the comment at some point (I think when the 32-bit and 64-bit code was unified), but it used to say: Assume the lock function has either no stack frame or a copy of eflags from PUSHF. which explains why it just blindly loads a word or two straight off the stack pointer and then takes a minimal look at the values to just check if they might be eflags or the return pc: Eflags always has bits 22 and up cleared unlike kernel addresses but that basic stack layout assumption assumes that there isn't any lock debugging etc going on that would complicate the code and cause a stack frame. It causes KASAN unhappiness reported for years by syzkaller [1] and others [2]. With no real practical reason for this any more, just remove the code. Just for historical interest, here's some background commits relating to this code from 2006: 0cb91a229364 ("i386: Account spinlocks to the caller during profiling for !FP kernels") 31679f38d886 ("Simplify profile_pc on x86-64") and a code unification from 2009: ef4512882dbe ("x86: time_32/64.c unify profile_pc") but the basics of this thing actually goes back to before the git tree. Link: https://syzkaller.appspot.com/bug?extid=84fe685c02cd112a2ac3 [1] Link: https://lore.kernel.org/all/CAK55_s7Xyq=nh97=K=G1sxueOFrJDAvPOJAL4TPTCAYvmxO9_A@mail.gmail.com/ [2] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-06-28  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds, 3 files, +4/-2)
Pull arm64 fixes from Will Deacon:
 "A pair of small arm64 fixes for -rc6. One is a fix for the recently merged uffd-wp support (which was triggering a spurious warning) and the other is a fix to the clearing of the initial idmap pgd in some configurations

  Summary:

   - Fix spurious page-table warning when clearing PTE_UFFD_WP in a live pte

   - Fix clearing of the idmap pgd when using large addressing modes"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Clear the initial ID map correctly before remapping
  arm64: mm: Permit PTE SW bits to change in live mappings
2024-06-28  randomize_kstack: Remove non-functional per-arch entropy filtering  (Kees Cook, 3 files, +14/-19)
An unintended consequence of commit 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") was that the per-architecture entropy size filtering reduced how many bits were being added to the mix, rather than how many bits were being used during the offsetting. All architectures fell back to the existing default of 0x3FF (10 bits), which will consume at most 1KiB of stack space. It seems that this is working just fine, so let's avoid the confusion and update everything to use the default. The prior intent of the per-architecture limits were: arm64: capped at 0x1FF (9 bits), 5 bits effective powerpc: uncapped (10 bits), 6 or 7 bits effective riscv: uncapped (10 bits), 6 bits effective x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective s390: capped at 0xFF (8 bits), undocumented effective entropy Current discussion has led to just dropping the original per-architecture filters. The additional entropy appears to be safe for arm64, x86, and s390. Quoting Arnd, "There is no point pretending that 15.75KB is somehow safe to use while 15.00KB is not." Co-developed-by: Yuntao Liu <liuyuntao12@huawei.com> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") Link: https://lore.kernel.org/r/20240617133721.377540-1-liuyuntao12@huawei.com Reviewed-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390 Link: https://lore.kernel.org/r/20240619214711.work.953-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
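To make the numbers above concrete, here is an illustrative (non-kernel) sketch of the offsetting; the macro and function names below are invented for the example, not the kernel's. The random value is masked to 10 bits (0x3FF, at most 1023 bytes) and consumed as a variable-size stack allocation; ABI stack alignment then rounds the offset, which is what reduces the effective entropy quoted above.

```c
#include <linux/types.h>

#define KSTACK_ENTROPY_MASK	0x3FF	/* 10 bits for every architecture now */

static __always_inline void add_kstack_offset_example(u32 rnd)
{
	u32 offset = rnd & KSTACK_ENTROPY_MASK;
	u8 *ptr = __builtin_alloca(offset);	/* moves the stack pointer */

	/* Keep the allocation even though ptr immediately goes out of scope. */
	asm volatile("" :: "r"(ptr) : "memory");
}
```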
2024-06-27  Merge tag 's390-6.10-7' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds, 2 files, +8/-5)
Pull s390 updates from Alexander Gordeev:

 - Add missing virt_to_phys() conversion for directed interrupt bit vectors

 - Fix broken configuration change notifications for virtio-ccw

 - Fix sclp_init() cleanup path on failure and, as a result, fix a list double-add warning

 - Fix unconditional adjusting of GOT entries containing undefined weak symbols that resolve to zero

* tag 's390-6.10-7' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/boot: Do not adjust GOT entries for undef weak sym
  s390/sclp: Fix sclp_init() cleanup on failure
  s390/virtio_ccw: Fix config change notifications
  s390/pci: Add missing virt_to_phys() for directed DIBV
2024-06-26  Merge patch "riscv: stacktrace: convert arch_stack_walk() to noinstr"  (Palmer Dabbelt, 1 file, +1/-1)
This first patch in the larger series is a fix, so I'm merging it into fixes while the rest of the patch set is still under development. * b4-shazam-merge: riscv: stacktrace: convert arch_stack_walk() to noinstr Link: https://lore.kernel.org/r/20240613-dev-andyc-dyn-ftrace-v4-v1-0-1a538e12c01e@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-06-26  riscv: stacktrace: convert arch_stack_walk() to noinstr  (Andy Chiu, 1 file, +1/-1)
arch_stack_walk() is called intensively in function_graph when the kernel is compiled with CONFIG_TRACE_IRQFLAGS. As a result, the kernel logs a lot of arch_stack_walk() and its sub-functions into the ftrace buffer. However, these functions should not appear in the trace log because they are part of ftrace itself. This patch follows what arm64 does for the same function. It also prevents the kprobe re-entry issue, which is possible on riscv as well.

Related-to: commit 0fbcd8abf337 ("arm64: Prohibit instrumentation on arch_stack_walk()")
Fixes: 680341382da5 ("riscv: add CALLER_ADDRx support")
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240613-dev-andyc-dyn-ftrace-v4-v1-1-1a538e12c01e@sifive.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-06-26  riscv: patch: Flush the icache right after patching to avoid illegal insns  (Alexandre Ghiti, 2 files, +20/-13)
We cannot delay the icache flush after patching some functions as we may have patched a function that will get called before the icache flush. The only way to completely avoid such scenario is by flushing the icache as soon as we patch a function. This will probably be costly as we don't batch the icache maintenance anymore. Fixes: 6ca445d8af0e ("riscv: Fix early ftrace nop patching") Reported-by: Conor Dooley <conor.dooley@microchip.com> Closes: https://lore.kernel.org/linux-riscv/20240613-lubricant-breath-061192a9489a@wendy/ Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andy Chiu <andy.chiu@sifive.com> Link: https://lore.kernel.org/r/20240624082141.153871-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-06-25  RISC-V: fix vector insn load/store width mask  (Jesse Taube, 1 file, +1/-1)
RVFDQ_FL_FS_WIDTH_MASK should be 3 bits [14-12], shifted down by 12 bits. Replace GENMASK(3, 0) with GENMASK(2, 0). Fixes: cd054837243b ("riscv: Allocate user's vector context in the first-use trap") Signed-off-by: Jesse Taube <jesse@rivosinc.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240606182800.415831-1-jesse@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
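An illustrative reading of the fix (only RVFDQ_FL_FS_WIDTH_MASK appears in the message above; the shift constant and helper name here are assumptions): the width field occupies instruction bits [14:12], so after shifting right by 12 a 3-bit mask is the correct one.

```c
#include <linux/bits.h>
#include <linux/types.h>

#define RVFDQ_FL_FS_WIDTH_SHIFT	12
#define RVFDQ_FL_FS_WIDTH_MASK	GENMASK(2, 0)	/* was GENMASK(3, 0) */

/* Hypothetical helper, for illustration only. */
static inline unsigned int rv_vec_insn_width_example(u32 insn)
{
	return (insn >> RVFDQ_FL_FS_WIDTH_SHIFT) & RVFDQ_FL_FS_WIDTH_MASK;
}
```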
2024-06-25  syscalls: mmap(): use unsigned offset type consistently  (Arnd Bergmann, 4 files, +5/-5)
Most architectures that implement the old-style mmap() with a byte offset use 'unsigned long' as the type for that offset, but microblaze and riscv have the off_t type that is shared with userspace, matching the prototype in include/asm-generic/syscalls.h.

Make this consistent by using an unsigned argument everywhere. This changes the behavior slightly, as the argument is shifted to a page number, and a user input with the top bit set would result in a negative page offset rather than a large one as we use elsewhere.

For riscv, the 32-bit sys_mmap2() definition actually used a custom type that is different from the global declaration, but this was missed due to an incorrect type check.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  s390: remove native mmap2() syscall  (Arnd Bergmann, 1 file, +0/-27)
The mmap2() syscall has never been used on 64-bit s390x and should have been removed as part of 5a79859ae0f3 ("s390: remove 31 bit support"). Remove it now. Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  hexagon: fix fadvise64_64 calling conventions  (Arnd Bergmann, 2 files, +13/-0)
fadvise64_64() has two 64-bit arguments at the wrong alignment for hexagon, which turns them into a 7-argument syscall that is not supported by Linux. The downstream musl port for hexagon actually asks for a 6-argument version the same way we do it on arm, csky, powerpc, so make the kernel do it the same way to avoid having to change both. Link: https://github.com/quic/musl/blob/hexagon/arch/hexagon/syscall_arch.h#L78 Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  csky, hexagon: fix broken sys_sync_file_range  (Arnd Bergmann, 2 files, +2/-0)
Both of these architectures require u64 function arguments to be passed in even/odd pairs of registers or stack slots, which in case of sync_file_range would result in a seven-argument system call that is not currently possible. The system call is therefore incompatible with all existing binaries. While it would be possible to implement support for seven arguments like on mips, it seems better to use a six-argument version, either with the normal argument order but misaligned as on most architectures or with the reordered sync_file_range2() calling conventions as on arm and powerpc. Cc: stable@vger.kernel.org Acked-by: Guo Ren <guoren@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  sh: rework sync_file_range ABI  (Arnd Bergmann, 2 files, +13/-1)
The unusual function calling conventions on SuperH ended up causing sync_file_range to have the wrong argument order, with the 'flags' argument getting sorted before 'nbytes' by the compiler. In userspace, I found that musl, glibc, uclibc and strace all expect the normal calling conventions with 'nbytes' last, so changing the kernel to match them should make all of those work. In order to be able to also fix libc implementations to work with existing kernels, they need to be able to tell which ABI is used. An easy way to do this is to add yet another system call using the sync_file_range2 ABI that works the same on all architectures. Old user binaries can now work on new kernels, and new binaries can try the new sync_file_range2() to work with new kernels or fall back to the old sync_file_range() version if that doesn't exist. Cc: stable@vger.kernel.org Fixes: 75c92acdd5b1 ("sh: Wire up new syscalls.") Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
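For context, these are the two userspace-visible argument orders involved, shown as the generic in-kernel syscall prototypes purely to make the difference concrete:

```c
#include <linux/types.h>

/* Conventional order, what musl/glibc/uclibc/strace expect: */
long sys_sync_file_range(int fd, loff_t offset, loff_t nbytes,
			 unsigned int flags);

/* sync_file_range2() order, identical on all architectures: */
long sys_sync_file_range2(int fd, unsigned int flags,
			  loff_t offset, loff_t nbytes);
```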
2024-06-25  powerpc: restore some missing spu syscalls  (Arnd Bergmann, 1 file, +4/-0)
A couple of system calls were inadvertently removed from the table during a bugfix for 32-bit powerpc entry. Restore the original behavior.

Fixes: e23750623835 ("powerpc/32: fix syscall wrappers with 64-bit arguments of unaligned register-pairs")
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  parisc: use generic sys_fanotify_mark implementation  (Arnd Bergmann, 3 files, +2/-10)
The sys_fanotify_mark() syscall on parisc uses the reverse word order for the two halves of the 64-bit argument compared to all syscalls on all other 32-bit architectures. As far as I can tell, the problem is that the function arguments on parisc are sorted backwards (26, 25, 24, 23, ...) compared to everyone else, so the calling conventions of using an even/odd register pair in native word order result in the lower word coming first in function arguments, matching the expected behavior on little-endian architectures. The system call conventions however ended up matching what the other 32-bit architectures do.

A glibc cleanup in 2020 changed the userspace behavior in a way that handles all architectures consistently, but this inadvertently broke parisc32 by changing to the same method as everyone else. The change made it into glibc-2.35 and subsequently into debian 12 (bookworm), which is the latest stable release.

This means we need to choose between reverting the glibc change or changing the kernel to match it again, but either change will leave some systems broken. Pick the option that is more likely to help current and future users and change the kernel to match current glibc. This also means the behavior is now consistent across architectures, but it breaks running new kernels with old glibc builds from before 2.35.

Link: https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=d150181d73d9
Link: https://git.kernel.org/pub/scm/linux/kernel/git/history/history.git/commit/arch/parisc/kernel/sys_parisc.c?h=57b1dfbd5b4a39d
Cc: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
I found this through code inspection, please double-check to make sure I got the bug and the fix right. The alternative is to fix this by reverting glibc back to the unusual behavior.
2024-06-25  parisc: use correct compat recv/recvfrom syscalls  (Arnd Bergmann, 1 file, +2/-2)
Johannes missed parisc back when he introduced the compat version of these syscalls, so receiving cmsg messages that require a compat conversion is still broken. Use the correct calls like the other architectures do. Fixes: 1dacc76d0014 ("net/compat/wext: send different messages to compat tasks") Acked-by: Helge Deller <deller@gmx.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  sparc: fix compat recv/recvfrom syscalls  (Arnd Bergmann, 2 files, +2/-223)
sparc has the wrong compat version of recv() and recvfrom() for both the direct syscalls and socketcall(). The direct syscalls just need to use the compat version. For socketcall, the same thing could be done, but it seems better to completely remove the custom assembler code for it and just use the same implementation that everyone else has. Fixes: 1dacc76d0014 ("net/compat/wext: send different messages to compat tasks") Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  sparc: fix old compat_sys_select()  (Arnd Bergmann, 1 file, +1/-1)
sparc has two identical select syscalls at numbers 93 and 230, respectively. During the conversion to the modern syscall.tbl format, the older one of the two broke in compat mode, and now refers to the native 64-bit syscall. Restore the correct behavior. This has very little effect, as glibc has been using the newer number anyway. Fixes: 6ff645dd683a ("sparc: add system call table generation support") Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  syscalls: fix compat_sys_io_pgetevents_time64 usage  (Arnd Bergmann, 7 files, +7/-7)
Using sys_io_pgetevents() as the entry point for compat mode tasks works almost correctly, but misses the sign extension for the min_nr and nr arguments. This was addressed on parisc by switching to compat_sys_io_pgetevents_time64() in commit 6431e92fc827 ("parisc: io_pgetevents_time64() needs compat syscall in 32-bit compat mode"), as well as by using more sophisticated system call wrappers on x86 and s390. However, arm64, mips, powerpc, sparc and riscv still have the same bug. Change all of them over to use compat_sys_io_pgetevents_time64() like parisc already does. This was clearly the intention when the function was originally added, but it got hooked up incorrectly in the tables. Cc: stable@vger.kernel.org Fixes: 48166e6ea47d ("y2038: add 64-bit time_t syscalls to all 32-bit architectures") Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390 Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-06-25  s390/boot: Do not adjust GOT entries for undef weak sym  (Jens Remus, 1 file, +7/-4)
Since commit 778666df60f0 ("s390: compile relocatable kernel without -fPIE") and commit 00cda11d3b2e ("s390: Compile kernel with -fPIC and link with -no-pie") the kernel on s390x may have a Global Offset Table (GOT) whose entries are adjusted for KASLR in kaslr_adjust_got(). The GOT may contain entries for undefined weak symbols that resolved to zero. That is the resulting GOT entry value is zero. Adjusting those entries unconditionally in kaslr_adjust_got() is wrong. Otherwise the following sample code would erroneously assume foo to be defined, due to the adjustment changing the zero-value to a non-zero one: extern int foo __attribute__((weak)); if (*foo) /* foo is defined [or undefined and erroneously adjusted] */ The vmlinux build at commit 00cda11d3b2e ("s390: Compile kernel with -fPIC and link with -no-pie") with defconfig actually had two GOT entries for the undefined weak symbols __start_BTF and __stop_BTF: $ objdump -tw vmlinux | grep -F "*UND*" 0000000000000000 w *UND* 0000000000000000 __stop_BTF 0000000000000000 w *UND* 0000000000000000 __start_BTF $ readelf -rw vmlinux | grep -E "R_390_GOTENT +0{16}" 000000345760 2776a0000001a R_390_GOTENT 0000000000000000 __stop_BTF + 2 000000345766 2d5480000001a R_390_GOTENT 0000000000000000 __start_BTF + 2 The s390-specific vmlinux linker script sets the section start to __START_KERNEL, which is currently defined as 0x100000 on s390x. Access to lowcore is performed via a pointer of 0 and not a symbol in a section starting at 0. The first 64K are reserved for the loader on s390x. Thus it is safe to assume that __START_KERNEL will never be 0. As a result there cannot be any defined symbols resolving to zero in the kernel. Note that the first three GOT entries are reserved for the dynamic loader on s390x. [1] In the kernel they are zero. Therefore no extra handling is required to skip these. Skip adjusting GOT entries with a value of zero in kaslr_adjust_got(). While at it update the comment when a GOT exists on s390x. Since commit 00cda11d3b2e ("s390: Compile kernel with -fPIC and link with -no-pie") it no longer only exists when compiling with Clang, but also with GCC. [1]: s390x ELF ABI, section "Global Offset Table", https://github.com/IBM/s390x-abi/releases Fixes: 778666df60f0 ("s390: compile relocatable kernel without -fPIE") Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Jens Remus <jremus@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
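A sketch of the adjustment loop with the fix applied (parameter and variable names here are illustrative; the real code lives in kaslr_adjust_got()): zero-valued entries belong to undefined weak symbols and must stay zero.

```c
#include <linux/types.h>

static void kaslr_adjust_got_sketch(u64 *got_start, u64 *got_end,
				    unsigned long offset)
{
	u64 *entry;

	for (entry = got_start; entry < got_end; entry++)
		if (*entry)	/* skip undefined weak symbols that resolved to 0 */
			*entry += offset;
}
```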
2024-06-24  arm64: Clear the initial ID map correctly before remapping  (Zenghui Yu, 1 file, +1/-1)
In the attempt to clear and recreate the initial ID map for LPA2, we wrongly use 'start - end' as the map size and make the memset() almost a nop. Fix it by passing the correct map size. Fixes: 9684ec186f8f ("arm64: Enable LPA2 at boot if supported by the system") Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20240621092809.162-1-yuzenghui@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-06-23  Merge tag 'mips-fixes_6.10_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds, 2 files, +2/-2)
Pull MIPS fixes from Thomas Bogendoerfer:

 - fix lseek in o32 compat mode

 - fix for microMIPS MT ASE helpers

* tag 'mips-fixes_6.10_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  mips: fix compat_sys_lseek syscall
  MIPS: mipsmtregs: Fix target register for MFTC0
2024-06-23  Merge tag 'x86_urgent_for_v6.10_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, +2/-1)
Pull x86 fixes from Borislav Petkov:

 - An ARM-relevant fix to not free default RMIDs of a resource control group

 - A randconfig build fix for the VMware virtual GPU driver

* tag 'x86_urgent_for_v6.10_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/resctrl: Don't try to free nonexistent RMIDs
  drm/vmwgfx: Fix missing HYPERVISOR_GUEST dependency
2024-06-23  Merge tag 'powerpc-6.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds, 2 files, +15/-5)
Pull powerpc fixes from Michael Ellerman:

 - Prevent use-after-free in 64-bit KVM VFIO

 - Add generated Power8 crypto asm to .gitignore

Thanks to Al Viro and Nathan Lynch.

* tag 'powerpc-6.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  KVM: PPC: Book3S HV: Prevent UAF in kvm_spapr_tce_attach_iommu_group()
  powerpc/crypto: Add generated P8 asm to .gitignore
2024-06-22  Merge tag 'arm-fixes-6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc  (Linus Torvalds, 8 files, +11/-8)
Pull SoC fixes from Arnd Bergmann:
 "There are seven oneline patches that each address a distinct problem on the NXP i.MX platform, mostly the popular i.MX8M variant.

  The only other two fixes are for error handling on the psci firmware driver and SD card support on the milkv duo riscv board"

* tag 'arm-fixes-6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  firmware: psci: Fix return value from psci_system_suspend()
  riscv: dts: sophgo: disable write-protection for milkv duo
  arm64: dts: imx8qm-mek: fix gpio number for reg_usdhc2_vmmc
  arm64: dts: freescale: imx8mm-verdin: enable hysteresis on slow input pin
  arm64: dts: imx93-11x11-evk: Remove the 'no-sdio' property
  arm64: dts: freescale: imx8mp-venice-gw73xx-2x: fix BT shutdown GPIO
  arm: dts: imx53-qsb-hdmi: Disable panel instead of deleting node
  arm64: dts: imx8mp: Fix TC9595 input clock on DH i.MX8M Plus DHCOM SoM
  arm64: dts: freescale: imx8mm-verdin: Fix GPU speed
2024-06-22  Merge tag 'loongarch-fixes-6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  (Linus Torvalds, 6 files, +91/-64)
Pull LoongArch fixes from Huacai Chen:
 "Some hw breakpoint fixes, an objtool build warning fix, and a trivial cleanup"

* tag 'loongarch-fixes-6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  LoongArch: KVM: Remove an unneeded semicolon
  LoongArch: Fix multiple hardware watchpoint issues
  LoongArch: Trigger user-space watchpoints correctly
  LoongArch: Fix watchpoint setting error
  LoongArch: Only allow OBJTOOL & ORC unwinder if toolchain supports -mthin-add-sub
2024-06-22  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 6 files, +33/-11)
Pull kvm fixes from Paolo Bonzini: "ARM: - Fix dangling references to a redistributor region if the vgic was prematurely destroyed. - Properly mark FFA buffers as released, ensuring that both parties can make forward progress. x86: - Allow getting/setting MSRs for SEV-ES guests, if they're using the pre-6.9 KVM_SEV_ES_INIT API. - Always sync pending posted interrupts to the IRR prior to IOAPIC route updates, so that EOIs are intercepted properly if the old routing table requested that. Generic: - Avoid __fls(0) - Fix reference leak on hwpoisoned page - Fix a race in kvm_vcpu_on_spin() by ensuring loads and stores are atomic. - Fix bug in __kvm_handle_hva_range() where KVM calls a function pointer that was intended to be a marker only (nothing bad happens but kind of a mine and also technically undefined behavior) - Do not bother accounting allocations that are small and freed before getting back to userspace. Selftests: - Fix compilation for RISC-V. - Fix a "shift too big" goof in the KVM_SEV_INIT2 selftest. - Compute the max mappable gfn for KVM selftests on x86 using GuestMaxPhyAddr from KVM's supported CPUID (if it's available)" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: SEV-ES: Fix svm_get_msr()/svm_set_msr() for KVM_SEV_ES_INIT guests KVM: Discard zero mask with function kvm_dirty_ring_reset virt: guest_memfd: fix reference leak on hwpoisoned page kvm: do not account temporary allocations to kmem MAINTAINERS: Drop Wanpeng Li as a Reviewer for KVM Paravirt support KVM: x86: Always sync PIR to IRR prior to scanning I/O APIC routes KVM: Stop processing *all* memslots when "null" mmu_notifier handler is found KVM: arm64: FFA: Release hyp rx buffer KVM: selftests: Fix RISC-V compilation KVM: arm64: Disassociate vcpus from redistributor region on teardown KVM: Fix a data race on last_boosted_vcpu in kvm_vcpu_on_spin() KVM: selftests: x86: Prioritize getting max_gfn from GuestPhysBits KVM: selftests: Fix shift of 32 bit unsigned int more than 32 bits
2024-06-21  KVM: SEV-ES: Fix svm_get_msr()/svm_set_msr() for KVM_SEV_ES_INIT guests  (Michael Roth, 1 file, +2/-2)
With commit 27bd5fdc24c0 ("KVM: SEV-ES: Prevent MSR access post VMSA encryption"), older VMMs like QEMU 9.0 and older will fail when booting SEV-ES guests with something like the following error:

  qemu-system-x86_64: error: failed to get MSR 0x174
  qemu-system-x86_64: ../qemu.git/target/i386/kvm/kvm.c:3950: kvm_get_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.

This is because older VMMs might still call svm_get_msr()/svm_set_msr() for SEV-ES guests after guest boot, even though those interfaces were essentially just noops because the vCPU state is encrypted and stored separately in the VMSA. Now those VMMs will get -EINVAL and generally crash.

Newer VMMs that are aware of KVM_SEV_INIT2, however, are also aware of the stricter limitations on what vCPU state can be sync'd during guest run-time, so newer QEMU for instance will work both for the legacy KVM_SEV_ES_INIT interface as well as KVM_SEV_INIT2.

So when using KVM_SEV_INIT2 it's okay to assume userspace can deal with -EINVAL, whereas for legacy KVM_SEV_ES_INIT the kernel might be dealing with an older VMM and so it needs to assume that returning -EINVAL might break the VMM.

Address this by only returning -EINVAL if the guest was started with KVM_SEV_INIT2. Otherwise, just silently return.

Cc: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: Nikunj A Dadhania <nikunj@amd.com>
Reported-by: Srikanth Aithal <sraithal@amd.com>
Closes: https://lore.kernel.org/lkml/37usuu4yu4ok7be2hqexhmcyopluuiqj3k266z4gajc2rcj4yo@eujb23qc3zcm/
Fixes: 27bd5fdc24c0 ("KVM: SEV-ES: Prevent MSR access post VMSA encryption")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240604233510.764949-1-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-06-21  mips: fix compat_sys_lseek syscall  (Arnd Bergmann, 1 file, +1/-1)
This is almost compatible, but passing a negative offset should result in an EINVAL error; in mips o32 compat mode it would instead seek to a large 32-bit byte offset. Use compat_sys_lseek() to correctly sign-extend the argument.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
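A hedged sketch of why the compat entry point fixes this (the helper name below is invented; ksys_lseek() is the in-kernel worker): taking the offset as the signed 32-bit compat_off_t makes the conversion to loff_t sign-extend, so a negative offset is rejected by the usual lseek checks instead of becoming a huge positive position.

```c
#include <linux/compat.h>
#include <linux/syscalls.h>

static long compat_lseek_sketch(unsigned int fd, compat_off_t offset,
				unsigned int whence)
{
	/* compat_off_t is s32; the cast below sign-extends into loff_t. */
	return ksys_lseek(fd, (loff_t)offset, whence);
}
```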
2024-06-21  MIPS: mipsmtregs: Fix target register for MFTC0  (Jiaxun Yang, 1 file, +1/-1)
Target register of mftc0 should be __res instead of $1, this is a leftover from old .insn code. Fixes: dd6d29a61489 ("MIPS: Implement microMIPS MT ASE helpers") Cc: stable@vger.kernel.org Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2024-06-21  LoongArch: KVM: Remove an unneeded semicolon  (Yang Li, 1 file, +1/-1)
Remove an unneeded semicolon to avoid build warnings: ./arch/loongarch/kvm/exit.c:764:2-3: Unneeded semicolon Cc: stable@vger.kernel.org Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=9343 Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-06-21  LoongArch: Fix multiple hardware watchpoint issues  (Hui Li, 1 file, +33/-24)
In the current code, if multiple hardware breakpoints/watchpoints are set in a user-space thread, some of them will not be triggered. This can be seen when debugging the following code using gdb.

lihui@bogon:~$ cat test.c
#include <stdio.h>
int a = 0;
int main()
{
	printf("start test\n");
	a = 1;
	printf("a = %d\n", a);
	printf("end test\n");
	return 0;
}
lihui@bogon:~$ gcc -g test.c -o test
lihui@bogon:~$ gdb test
...
(gdb) start
...
Temporary breakpoint 1, main () at test.c:5
5	printf("start test\n");
(gdb) watch a
Hardware watchpoint 2: a
(gdb) hbreak 8
Hardware assisted breakpoint 3 at 0x1200006ec: file test.c, line 8.
(gdb) c
Continuing.
start test
a = 1
Breakpoint 3, main () at test.c:8
8	printf("end test\n");
...

The first hardware watchpoint is not triggered. The root causes are:

1. In hw_breakpoint_control(), the FWPnCFG1.2.4/MWPnCFG1.2.4 register settings are not distinguished. They should be set based on the hardware watchpoint functions (fetch or load/store operations).

2. In breakpoint_handler() and watchpoint_handler(), the code doesn't identify which watchpoint is triggered. So all watchpoint-related perf_event callbacks are called and siginfo is sent to user space. This leaves user space unable to determine which watchpoint was triggered. The kernel needs to identify which watchpoint was triggered via the MWPS/FWPS registers, and then call the corresponding perf event callbacks to report siginfo to user space.

Modify the relevant code to solve the above issues.

All changes according to the LoongArch Reference Manual:
https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#control-and-status-registers-related-to-watchpoints

With this patch:

lihui@bogon:~$ gdb test
...
(gdb) start
...
Temporary breakpoint 1, main () at test.c:5
5	printf("start test\n");
(gdb) watch a
Hardware watchpoint 2: a
(gdb) hbreak 8
Hardware assisted breakpoint 3 at 0x1200006ec: file test.c, line 8.
(gdb) c
Continuing.
start test
Hardware watchpoint 2: a
Old value = 0
New value = 1
main () at test.c:7
7	printf("a = %d\n", a);
(gdb) c
Continuing.
a = 1
Breakpoint 3, main () at test.c:8
8	printf("end test\n");
(gdb) c
Continuing.
end test
[Inferior 1 (process 778) exited normally]

Cc: stable@vger.kernel.org
Signed-off-by: Hui Li <lihui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-06-21  LoongArch: Trigger user-space watchpoints correctly  (Hui Li, 3 files, +31/-6)
In the current code, gdb can set the watchpoint successfully through ptrace interface, but watchpoint will not be triggered. When debugging the following code using gdb. lihui@bogon:~$ cat test.c #include <stdio.h> int a = 0; int main() { a = 1; printf("a = %d\n", a); return 0; } lihui@bogon:~$ gcc -g test.c -o test lihui@bogon:~$ gdb test ... (gdb) watch a ... (gdb) r ... a = 1 [Inferior 1 (process 4650) exited normally] No watchpoints were triggered, the root causes are: 1. Kernel uses perf_event and hw_breakpoint framework to control watchpoint, but the perf_event corresponding to watchpoint is not enabled. So it needs to be enabled according to MWPnCFG3 or FWPnCFG3 PLV bit field in ptrace_hbp_set_ctrl(), and privilege is set according to the monitored addr in hw_breakpoint_control(). Furthermore, add a judgment in ptrace_hbp_set_addr() to ensure kernel-space addr cannot be monitored in user mode. 2. The global enable control for all watchpoints is the WE bit of CSR.CRMD, and hardware sets the value to 0 when an exception is triggered. When the ERTN instruction is executed to return, the hardware restores the value of the PWE field of CSR.PRMD here. So, before a thread containing watchpoints be scheduled, the PWE field of CSR.PRMD needs to be set to 1. Add this modification in hw_breakpoint_control(). All changes according to the LoongArch Reference Manual: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#control-and-status-registers-related-to-watchpoints https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#basic-control-and-status-registers With this patch: lihui@bogon:~$ gdb test ... (gdb) watch a Hardware watchpoint 1: a (gdb) r ... Hardware watchpoint 1: a Old value = 0 New value = 1 main () at test.c:6 6 printf("a = %d\n", a); (gdb) c Continuing. a = 1 [Inferior 1 (process 775) exited normally] Cc: stable@vger.kernel.org Signed-off-by: Hui Li <lihui@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-06-21  LoongArch: Fix watchpoint setting error  (Hui Li, 3 files, +21/-32)
In the current code, when debugging the following code using gdb, an "Invalid argument ..." message will be displayed.

lihui@bogon:~$ cat test.c
#include <stdio.h>
int a = 0;
int main()
{
	a = 1;
	return 0;
}
lihui@bogon:~$ gcc -g test.c -o test
lihui@bogon:~$ gdb test
...
(gdb) watch a
Hardware watchpoint 1: a
(gdb) r
...
Invalid argument setting hardware debug registers

There are mainly two types of issues.

1. Some incorrect judgment conditions existed in the user_watch_state argument parsing, causing -EINVAL to be returned.

When setting up a watchpoint, gdb uses the ptrace interface, ptrace(PTRACE_SETREGSET, tid, NT_LOONGARCH_HW_WATCH, (void *) &iov)). Register values in user_watch_state are as follows:

addr[0] = 0x0, mask[0] = 0x0, ctrl[0] = 0x0
addr[1] = 0x0, mask[1] = 0x0, ctrl[1] = 0x0
addr[2] = 0x0, mask[2] = 0x0, ctrl[2] = 0x0
addr[3] = 0x0, mask[3] = 0x0, ctrl[3] = 0x0
addr[4] = 0x0, mask[4] = 0x0, ctrl[4] = 0x0
addr[5] = 0x0, mask[5] = 0x0, ctrl[5] = 0x0
addr[6] = 0x0, mask[6] = 0x0, ctrl[6] = 0x0
addr[7] = 0x12000803c, mask[7] = 0x0, ctrl[7] = 0x610

In arch_bp_generic_fields(), -EINVAL is returned when ctrl.len is LOONGARCH_BREAKPOINT_LEN_8 (0b00), so delete the incorrect judgment here.

In ptrace_hbp_fill_attr_ctrl(), when note_type is NT_LOONGARCH_HW_WATCH and ctrl[0] == 0x0, if ((type & HW_BREAKPOINT_RW) != type) will return -EINVAL. Here ctrl.type should be set based on note_type, and unnecessary judgments can be removed.

2. The watchpoint argument was not set correctly due to an unnecessary offset and alignment_mask.

Modify ptrace_hbp_fill_attr_ctrl() and hw_breakpoint_arch_parse(), which ensures the watchpoint argument is set correctly.

All changes according to the LoongArch Reference Manual:
https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#control-and-status-registers-related-to-watchpoints

Cc: stable@vger.kernel.org
Signed-off-by: Hui Li <lihui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-06-21  LoongArch: Only allow OBJTOOL & ORC unwinder if toolchain supports -mthin-add-sub  (Xi Ruoyao, 2 files, +5/-1)
GAS <= 2.41 does not support generating R_LARCH_{32,64}_PCREL for "label - ." and it generates R_LARCH_{ADD,SUB}{32,64} pairs instead. Objtool cannot handle R_LARCH_{ADD,SUB}{32,64} pairs in __jump_table (the static key implementation) etc., so it will produce some warnings. This is causing the kernel CI systems to complain everywhere.

For GAS we can check if the -mthin-add-sub option is available to know if R_LARCH_{32,64}_PCREL are supported. For Clang, we require Clang >= 18, and Clang >= 17 already supports R_LARCH_{32,64}_PCREL. But unfortunately Clang has some other issues, so we disable objtool for Clang at present.

Note that __jump_table here is not generated by the compiler, so -fno-jump-table is completely irrelevant for this issue.

Fixes: cb8a2ef0848c ("LoongArch: Add ORC stack unwinder support")
Closes: https://lore.kernel.org/loongarch/Zl5m1ZlVmGKitAof@yujie-X299/
Closes: https://lore.kernel.org/loongarch/ZlY1gDDPi_mNrwJ1@slm.duckdns.org/
Closes: https://lore.kernel.org/loongarch/1717478006.038663-1-hengqi@linux.alibaba.com/
Link: https://sourceware.org/git/?p=binutils-gdb.git;a=commitdiff;h=816029e06768
Link: https://github.com/llvm/llvm-project/commit/42cb3c6346fc
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2024-06-20  Merge tag 'kvmarm-fixes-6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini, 4 files, +27/-4)
KVM/arm64 fixes for 6.10, take #2

 - Fix dangling references to a redistributor region if the vgic was prematurely destroyed.

 - Properly mark FFA buffers as released, ensuring that both parties can make forward progress.
2024-06-20  Merge tag 'riscv-sophgo-dt-fixes-for-v6.10-rc4' of https://github.com/sophgo/linux into arm/fixes  (Arnd Bergmann, 1 file, +1/-0)
RISC-V Sophgo Devicetree fixes for v6.10-rc4

Just one minor fix to disable write protect for milkv-duo because it does not have a write-protect pin.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>

* tag 'riscv-sophgo-dt-fixes-for-v6.10-rc4' of https://github.com/sophgo/linux:
  riscv: dts: sophgo: disable write-protection for milkv duo

Link: https://lore.kernel.org/r/MA0P287MB28226E34D9390B311201B7C4FECF2@MA0P287MB2822.INDP287.PROD.OUTLOOK.COM
Signed-off-by: Arnd Bergmann <arnd@arndb.de>