2019-02-22 | s390/cpum_cf: move counter set controls to a new header file | Hendrik Brueckner | 4 files, -43/+56
Move counter set specific controls and functions to the asm/cpu_mcf.h header file containing all counter facility support definitions. Also adapt a few variable names and header file includes. No functional changes. Signed-off-by: Hendrik Brueckner <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2019-02-22 | parisc: use memblock_alloc() instead of custom get_memblock() | Mike Rapoport | 1 file, -33/+19
The get_memblock() function implements a custom bottom-up memblock allocator. Setting 'memblock_bottom_up = true' before any memblock allocation is done allows replacing get_memblock() calls with memblock_alloc(). Signed-off-by: Mike Rapoport <[email protected]> Tested-by: Helge Deller <[email protected]> Signed-off-by: Helge Deller <[email protected]>
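A minimal sketch of the replacement pattern, assuming the generic memblock API; the parisc-specific context and the panic-on-failure handling are illustrative:

  #include <linux/init.h>
  #include <linux/kernel.h>
  #include <linux/memblock.h>

  void __init example_bootmem_setup(void)
  {
      void *p;

      /* allocate bottom-up from now on, as the old custom
       * get_memblock() did by hand */
      memblock_set_bottom_up(true);

      /* a former get_memblock(PAGE_SIZE) call becomes: */
      p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
      if (!p)
          panic("%s: allocation failed\n", __func__);
  }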
2019-02-22 | crypto: arm/aes-ce - update IV after partial final CTR block | Eric Biggers | 1 file, -13/+13
Make the arm ctr-aes-ce algorithm update the IV buffer to contain the next counter after processing a partial final block, rather than leave it as the last counter. This makes ctr-aes-ce pass the updated AES-CTR tests. This change also makes the code match the arm64 version in arch/arm64/crypto/aes-modes.S more closely. Signed-off-by: Eric Biggers <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-02-22 | crypto: arm64/aes-blk - update IV after partial final CTR block | Eric Biggers | 1 file, -2/+1
Make the arm64 ctr-aes-neon and ctr-aes-ce algorithms update the IV buffer to contain the next counter after processing a partial final block, rather than leave it as the last counter. This makes these algorithms pass the updated AES-CTR tests. Signed-off-by: Eric Biggers <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
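The semantics both CTR fixes enforce can be sketched in plain C (the kernel's actual implementations are NEON/CE assembly; encrypt_block here is a stand-in for one AES block encryption): the counter is incremented even after a partial final block, so the IV buffer holds the next counter on return.

  #include <stddef.h>
  #include <stdint.h>

  /* big-endian 128-bit counter increment */
  static void ctr_inc(uint8_t ctr[16])
  {
      int i;

      for (i = 15; i >= 0; i--)
          if (++ctr[i])   /* stop once there is no carry */
              break;
  }

  static void ctr_crypt(uint8_t iv[16], const uint8_t *src, uint8_t *dst,
                        size_t len,
                        void (*encrypt_block)(const uint8_t in[16],
                                              uint8_t out[16]))
  {
      uint8_t ks[16];
      size_t i, n;

      while (len) {
          n = len < 16 ? len : 16;
          encrypt_block(iv, ks);
          for (i = 0; i < n; i++)
              dst[i] = src[i] ^ ks[i];
          ctr_inc(iv);    /* also done for a partial final block */
          src += n;
          dst += n;
          len -= n;
      }
  }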
2019-02-22 | crypto: sha512/arm - fix crash bug in Thumb2 build | Ard Biesheuvel | 2 files, -2/+4
The SHA512 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha512_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point.

However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

  Unable to handle kernel paging request at virtual address bf825000
  pgd = 42f44b11
  [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
  Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
  Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
  CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
  Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
  LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
  pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
  sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
  r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
  r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
  r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
  Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
  Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
  Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
  Stack: (0xebc8bbe8 to 0xebc8c000)
  ...
  unwind: Unknown symbol address bf820bca
  unwind: Index not found bf820bca
  Code: 441a ea80 40f9 440a (f85e) 3b04
  ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL.) Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-02-22 | crypto: sha256/arm - fix crash bug in Thumb2 build | Ard Biesheuvel | 2 files, -2/+4
The SHA256 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha256_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point.

However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

  Unable to handle kernel paging request at virtual address bf825000
  pgd = 42f44b11
  [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
  Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
  Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
  CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
  Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
  LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
  pc : [<bf820bca>]    lr : [<bf824ffd>]    psr: 800b0033
  sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
  r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
  r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
  r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
  Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
  Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
  Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
  Stack: (0xebc8bbe8 to 0xebc8c000)
  ...
  unwind: Unknown symbol address bf820bca
  unwind: Index not found bf820bca
  Code: 441a ea80 40f9 440a (f85e) 3b04
  ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL.) Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-02-22 | Merge remote-tracking branch 'remotes/powerpc/topic/ppc-kvm' into kvm-ppc-next | Paul Mackerras | 8 files, -100/+71
This merges in the "ppc-kvm" topic branch of the powerpc tree to get a series of commits that touch both general arch/powerpc code and KVM code. These commits will be merged both via the KVM tree and the powerpc tree. Signed-off-by: Paul Mackerras <[email protected]>
2019-02-22 | powerpc/kvm: Save and restore host AMR/IAMR/UAMOR | Michael Ellerman | 1 file, -9/+17
When the hash MMU is active the AMR, IAMR and UAMOR are used for pkeys. The AMR is directly writable by user space, and the UAMOR masks those writes, meaning both registers are effectively user register state. The IAMR is used to create an execute only key.

Also we must maintain the value of at least the AMR when running in process context, so that any memory accesses done by the kernel on behalf of the process are correctly controlled by the AMR.

Although we are correctly switching all registers when going into a guest, on returning to the host we just write 0 into all regs, except on Power9 where we restore the IAMR correctly.

This could be observed by a user process if it writes the AMR, then runs a guest and we then return immediately to it without rescheduling. Because we have written 0 to the AMR, that would have the effect of granting read/write permission to pages that the process was trying to protect.

In addition, when using the Radix MMU, the AMR can prevent inadvertent kernel access to userspace data; writing 0 to the AMR disables that protection.

So save and restore AMR, IAMR and UAMOR. Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem") Cc: [email protected] # v4.16+ Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Acked-by: Paul Mackerras <[email protected]>
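A hedged C-level sketch of the intended save/restore around guest entry; the real change lives in the HV entry/exit assembly, and the wrapper below is purely illustrative:

  #include <asm/reg.h>    /* mfspr()/mtspr(), SPRN_AMR/IAMR/UAMOR */

  static void run_guest_with_host_keys(void (*enter_guest)(void))
  {
      /* preserve host pkey-related SPRs before guest entry */
      unsigned long host_amr   = mfspr(SPRN_AMR);
      unsigned long host_iamr  = mfspr(SPRN_IAMR);
      unsigned long host_uamor = mfspr(SPRN_UAMOR);

      enter_guest();    /* the guest context may change all three */

      /* restore the host values instead of writing 0 */
      mtspr(SPRN_AMR, host_amr);
      mtspr(SPRN_IAMR, host_iamr);
      mtspr(SPRN_UAMOR, host_uamor);
  }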
2019-02-22 | KVM: PPC: Book3S: Improve KVM reference counting | Alexey Kardashevskiy | 1 file, -3/+4
The anon fd's file operations release the KVM reference in the release hook. However, since we reference the KVM object only after creating the fd, there is a small window in which the release function could be called and dereference the KVM object, potentially freeing it. It is not a problem at the moment, as the file is created and KVM is referenced under the KVM lock, and the release function obtains the same lock before dereferencing the KVM (although the lock is not held when calling kvm_put_kvm()), but it is potentially fragile against future changes. Reference the KVM object before creating the file. Signed-off-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
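The ordering fix amounts to the following pattern; the fd name and fops are illustrative, based on the SPAPR TCE code this commit touches:

  #include <linux/anon_inodes.h>
  #include <linux/kvm_host.h>

  static long create_tce_fd(struct kvm *kvm,
                            struct kvmppc_spapr_tce_table *stt)
  {
      long ret;

      kvm_get_kvm(kvm);    /* reference KVM before the fd can escape */
      ret = anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops, stt,
                             O_RDWR | O_CLOEXEC);
      if (ret < 0)
          kvm_put_kvm(kvm);    /* fd never existed: drop the reference */
      return ret;    /* on success the release hook owns the reference */
  }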
2019-02-22 | KVM: PPC: Book3S HV: Fix build failure without IOMMU support | Jordan Niethe | 2 files, -0/+12
Currently trying to build without IOMMU support will fail:

  (.text+0x1380): undefined reference to `kvmppc_h_get_tce'
  (.text+0x1384): undefined reference to `kvmppc_rm_h_put_tce'
  (.text+0x149c): undefined reference to `kvmppc_rm_h_stuff_tce'
  (.text+0x14a0): undefined reference to `kvmppc_rm_h_put_tce_indirect'

This happens because turning off IOMMU support will prevent book3s_64_vio_hv.c from being built, because it is only built when SPAPR_TCE_IOMMU is set, which depends on IOMMU support. Fix it using ifdefs for the undefined references. Fixes: 76d837a4c0f9 ("KVM: PPC: Book3S PR: Don't include SPAPR TCE code on non-pseries platforms") Signed-off-by: Jordan Niethe <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
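One hedged way to picture the fix: real declarations when CONFIG_SPAPR_TCE_IOMMU is set, inline stubs otherwise, so the references always resolve (the stub's return value is an assumption):

  #include <linux/kvm_host.h>
  #include <asm/hvcall.h>    /* H_TOO_HARD */

  #ifdef CONFIG_SPAPR_TCE_IOMMU
  long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
                           unsigned long ioba, unsigned long tce);
  #else
  static inline long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu,
                                         unsigned long liobn,
                                         unsigned long ioba,
                                         unsigned long tce)
  {
      return H_TOO_HARD;    /* defer to the virtual-mode path */
  }
  #endif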
2019-02-21 | MIPS: ingenic: Add support for appended devicetree | Paul Cercueil | 2 files, -5/+11
Add support for booting the kernel from an externally-appended devicetree, if no devicetree was built-in. Signed-off-by: Paul Cercueil <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected]
2019-02-21 | ARCv2: don't assume core 0x54 has dual issue | Vineet Gupta | 2 files, -5/+29
The first release of core4 (0x54) was dual issue only (HS4x). Newer releases allow the hardware to be configured as single issue (HS3x) or dual issue. Prevent accessing an HS4x-only aux register on HS3x, which otherwise leads to illegal instruction exceptions. Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | parisc: Add constants for various PDC firmware calls | Helge Deller | 1 file, -2/+18
PDC_DEBUG, PDC_ALLOC and PDC_SCSI_PARMS were missing. Add PDC_MODEL_GET_INSTALL_KERNEL and PDC_NVOLATILE_* subfunctions. PDC_CONFIG is call #17, not 16. Luckily it's nowhere referenced yet. Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Add constant for PDC_PAT_COMPLEX firmware call | Helge Deller | 1 file, -0/+4
Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Show machine product number during boot | Helge Deller | 3 files, -0/+34
Ask PDC firmware during boot for the original and current product number as well as the serial number and show it (if available). Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Add constants for PDC_RELOCATE PDC call | Helge Deller | 1 file, -1/+6
The PDC_RELOCATE function is called by HP-UX shortly before crashing. So, we need to handle it in qemu and thus it makes sense to add the constant here. Additionally add other subfunctions like PDC_MODEL_GET_PLATFORM_INFO (to get product and serial numbers) and PDC_TOD_CALIBRATE (to calibrate timers) too. Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Add PDC_CRASH_PREP PDC function number | Sven Schnelle | 1 file, -0/+1
Signed-off-by: Sven Schnelle <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: remove the HBA_DATA macro | Christoph Hellwig | 1 file, -2/+0
No need to hide a cast in a macro, especially as all users have cleaner ways to achieve the result than blind casting. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: properly type the iommu field in struct pci_hba_data | Christoph Hellwig | 1 file, -1/+1
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: move internal implementation details out of <asm/dma-mapping.h> | Christoph Hellwig | 1 file, -44/+0
Move everything that is not required for the public facing DMA API out of <asm/dma-mapping.h> and into a new drivers/parisc/iommu.h header. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: don't include <asm/cacheflush.h> in <asm/dma-mapping.h> | Christoph Hellwig | 2 files, -2/+1
No need for any of the definitions here; all the real work now happens out of line. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: remove meaningless ccflags-y in arch/parisc/boot/Makefile | Masahiro Yamada | 1 file, -6/+0
This ccflags-y is never used because arch/parisc/boot/Makefile only contains objcopy and install targets. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: replace oops_in_progress manipulation with bust_spinlocks() | Sergey Senozhatsky | 1 file, -2/+2
Use the bust_spinlocks() helper to set oops_in_progress. Signed-off-by: Sergey Senozhatsky <[email protected]> Signed-off-by: Helge Deller <[email protected]>
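The substitution in miniature; bust_spinlocks() (declared in <linux/printk.h>) wraps the oops_in_progress accounting and wakes klogd when the count drops back to zero:

  bust_spinlocks(1);    /* was: oops_in_progress = 1; */
  /* ... emit the crash report ... */
  bust_spinlocks(0);    /* was: oops_in_progress = 0; also wakes klogd */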
2019-02-21 | parisc: Improve initial IRQ to CPU assignment | Helge Deller | 1 file, -1/+4
On parisc, each IRQ can only be handled by one CPU, and currently CPU0 is chosen by default to handle all IRQs. With this patch we now assign each requested IRQ to one of the online CPUs (and thus distribute the IRQs across all CPUs), even without an instance of irqbalance running. Signed-off-by: Helge Deller <[email protected]>
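A minimal sketch of round-robin IRQ placement of this kind; the rotor variable and helper are assumptions, not the patch's actual code:

  #include <linux/cpumask.h>

  static int irq_rotor;    /* last CPU that received a new IRQ */

  /* pick the next online CPU for a newly requested IRQ */
  static int next_irq_cpu(void)
  {
      int cpu = cpumask_next(irq_rotor, cpu_online_mask);

      if (cpu >= nr_cpu_ids)    /* wrapped past the last CPU */
          cpu = cpumask_first(cpu_online_mask);
      irq_rotor = cpu;
      return cpu;
  }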
2019-02-21 | parisc: Count IPI function call interrupts | Helge Deller | 3 files, -0/+6
Like other platforms, count the number of IPI function call interrupts and show it in /proc/interrupts. Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Show rescheduling interrupts on SMP machines only | Helge Deller | 1 file, -4/+6
Signed-off-by: Helge Deller <[email protected]>
2019-02-21 | parisc: Fix ptrace syscall number modification | Dmitry V. Levin | 1 file, -8/+21
Commit 910cd32e552e ("parisc: Fix and enable seccomp filter support") introduced a regression in ptrace-based syscall tampering: when the tracer changes the syscall number to -1, the kernel fails to initialize %r28 with -ENOSYS and subsequently fails to return the error code of the failed syscall to userspace. This erroneous behaviour could be observed with a simple strace syscall fault injection command, which is expected to print something like this:

  $ strace -a0 -ewrite -einject=write:error=enospc echo hello
  write(1, "hello\n", 6) = -1 ENOSPC (No space left on device) (INJECTED)
  write(2, "echo: ", 6) = -1 ENOSPC (No space left on device) (INJECTED)
  write(2, "write error", 11) = -1 ENOSPC (No space left on device) (INJECTED)
  write(2, "\n", 1) = -1 ENOSPC (No space left on device) (INJECTED)
  +++ exited with 1 +++

After commit 910cd32e552ea09caa89cdbe328e468979b030dd it loops printing something like this instead:

  write(1, "hello\n", 6../strace: Failed to tamper with process 12345: unexpectedly got no error (return value 0, error 0)
  ) = 0 (INJECTED)

This bug was found by the strace test suite. Fixes: 910cd32e552e ("parisc: Fix and enable seccomp filter support") Cc: [email protected] # v4.5+ Signed-off-by: Dmitry V. Levin <[email protected]> Tested-by: Helge Deller <[email protected]> Signed-off-by: Helge Deller <[email protected]>
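The required semantics, sketched in C: on parisc the syscall number travels in %r20 and the return value in %r28, so the trace path must seed %r28 when the tracer cancels the syscall (the exact placement in the real patch may differ):

  #include <linux/errno.h>
  #include <asm/ptrace.h>

  /* in the syscall-entry trace path, after the tracer has run */
  static void fixup_cancelled_syscall(struct pt_regs *regs)
  {
      if (regs->gr[20] == (unsigned long)-1)    /* %r20: syscall number */
          regs->gr[28] = -ENOSYS;               /* %r28: return value */
  }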
2019-02-21 | ARC: define ARCH_SLAB_MINALIGN = 8 | Alexey Brodkin | 1 file, -0/+11
The default value of ARCH_SLAB_MINALIGN in "include/linux/slab.h" is "__alignof__(unsigned long long)", which for ARC unexpectedly turns out to be 4. This is not a compiler bug, but is as defined by the ARC ABI [1]. Thus the slab allocator would allocate a struct which is 32-bit aligned, which is generally OK even if the struct has long long members. There was, however, a potential problem when it contained an atomic64_t, which uses the LLOCKD/SCONDD instructions that the ISA requires to take 64-bit-aligned addresses. This is the problem we ran into:

  [    4.015732] EXT4-fs (mmcblk0p2): re-mounted. Opts: (null)
  [    4.167881] Misaligned Access
  [    4.172356] Path: /bin/busybox.nosuid
  [    4.176004] CPU: 2 PID: 171 Comm: rm Not tainted 4.19.14-yocto-standard #1
  [    4.182851]
  [    4.182851] [ECR   ]: 0x000d0000 => Check Programmer's Manual
  [    4.190061] [EFA   ]: 0xbeaec3fc
  [    4.190061] [BLINK ]: ext4_delete_entry+0x210/0x234
  [    4.190061] [ERET  ]: ext4_delete_entry+0x13e/0x234
  [    4.202985] [STAT32]: 0x80080002 : IE K
  [    4.207236] BTA: 0x9009329c   SP: 0xbe5b1ec4  FP: 0x00000000
  [    4.212790] LPS: 0x9074b118  LPE: 0x9074b120  LPC: 0x00000000
  [    4.218348] r00: 0x00000040  r01: 0x00000021  r02: 0x00000001
  ...
  ...
  [    4.270510] Stack Trace:
  [    4.274510]   ext4_delete_entry+0x13e/0x234
  [    4.278695]   ext4_rmdir+0xe0/0x238
  [    4.282187]   vfs_rmdir+0x50/0xf0
  [    4.285492]   do_rmdir+0x9e/0x154
  [    4.288802]   EV_Trap+0x110/0x114

The fix is to make sure slab allocations are 64-bit aligned. Do note that atomic64_t is __attribute__((aligned(8))), which means gcc does generate 64-bit aligned references relative to the beginning of the container struct. However, if the container itself is not 64-bit aligned, the atomic64_t ends up unaligned, which is exactly what this patch prevents. [1] https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/wiki/files/ARCv2_ABI.pdf Signed-off-by: Alexey Brodkin <[email protected]> Cc: <[email protected]> # 4.8+ Signed-off-by: Vineet Gupta <[email protected]> [vgupta: reworked changelog, added dependency on LL64+LLSC]
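The essence of the fix as a sketch of the define, gated on LL64+LLSC per the changelog note, since only LLOCKD/SCONDD impose the alignment requirement:

  /* force slab objects to 64-bit alignment so that any embedded
   * atomic64_t lands on an address LLOCKD/SCONDD can operate on */
  #if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
  #define ARCH_SLAB_MINALIGN	8
  #endif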
2019-02-21 | ARC: enable U-boot support unconditionally | Eugeniy Paltsev | 6 files, -20/+0
After reworking the U-boot args handling code and adding a paranoid arguments check, we can eliminate CONFIG_ARC_UBOOT_SUPPORT and enable U-boot support unconditionally. For the JTAG case we can assume that core registers will come up with a reset value of 0, or in the worst case we rely on the user passing '-on=clear_regs' to the Metaware debugger. Cc: [email protected] Tested-by: Corentin LABBE <[email protected]> Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARC: U-boot: check arguments paranoidly | Eugeniy Paltsev | 2 files, -27/+64
Handle U-boot arguments paranoidly:
 * don't allow passing an unknown tag.
 * try to use an external device tree blob only if the corresponding tag (TAG_DTB) is set.
 * don't check uboot_tag if the kernel is built without ARC_UBOOT_SUPPORT.
NOTE: if the U-boot args are invalid, we skip them and try to use the embedded device tree blob. We can't panic on invalid U-boot args, as we really do pass invalid args due to a bug in the U-boot code. This happens if we don't provide an external DTB to U-boot and don't set the 'bootargs' U-boot environment variable (which is the default case, at least for the HSDK board). In that case we pass {r0 = 1 (bootargs in r2); r1 = 0; r2 = 0;} to Linux, which is invalid. While I'm at it, refactor the U-boot argument handling code. Cc: [email protected] Tested-by: Corentin LABBE <[email protected]> Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
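A hedged sketch of the paranoid handling; the tag values follow the text above, while the helper names, validation details, and the omission of cmdline handling are assumptions:

  #include <linux/init.h>
  #include <linux/types.h>

  #define UBOOT_TAG_NONE      0
  #define UBOOT_TAG_CMDLINE   1
  #define UBOOT_TAG_DTB       2

  extern int uboot_tag;            /* r0, saved by early boot code */
  extern unsigned long uboot_arg;  /* r2: cmdline or DTB pointer */
  extern char __dtb_start[];       /* embedded device tree blob */

  static bool uboot_arg_valid(unsigned long addr)
  {
      /* illustrative check: reject NULL and unaligned pointers */
      return addr && !(addr & 0x3);
  }

  void __init handle_uboot_args(void)
  {
      bool use_embedded_dtb = true;

      /* reject unknown tags rather than guessing what r0 meant */
      if (uboot_tag != UBOOT_TAG_NONE &&
          uboot_tag != UBOOT_TAG_CMDLINE &&
          uboot_tag != UBOOT_TAG_DTB)
          goto ignore_uboot_args;

      /* only trust r2 as an external DTB when the tag says so */
      if (uboot_tag == UBOOT_TAG_DTB && uboot_arg_valid(uboot_arg))
          use_embedded_dtb = false;

      /* (cmdline-tag handling omitted in this sketch) */
  ignore_uboot_args:
      setup_machine_fdt(use_embedded_dtb ? __dtb_start
                                         : (void *)uboot_arg);
  }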
2019-02-21 | ARCv2: support manual regfile save on interrupts | Vineet Gupta | 5 files, -1/+68
There's a hardware bug which affects the HSDK platform, triggered by the micro-ops that auto-save the register file on a taken interrupt. The workaround is to inhibit the autosave. Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARC: uaccess: remove lp_start, lp_end from clobber list | Vineet Gupta | 1 file, -4/+4
Newer ARC gcc handles lp_start, lp_end in a different way and doesn't like them in the clobber list. Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARC: fix actionpoints configuration detection | Eugeniy Paltsev | 1 file, -1/+1
Fix the reversed logic in actionpoints configuration (full/min) detection. Fixes: 7dd380c338f1e ("ARC: boot log: print Action point details") Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARCv2: lib: memcpy: fix doing prefetchw outside of buffer | Eugeniy Paltsev | 1 file, -14/+0
The ARCv2 optimized memcpy uses the PREFETCHW instruction for prefetching the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line ownership and marks it dirty, which can cause data corruption if this area is used for DMA IO. Fix the issue by avoiding the PREFETCHW. This leads to performance degradation, but it is OK as we'll introduce a new memcpy implementation optimized for unaligned memory access. We also cut off all PREFETCH instructions, as they are quite useless here:
 * we call PREFETCH right before the LOAD instruction call.
 * we copy 16 or 32 bytes of data (depending on CONFIG_ARC_HAS_LL64) in the main logical loop, so we call PREFETCH 4 times (or 2 times) for each L1 cache line (in the case of a 64B L1 cache line, which is the default). Obviously this is not optimal.
Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARCv2: Enable unaligned access in early ASM code | Eugeniy Paltsev | 1 file, -0/+10
It is currently done in arc_init_IRQ(), which might be too late, considering that gcc 7.3.1 onwards (GNU 2018.03) generates unaligned memory accesses by default. Cc: [email protected] #4.4+ Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]> [vgupta: rewrote changelog]
2019-02-21 | s390/net: convert pnetids to ascii | Hans Wippel | 1 file, -0/+3
Pnetids are retrieved from the underlying hardware as EBCDIC. This patch converts pnetids to ASCII. Signed-off-by: Hans Wippel <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-02-21 | Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 3 files, -9/+13
Pull late arm64 fixes from Will Deacon:
 "Three small arm64 fixes for 5.0. They fix a build breakage with clang introduced in 4.20, an oversight in our sigframe restoration relating to the SSBS bit and a boot fix for systems with newer revisions of our interrupt controller.

  Summary:

   - Fix handling of PSTATE.SSBS bit in sigreturn()

   - Fix version checking of the GIC during early boot

   - Fix clang builds failing due to use of NEON in the crypto code"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Relax GIC version check during early boot
  arm64/neon: Disable -Wincompatible-pointer-types when building with Clang
  arm64: fix SSBS sanitization
2019-02-21 | kasan: fix random seed generation for tag-based mode | Andrey Konovalov | 2 files, -2/+3
There are two issues with assigning random percpu seeds right now:

1. We use for_each_possible_cpu() to iterate over cpus, but cpumask is not set up yet at the moment of kasan_init(), and thus we only set the seed for cpu #0.

2. A call to get_random_u32() always returns the same number and produces a message in dmesg, since the random subsystem is not yet initialized.

Fix 1 by calling kasan_init_tags() after cpumask is set up. Fix 2 by using get_cycles() instead of get_random_u32(). This gives us lower quality random numbers, but it's good enough, as KASAN is meant to be used as a debugging tool and not a mitigation. Link: http://lkml.kernel.org/r/1f815cc914b61f3516ed4cc9bfd9eeca9bd5d9de.1550677973.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
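The second fix reduces to seeding each per-cpu PRNG state from the cycle counter; a close sketch (the per-cpu variable name is assumed):

  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <linux/timex.h>    /* get_cycles() */

  static DEFINE_PER_CPU(u32, prng_state);

  void __init kasan_init_tags(void)
  {
      int cpu;

      /* runs after cpumask setup, so every possible cpu is seeded */
      for_each_possible_cpu(cpu)
          per_cpu(prng_state, cpu) = (u32)get_cycles();
  }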
2019-02-21 | s390/extmem: print DCSS range with %px | Gerald Schaefer | 1 file, -2/+2
The DCSS range is currently printed with %p, which results in hashed values instead of the actual addresses. Use %px instead, the DCSS ranges do not reveal any kernel symbol addresses. Signed-off-by: Gerald Schaefer <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2019-02-21 | s390/extmem: remove code for 31 bit addressing mode | Gerald Schaefer | 1 file, -117/+12
All supported releases of z/VM allow 64 bit subcodes and addressing mode for diag 0x64. This patch removes a lot of code for handling 31 bit addressing mode and old subcodes. Signed-off-by: Gerald Schaefer <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2019-02-22 | powerpc: dump areas mapping a single physical page as a single line | Christophe Leroy | 1 file, -6/+12
When using KASAN, there are parts of the shadow area where all pages are mapped to the kasan_early_shadow_page. It is pointless to dump one line for each of those pages (in the example below there are 7168 entries pointing to the same physical page).

  ~# cat /sys/kernel/debug/kernel_page_tables
  ...
  ---[ kasan shadow mem start ]---
  0xf7c00000-0xf8bfffff  0x06fac000  16M  rw  present dirty accessed
  0xf8c00000-0xf8c03fff  0x00cd0000  16K  r   present dirty accessed
  0xf8c04000-0xf8c07fff  0x00cd0000  16K  r   present dirty accessed
  0xf8c08000-0xf8c0bfff  0x00cd0000  16K  r   present dirty accessed
  0xf8c0c000-0xf8c0ffff  0x00cd0000  16K  r   present dirty accessed
  0xf8c10000-0xf8c13fff  0x00cd0000  16K  r   present dirty accessed
  ... (7168 identical lines) ...
  0xffbfc000-0xffbfffff  0x00cd0000  16K  r   present dirty accessed
  ---[ kasan shadow mem end ]---
  ...

This patch modifies the Linux page table dump so that areas where all addresses point to the same physical page are dumped as a single line. That physical address is put inside [] to show that all virtual pages point to the same physical page.

  ~# cat /sys/kernel/debug/kernel_page_tables
  ...
  ---[ kasan shadow mem start ]---
  0xf7c00000-0xf8bfffff  0x06fac000   16M  rw  present dirty accessed
  0xf8c00000-0xffbfffff [0x00cd0000]  16K  r   present dirty accessed
  ---[ kasan shadow mem end ]---
  ...

Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
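A hedged sketch of the collapsing logic; the state struct and helpers are illustrative, not the dumper's actual code:

  #include <linux/printk.h>
  #include <linux/types.h>
  #include <asm/page.h>

  struct pg_state_sketch {
      unsigned long start_addr, last_addr;
      unsigned long start_pa;
      u64 flags;
  };

  static void dump_range(struct pg_state_sketch *st)
  {
      if (st->start_addr == st->last_addr)    /* single page */
          printk("0x%08lx-0x%08lx  0x%08lx\n", st->start_addr,
                 st->last_addr + PAGE_SIZE - 1, st->start_pa);
      else    /* collapsed run: physical address shown in [] */
          printk("0x%08lx-0x%08lx [0x%08lx]\n", st->start_addr,
                 st->last_addr + PAGE_SIZE - 1, st->start_pa);
  }

  static void note_page(struct pg_state_sketch *st, unsigned long addr,
                        unsigned long pa, u64 flags)
  {
      /* extend the current range while every page maps the same
       * physical page with the same flags */
      if (flags == st->flags && pa == st->start_pa) {
          st->last_addr = addr;
          return;
      }
      dump_range(st);
      st->flags = flags;
      st->start_pa = pa;
      st->start_addr = st->last_addr = addr;
  }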
2019-02-22 | powerpc/32: Fix CONFIG_VIRT_CPU_ACCOUNTING_NATIVE for 40x/booke | Christophe Leroy | 1 file, -5/+7
40x/booke have another path to reach label 3f from transfer_to_handler; make sure it also calls ACCOUNT_CPU_USER_ENTRY() when CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is selected. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling | Christophe Leroy | 3 files, -12/+7
For pages without _PAGE_USER, the PP field is 00. For pages with _PAGE_USER, the PP field is 10 for RW and 11 for RO. This patch sets _PAGE_USER to 0x002 and _PAGE_RW to 0x001 in order to simplify TLB handling by reducing the number of shifts. The location of _PAGE_PRESENT and _PAGE_HASHPTE doesn't matter, as they are only SW-related flags. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
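The bit values from the text, as the corresponding defines would read (a sketch; the comment summarizes the PP mapping above):

  /* chosen so PP can be derived with fewer shifts:
   *   no _PAGE_USER           -> PP = 00
   *   _PAGE_USER + _PAGE_RW   -> PP = 10
   *   _PAGE_USER, read-only   -> PP = 11
   */
  #define _PAGE_RW	0x001	/* software: write permission */
  #define _PAGE_USER	0x002	/* usermode access allowed */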
2019-02-22 | powerpc/603: don't handle PAGE_ACCESSED in TLB miss handlers. | Christophe Leroy | 1 file, -11/+13
PAGE_ACCESSED is only needed for CONFIG_SWAP. When CONFIG_SWAP is not set, just ignore it. If CONFIG_SWAP is set and PAGE_ACCESSED is not, let's take a minor fault. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/603: Don't worry about _PAGE_USER in TLB miss handlers | Christophe Leroy | 1 file, -9/+3
PP bits take user access into account, so no need to check _PAGE_USER here. A DSI or ISI will be generated if needed. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/603: let's handle PAGE_DIRTY directly | Christophe Leroy | 1 file, -4/+2
PAGE_DIRTY corresponds to the C bit. If writing on a page for which the C bit is not set, a DataStoreTLBMiss is generated. No need to check it in DataLoadTLBMiss. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/603: Don't handle _PAGE_RW and _PAGE_DIRTY on ITLB misses | Christophe Leroy | 1 file, -6/+2
_PAGE_RW and _PAGE_DIRTY do not matter for ITLB misses. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/603: Don't handle kernel page TLB misses when not needed | Christophe Leroy | 1 file, -0/+4
ITLB misses on kernel pages only occur with CONFIG_MODULES and CONFIG_DEBUG_PAGEALLOC. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/hash32: use physical address directly in hash handlers. | Christophe Leroy | 2 files, -37/+31
Since commit c62ce9ef97ba ("powerpc: remove remaining bits from CONFIG_APUS"), tophys() has become a pure constant operation. PAGE_OFFSET is known at compile time so the physical address can be builtin directly. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-02-22 | powerpc/603: use physical address directly in TLB miss handlers. | Christophe Leroy | 1 file, -9/+6
Since commit c62ce9ef97ba ("powerpc: remove remaining bits from CONFIG_APUS"), tophys() has become a pure constant operation. PAGE_OFFSET is known at compile time so the physical address can be builtin directly. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>