2018-02-06  proc: use %u for pid printing and slightly less stack  (Alexey Dobriyan, 4 files, -15/+14)
PROC_NUMBUF is 13, which is enough for "negative int + \n + \0". However, PIDs and TGIDs are never negative and the newline is not a concern, so just use 10 per integer. Link: http://lkml.kernel.org/r/20171120203005.GA27743@avx2 Signed-off-by: Alexey Dobriyan <[email protected]> Cc: Alexander Viro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
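As a back-of-the-envelope check (a userspace sketch, not the kernel's code): the largest u32 is 4294967295, ten digits, so ten bytes plus a terminating NUL always suffice for "%u".

    #include <stdio.h>

    int main(void)
    {
        char buf[10 + 1];    /* UINT_MAX = 4294967295: 10 digits, +1 for NUL */

        snprintf(buf, sizeof(buf), "%u", 4294967295u);
        printf("%s\n", buf);    /* the worst case still fits exactly */
        return 0;
    }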
2018-02-06  kasan: remove redundant initialization of variable 'real_size'  (Colin Ian King, 1 file, -1/+1)
Variable real_size is initialized with a value that is never read; it is re-assigned a new value later on, so the initialization is redundant and can be removed. Cleans up the clang warning: lib/test_kasan.c:422:21: warning: Value stored to 'real_size' during its initialization is never read Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Colin Ian King <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: clean up KASAN_SHADOW_SCALE_SHIFT usage  (Andrey Konovalov, 5 files, -15/+22)
Right now the fact that KASAN uses a single shadow byte for 8 bytes of memory is scattered all over the code. This change defines KASAN_SHADOW_SCALE_SHIFT early in asm include files and makes use of this constant where necessary. [[email protected]: coding-style fixes] Link: http://lkml.kernel.org/r/34937ca3b90736eaad91b568edf5684091f662e3.1515775666.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
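For reference, the 8-to-1 mapping that the constant centralizes is the usual KASAN shadow translation. A sketch (KASAN_SHADOW_OFFSET shown with x86-64's value purely for illustration; void-pointer arithmetic is the kernel's GCC idiom):

    #define KASAN_SHADOW_SCALE_SHIFT    3
    #define KASAN_SHADOW_OFFSET         0xdffffc0000000000UL  /* x86-64's value, for illustration */

    /* one shadow byte describes 1 << KASAN_SHADOW_SCALE_SHIFT == 8 bytes of memory */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
            + KASAN_SHADOW_OFFSET;
    }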
2018-02-06  kasan: fix prototype author email address  (Andrey Konovalov, 2 files, -2/+2)
Use the new one. Link: http://lkml.kernel.org/r/de3b7ffc30a55178913a7d3865216aa7accf6c40.1515775666.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: detect invalid frees  (Dmitry Vyukov, 2 files, -0/+56)
Detect frees of pointers into the middle of heap objects. Link: http://lkml.kernel.org/r/cb569193190356beb018a03bb8d6fbae67e7adbc.1514378558.git.dvyukov@google.com Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
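The bug class being flagged, written as a test in the style of lib/test_kasan.c (a sketch; the function name and offset are illustrative):

    #include <linux/slab.h>

    static noinline void kmalloc_invalid_free(void)
    {
        char *ptr = kmalloc(128, GFP_KERNEL);

        if (!ptr)
            return;

        kfree(ptr + 16);    /* KASAN now reports invalid-free: not the start of the object */
    }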
2018-02-06  kasan: unify code between kasan_slab_free() and kasan_poison_kfree()  (Dmitry Vyukov, 1 file, -16/+12)
Both of these functions deal with freeing of slab objects. However, kasan_poison_kfree() mishandles SLAB_TYPESAFE_BY_RCU (it must also not poison such objects) and does not detect double-frees. Unify the code between these functions. This solves both of the problems and allows us to add more common code (e.g. detection of invalid frees). Link: http://lkml.kernel.org/r/385493d863acf60408be219a021c3c8e27daa96f.1514378558.git.dvyukov@google.com Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: detect invalid frees for large mempool objects  (Dmitry Vyukov, 3 files, -8/+13)
Detect frees of pointers into the middle of mempool objects. I did a one-off test, but it turned out to be very tricky, so I reverted it. First, mempool does not call kasan_poison_kfree() unless the allocation function fails. I stubbed an allocation function to fail on the second and subsequent allocations. But then mempool stopped calling kasan_poison_kfree() at all, because it does so only when the allocation function is mempool_kmalloc(). We could support this special failing test allocation function in mempool, but it also can't coexist with the kasan tests, because those are in a module. Link: http://lkml.kernel.org/r/bf7a7d035d7a5ed62d2dd0e3d2e8a4fcdf456aa7.1514378558.git.dvyukov@google.com Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
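For reference, the case being caught looks roughly like this (an illustrative sketch; as the message notes, actually exercising this path from the kasan test module is not practical):

    #include <linux/mempool.h>
    #include <linux/slab.h>

    static void mempool_invalid_free_sketch(void)
    {
        mempool_t *pool = mempool_create_kmalloc_pool(1, 128);
        char *ptr;

        if (!pool)
            return;

        ptr = mempool_alloc(pool, GFP_KERNEL);
        if (ptr)
            mempool_free(ptr + 16, pool);    /* pointer into the middle of the element */

        mempool_destroy(pool);
    }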
2018-02-06  kasan: don't use __builtin_return_address(1)  (Dmitry Vyukov, 6 files, -18/+19)
__builtin_return_address(1) is unreliable without frame pointers. With defconfig, on the kmalloc_pagealloc_invalid_free test I am getting: BUG: KASAN: double-free or invalid-free in (null) Pass the caller PC from callers explicitly. Link: http://lkml.kernel.org/r/9b01bc2d237a4df74ff8472a3bf6b7635908de01.1514378558.git.dvyukov@google.com Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
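The replacement pattern is to have each caller pass its own PC down, e.g. via _RET_IP_, the kernel's shorthand for (unsigned long)__builtin_return_address(0), which is reliable even without frame pointers. A sketch with simplified signatures:

    #include <linux/kernel.h>    /* _RET_IP_ */

    /* callee takes the caller's PC instead of calling __builtin_return_address(1) */
    bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);

    static void example_free_path(struct kmem_cache *s, void *object)
    {
        kasan_slab_free(s, object, _RET_IP_);
    }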
2018-02-06  kasan: detect invalid frees for large objects  (Dmitry Vyukov, 6 files, -15/+44)
Patch series "kasan: detect invalid frees". KASAN detects double-frees, but does not detect invalid-frees (when a pointer into a middle of heap object is passed to free). We recently had a very unpleasant case in crypto code which freed an inner object inside of a heap allocation. This left unnoticed during free, but totally corrupted heap and later lead to a bunch of random crashes all over kernel code. Detect invalid frees. This patch (of 5): Detect frees of pointers into middle of large heap objects. I dropped const from kasan_kfree_large() because it starts propagating through a bunch of functions in kasan_report.c, slab/slub nearest_obj(), all of their local variables, fixup_red_left(), etc. Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]>a Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: add functions for unpoisoning stack variables  (Alexander Potapenko, 2 files, -0/+59)
As a code-size optimization, LLVM builds since r279383 may bulk-manipulate the shadow region when (un)poisoning large memory blocks. This requires new callbacks that simply do an uninstrumented memset(). This fixes linking the Clang-built kernel when using KASAN. [[email protected]: add declarations for internal functions] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: __asan_set_shadow_00 can be static] Link: http://lkml.kernel.org/r/20171223125943.GA74341@lkp-ib03 [[email protected]: fix memset() parameters, and tweak commit message to describe new callbacks] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexander Potapenko <[email protected]> Signed-off-by: Greg Hackmann <[email protected]> Signed-off-by: Paul Lawrence <[email protected]> Signed-off-by: Fengguang Wu <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
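The callbacks themselves are one-liners; roughly (a sketch of the shape, using the kernel's uninstrumented __memset()):

    /* bulk-write a single shadow pattern; must not itself be instrumented */
    void __asan_set_shadow_00(const void *addr, size_t size)
    {
        __memset((void *)addr, 0x00, size);
    }
    /* ...and likewise the siblings for the other shadow patterns */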
2018-02-06  kasan: add tests for alloca poisoning  (Paul Lawrence, 1 file, -0/+22)
Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Hackmann <[email protected]> Signed-off-by: Paul Lawrence <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: support alloca() poisoning  (Paul Lawrence, 4 files, -1/+48)
clang's AddressSanitizer implementation adds redzones on either side of alloca()ed buffers. These redzones are 32-byte aligned and at least 32 bytes long. __asan_alloca_poison() is passed the size and address of the allocated buffer, *excluding* the redzones on either side. The left redzone will always be to the immediate left of this buffer; but AddressSanitizer may need to add padding between the end of the buffer and the right redzone. If there are any 8-byte chunks inside this padding, we should poison those too. __asan_allocas_unpoison() is just passed the top and bottom of the dynamic stack area, so unpoisoning is simpler. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Hackmann <[email protected]> Signed-off-by: Paul Lawrence <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
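A sketch of the geometry described above (helper and constant names here are illustrative, not the exact kernel internals):

    #define ALLOCA_REDZONE_SIZE    32    /* redzones are 32-byte aligned, >= 32 bytes */
    #define SHADOW_GRANULE         8     /* one shadow byte per 8 bytes */

    static void alloca_poison_sketch(unsigned long addr, size_t size)
    {
        size_t rounded_up = round_up(size, SHADOW_GRANULE);
        size_t padding = round_up(size, ALLOCA_REDZONE_SIZE) - rounded_up;

        /* the left redzone sits immediately below the buffer */
        poison_shadow((void *)(addr - ALLOCA_REDZONE_SIZE),
                      ALLOCA_REDZONE_SIZE, ALLOCA_LEFT);
        /* whole 8-byte padding chunks are poisoned together with the right redzone */
        poison_shadow((void *)(addr + rounded_up),
                      padding + ALLOCA_REDZONE_SIZE, ALLOCA_RIGHT);
    }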
2018-02-06  kasan/Makefile: support LLVM style asan parameters  (Andrey Ryabinin, 1 file, -11/+18)
LLVM doesn't understand GCC-style parameters ("--param asan-foo=bar"), so we currently don't use inline/globals/stack instrumentation when building the kernel with clang. Add support for LLVM-style parameters ("-mllvm -asan-foo=bar") to enable all KASAN features. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrey Ryabinin <[email protected]> Signed-off-by: Paul Lawrence <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Greg Hackmann <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: add compiler support for clang  (Paul Lawrence, 1 file, -0/+8)
Patch series "kasan: support alloca, LLVM", v4. This patch (of 5): For now we can hard-code ASAN ABI level 5, since historical clang builds can't build the kernel anyway. We also need to emulate gcc's __SANITIZE_ADDRESS__ flag, or memset() calls won't be instrumented. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Hackmann <[email protected]> Signed-off-by: Paul Lawrence <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthias Kaehlcke <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-02-06  kasan: don't emit builtin calls when sanitization is off  (Andrey Konovalov, 3 files, -2/+6)
With KASAN enabled the kernel has two different memset() functions, one with KASAN checks (memset) and one without (__memset). KASAN uses some macro tricks to use the proper version where required. For example memset() calls in mm/slub.c are without KASAN checks, since they operate on poisoned slab object metadata. The issue is that clang emits memset() calls even when there is no memset() in the source code. They get linked with improper memset() implementation and the kernel fails to boot due to a huge amount of KASAN reports during early boot stages. The solution is to add -fno-builtin flag for files with KASAN_SANITIZE := n marker. Link: http://lkml.kernel.org/r/8ffecfffe04088c52c42b92739c2bd8a0bcb3f5e.1516384594.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <[email protected]> Acked-by: Nick Desaulniers <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Michal Marek <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
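For illustration, this is the kind of code where clang synthesizes a memset() call that never appears in the source:

    struct ctx {
        char buf[256];
    };

    void init_ctx(struct ctx *c)
    {
        /* clang may lower this zero-initialization to a call to
         * memset(); in a KASAN_SANITIZE := n file that call must not
         * bind to the instrumented memset(), hence -fno-builtin
         */
        *c = (struct ctx){};
    }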
2018-02-07  kbuild: clang: disable unused variable warnings only when constant  (Sodagudi Prasad, 1 file, -2/+1)
Currently, GCC disables -Wunused-const-variable but not -Wunused-variable, so it warns about unused variables if they are non-constant. Clang, by contrast, does not warn about unused variables at all, regardless of the const qualifier, because -Wno-unused-const-variable is implied by the stronger option -Wno-unused-variable. Disable -Wunused-const-variable instead of -Wunused-variable so that GCC and Clang work in the same way. Signed-off-by: Prasad Sodagudi <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]>
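Concretely (an illustrative fragment), after this change GCC and Clang agree on both cases:

    static int unused_var = 1;          /* both warn: -Wunused-variable stays enabled */
    static const int unused_const = 2;  /* neither warns: -Wunused-const-variable is disabled */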
2018-02-07  netfilter: nf_tables: fix flowtable free  (Pablo Neira Ayuso, 6 files, -13/+26)
Every flow_offload entry is added into the table twice. Because of this, rhashtable_free_and_destroy() can't be used, since it would call kfree for each flow_offload object twice. This patch cleans up the flowtable via nf_flow_table_iterate(), scheduling removal of entries by setting the dying bit, followed by an explicit invocation of the garbage collector to release resources. Based on a patch from Felix Fietkau. Signed-off-by: Pablo Neira Ayuso <[email protected]>
2018-02-07  netfilter: nft_flow_offload: move flowtable cleanup routines to nf_flow_table  (Pablo Neira Ayuso, 3 files, -18/+28)
Move the flowtable cleanup routines to nf_flow_table and expose the nf_flow_table_cleanup() helper function. Signed-off-by: Pablo Neira Ayuso <[email protected]>
2018-02-07  netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert  (Cong Wang, 1 file, -5/+17)
rateest_hash is supposed to be protected by xt_rateest_mutex and, as suggested by Eric, lookup and insert should be atomic, so we should acquire the xt_rateest_mutex once for both. Introduce a non-locking helper for internal use and keep the locking one for external use. Reported-by: <[email protected]> Fixes: 5859034d7eb8 ("[NETFILTER]: x_tables: add RATEEST target") Signed-off-by: Cong Wang <[email protected]> Reviewed-by: Florian Westphal <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
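The shape of the fix, sketched (allocation and hashing details elided; the non-locking helper follows the description above):

    mutex_lock(&xt_rateest_mutex);
    est = __xt_rateest_lookup(info->name);    /* non-locking helper; mutex held */
    if (est) {
        mutex_unlock(&xt_rateest_mutex);
        return 0;                             /* reuse the existing estimator */
    }
    /* ... allocate and initialise est ... */
    hlist_add_head(&est->list, &rateest_hash[h]);    /* insert under the same acquisition */
    mutex_unlock(&xt_rateest_mutex);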
2018-02-06  Documentation/process: tweak pgp maintainer guide  (Konstantin Ryabitsev, 1 file, -16/+34)
Based on the feedback provided: - Uniformly use lowercase k in "Linux kernel" - Give a one-sentence explanation of what subkeys are - Explain what signed commits might be useful for even if upstream developers do not use them for much of anything - Admonish to set up gpg-agent if signed commits are turned on in git config - Fix a typo reported by Luc Van Oostenryck Signed-off-by: Konstantin Ryabitsev <[email protected]> Signed-off-by: Jonathan Corbet <[email protected]>
2018-02-06  Merge tag 'platform-drivers-x86-v4.16-1' of git://git.infradead.org/linux-platform-drivers-x86  (Linus Torvalds, 38 files, -1111/+2301)
Pull x86 platform-driver updates from Darren Hart:
 "New model support added for Dell, Ideapad, Acer, Asus, Thinkpad, and GPD laptops.
  Improvements to the common intel-vbtn driver, including tablet mode, rotate, and front button support.
  Intel CPU support added for Cannonlake and platform support for Dollar Cove power button.
  Overhaul of the mellanox platform driver, creating a new platform/mellanox directory for the newly multi-architecture regmap interface.
  Significant Intel PMC update with CannonLake support, Coffeelake update, CPUID enumeration, module support, new read64 API, refactoring and cleanups.
  Revert the apple-gmux iGP IO lock, addressing reported issues with non-binary drivers, leaving Nvidia binary driver users to comment out conflicting code.
  Miscellaneous fixes and cleanups"

* tag 'platform-drivers-x86-v4.16-1' of git://git.infradead.org/linux-platform-drivers-x86: (81 commits)
  platform/x86: mlx-platform: Fix an ERR_PTR vs NULL issue
  platform/x86: intel_pmc_core: Special case for Coffeelake
  platform/x86: intel_pmc_core: Add CannonLake PCH support
  x86/cpu: Add Cannonlake to Intel family
  platform/x86: intel_pmc_core: Read base address from LPIT
  ACPI / LPIT: Export lpit_read_residency_count_address()
  platform/x86: intel-vbtn: Replace License by SDPX identifier
  platform/x86: intel-vbtn: Remove redundant inclusions
  platform/x86: intel-vbtn: Support tablet mode switch
  platform/x86: dell-laptop: Allocate buffer on heap rather than globally
  platform/x86: intel_pmc_core: Remove unused header file
  platform/x86: mlx-platform: Add hotplug device unregister to error path
  platform/x86: mlx-platform: fix module aliases
  platform/mellanox: mlxreg-hotplug: Add check for negative adapter number
  platform/x86: mlx-platform: Add IO access verification callbacks
  platform/x86: mlx-platform: Document pdev_hotplug field
  platform/x86: mlx-platform: Allow compilation for 32 bit arch
  platform/mellanox: mlxreg-hotplug: Enable building for ARM
  platform/mellanox: mlxreg-hotplug: Modify to use a regmap interface
  platform/mellanox: Group create/destroy with attribute functions
  ...
2018-02-06  Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux  (Linus Torvalds, 13 files, -210/+312)
Pull thermal management updates from Zhang Rui:

 - fix a race condition issue in power allocator governor (Yi Zeng).
 - add support for AP806 and CP110 in armada thermal driver, together with several improvements (Baruch Siach, Miquel Raynal)
 - add support for r8a7743 in rcar thermal driver (Biju Das)
 - convert thermal core to use new hwmon API to avoid warning (Fabio Estevam)
 - small fixes and cleanups in thermal core and x86_pkg_thermal, int3400_thermal, hisi_thermal, mtk_thermal and imx_thermal drivers (Pravin Shedge, Geert Uytterhoeven, Alexey Khoroshilov, Brian Bian, Matthias Brugger, Nicolin Chen, Uwe Kleine-König)

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux: (25 commits)
  thermal: thermal_hwmon: Convert to hwmon_device_register_with_info()
  thermal/x86 pkg temp: Remove debugfs_create_u32() casts
  thermal: int3400_thermal: fix error handling in int3400_thermal_probe()
  thermal/drivers/hisi: Remove bogus const from function return type
  thermal: armada: Give meaningful names to the thermal zones
  thermal: armada: Wait sensors validity before exiting the init callback
  thermal: armada: Change sensors trim default value
  thermal: armada: Update Kconfig and module description
  thermal: armada: Add support for Armada CP110
  thermal: armada: Add support for Armada AP806
  thermal: armada: Use real status register name
  thermal: armada: Clarify control registers accesses
  thermal: armada: Simplify the check of the validity bit
  thermal: armada: Use msleep for long delays
  dt-bindings: thermal: Describe Armada AP806 and CP110
  dt-bindings: thermal: rcar: Add device tree support for r8a7743
  thermal: mtk: Cleanup unused defines
  thermal: imx: update to new formula according to NXP AN5215
  thermal: imx: use consistent style to write temperatures
  thermal: imx: improve comments describing algorithm for temp calculation
  ...
2018-02-06  arm64: Kill PSCI_GET_VERSION as a variant-2 workaround  (Marc Zyngier, 3 files, -70/+13)
Now that we've standardised on SMCCC v1.1 to perform the branch prediction invalidation, let's drop the previous band-aid. If vendors haven't updated their firmware to do SMCCC 1.1, they haven't updated PSCI either, so we don't lose anything. Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support  (Marc Zyngier, 2 files, -1/+87)
Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1. It is lovely. Really. Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: smccc: Implement SMCCC v1.1 inline primitive  (Marc Zyngier, 1 file, -0/+141)
One of the major improvements of SMCCC v1.1 is that it only clobbers the first 4 registers, both on 32 and 64bit. This means that it becomes very easy to provide an inline version of the SMC call primitive, and avoid performing a function call to stash the registers that would otherwise be clobbered by SMCCC v1.0. Reviewed-by: Robin Murphy <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
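A simplified sketch of what the primitive boils down to for a no-argument call on arm64 (the merged arm_smccc_1_1_smc() is a variadic macro covering 0 to 7 arguments; the function name here is illustrative):

    #include <linux/arm-smccc.h>

    static inline void smccc_1_1_smc_sketch(unsigned long function_id,
                                            struct arm_smccc_res *res)
    {
        register unsigned long r0 asm("x0") = function_id;
        register unsigned long r1 asm("x1");
        register unsigned long r2 asm("x2");
        register unsigned long r3 asm("x3");

        /* only x0-x3 are clobbered, so nothing needs stashing */
        asm volatile("smc #0"
                     : "+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
                     :
                     : "memory");

        res->a0 = r0;
        res->a1 = r1;
        res->a2 = r2;
        res->a3 = r3;
    }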
2018-02-06  arm/arm64: smccc: Make function identifiers an unsigned quantity  (Marc Zyngier, 1 file, -2/+4)
Function identifiers are a 32bit, unsigned quantity. But we never tell the compiler so, resulting in the following: 4ac: b26187e0 mov x0, #0xffffffff80000001 We thus rely on the firmware narrowing it for us, which is not always a reasonable expectation. Cc: [email protected] Reported-by: Ard Biesheuvel <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
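The effect is easy to reproduce in userspace (LP64 assumed; 1 << 31 is formally undefined for signed int but produces INT_MIN in practice):

    #include <stdio.h>

    int main(void)
    {
        unsigned long bad  = (1 << 31)  | 1;    /* negative int, sign-extends when widened */
        unsigned long good = (1U << 31) | 1;    /* unsigned, stays 32-bit clean */

        printf("%lx\n%lx\n", bad, good);        /* ffffffff80000001 vs 80000001 */
        return 0;
    }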
2018-02-06  firmware/psci: Expose SMCCC version through psci_ops  (Marc Zyngier, 2 files, -0/+33)
Since PSCI 1.0 allows the SMCCC version to be (indirectly) probed, let's do that at boot time, and expose the version of the calling convention as part of the psci_ops structure. Acked-by: Lorenzo Pieralisi <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  firmware/psci: Expose PSCI conduit  (Marc Zyngier, 2 files, -5/+30)
In order to call into the firmware to apply workarounds, it is useful to find out whether we're using HVC or SMC. Let's expose this through the psci_ops. Acked-by: Lorenzo Pieralisi <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling  (Marc Zyngier, 1 file, -2/+18)
We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible. So let's intercept it as early as we can by testing for the function call number as soon as we've identified a HVC call coming from the guest. Tested-by: Ard Biesheuvel <[email protected]> Reviewed-by: Christoffer Dall <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support  (Marc Zyngier, 4 files, -1/+26)
A new feature of SMCCC 1.1 is that it offers firmware-based CPU workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides BP hardening for CVE-2017-5715. If the host has some mitigation for this issue, report that we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the host workaround on every guest exit. Tested-by: Ard Biesheuvel <[email protected]> Reviewed-by: Christoffer Dall <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: KVM: Turn kvm_psci_version into a static inline  (Marc Zyngier, 3 files, -19/+34)
We're about to need kvm_psci_version in HYP too. So let's turn it into a static inline, and pass the kvm structure as a second parameter (so that HYP can do a kern_hyp_va on it). Tested-by: Ard Biesheuvel <[email protected]> Reviewed-by: Christoffer Dall <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: KVM: Advertise SMCCC v1.1  (Marc Zyngier, 5 files, -4/+39)
The new SMC Calling Convention (v1.1) allows for a reduced overhead when calling into the firmware, and provides a new feature discovery mechanism. Make it visible to KVM guests. Tested-by: Ard Biesheuvel <[email protected]> Reviewed-by: Christoffer Dall <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: KVM: Implement PSCI 1.0 support  (Marc Zyngier, 2 files, -1/+47)
PSCI 1.0 can be trivially implemented by providing the FEATURES call on top of PSCI 0.2 and returning 1.0 as the PSCI version. We happily ignore everything else, as they are either optional or are clarifications that do not require any additional change. PSCI 1.0 is now the default until we decide to add a userspace selection API. Reviewed-by: Christoffer Dall <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: KVM: Add smccc accessors to PSCI code  (Marc Zyngier, 1 file, -10/+42)
Instead of open coding the accesses to the various registers, let's add explicit SMCCC accessors. Reviewed-by: Christoffer Dall <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm/arm64: KVM: Add PSCI_VERSION helper  (Marc Zyngier, 3 files, -5/+8)
As we're about to trigger a PSCI version explosion, it doesn't hurt to introduce a PSCI_VERSION helper that is going to be used everywhere. Reviewed-by: Christoffer Dall <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
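The helper is plain bit-packing; roughly (a sketch consistent with the PSCI spec's major/minor split in bits [31:16] and [15:0]; the merged macro may spell this via existing shift/mask constants):

    #define PSCI_VERSION(maj, min)    \
        ((((maj) & 0xffff) << 16) | ((min) & 0xffff))

    /* e.g. PSCI_VERSION(0, 2) == 0x00000002, PSCI_VERSION(1, 0) == 0x00010000 */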
2018-02-06  arm/arm64: KVM: Consolidate the PSCI include files  (Marc Zyngier, 6 files, -34/+9)
As we're about to update the PSCI support, and because I'm lazy, let's move the PSCI include file to include/kvm so that both ARM architectures can find it. Acked-by: Christoffer Dall <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: KVM: Increment PC after handling an SMC trap  (Marc Zyngier, 1 file, -0/+9)
When handling an SMC trap, the "preferred return address" is set to that of the SMC, and not the next PC (which is a departure from the behaviour of an SMC that isn't trapped). Increment PC in the handler, as the guest is otherwise forever stuck... Cc: [email protected] Fixes: acfb3b883f6d ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls") Reviewed-by: Christoffer Dall <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls  (Marc Zyngier, 1 file, -2/+11)
KVM doesn't follow the SMCCC when it comes to unimplemented calls, and injects an UNDEF instead of returning an error. Since firmware calls are now used for security mitigation, they are becoming more common, and the UNDEF is counterproductive. Instead, let's follow the SMCCC, which states that -1 must be returned to the caller when getting an unknown function number. Cc: <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls  (Marc Zyngier, 1 file, -2/+2)
KVM doesn't follow the SMCCC when it comes to unimplemented calls, and injects an UNDEF instead of returning an error. Since firmware calls are now used for security mitigation, they are becoming more common, and the UNDEF is counterproductive. Instead, let's follow the SMCCC, which states that -1 must be returned to the caller when getting an unknown function number. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Christoffer Dall <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
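The shape of the arm64 change in the HVC exit path, simplified (a sketch; kvm_psci_call() and vcpu_set_reg() are existing KVM helpers, the wrapper name is illustrative):

    static int handle_hvc_sketch(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
        int ret = kvm_psci_call(vcpu);

        if (ret < 0) {
            vcpu_set_reg(vcpu, 0, ~0UL);    /* SMCCC: -1, "unknown function" */
            return 1;                       /* handled; resume the guest */
        }

        return ret;
    }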
2018-02-06  arm64: entry: Apply BP hardening for suspicious interrupts from EL0  (Will Deacon, 2 files, -0/+11)
It is possible to take an IRQ from EL0 following a branch to a kernel address in such a way that the IRQ is prioritised over the instruction abort. Whilst an attacker would need to get the stars to align here, it might be achievable with enough calibration, so perform BP hardening in the rare case that we see a kernel address in the ELR when handling an IRQ from EL0. Reported-by: Dan Hettena <[email protected]> Reviewed-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: entry: Apply BP hardening for high-priority synchronous exceptions  (Will Deacon, 2 files, -1/+13)
Software-step and PC alignment fault exceptions have higher priority than instruction abort exceptions, so apply the BP hardening hooks there too if the user PC appears to reside in kernel space. Reported-by: Dan Hettena <[email protected]> Reviewed-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: futex: Mask __user pointers prior to dereference  (Will Deacon, 1 file, -3/+6)
The arm64 futex code has some explicit dereferencing of user pointers when performing atomic operations in response to a futex command. This patch uses masking to limit any speculative futex operations to within the user address space. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user  (Will Deacon, 4 files, -14/+30)
Like we've done for get_user and put_user, ensure that user pointers are masked before invoking the underlying __arch_{clear,copy_*}_user operations. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user  (Will Deacon, 1 file, -22/+32)
access_ok isn't an expensive operation once the addr_limit for the current thread has been loaded into the cache. Given that the initial access_ok check preceding a sequence of __{get,put}_user operations will take the brunt of the miss, we can make the __* variants identical to the full-fat versions, which brings with it the benefits of address masking. The likely cost in these sequences will be from toggling PAN/UAO, which we can address later by implementing the *_unsafe versions. Reviewed-by: Robin Murphy <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: uaccess: Prevent speculative use of the current addr_limit  (Will Deacon, 1 file, -0/+7)
A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines. This patch prevents this forwarding from taking place by putting heavy barriers in set_fs after writing the addr_limit. Reviewed-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: entry: Ensure branch through syscall table is bounded under speculation  (Will Deacon, 2 files, -0/+13)
In a similar manner to array_index_mask_nospec, this patch introduces an assembly macro (mask_nospec64) which can be used to bound a value under speculation. This macro is then used to ensure that the indirect branch through the syscall table is bounded under speculation, with out-of-range addresses speculating as calls to sys_io_setup (0). Reviewed-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: Use pointer masking to limit uaccess speculation  (Robin Murphy, 1 file, -3/+23)
Similarly to x86, mitigate speculation past an access_ok() check by masking the pointer against the address limit before use. Even if we don't expect speculative writes per se, it is plausible that a CPU may still speculate at least as far as fetching a cache line for writing, hence we also harden put_user() and clear_user() for peace of mind. Signed-off-by: Robin Murphy <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
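The masking helper at the heart of this looks roughly like the following (reconstructed from memory, so treat it as a sketch; it relies on USER_DS being an inclusive all-ones limit, see the following entry):

    static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
    {
        void __user *safe_ptr;

        asm volatile(
        "    bics    xzr, %1, %2\n"      /* any bits set above addr_limit? */
        "    csel    %0, %1, xzr, eq\n"  /* if so, squash the pointer to NULL */
        : "=&r" (safe_ptr)
        : "r" (ptr), "r" (current_thread_info()->addr_limit)
        : "cc");

        csdb();    /* the barrier from the CSDB entry below */
        return safe_ptr;
    }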
2018-02-06  arm64: Make USER_DS an inclusive limit  (Robin Murphy, 4 files, -23/+33)
Currently, USER_DS represents an exclusive limit while KERNEL_DS is inclusive. In order to do some clever trickery for speculation-safe masking, we need them both to behave equivalently - there aren't enough bits to make KERNEL_DS exclusive, so we have precisely one option. This also happens to correct a longstanding false negative for a range ending on the very top byte of kernel memory. Mark Rutland points out that we've actually got the semantics of addresses vs. segments muddled up in most of the places we need to amend, so shuffle the {USER,KERNEL}_DS definitions around such that we can correct those properly instead of just pasting "-1"s everywhere. Signed-off-by: Robin Murphy <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06  arm64: Implement array_index_mask_nospec()  (Robin Murphy, 1 file, -0/+21)
Provide an optimised, assembly implementation of array_index_mask_nospec() for arm64 so that the compiler is not in a position to transform the code in ways which affect its ability to inhibit speculation (e.g. by introducing conditional branches). This is similar to the sequence used by x86, modulo architectural differences in the carry/borrow flags. Reviewed-by: Mark Rutland <[email protected]> Signed-off-by: Robin Murphy <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
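Using the carry/borrow flag as described, the arm64 sequence looks roughly like this (a sketch; compare x86's cmp/sbb pair):

    static inline unsigned long array_index_mask_nospec(unsigned long idx,
                                                        unsigned long sz)
    {
        unsigned long mask;

        asm volatile(
        "    cmp    %1, %2\n"          /* idx < sz clears the carry (borrow)... */
        "    sbc    %0, xzr, xzr\n"    /* ...yielding all-ones for in-bounds, zero otherwise */
        : "=r" (mask)
        : "r" (idx), "r" (sz)
        : "cc");

        csdb();    /* the barrier introduced in the following entry */
        return mask;
    }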
2018-02-06  arm64: barrier: Add CSDB macros to control data-value prediction  (Will Deacon, 2 files, -0/+8)
For CPUs capable of data value prediction, CSDB waits for any outstanding predictions to architecturally resolve before allowing speculative execution to continue. Provide macros to expose it to the arch code. Reviewed-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>