path: root/arch/x86/kernel
Age    Commit message    Author    Files    Lines
2019-04-08x86/dma: Remove the x86_dma_fallback_dev hackChristoph Hellwig2-26/+0
Now that we removed support for the NULL device argument in the DMA API, there is no need to cater for that in the x86 code. Signed-off-by: Christoph Hellwig <[email protected]>
2019-04-08x86: Convert some slow-path static_cpu_has() callers to boot_cpu_has()Borislav Petkov10-23/+23
Using static_cpu_has() is pointless on those paths, convert them to the boot_cpu_has() variant. No functional changes. Reported-by: Nadav Amit <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Reviewed-by: Juergen Gross <[email protected]> # for paravirt Cc: Aubrey Li <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Dominik Brodowski <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jann Horn <[email protected]> Cc: Joerg Roedel <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Thomas Lendacky <[email protected]> Cc: [email protected] Cc: Masami Hiramatsu <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: Sebastian Andrzej Siewior <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
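For illustration only (not taken from the patch), the conversion pattern on a slow path looks like the hypothetical init-time check below; boot_cpu_has() is a plain capability-bitmap test, so nothing is lost on paths where the alternatives patching behind static_cpu_has() cannot help anyway.

    #include <linux/init.h>
    #include <linux/printk.h>
    #include <asm/cpufeature.h>

    /* Hypothetical slow path: this runs once during boot, so the runtime
     * code patching done for static_cpu_has() buys nothing here. */
    static void __init report_xsave(void)
    {
            if (boot_cpu_has(X86_FEATURE_XSAVE))    /* was: static_cpu_has(X86_FEATURE_XSAVE) */
                    pr_info("XSAVE is supported\n");
    }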
2019-04-07PM / arch: x86: MSR_IA32_ENERGY_PERF_BIAS sysfs interfaceRafael J. Wysocki1-4/+89
The Performance and Energy Bias Hint (EPB) is expected to be set by user space through the generic MSR interface, but that interface is not particularly nice and there are security concerns regarding it, so it is not always available. For this reason, add a sysfs interface for reading and updating the EPB, in the form of a new attribute, energy_perf_bias, located under /sys/devices/system/cpu/cpu#/power/ for online CPUs that support the EPB feature. Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Acked-by: Borislav Petkov <[email protected]>
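A small userspace sketch of reading the new attribute; the sysfs path follows the commit message above, everything else (including the minimal error handling) is illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the EPB of CPU0 via the sysfs attribute described above. */
    int main(void)
    {
            const char *path = "/sys/devices/system/cpu/cpu0/power/energy_perf_bias";
            FILE *f = fopen(path, "r");
            int epb;

            if (!f) {
                    perror("fopen");
                    return EXIT_FAILURE;
            }
            if (fscanf(f, "%d", &epb) == 1)
                    printf("cpu0 energy_perf_bias = %d (0 = performance, 15 = power save)\n", epb);
            fclose(f);
            return 0;
    }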
2019-04-07PM / arch: x86: Rework the MSR_IA32_ENERGY_PERF_BIAS handlingRafael J. Wysocki5-53/+132
The current handling of MSR_IA32_ENERGY_PERF_BIAS in the kernel is problematic, because it may cause changes made by user space to that MSR (with the help of the x86_energy_perf_policy tool, for example) to be lost every time a CPU goes offline and then back online as well as during system-wide power management transitions into sleep states and back into the working state. The first problem is that if the current EPB value for a CPU going online is 0 ('performance'), the kernel will change it to 6 ('normal') regardless of whether or not this is the first bring-up of that CPU. That also happens during system-wide resume from sleep states (including, but not limited to, hibernation). However, the EPB may have been adjusted by user space this way and the kernel should not blindly override that setting. The second problem is that if the platform firmware resets the EPB values for any CPUs during system-wide resume from a sleep state, the kernel will not restore their previous EPB values that may have been set by user space before the preceding system-wide suspend transition. Again, that behavior may at least be confusing from the user space perspective. In order to address these issues, rework the handling of MSR_IA32_ENERGY_PERF_BIAS so that the EPB value is saved on CPU offline and restored on CPU online as well as (for the boot CPU) during the syscore stages of system-wide suspend and resume transitions, respectively. However, retain the policy by which the EPB is set to 6 ('normal') on the first bring-up of each CPU if its initial value is 0, based on the observation that 0 may mean 'not initialized' just as well as 'performance' in that case. While at it, move the MSR_IA32_ENERGY_PERF_BIAS handling code into a separate file and document it in Documentation/admin-guide. Fixes: abe48b108247 (x86, intel, power: Initialize MSR_IA32_ENERGY_PERF_BIAS) Fixes: b51ef52df71c (x86/cpu: Restore MSR_IA32_ENERGY_PERF_BIAS after resume) Reported-by: Thomas Renninger <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Acked-by: Borislav Petkov <[email protected]> Acked-by: Thomas Gleixner <[email protected]>
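Schematically, the offline/online pairing described above could look like the sketch below; the per-CPU variable and function names are assumptions for illustration, not the names used in the patch.

    #include <linux/percpu.h>
    #include <asm/msr.h>

    /* Hypothetical per-CPU cache of the last EPB value seen on this CPU. */
    static DEFINE_PER_CPU(u64, saved_epb);

    /* Called when a CPU goes offline: remember whatever user space set. */
    static void example_epb_save(void)
    {
            u64 epb;

            if (!rdmsrl_safe(MSR_IA32_ENERGY_PERF_BIAS, &epb))
                    this_cpu_write(saved_epb, epb);
    }

    /* Called when the CPU comes back online: restore the saved value. */
    static void example_epb_restore(void)
    {
            wrmsrl(MSR_IA32_ENERGY_PERF_BIAS, this_cpu_read(saved_epb));
    }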
2019-04-05x86/kexec/crash: Use struct_size() in vzalloc()Gustavo A. R. Silva1-2/+1
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:

    struct foo {
            int stuff;
            struct boo entry[];
    };

    instance = vzalloc(sizeof(struct foo) + count * sizeof(struct boo));

Instead of leaving these open-coded and prone to type mistakes, use the new struct_size() helper:

    instance = vzalloc(struct_size(instance, entry, count));

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: https://lkml.kernel.org/r/20190403184230.GA5295@embeddedor
2019-04-04acpi: Create subtable parsing infrastructureKeith Busch1-18/+18
Parsing entries in an ACPI table had assumed a generic header structure. There is no standard ACPI header, though, so less common layouts with different field sizes required custom parsers to go through their subtable entry list. Create the infrastructure for adding different table types so that parsing of the entries array can be reused for all ACPI system tables and the common code doesn't need to be duplicated. Reviewed-by: Rafael J. Wysocki <[email protected]> Acked-by: Jonathan Cameron <[email protected]> Tested-by: Jonathan Cameron <[email protected]> Signed-off-by: Keith Busch <[email protected]> Tested-by: Brice Goglin <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
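A rough sketch of the kind of infrastructure this implies, assuming the HMAT case that motivated the series; the union and member names are illustrative, not necessarily the ones added by the patch.

    #include <linux/acpi.h>

    /*
     * Rough sketch: wrap the differing subtable header layouts in a union so
     * one entry-walking loop can read each subtable's type and length
     * regardless of which ACPI table is being parsed.
     */
    union example_subtable_headers {
            struct acpi_subtable_header common;     /* MADT/SRAT style: u8 type, u8 length */
            struct acpi_hmat_structure hmat;        /* HMAT style: u16 type, u32 length */
    };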
2019-04-03x86/fpu: Fix __user annotationsJann Horn2-5/+5
In save_xstate_epilog(), use __user when type-casting userspace pointers. In setup_sigcontext() and x32_setup_rt_frame(), cast the userspace pointers to 'unsigned long __user *' before writing into them. These pointers are originally '__u32 __user *' or '__u64 __user *', causing sparse to complain when a userspace pointer is written into them. The casts are okay because the pointers always point to pointer-sized values. Thanks to Luc Van Oostenryck and Al Viro for explaining this to me. Signed-off-by: Jann Horn <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Mukesh Ojha <[email protected]> Cc: Qiaowei Ren <[email protected]> Cc: Sebastian Andrzej Siewior <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
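Illustrative only (the function and parameter below are made up): the pattern sparse accepts keeps the __user qualifier across the cast.

    #include <linux/types.h>
    #include <linux/uaccess.h>

    static int example_store(u64 __user *dst, unsigned long val)
    {
            /* The cast keeps the __user address space; this is fine here
             * because the destination is pointer-sized on x86-64. */
            unsigned long __user *p = (unsigned long __user *)dst;

            return put_user(val, p);
    }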
2019-04-03sched/x86_64: Don't save flags on context switchPeter Zijlstra1-7/+0
Now that we have objtool validating AC=1 state for all x86_64 code, we can once again guarantee clean flags on schedule. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03x86/uaccess, signal: Fix AC=1 bloatPeter Zijlstra1-12/+17
Occasionally GCC is less aggressive with inlining and the following is observed:

    arch/x86/kernel/signal.o: warning: objtool: restore_sigcontext()+0x3cc: call to force_valid_ss.isra.5() with UACCESS enabled
    arch/x86/kernel/signal.o: warning: objtool: do_signal()+0x384: call to frame_uc_flags.isra.0() with UACCESS enabled

Cure this by moving this code out of the AC=1 region, since it really isn't needed for the user access.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
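A schematic of keeping non-uaccess work outside the STAC/CLAC window, using the generic uaccess helpers; the frame type and helper are stand-ins, not the signal-frame code itself.

    #include <linux/uaccess.h>

    struct example_frame { unsigned long uc_flags; };      /* stand-in type */

    static int example_setup_frame(struct example_frame __user *frame)
    {
            /* Work that does not touch user memory stays outside the window. */
            unsigned long flags = 0;                        /* e.g. a frame_uc_flags()-like result */

            if (!user_access_begin(frame, sizeof(*frame)))
                    return -EFAULT;
            unsafe_put_user(flags, &frame->uc_flags, Efault);
            user_access_end();
            return 0;

    Efault:
            user_access_end();
            return -EFAULT;
    }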
2019-04-03sched/x86: Save [ER]FLAGS on context switchPeter Zijlstra2-0/+15
Effectively reverts commit: 2c7577a75837 ("sched/x86_64: Don't save flags on context switch") Specifically because SMAP uses FLAGS.AC which invalidates the claim that the kernel has clean flags. In particular, while preemption from interrupt return is fine (the IRET frame on the exception stack contains FLAGS) it breaks any code that does synchronous scheduling, including preempt_enable(). This has become a significant issue ever since commit: 5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses") provided a means of having 'normal' C code between STAC / CLAC, exposing the FLAGS.AC state. So far this hasn't led to trouble, however fix it before it comes apart. Reported-by: Julien Thierry <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: 5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses") Signed-off-by: Ingo Molnar <[email protected]>
2019-04-02x86/speculation/mds: Add SMT warning messageJosh Poimboeuf1-0/+8
MDS is vulnerable with SMT. Make that clear with a one-time printk whenever SMT first gets enabled. Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Tyler Hicks <[email protected]> Acked-by: Jiri Kosina <[email protected]>
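A plausible shape for such a one-time warning; the message text, bug flag and call site are illustrative, the committed code may differ.

    #include <linux/printk.h>
    #include <asm/cpufeature.h>

    #define EXAMPLE_MDS_MSG_SMT \
            "MDS CPU bug present and SMT on, data leak possible. See the MDS documentation for details.\n"

    /* Called from the SMT update path whenever the SMT topology changes. */
    static void example_mds_smt_warn(bool smt_enabled)
    {
            if (boot_cpu_has_bug(X86_BUG_MDS) && smt_enabled)
                    pr_warn_once(EXAMPLE_MDS_MSG_SMT);
    }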
2019-04-02x86/speculation: Move arch_smt_update() call to after mitigation decisionsJosh Poimboeuf1-3/+2
arch_smt_update() now has a dependency on both Spectre v2 and MDS mitigations. Move its initial call to after all the mitigation decisions have been made. Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Tyler Hicks <[email protected]> Acked-by: Jiri Kosina <[email protected]>
2019-04-02x86/speculation/mds: Add mds=full,nosmt cmdline optionJosh Poimboeuf1-0/+10
Add the mds=full,nosmt cmdline option. This is like mds=full, but with SMT disabled if the CPU is vulnerable. Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Tyler Hicks <[email protected]> Acked-by: Jiri Kosina <[email protected]>
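A sketch of the early_param() parsing this implies; the flag and function names are assumed for the example, not copied from the patch.

    #include <linux/init.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    static bool mds_nosmt;                  /* assumed flag name */

    static int __init example_mds_cmdline(char *str)
    {
            if (!str)
                    return -EINVAL;

            if (!strcmp(str, "full,nosmt")) {
                    /* like mds=full, but additionally keep SMT off on affected CPUs */
                    mds_nosmt = true;
            }
            return 0;
    }
    early_param("mds", example_mds_cmdline);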
2019-04-01x86/gpu: add ElkhartLake to gen11 early quirksRodrigo Vivi1-0/+1
Let's reserve EHL stolen memory for graphics. ElkhartLake is a gen11 platform which is compatible with ICL changes. Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Cc: José Roberto de Souza <[email protected]> Signed-off-by: Rodrigo Vivi <[email protected]> Reviewed-by: José Roberto de Souza <[email protected]> Acked-by: Borislav Petkov <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
2019-04-01x86/resctrl: Fix typos in the mba_sc mount optionXiaochen Shen1-3/+3
The user can control the MBA memory bandwidth in MBps (Mega Bytes per second) units of the MBA Software Controller (mba_sc) by using the "mba_MBps" mount option. For details, see Documentation/x86/resctrl_ui.txt. However, commit 23bf1b6be9c2 ("kernfs, sysfs, cgroup, intel_rdt: Support fs_context") changed the mount option name from "mba_MBps" to "mba_mpbs" by mistake. Change it back to "mba_MBps" because it is user-visible, and correct the "Opt_mba_mpbs" spelling to "Opt_mba_mbps". [ bp: massage commit message. ] Fixes: 23bf1b6be9c2 ("kernfs, sysfs, cgroup, intel_rdt: Support fs_context") Signed-off-by: Xiaochen Shen <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: [email protected] Cc: Fenghua Yu <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: Reinette Chatre <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-04-01drm/i915: Split Pineview device info into desktop and mobileTvrtko Ursulin1-1/+2
This allows the IS_PINEVIEW_<G|M> macros to be removed and avoid duplication of device ids already defined in i915_pciids.h. !IS_MOBILE check can be used in place of existing IS_PINEVIEW_G call sites. Signed-off-by: Tvrtko Ursulin <[email protected]> Suggested-by: Ville Syrjälä <[email protected]> Cc: Ville Syrjälä <[email protected]> Cc: Chris Wilson <[email protected]> Reviewed-by: Chris Wilson <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
2019-03-29x86/mce: Remove mce_report_event()Borislav Petkov1-18/+2
Calling this function has been wrong for a while now:

 * Can't call schedule_work() in #MC context.
 * mce_notify_irq() either.
 * None of that noodling is needed anymore - all it needs to do is kick the IRQ work which would self-IPI so that once the #MC handler is done, the work queue will run and process queued MCE records.

So remove it.

Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2019-03-29efi: Unify DMI setup code over the arm/arm64, ia64 and x86 architecturesRobert Richter1-4/+2
All architectures (arm/arm64, ia64 and x86) do the same here, so unify the code. Note: We do not need to call dump_stack_set_arch_desc() in case of !dmi_available. Both strings, dmi_ids_string and dump_stack_arch_desc_str, are zero-initialized and thus nothing would change. Signed-off-by: Robert Richter <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Jean Delvare <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2019-03-27x86/ima: require signed kernel modulesMimi Zohar1-1/+8
Have the IMA architecture specific policy require signed kernel modules on systems with secure boot mode enabled; and coordinate the different signature verification methods, so only one signature is required. Requiring appended kernel module signatures may be configured, enabled on the boot command line, or with this patch enabled in secure boot mode. This patch defines set_module_sig_enforced(). To coordinate between appended kernel module signatures and IMA signatures, only define an IMA MODULE_CHECK policy rule if CONFIG_MODULE_SIG is not enabled. A custom IMA policy may still define and require an IMA signature. Signed-off-by: Mimi Zohar <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Acked-by: Jessica Yu <[email protected]>
2019-03-27x86/mce: Handle varying MCA bank countsYazen Ghannam2-22/+14
Linux reads MCG_CAP[Count] to find the number of MCA banks visible to a CPU. Currently, this number is the same for all CPUs and a warning is shown if there is a difference. The number of banks is overwritten with the MCG_CAP[Count] value of each following CPU that boots. According to the Intel SDM and AMD APM, the MCG_CAP[Count] value gives the number of banks that are available to a "processor implementation". The AMD BKDGs/PPRs further clarify that this value is per core. This value has historically been the same for every core in the system, but that is not an architectural requirement. Future AMD systems may have different MCG_CAP[Count] values per core, so the assumption that all CPUs will have the same MCG_CAP[Count] value will no longer be valid. Also, the first CPU to boot will allocate the struct mce_banks[] array using the number of banks based on its MCG_CAP[Count] value. The machine check handler and other functions use the global number of banks to iterate and index into the mce_banks[] array. So it's possible to use an out-of-bounds index on an asymmetric system where a following CPU sees a MCG_CAP[Count] value greater than its predecessors. Thus, allocate the mce_banks[] array to the maximum number of banks. This will avoid the potential out-of-bounds index since the value of mca_cfg.banks is capped to MAX_NR_BANKS. Set the value of mca_cfg.banks equal to the max of the previous value and the value for the current CPU. This way mca_cfg.banks will always represent the max number of banks detected on any CPU in the system. This will ensure that all CPUs will access all the banks that are visible to them. A CPU that can access fewer than the max number of banks will find the registers of the extra banks to be read-as-zero. Furthermore, print the resulting number of MCA banks in use. Do this in mcheck_late_init() so that the final value is printed after all CPUs have been initialized. Finally, get bank count from target CPU when doing injection with mce-inject module. [ bp: Remove out-of-bounds example, passify and cleanup commit message. ] Signed-off-by: Yazen Ghannam <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: linux-edac <[email protected]> Cc: Pu Wen <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vishal Verma <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
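Roughly, the per-CPU initialization this describes boils down to capping and keeping a running maximum; a simplified sketch using the names mentioned in the text (mca_cfg.banks, MAX_NR_BANKS), with the surrounding function illustrative.

    #include <linux/kernel.h>
    #include <asm/mce.h>
    #include <asm/msr.h>

    /* Called on each CPU while it initializes machine-check support. */
    static void example_mce_banks_init(void)
    {
            u64 cap;
            u8 nr_banks;

            rdmsrl(MSR_IA32_MCG_CAP, cap);
            nr_banks = min_t(u8, cap & MCG_BANKCNT_MASK, MAX_NR_BANKS);

            /* Track the largest bank count seen on any CPU in the system. */
            mca_cfg.banks = max(mca_cfg.banks, nr_banks);
    }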
2019-03-27x86/mce: Fix machine_check_poll() tests for error typesTony Luck1-7/+37
There has been a lurking "TBD" in the machine check poll routine ever since it was first split out from the machine check handler. The potential issue is that the poll routine may have just begun a read from the STATUS register in a machine check bank when the hardware logs an error in that bank and signals a machine check. That race used to be pretty small back when machine checks were broadcast, but the addition of local machine check means that the poll code could continue running and clear the error from the bank before the local machine check handler on another CPU gets around to reading it. Fix the code to be sure to only process errors that need to be processed in the poll code, leaving other logged errors alone for the machine check handler to find and process. [ bp: Massage a bit and flip the "== 0" check to the usual !(..) test. ] Fixes: b79109c3bbcf ("x86, mce: separate correct machine check poller and fatal exception handler") Fixes: ed7290d0ee8f ("x86, mce: implement new status bits") Reported-by: Ashok Raj <[email protected]> Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Ashok Raj <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: linux-edac <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: x86-ml <[email protected]> Cc: Yazen Ghannam <[email protected]> Link: https://lkml.kernel.org/r/20190312170938.GA23035@agluck-desk
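Very roughly, the filtering the fix adds can be pictured as below; this is a deliberately simplified illustration of the idea (corrected errors are the poller's business, errors that signal #MC are not), not the actual decision tree in mce.c.

    #include <asm/mce.h>

    /* Decide whether the poller may log and clear this bank's status. */
    static bool example_poll_may_handle(u64 status)
    {
            if (!(status & MCI_STATUS_VAL))
                    return false;   /* nothing logged in this bank */

            /*
             * Uncorrected/signaled errors belong to the #MC exception handler;
             * clearing them here could race with a local machine check that is
             * about to read the same bank on another CPU.
             */
            if (status & MCI_STATUS_UC)
                    return false;

            return true;            /* corrected error: safe to log and clear here */
    }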
2019-03-24x86/resctrl: Remove unused variablePeng Hao1-3/+0
Variable "struct rdt_resource *r" is set but not used. So remove it. Signed-off-by: Peng Hao <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-03-23x86/CPU/hygon: Fix phys_proc_id calculation logic for multi-die processorsPu Wen1-0/+5
The Hygon family 18h multi-die processor platform supports 1, 2 or 4 dies per socket.

[ASCII system-view diagrams of the 1-die, 2-die and 4-die 2-socket topologies omitted; each socket holds the indicated dies (D0..D7), linked die-to-die within a socket and socket-to-socket across the platform.]

Currently phys_proc_id = initial_apicid >> bits calculates the physical processor ID from the initial_apicid by shifting *bits*. However, this does not work for 1-die and 2-die 2-socket systems.

According to document [1], section 2.1.11.1, *bits* is the value of CPUID_Fn80000008_ECX[12:15]. The possible values are 4, 5 or 6, which mean:

  4 - 1 die
  5 - 2 dies
  6 - 3/4 dies

Hygon programs the initial ApicId the same way as AMD. The ApicId is read from CPUID_Fn00000001_EBX (see section 2.1.11.1 of reference [1]) and the definition is as below (see section 2.1.10.2.1.3 of [1]):

  -------------------------------------------------
  Bit | 6         | 5 4     | 3      | 2 1 0          |
      |-----------|---------|--------|----------------|
  IDs | Socket ID | Node ID | CCX ID | Core/Thread ID |
  -------------------------------------------------

So for 3/4-die configurations, the bits variable is 6, which matches the ApicId field definition. For 1-die and 2-die configurations, bits is 4 or 5, so the right-shifted result is not exactly the socket ID; the socket ID should be obtained from ApicId[6].

To fix the problem and match the ApicId field definition, set the shift bits to 6 for all Hygon family 18h multi-die CPUs. Because AMD doesn't have 2-socket systems with 1-die/2-die processors (see reference [2]), this doesn't need to be changed on the AMD side but only for Hygon.

References:
[1] https://www.amd.com/system/files/TechDocs/54945_PPR_Family_17h_Models_00h-0Fh.pdf
[2] https://www.amd.com/en/products/specifications/processors

[bp: heavily massage commit message. ]

Signed-off-by: Pu Wen <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Lendacky <[email protected]>
Cc: Yazen Ghannam <[email protected]>
Cc: x86-ml <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
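A simplified sketch of the resulting calculation (not the literal diff); the cpuinfo fields are the ones named in the text, the wrapper function is invented for illustration.

    #include <asm/processor.h>

    /*
     * Simplified sketch: on Hygon family 18h multi-die parts the socket ID
     * always lives in ApicId[6], so use a shift of 6 regardless of what
     * CPUID_Fn80000008_ECX[12:15] reports.
     */
    static void example_fixup_phys_proc_id(struct cpuinfo_x86 *c, int bits)
    {
            if (c->x86 == 0x18)
                    bits = 6;

            c->phys_proc_id = c->initial_apicid >> bits;
    }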
2019-03-23x86/gart: Exclude GART aperture from kcoreKairui Song1-7/+13
On machines where the GART aperture is mapped over physical RAM, /proc/kcore contains the GART aperture range. Accessing the GART range via /proc/kcore results in a kernel crash. vmcore used to have the same issue, until it was fixed with commit 2a3e83c6f96c ("x86/gart: Exclude GART aperture from vmcore")', leveraging existing hook infrastructure in vmcore to let /proc/vmcore return zeroes when attempting to read the aperture region, and so it won't read from the actual memory. Apply the same workaround for kcore. First implement the same hook infrastructure for kcore, then reuse the hook functions introduced in the previous vmcore fix. Just with some minor adjustment, rename some functions for more general usage, and simplify the hook infrastructure a bit as there is no module usage yet. Suggested-by: Baoquan He <[email protected]> Signed-off-by: Kairui Song <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Jiri Bohac <[email protected]> Acked-by: Baoquan He <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Omar Sandoval <[email protected]> Cc: Dave Young <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-03-22x86/hw_breakpoints: Make default case in hw_breakpoint_arch_parse() return an errorNathan Chancellor1-0/+1
When building with -Wsometimes-uninitialized, Clang warns:

    arch/x86/kernel/hw_breakpoint.c:355:2: warning: variable 'align' is used uninitialized whenever switch default is taken [-Wsometimes-uninitialized]

The default cannot be reached because arch_build_bp_info() initializes hw->len to one of the specified cases. Nevertheless the warning is valid and returning -EINVAL makes sure that this cannot be broken by future modifications.

Suggested-by: Nick Desaulniers <[email protected]>
Signed-off-by: Nathan Chancellor <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Link: https://github.com/ClangBuiltLinux/linux/issues/392
Link: https://lkml.kernel.org/r/[email protected]
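The resulting shape of the switch; the case labels and align values are a plausible reconstruction for illustration, not copied from the file.

    #include <linux/errno.h>
    #include <asm/hw_breakpoint.h>

    static int example_parse_len(struct arch_hw_breakpoint *hw, unsigned int *alignp)
    {
            unsigned int align;

            switch (hw->len) {
            case X86_BREAKPOINT_LEN_1:
                    align = 0;
                    break;
            case X86_BREAKPOINT_LEN_2:
                    align = 1;
                    break;
            case X86_BREAKPOINT_LEN_4:
                    align = 3;
                    break;
            default:
                    return -EINVAL; /* unreachable today, but keeps 'align' defined */
            }

            *alignp = align;
            return 0;
    }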
2019-03-22x86/tsc: Add option to disable tsc clocksource watchdogJuri Lelli1-1/+4
Clocksource watchdog has been found responsible for generating latency spikes (in the 10-20 us range) when woken up to check for TSC stability. Add an option to disable it at boot. Signed-off-by: Juri Lelli <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2019-03-21x86/cpu/cyrix: Use correct macros for Cyrix calls on Geode processorsMatthew Whitehead1-7/+7
There are comments in processor-cyrix.h advising you to _not_ make calls using the deprecated macros in this style:

    setCx86_old(CX86_CCR4, getCx86_old(CX86_CCR4) | 0x80);

This is because it expands the macro into a non-functioning calling sequence. The calling order must be:

    outb(CX86_CCR2, 0x22);
    inb(0x23);

From the comments:

 * When using the old macros a line like
 *   setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
 * gets expanded to:
 *   do {
 *     outb((CX86_CCR2), 0x22);
 *     outb((({
 *       outb((CX86_CCR2), 0x22);
 *       inb(0x23);
 *     }) | 0x88), 0x23);
 *   } while (0);

The new macros fix this problem, so use them instead. Tested on an actual Geode processor.

Signed-off-by: Matthew Whitehead <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2019-03-21x86/microcode: Announce reload operation's completionBorislav Petkov1-0/+2
By popular demand, issue a single line to dmesg after the reload operation completes to let the user know that a reload has at least been attempted. Signed-off-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-03-21x86/hpet: Prevent potential NULL pointer dereferenceAditya Pakki1-0/+2
hpet_virt_address may be NULL when ioremap_nocache fail, but the code lacks a check. Add a check to prevent NULL pointer dereference. Signed-off-by: Aditya Pakki <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Kees Cook <[email protected]> Cc: Joe Perches <[email protected]> Cc: Nicolai Stange <[email protected]> Cc: Roland Dreier <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
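For illustration, the guard amounts to the following; the mapping size constant and the wrapper function are placeholders for the sketch, not the real HPET code.

    #include <asm/io.h>

    #define EXAMPLE_HPET_MMIO_SIZE 1024     /* placeholder size for the sketch */

    static bool example_map_hpet(unsigned long hpet_address)
    {
            void __iomem *hpet_virt_address;

            hpet_virt_address = ioremap_nocache(hpet_address, EXAMPLE_HPET_MMIO_SIZE);
            if (!hpet_virt_address)
                    return false;   /* ioremap failed; don't touch the NULL mapping */

            /* ... program the timer through hpet_virt_address ... */
            return true;
    }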
2019-03-19x86/mm: Don't leak kernel addressesMatteo Croce1-2/+2
Since commit: ad67b74d2469d9b8 ("printk: hash addresses printed with %p") at boot "____ptrval____" is printed instead of actual addresses: found SMP MP-table at [mem 0x000f5cc0-0x000f5ccf] mapped at [(____ptrval____)] Instead of changing the print to "%px", and leaking a kernel addresses, just remove the print completely, like in: 071929dbdd865f77 ("arm64: Stop printing the virtual memory layout"). Signed-off-by: Matteo Croce <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-03-15Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds1-5/+15
Pull KVM updates from Paolo Bonzini: "ARM: - some cleanups - direct physical timer assignment - cache sanitization for 32-bit guests s390: - interrupt cleanup - introduction of the Guest Information Block - preparation for processor subfunctions in cpu models PPC: - bug fixes and improvements, especially related to machine checks and protection keys x86: - many, many cleanups, including removing a bunch of MMU code for unnecessary optimizations - AVIC fixes Generic: - memcg accounting" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (147 commits) kvm: vmx: fix formatting of a comment KVM: doc: Document the life cycle of a VM and its resources MAINTAINERS: Add KVM selftests to existing KVM entry Revert "KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()" KVM: PPC: Book3S: Add count cache flush parameters to kvmppc_get_cpu_char() KVM: PPC: Fix compilation when KVM is not enabled KVM: Minor cleanups for kvm_main.c KVM: s390: add debug logging for cpu model subfunctions KVM: s390: implement subfunction processor calls arm64: KVM: Fix architecturally invalid reset value for FPEXC32_EL2 KVM: arm/arm64: Remove unused timer variable KVM: PPC: Book3S: Improve KVM reference counting KVM: PPC: Book3S HV: Fix build failure without IOMMU support Revert "KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()" x86: kvmguest: use TSC clocksource if invariant TSC is exposed KVM: Never start grow vCPU halt_poll_ns from value below halt_poll_ns_grow_start KVM: Expose the initial start value in grow_halt_poll_ns() as a module parameter KVM: grow_halt_poll_ns() should never shrink vCPU halt_poll_ns KVM: x86/mmu: Consolidate kvm_mmu_zap_all() and kvm_mmu_zap_mmio_sptes() KVM: x86/mmu: WARN if zapping a MMIO spte results in zapping children ...
2019-03-12Merge branch 'work.mount' of ↵Linus Torvalds2-69/+132
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull vfs mount infrastructure updates from Al Viro: "The rest of core infrastructure; no new syscalls in that pile, but the old parts are switched to new infrastructure. At that point conversions of individual filesystems can happen independently; some are done here (afs, cgroup, procfs, etc.), there's also a large series outside of that pile dealing with NFS (quite a bit of option-parsing stuff is getting used there - it's one of the most convoluted filesystems in terms of mount-related logics), but NFS bits are the next cycle fodder. It got seriously simplified since the last cycle; documentation is probably the weakest bit at the moment - I considered dropping the commit introducing Documentation/filesystems/mount_api.txt (cutting the size increase by quarter ;-), but decided that it would be better to fix it up after -rc1 instead. That pile allows to do followup work in independent branches, which should make life much easier for the next cycle. fs/super.c size increase is unpleasant; there's a followup series that allows to shrink it considerably, but I decided to leave that until the next cycle" * 'work.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (41 commits) afs: Use fs_context to pass parameters over automount afs: Add fs_context support vfs: Add some logging to the core users of the fs_context log vfs: Implement logging through fs_context vfs: Provide documentation for new mount API vfs: Remove kern_mount_data() hugetlbfs: Convert to fs_context cpuset: Use fs_context kernfs, sysfs, cgroup, intel_rdt: Support fs_context cgroup: store a reference to cgroup_ns into cgroup_fs_context cgroup1_get_tree(): separate "get cgroup_root to use" into a separate helper cgroup_do_mount(): massage calling conventions cgroup: stash cgroup_root reference into cgroup_fs_context cgroup2: switch to option-by-option parsing cgroup1: switch to option-by-option parsing cgroup: take options parsing into ->parse_monolithic() cgroup: fold cgroup1_mount() into cgroup1_get_tree() cgroup: start switching to fs_context ipc: Convert mqueue fs to fs_context proc: Add fs_context support to procfs ...
2019-03-12Merge branch 'akpm' (patches from Andrew)Linus Torvalds4-6/+17
Merge misc updates from Andrew Morton: - a few misc things - the rest of MM - remove flex_arrays, replace with new simple radix-tree implementation * emailed patches from Andrew Morton <[email protected]>: (38 commits) Drop flex_arrays sctp: convert to genradix proc: commit to genradix generic radix trees selinux: convert to kvmalloc md: convert to kvmalloc openvswitch: convert to kvmalloc of: fix kmemleak crash caused by imbalance in early memory reservation mm: memblock: update comments and kernel-doc memblock: split checks whether a region should be skipped to a helper function memblock: remove memblock_{set,clear}_region_flags memblock: drop memblock_alloc_*_nopanic() variants memblock: memblock_alloc_try_nid: don't panic treewide: add checks for the return value of memblock_alloc*() swiotlb: add checks for the return value of memblock_alloc*() init/main: add checks for the return value of memblock_alloc*() mm/percpu: add checks for the return value of memblock_alloc*() sparc: add checks for the return value of memblock_alloc*() ia64: add checks for the return value of memblock_alloc*() arch: don't memset(0) memory returned by memblock_alloc() ...
2019-03-12memblock: drop memblock_alloc_*_nopanic() variantsMike Rapoport1-5/+5
As all the memblock allocation functions return NULL in case of error rather than panic(), the duplicates with _nopanic suffix can be removed. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Greg Kroah-Hartman <[email protected]> Reviewed-by: Petr Mladek <[email protected]> [printk] Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-03-12treewide: add checks for the return value of memblock_alloc*()Mike Rapoport3-0/+11
Add check for the return value of memblock_alloc*() functions and call panic() in case of error. The panic message repeats the one used by panicking memblock allocators with adjustment of parameters to include only relevant ones. The replacement was mostly automated with semantic patches like the one below, with manual massaging of format strings:

    @@
    expression ptr, size, align;
    @@
    ptr = memblock_alloc(size, align);
    + if (!ptr)
    +         panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);

[[email protected]: use '%pa' with 'phys_addr_t' type]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: fix format strings for panics after memblock_alloc]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: don't panic if the allocation in sparse_buffer_init fails]
Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
[[email protected]: fix xtensa printk warning]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Anders Roxell <[email protected]>
Reviewed-by: Guo Ren <[email protected]> [c-sky]
Acked-by: Paul Burton <[email protected]> [MIPS]
Acked-by: Heiko Carstens <[email protected]> [s390]
Reviewed-by: Juergen Gross <[email protected]> [Xen]
Reviewed-by: Geert Uytterhoeven <[email protected]> [m68k]
Acked-by: Max Filippov <[email protected]> [xtensa]
Cc: Catalin Marinas <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-03-12memblock: drop __memblock_alloc_base()Mike Rapoport1-1/+1
The __memblock_alloc_base() function tries to allocate memory up to the limit specified by its max_addr parameter. Depending on the value of this parameter, the __memblock_alloc_base() call can be replaced with the appropriate memblock_phys_alloc*() variant. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Rob Herring <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-03-11Merge tag 'for-linus-5.1a-rc1-tag' of ↵Linus Torvalds1-0/+5
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull xen updates from Juergen Gross: "xen fixes and features: - remove fallback code for very old Xen hypervisors - three patches for fixing Xen dom0 boot regressions - an old patch for Xen PCI passthrough which was never applied for unknown reasons - some more minor fixes and cleanup patches" * tag 'for-linus-5.1a-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: xen: fix dom0 boot on huge systems xen, cpu_hotplug: Prevent an out of bounds access xen: remove pre-xen3 fallback handlers xen/ACPI: Switch to bitmap_zalloc() x86/xen: dont add memory above max allowed allocation x86: respect memory size limiting via mem= parameter xen/gntdev: Check and release imported dma-bufs on close xen/gntdev: Do not destroy context while dma-bufs are in use xen/pciback: Don't disable PCI_COMMAND on PCI device reset. xen-scsiback: mark expected switch fall-through xen: mark expected switch fall-through
2019-03-11Merge tag 'trace-v5.1' of ↵Linus Torvalds1-25/+17
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace Pull tracing updates from Steven Rostedt: "The biggest change for this release is in the histogram code: - Add "onchange(var)" histogram handler that executes a action when $var changes. - Add new "snapshot()" action for histogram handlers, that causes a snapshot of the ring buffer when triggered. ie. onchange(var).snapshot() will trigger a snapshot if var changes. - Add alternative for "trace()" action. Currently, to trigger a synthetic event, the name of that event is used as the handler name, which is inconsistent with the other actions. onchange(var).synthetic(param) where it can now be onchange(var).trace(synthetic, param). The older method will still be allowed, as long as the synthetic events do not overlap with other handler names. - The histogram documentation at testcases were updated for the new changes. Outside of the histogram code, we have: - Added a quicker way to enable set_ftrace_filter files, that will make it much quicker to bisect tracing a function that shouldn't be traced and crashes the kernel. (You can echo in numbers to set_ftrace_filter, and it will select the corresponding function that is in available_filter_functions). - Some better displaying of the tracing data (and more information was added). The rest are small fixes and more clean ups to the code" * tag 'trace-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (37 commits) tracing: Use strncpy instead of memcpy when copying comm in trace.c tracing: Use strncpy instead of memcpy when copying comm for hist triggers tracing: Use strncpy instead of memcpy for string keys in hist triggers tracing: Use str_has_prefix() in synth_event_create() x86/ftrace: Fix warning and considate ftrace_jmp_replace() and ftrace_call_replace() tracing/perf: Use strndup_user() instead of buggy open-coded version doc: trace: Fix documentation for uprobe_profile tracing: Fix spelling mistake: "analagous" -> "analogous" tracing: Comment why cond_snapshot is checked outside of max_lock protection tracing: Add hist trigger action 'expected fail' test case tracing: Add alternative synthetic event trace action test case tracing: Add hist trigger onchange() handler test case tracing: Add hist trigger snapshot() action test case tracing: Add SPDX license GPL-2.0 license identifier to inter-event testcases tracing: Add alternative synthetic event trace action syntax tracing: Add hist trigger onchange() handler Documentation tracing: Add hist trigger onchange() handler tracing: Add hist trigger snapshot() action Documentation tracing: Add hist trigger snapshot() action tracing: Add conditional snapshot ...
2019-03-10Merge branch 'next-integrity' of ↵Linus Torvalds1-3/+11
git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security Pull integrity updates from James Morris: "Mimi Zohar says: 'Linux 5.0 introduced the platform keyring to allow verifying the IMA kexec kernel image signature using the pre-boot keys. This pull request similarly makes keys on the platform keyring accessible for verifying the PE kernel image signature. Also included in this pull request is a new IMA hook that tags tmp files, in policy, indicating the file hash needs to be calculated. The remaining patches are cleanup'" * 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: evm: Use defined constant for UUID representation ima: define ima_post_create_tmpfile() hook and add missing call evm: remove set but not used variable 'xattr' encrypted-keys: fix Opt_err/Opt_error = -1 kexec, KEYS: Make use of platform keyring for signature verify integrity, KEYS: add a reference to platform keyring
2019-03-10Merge branch 'x86-urgent-for-linus' of ↵Linus Torvalds2-3/+39
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Thomas Gleixner: "A set of fixes for x86: - Make the unwinder more robust when it encounters a NULL pointer call, so the backtrace becomes more useful - Fix the bogus ORC unwind table alignment - Prevent kernel panic during kexec on HyperV caused by a cleared but not disabled hypercall page. - Remove the now pointless stacksize increase for KASAN_EXTRA, as KASAN_EXTRA is gone. - Remove unused variables from the x86 memory management code" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/hyperv: Fix kernel panic when kexec on HyperV x86/mm: Remove unused variable 'old_pte' x86/mm: Remove unused variable 'cpu' Revert "x86_64: Increase stack size for KASAN_EXTRA" x86/unwind: Add hardcoded ORC entry for NULL x86/unwind: Handle NULL pointer calls better in frame unwinder x86/unwind/orc: Fix ORC unwind table alignment
2019-03-10Merge tag 'iommu-updates-v5.1' of ↵Linus Torvalds1-0/+12
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu Pull IOMMU updates from Joerg Roedel: - A big cleanup and optimization patch-set for the Tegra GART driver - Documentation updates and fixes for the IOMMU-API - Support for page request in Intel VT-d scalable mode - Intel VT-d dma_[un]map_resource() support - Updates to the ATS enabling code for PCI (acked by Bjorn) and Intel VT-d to align with the latest version of the ATS spec - Relaxed IRQ source checking in the Intel VT-d driver for some aliased devices, needed for future devices which send IRQ messages from more than on request-ID - IRQ remapping driver for Hyper-V - Patches to make generic IOVA and IO-Page-Table code usable outside of the IOMMU code - Various other small fixes and cleanups * tag 'iommu-updates-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (60 commits) iommu/vt-d: Get domain ID before clear pasid entry iommu/vt-d: Fix NULL pointer reference in intel_svm_bind_mm() iommu/vt-d: Set context field after value initialized iommu/vt-d: Disable ATS support on untrusted devices iommu/mediatek: Fix semicolon code style issue MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS scope iommu/hyper-v: Add Hyper-V stub IOMMU driver x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available PCI/ATS: Add inline to pci_prg_resp_pasid_required() iommu/vt-d: Check identity map for hot-added devices iommu: Fix IOMMU debugfs fallout iommu: Document iommu_ops.is_attach_deferred() iommu: Document iommu_ops.iotlb_sync_map() iommu/vt-d: Enable ATS only if the device uses page aligned address. PCI/ATS: Add pci_ats_page_aligned() interface iommu/vt-d: Fix PRI/PASID dependency issue. PCI/ATS: Add pci_prg_resp_pasid_required() interface. iommu/vt-d: Allow interrupts from the entire bus for aliased devices iommu/vt-d: Add helper to set an IRTE to verify only the bus number iommu: Fix flush_tlb_all typo ...
2019-03-08Merge branch 'ras-core-for-linus' of ↵Linus Torvalds4-39/+68
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull RAS updates from Borislav Petkov: "This time around we have in store: - Disable MC4_MISC thresholding banks on all AMD family 0x15 models (Shirish S) - AMD MCE error descriptions update and error decode improvements (Yazen Ghannam) - The usual smaller conversions and fixes" * 'ras-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mce: Improve error message when kernel cannot recover, p2 EDAC/mce_amd: Decode MCA_STATUS in bit definition order EDAC/mce_amd: Decode MCA_STATUS[Scrub] bit EDAC, mce_amd: Print ExtErrorCode and description on a single line EDAC, mce_amd: Match error descriptions to latest documentation x86/MCE/AMD, EDAC/mce_amd: Add new error descriptions for some SMCA bank types x86/MCE/AMD, EDAC/mce_amd: Add new McaTypes for CS, PSP, and SMU units x86/MCE/AMD, EDAC/mce_amd: Add new MP5, NBIO, and PCIE SMCA bank types RAS: Add a MAINTAINERS entry RAS: Use consistent types for UUIDs x86/MCE/AMD: Carve out the MC4_MISC thresholding quirk x86/MCE/AMD: Turn off MC4_MISC thresholding on all family 0x15 models x86/MCE: Switch to use the new generic UUID API
2019-03-07Merge branch 'x86-kdump-for-linus' of ↵Linus Torvalds1-0/+3
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 kdump update from Ingo Molnar: "Add the AMD SME mask to the vmcoreinfo, and also document our vmcoreinfo fields" * 'x86-kdump-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: kdump: Document kernel data exported in the vmcoreinfo note x86/kdump: Export the SME mask to vmcoreinfo
2019-03-07Merge branch 'x86-fpu-for-linus' of ↵Linus Torvalds1-3/+2
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fpu updates from Ingo Molnar: "Three changes: - preparatory patch for AVX state tracking that computing-cluster folks would like to use for user-space batching - but we are not happy about the related ABI yet so this is only the kernel tracking side - a cleanup for CR0 handling in do_device_not_available() - plus we removed a workaround for an ancient binutils version" * 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/fpu: Track AVX-512 usage of tasks x86/fpu: Get rid of CONFIG_AS_FXSAVEQ x86/traps: Have read_cr0() only once in the #NM handler
2019-03-07Merge branch 'x86-cleanups-for-linus' of ↵Linus Torvalds12-33/+19
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cleanups from Ingo Molnar: "Various cleanups and simplifications, none of them really stands out, they are all over the place" * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/uaccess: Remove unused __addr_ok() macro x86/smpboot: Remove unused phys_id variable x86/mm/dump_pagetables: Remove the unused prev_pud variable x86/fpu: Move init_xstate_size() to __init section x86/cpu_entry_area: Move percpu_setup_debug_store() to __init section x86/mtrr: Remove unused variable x86/boot/compressed/64: Explain paging_prepare()'s return value x86/resctrl: Remove duplicate MSR_MISC_FEATURE_CONTROL definition x86/asm/suspend: Drop ENTRY from local data x86/hw_breakpoints, kprobes: Remove kprobes ifdeffery x86/boot: Save several bytes in decompressor x86/trap: Remove useless declaration x86/mm/tlb: Remove unused cpu variable x86/events: Mark expected switch-case fall-throughs x86/asm-prototypes: Remove duplicate include <asm/page.h> x86/kernel: Mark expected switch-case fall-throughs x86/insn-eval: Mark expected switch-case fall-through x86/platform/UV: Replace kmalloc() and memset() with k[cz]alloc() calls x86/e820: Replace kmalloc() + memcpy() with kmemdup()
2019-03-07Merge branch 'x86-build-for-linus' of ↵Linus Torvalds1-2/+2
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 build updates from Ingo Molnar: "Misc cleanups and a retpoline code generation optimization" * 'x86-build-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86, retpolines: Raise limit for generating indirect calls from switch-case x86/build: Use the single-argument OUTPUT_FORMAT() linker script command x86/build: Specify elf_i386 linker emulation explicitly for i386 objects x86/build: Mark per-CPU symbols as absolute explicitly for LLD
2019-03-07Merge branch 'x86-boot-for-linus' of ↵Linus Torvalds1-1/+3
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 boot updates from Ingo Molnar: "Most of the changes center around the difficult problem of KASLR pinning down hot-removable memory regions. At the very early stage KASRL is making irreversible kernel address layout decisions we don't have full knowledge about the memory maps yet. So the changes from Chao Fan add this (parsing the RSDP table early), together with fixes from Borislav Petkov" * 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/boot/compressed/64: Do not read legacy ROM on EFI system x86/boot: Correct RSDP parsing with 32-bit EFI x86/kexec: Fill in acpi_rsdp_addr from the first kernel x86/boot: Fix randconfig build error due to MEMORY_HOTREMOVE x86/boot: Fix cmdline_find_option() prototype visibility x86/boot/KASLR: Limit KASLR to extract the kernel in immovable memory only x86/boot: Parse SRAT table and count immovable memory regions x86/boot: Early parse RSDP and save it in boot_params x86/boot: Search for RSDP in memory x86/boot: Search for RSDP in the EFI tables x86/boot: Add "acpi_rsdp=" early parsing x86/boot: Copy kstrtoull() to boot/string.c x86/boot: Build the command line parsing code unconditionally
2019-03-06x86/unwind: Add hardcoded ORC entry for NULLJann Horn1-0/+17
When the ORC unwinder is invoked for an oops caused by IP==0, it currently has no idea what to do because there is no debug information for the stack frame of NULL. But if RIP is NULL, it is very likely that the last successfully executed instruction was an indirect CALL/JMP, and it is possible to unwind out in the same way as for the first instruction of a normal function. Hardcode a corresponding ORC entry.

With an artificially-added NULL call in prctl_set_seccomp(), before this patch, the trace is:

    Call Trace:
     ? __x64_sys_prctl+0x402/0x680
     ? __ia32_sys_prctl+0x6e0/0x6e0
     ? __do_page_fault+0x457/0x620
     ? do_syscall_64+0x6d/0x160
     ? entry_SYSCALL_64_after_hwframe+0x44/0xa9

After this patch, the trace looks like this:

    Call Trace:
     __x64_sys_prctl+0x402/0x680
     ? __ia32_sys_prctl+0x6e0/0x6e0
     ? __do_page_fault+0x457/0x620
     do_syscall_64+0x6d/0x160
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

prctl_set_seccomp() still doesn't show up in the trace because, for some reason, tail call optimization is only disabled in builds that use the frame pointer unwinder.

Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: syzbot <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
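A sketch of what such a hardcoded entry can look like, based on the ORC structure fields of that era; treat the exact initializer as illustrative rather than the committed one.

    #include <asm/orc_types.h>

    /*
     * For IP==0 the stack layout is the same as at the first instruction of a
     * normal function: only the return address pushed by the (indirect) CALL
     * sits on the stack, so the previous frame starts one word above SP.
     */
    static const struct orc_entry example_null_orc_entry = {
            .sp_offset = sizeof(long),
            .sp_reg    = ORC_REG_SP,
            .bp_reg    = ORC_REG_UNDEFINED,
            .type      = ORC_TYPE_CALL,
    };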
2019-03-06x86/unwind: Handle NULL pointer calls better in frame unwinderJann Horn1-3/+22
When the frame unwinder is invoked for an oops caused by a call to NULL, it currently skips the parent function because BP still points to the parent's stack frame; the (nonexistent) current function only has the first half of a stack frame, and BP doesn't point to it yet.

Add a special case for IP==0 that calculates a fake BP from SP, then uses the real BP for the next frame. Note that this handles first_frame specially: Return information about the parent function as long as the saved IP is >= first_frame, even if the fake BP points below it.

With an artificially-added NULL call in prctl_set_seccomp(), before this patch, the trace is:

    Call Trace:
     ? prctl_set_seccomp+0x3a/0x50
     __x64_sys_prctl+0x457/0x6f0
     ? __ia32_sys_prctl+0x750/0x750
     do_syscall_64+0x72/0x160
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

After this patch, the trace is:

    Call Trace:
     prctl_set_seccomp+0x3a/0x50
     __x64_sys_prctl+0x457/0x6f0
     ? __ia32_sys_prctl+0x750/0x750
     do_syscall_64+0x72/0x160
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: syzbot <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2019-03-06Documentation: Move L1TF to separate directoryThomas Gleixner1-1/+1
Move L1TF to a separate directory so the MDS stuff can be added at the side. Otherwise all the hardware vulnerabilities have their own top level entry. Should have done that right away. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Reviewed-by: Jon Masters <[email protected]>