path: root/arch/x86
Age | Commit message | Author | Files | Lines
2018-10-03x86-32, hibernate: Adjust in_suspend after resumed on 32bit systemZhimin Gu1-0/+3
Update the in_suspend variable to reflect the actual hibernation status, back-ported from the 64bit implementation. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32, hibernate: Set up temporary text mapping for 32bit systemZhimin Gu3-4/+34
Set up the temporary text mapping for the final jump address so that the system could jump to the right address after all the pages have been copied back to their original address - otherwise the final mapping for the jump address is invalid. Analogous changes were made for 64-bit in commit 65c0554b73c9 (x86/power/64: Fix kernel text mapping corruption during image restoration). Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32, hibernate: Switch to relocated restore code during resume on 32bit systemZhimin Gu3-2/+11
On 64bit systems, code is executed in a safe page during page restoring, as the page where the instructions are running during resume might be overwritten and cause issues. Although on 32bit we only resume with the same kernel that did the suspend, we'd like to remove that restriction in the future. Port the corresponding code from the 64bit system: allocate a safe page, copy the restore code to it, then jump to the safe page to run the code. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
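A minimal sketch of the approach described above, modeled on the existing 64bit implementation; the symbol names (get_safe_page(), core_restore_code, relocated_restore_code) follow the x86 hibernation code, but the body is illustrative rather than the exact patch (the real code makes the page executable by editing the page tables rather than via set_memory_x()):

    /* arch/x86/power/hibernate.c (sketch) */
    int relocate_restore_code(void)
    {
            /* grab a page that is guaranteed not to be overwritten during restore */
            relocated_restore_code = get_safe_page(GFP_ATOMIC);
            if (!relocated_restore_code)
                    return -ENOMEM;

            /* copy the low-level restore loop into the safe page ... */
            memcpy((void *)relocated_restore_code, core_restore_code, PAGE_SIZE);

            /* ... and make sure that page is executable before jumping to it */
            set_memory_x(relocated_restore_code, 1);
            return 0;
    }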
2018-10-03x86-32, hibernate: Switch to original page table after resumedZhimin Gu2-5/+9
After all the pages are restored to their previous addresses, the page table switches back to the current swapper_pg_dir. However, the swapper_pg_dir currently in use might not be consistent with the previous page table, which might cause issues after resume. Fix this by switching to the original page table after resume; the address of the original page table is saved in the hibernation image header. Move the manipulation of restore_cr3 into common code blocks. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32, hibernate: Use the page size macro instead of constant valueZhimin Gu1-1/+1
Replace the hard-coded value with the PAGE_SIZE macro for better maintainability. No functional change. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32, hibernate: Use temp_pgt as the temporary page tableZhimin Gu2-2/+3
This allows temp_pgt to be reused on both 32bit and 64bit systems. No intentional behavior change. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86, hibernate: Rename temp_level4_pgt to temp_pgtZhimin Gu4-4/+4
As 32bit systems do not use 4-level paging, rename it to temp_pgt so that it can be reused for both 32bit and 64bit hibernation. No functional change. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32, hibernate: Enable CONFIG_ARCH_HIBERNATION_HEADER on 32bit systemZhimin Gu2-2/+10
Enable CONFIG_ARCH_HIBERNATION_HEADER for 32bit systems so that: 1. arch_hibernation_header_save/restore() are invoked across hibernation on 32bit systems. 2. The checksum handling as well as the 'magic' number checking for 32bit systems are enabled. Both were previously guarded by CONFIG_X86_64 in hibernate.c. Signed-off-by: Zhimin Gu <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
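As a rough illustration of what the arch header carries, a sketch based on the existing 64bit restore_data_record; the exact 32bit layout in the actual patch may differ:

    struct restore_data_record {
            unsigned long jump_address;      /* virtual address of the final jump target */
            unsigned long cr3;               /* original page table to switch back to   */
            unsigned long magic;             /* RESTORE_MAGIC, checked on resume        */
            u8 e820_digest[MD5_DIGEST_SIZE]; /* checksum of the e820 map                */
    };

    int arch_hibernation_header_save(void *addr, unsigned int max_size)
    {
            struct restore_data_record *rdr = addr;

            if (max_size < sizeof(struct restore_data_record))
                    return -EOVERFLOW;
            rdr->magic = RESTORE_MAGIC;
            /* fill in jump_address, cr3 and the e820 digest here */
            return 0;
    }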
2018-10-03x86, hibernate: Extract the common code of 64/32 bit systemZhimin Gu5-236/+256
Reduce the hibernation code duplication between x86-32 and x86-64 by extracting the common code into hibernate.c. Currently pfn_is_nosave() is the only common function activated in hibernate.c. No functional change. Acked-by: Pavel Machek <[email protected]> Signed-off-by: Zhimin Gu <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03x86-32/asm/power: Create stack frames in hibernate_asm_32.SZhimin Gu2-0/+13
swsusp_arch_suspend() is a callable non-leaf function which doesn't honor CONFIG_FRAME_POINTER, which can result in bad stack traces. It is also not annotated as an ELF callable function, which can confuse tooling. Create a stack frame for it when CONFIG_FRAME_POINTER is enabled and give it a proper ELF function annotation. This patch also introduces the restore_registers() symbol and gives it an ELF function annotation, in preparation for the later register restore. Analogous changes were made for 64bit before in commit ef0f3ed5a4ac (x86/asm/power: Create stack frames in hibernate_asm_64.S) and commit 4ce827b4cc58 (x86/power/64: Fix hibernation return address corruption). Signed-off-by: Zhimin Gu <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2018-10-03PM / hibernate: Check the success of generating md5 digest before hibernationChen Yu1-6/+5
Currently, if get_e820_md5() fails, hibernation proceeds nevertheless. Instead, the error code should be propagated to the caller so that hibernation is aware of the result and terminates the process if generating the md5 digest fails. Suggested-by: Thomas Gleixner <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
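A minimal sketch of the error propagation described above (helper names follow the x86 hibernate code; the exact call chain is illustrative):

    static int hibernation_e820_save(void *buf)
    {
            /* propagate the md5 failure instead of silently continuing */
            return get_e820_md5(e820_table_firmware, buf);
    }

    /* in arch_hibernation_header_save(): */
    ret = hibernation_e820_save(rdr->e820_digest);
    if (ret)
            return ret;   /* abort hibernation when the digest cannot be generated */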
2018-10-03x86, hibernate: Fix nosave_regions setup for hibernationZhimin Gu1-1/+1
On 32bit systems, nosave_regions (non-RAM areas) located between max_low_pfn and max_pfn are currently not excluded from the hibernation snapshot, which may result in a machine check exception when trying to access these unsafe regions during hibernation:
[ 612.800453] Disabling lock debugging due to kernel taint
[ 612.805786] mce: [Hardware Error]: CPU 0: Machine Check Exception: 5 Bank 6: fe00000000801136
[ 612.814344] mce: [Hardware Error]: RIP !INEXACT! 60:<00000000d90be566> {swsusp_save+0x436/0x560}
[ 612.823167] mce: [Hardware Error]: TSC 1f5939fe276 ADDR dd000000 MISC 30e0000086
[ 612.830677] mce: [Hardware Error]: PROCESSOR 0:306c3 TIME 1529487426 SOCKET 0 APIC 0 microcode 24
[ 612.839581] mce: [Hardware Error]: Run the above through 'mcelog --ascii'
[ 612.846394] mce: [Hardware Error]: Machine check: Processor context corrupt
[ 612.853380] Kernel panic - not syncing: Fatal machine check
[ 612.858978] Kernel Offset: 0x18000000 from 0xc1000000 (relocation range: 0xc0000000-0xf7ffdfff)
This is because on 32bit systems, pages above max_low_pfn are regarded as high memory, and accessing unsafe pages might trigger a machine check exception. On the problematic 32bit system, there is reserved memory above low memory, which triggered the MCE. e820 memory mapping:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009d7ff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009d800-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000d160cfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000d160d000-0x00000000d1613fff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x00000000d1614000-0x00000000d1a44fff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000d1a45000-0x00000000d1ecffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000d1ed0000-0x00000000d7eeafff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000d7eeb000-0x00000000d7ffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000d8000000-0x00000000d875ffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000d8760000-0x00000000d87fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000d8800000-0x00000000d8fadfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000d8fae000-0x00000000d8ffffff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x00000000d9000000-0x00000000da71bfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000da71c000-0x00000000da7fffff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x00000000da800000-0x00000000dbb8bfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000dbb8c000-0x00000000dbffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000dd000000-0x00000000df1fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000041edfffff] usable
Fix this problem by changing the pfn limit from max_low_pfn to max_pfn. This fix does not impact 64bit systems because on 64bit max_low_pfn is the same as max_pfn. Signed-off-by: Zhimin Gu <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Chen Yu <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Cc: All applicable <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
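The fix itself is a one-line limit change; a hedged sketch of the call site (the registration helper is the existing e820__register_nosave_regions(), the placement in setup_arch() is how the current code is structured):

    /* arch/x86/kernel/setup.c, setup_arch() (sketch) */
    /* was: e820__register_nosave_regions(max_low_pfn); */
    e820__register_nosave_regions(max_pfn);

    /* every non-RAM e820 range below max_pfn is now marked nosave, so the
     * reserved areas above low memory are no longer touched by swsusp_save() */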
2018-10-03x86/cpu/amd: Remove unnecessary parenthesesNathan Chancellor1-1/+1
Clang warns when multiple pairs of parentheses are used for a single conditional statement. arch/x86/kernel/cpu/amd.c:925:14: warning: equality comparison with extraneous parentheses [-Wparentheses-equality] if ((c->x86 == 6)) { ~~~~~~~^~~~ arch/x86/kernel/cpu/amd.c:925:14: note: remove extraneous parentheses around the comparison to silence this warning if ((c->x86 == 6)) { ~ ^ ~ arch/x86/kernel/cpu/amd.c:925:14: note: use '=' to turn this equality comparison into an assignment if ((c->x86 == 6)) { ^~ = 1 warning generated. Signed-off-by: Nathan Chancellor <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: https://github.com/ClangBuiltLinux/linux/issues/187 Signed-off-by: Ingo Molnar <[email protected]>
2018-10-03x86/vdso: Only enable vDSO retpolines when enabled and supportedAndy Lutomirski1-2/+14
When I fixed the vDSO build to use inline retpolines, I messed up the Makefile logic and made it unconditional. It should have depended on CONFIG_RETPOLINE and on the availability of compiler support. This broke the build on some older compilers. Reported-by: [email protected] Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Matt Rickard <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 2e549b2ee0e3 ("x86/vdso: Fix vDSO build if a retpoline is emitted") Link: http://lkml.kernel.org/r/08a1f29f2c238dd1f493945e702a521f8a5aa3ae.1538540801.git.luto@kernel.org Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02x86/PCI: Apply VMD's AERSID fixup genericallyJon Derrick1-9/+3
A root port Device ID changed between simulation and production. Rather than match Device IDs which may not be future-proof if left unmaintained, match all root ports which exist in a VMD domain. Signed-off-by: Jon Derrick <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]>
2018-10-02x86/tsc: Fix UV TSC initializationMike Travis1-0/+4
The recent rework of the TSC calibration code introduced a regression on UV systems as it added a call to tsc_early_init() which initializes the TSC ADJUST values before acpi_boot_table_init(). In the case of UV systems, that is a necessary step that calls uv_system_init(). This informs tsc_sanitize_first_cpu() that the kernel runs on a platform with async TSC resets as documented in commit 341102c3ef29 ("x86/tsc: Add option that TSC on Socket 0 being non-zero is valid") Fix it by skipping the early tsc initialization on UV systems and let TSC init tests take place later in tsc_init(). Fixes: cf7a63ef4e02 ("x86/tsc: Calibrate tsc only once") Suggested-by: Hedi Berriche <[email protected]> Signed-off-by: Mike Travis <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Russ Anderson <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Dimitri Sivanich <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Pavel Tatashin <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Len Brown <[email protected]> Cc: Dou Liyang <[email protected]> Cc: Xiaoming Gao <[email protected]> Cc: Rajvi Jingar <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
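A minimal sketch of the skip described above (the guard matches the shape of the fix; the exact placement inside tsc_early_init() is illustrative):

    /* arch/x86/kernel/tsc.c (sketch) */
    void __init tsc_early_init(void)
    {
            if (!boot_cpu_has(X86_FEATURE_TSC))
                    return;
            /* Don't touch TSC ADJUST yet: uv_system_init() has not run on UV,
             * so defer all TSC sanity checks to tsc_init(). */
            if (is_early_uv_system())
                    return;
            /* ... continue with early calibration and TSC ADJUST setup ... */
    }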
2018-10-02x86/platform/uv: Provide is_early_uv_system()Mike Travis1-0/+6
Introduce is_early_uv_system() which uses efi.uv_systab to decide early in the boot process whether the kernel runs on a UV system. This is needed to skip other early setup/init code that might break the UV platform if done too early such as before necessary ACPI tables parsing takes place. Suggested-by: Hedi Berriche <[email protected]> Signed-off-by: Mike Travis <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Russ Anderson <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Dimitri Sivanich <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Pavel Tatashin <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Len Brown <[email protected]> Cc: Dou Liyang <[email protected]> Cc: Xiaoming Gao <[email protected]> Cc: Rajvi Jingar <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
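The helper is small; a sketch of its likely shape, keying off efi.uv_systab as described:

    /* arch/x86/include/asm/uv/uv.h (sketch) */
    static inline bool is_early_uv_system(void)
    {
            /* UV firmware publishes its system table via EFI; if the table is
             * absent or invalid, this is not a UV platform. */
            return efi.uv_systab && efi.uv_systab != EFI_INVALID_TABLE_ADDR;
    }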
2018-10-02x86/earlyprintk: Add a force option for pciserial deviceFeng Tang1-10/+19
The "pciserial" earlyprintk variant helps much on many modern x86 platforms, but unfortunately there are still some platforms with PCI UART devices which have the wrong PCI class code. In that case, the current class code check does not allow for them to be used for logging. Add a sub-option "force" which overrides the class code check and thus the use of such device can be enforced. [ bp: massage formulations. ] Suggested-by: Borislav Petkov <[email protected]> Signed-off-by: Feng Tang <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "Stuart R . Anderson" <[email protected]> Cc: Bjorn Helgaas <[email protected]> Cc: David Rientjes <[email protected]> Cc: Feng Tang <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: H Peter Anvin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kai-Heng Feng <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Thymo van Beers <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
2018-10-02perf/x86/intel: Add quirk for Goldmont PlusKan Liang1-0/+35
A ucode patch is needed for Goldmont Plus when the counter freezing feature is enabled. Otherwise, there will be issues, e.g. a PMI flood with some events. Add a quirk to check the microcode version. If the system starts with the wrong ucode, leave the counter-freezing feature permanently disabled. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02x86/cpu: Sanitize FAM6_ATOM namingPeter Zijlstra11-60/+63
Going primarily by: https://en.wikipedia.org/wiki/List_of_Intel_Atom_microprocessors with additional information gleaned from other related pages; notably:
- the Bonnell shrink was called Saltwell
- Moorefield is the Merrifield refresh, which makes it Airmont
The general naming scheme is: FAM6_ATOM_UARCH_SOCTYPE

for i in `git grep -l FAM6_ATOM` ; do
	sed -i -e 's/ATOM_PINEVIEW/ATOM_BONNELL/g' \
	       -e 's/ATOM_LINCROFT/ATOM_BONNELL_MID/' \
	       -e 's/ATOM_PENWELL/ATOM_SALTWELL_MID/g' \
	       -e 's/ATOM_CLOVERVIEW/ATOM_SALTWELL_TABLET/g' \
	       -e 's/ATOM_CEDARVIEW/ATOM_SALTWELL/g' \
	       -e 's/ATOM_SILVERMONT1/ATOM_SILVERMONT/g' \
	       -e 's/ATOM_SILVERMONT2/ATOM_SILVERMONT_X/g' \
	       -e 's/ATOM_MERRIFIELD/ATOM_SILVERMONT_MID/g' \
	       -e 's/ATOM_MOOREFIELD/ATOM_AIRMONT_MID/g' \
	       -e 's/ATOM_DENVERTON/ATOM_GOLDMONT_X/g' \
	       -e 's/ATOM_GEMINI_LAKE/ATOM_GOLDMONT_PLUS/g' ${i}
done

Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02perf/x86/intel: Add a separate Arch Perfmon v4 PMI handlerAndi Kleen3-1/+117
Implements counter freezing for Arch Perfmon v4 (Skylake and newer). This speeds up the PMI handler by avoiding unnecessary MSR writes and makes it more accurate. The Arch Perfmon v4 PMI handler is substantially different from the older PMI handler.
Differences to the old handler:
- It relies on counter freezing, which eliminates several MSR writes from the PMI handler and lowers the overhead significantly. It makes the PMI handler more accurate, as all counters get frozen atomically as soon as any counter overflows. So there is much less counting of the PMI handler itself. With the freezing we don't need to disable or enable counters or PEBS. Only BTS, which does not support auto-freezing, still needs to be explicitly managed.
- The PMU acking is done at the end, not the beginning. This makes it possible to avoid manual enabling/disabling of the PMU; instead we just rely on the freezing/acking.
- The APIC is acked before reenabling the PMU, which avoids problems with LBRs occasionally not getting unfrozen on Skylake.
- Looping is only needed to work around a corner case in which several PMIs arrive very close to each other. For common cases, the counters are frozen during the PMI handler, so no re-check is needed.
This patch:
- Adds code to enable v4 counter freezing.
- Forks the <=v3 and >=v4 PMI handlers into separate functions.
- Adds a kernel parameter to disable counter freezing. It took some time to debug counter freezing, so in case there are new problems we added an option to turn it off. We would not expect this to be used until there are new bugs.
- Only for big core. The patch for small core will be posted later separately.
Performance: When profiling a kernel build on Kabylake with different perf options, measuring the length of all NMI handlers using the nmi handler trace point (V3 is without counter freezing, V4 is with counter freezing; the value is the average cost of the PMI handler, lower is better):
perf options                    V3(ns)  V4(ns)  delta
-c 100000                         1088     894   -18%
-g -c 100000                      1862    1646   -12%
--call-graph lbr -c 100000        3649    3367    -8%
--c.g. dwarf -c 100000            2248    1982   -12%
Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02perf/x86/intel: Factor out common code of PMI handlerKan Liang1-49/+60
The Arch Perfmon v4 PMI handler is substantially different from the older PMI handler. Instead of adding more and more ifs, cleanly fork the new handler into a new function, with the main common code factored out into a common function. Fix a complaint from checkpatch.pl by removing "false" from "static bool warned". No functional change. Based-on-code-from: Andi Kleen <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02Merge branch 'x86/cache' into perf/core, to resolve conflictsIngo Molnar3-152/+239
Avoid conflict with upcoming perf/core patches, merge in the RDT perf work. Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02Merge branch 'perf/urgent' into perf/core, to pick up fixesIngo Molnar4-26/+25
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02perf/x86/amd/uncore: Set ThreadMask and SliceMask for L3 Cache perf eventsNatarajan, Janakarajan2-0/+18
In Family 17h, some L3 Cache Performance events require the ThreadMask and SliceMask to be set. For other events, these fields do not affect the count either way. Set ThreadMask and SliceMask to 0xFF and 0xF respectively. Signed-off-by: Janakarajan Natarajan <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H . Peter Anvin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Suravee <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Link: http://lkml.kernel.org/r/Message-ID: Signed-off-by: Ingo Molnar <[email protected]>
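A sketch of the event-init change described above; the mask macros mirror the fields named in the message (their exact definitions were added to the perf headers by this change), and "l3_mask" stands for "this is the Family 17h L3 uncore PMU":

    /* arch/x86/events/amd/uncore.c, amd_uncore_event_init() (sketch) */
    struct hw_perf_event *hwc = &event->hw;

    /*
     * SliceMask and ThreadMask need to be set for certain L3 events on
     * Family 17h. For other events these fields do not affect the count.
     */
    if (l3_mask)
            hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK);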
2018-10-02perf/x86/intel/uncore: Fix PCI BDF address of M3UPI on SKXKan Liang1-6/+6
The counters on M3UPI Link 0 and Link 3 don't count properly, and writing 0 to these counters may cause a system crash on some machines. The PCI BDF addresses of the M3UPI in the current code are incorrect. The correct addresses should be: D18:F1 0x204D, D18:F2 0x204E, D18:F5 0x204D. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Fixes: cd34cd97b7b4 ("perf/x86/intel/uncore: Add Skylake server uncore support") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02perf/x86/intel/uncore: Use boot_cpu_data.phys_proc_id instead of hardcoded physical package ID 0Masayoshi Mizuma1-1/+1
Physical package id 0 doesn't always exist; we should use boot_cpu_data.phys_proc_id here. Signed-off-by: Masayoshi Mizuma <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Masayoshi Mizuma <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-02x86/vdso: Fix asm constraints on vDSO syscall fallbacksAndy Lutomirski1-8/+10
The syscall fallbacks in the vDSO have incorrect asm constraints. They are not marked as writing to their outputs -- instead, they are marked as clobbering "memory", which is useless. In particular, gcc is smart enough to know that the timespec parameter hasn't escaped, so a memory clobber doesn't clobber it. And passing a pointer as an asm *input* does not tell gcc that the pointed-to value is changed. Add in the fact that the asm instructions weren't volatile, and gcc was free to omit them entirely unless their sole output (the return value) is used. Which it is (phew!), but that stops happening with some upcoming patches. As a trivial example, the following code: void test_fallback(struct timespec *ts) { vdso_fallback_gettime(CLOCK_MONOTONIC, ts); } compiles to: 00000000000000c0 <test_fallback>: c0: c3 retq To add insult to injury, the RCX and R11 clobbers on 64-bit builds were missing. The "memory" clobber is also unnecessary -- no ordering with respect to other memory operations is needed, but that's going to be fixed in a separate not-for-stable patch. Fixes: 2aae950b21e4 ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu") Signed-off-by: Andy Lutomirski <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/2c0231690551989d2fafa60ed0e7b5cc8b403908.1538422295.git.luto@kernel.org
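For reference, a sketch of what the corrected 64bit fallback looks like with proper output constraints and the RCX/R11 clobbers; the "memory" clobber is kept here, since its removal is the separate follow-up mentioned above:

    notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
    {
            long ret;

            /* "=m" (*ts) tells the compiler the pointed-to timespec is written,
             * and rcx/r11 are clobbered by the syscall instruction itself */
            asm ("syscall" : "=a" (ret), "=m" (*ts) :
                 "0" (__NR_clock_gettime), "D" (clock), "S" (ts) :
                 "memory", "rcx", "r11");
            return ret;
    }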
2018-10-01Merge tag 'v4.19-rc6' into for-4.20/blockJens Axboe50-290/+661
Merge -rc6 in, for two reasons: 1) Resolve a trivial conflict in the blk-mq-tag.c documentation 2) A few important regression fixes went into upstream directly, so they aren't in the 4.20 branch. Signed-off-by: Jens Axboe <[email protected]> * tag 'v4.19-rc6': (780 commits) Linux 4.19-rc6 MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c cpufreq: qcom-kryo: Fix section annotations perf/core: Add sanity check to deal with pinned event failure xen/blkfront: correct purging of persistent grants Revert "xen/blkfront: When purging persistent grants, keep them in the buffer" selftests/powerpc: Fix Makefiles for headers_install change blk-mq: I/O and timer unplugs are inverted in blktrace dax: Fix deadlock in dax_lock_mapping_entry() x86/boot: Fix kexec booting failure in the SEV bit detection code bcache: add separate workqueue for journal_write to avoid deadlock drm/amd/display: Fix Edid emulation for linux drm/amd/display: Fix Vega10 lightup on S3 resume drm/amdgpu: Fix vce work queue was not cancelled when suspend Revert "drm/panel: Add device_link from panel device to DRM device" xen/blkfront: When purging persistent grants, keep them in the buffer clocksource/drivers/timer-atmel-pit: Properly handle error cases block: fix deadline elevator drain for zoned block devices ACPI / hotplug / PCI: Don't scan for non-hotplug bridges if slot is not bridge drm/syncobj: Don't leak fences when WAIT_FOR_SUBMIT is set ... Signed-off-by: Jens Axboe <[email protected]>
2018-10-01KVM: x86: fix L1TF's MMIO GFN calculationSean Christopherson1-4/+20
One defense against L1TF in KVM is to always set the upper five bits of the *legal* physical address in the SPTEs for non-present and reserved SPTEs, e.g. MMIO SPTEs. In the MMIO case, the GFN of the MMIO SPTE may overlap with the upper five bits that are being usurped to defend against L1TF. To preserve the GFN, the bits of the GFN that overlap with the repurposed bits are shifted left into the reserved bits, i.e. the GFN in the SPTE will be split into high and low parts. When retrieving the GFN from the MMIO SPTE, e.g. to check for an MMIO access, get_mmio_spte_gfn() unshifts the affected bits and restores the original GFN for comparison. Unfortunately, get_mmio_spte_gfn() neglects to mask off the reserved bits in the SPTE that were used to store the upper chunk of the GFN. As a result, KVM fails to detect MMIO accesses whose GPA overlaps the repurposed bits, which in turn causes guest panics and hangs. Fix the bug by generating a mask that covers the lower chunk of the GFN, i.e. the bits that aren't shifted by the L1TF mitigation. The alternative approach would be to explicitly zero the five reserved bits that are used to store the upper chunk of the GFN, but that requires additional run-time computation and makes an already-ugly bit of code even more inscrutable. I considered adding a WARN_ON_ONCE(low_phys_bits-1 <= PAGE_SHIFT) to warn if GENMASK_ULL() generated a nonsensical value, but that seemed silly since that would mean a system that supports VMX has less than 18 bits of physical address space... Reported-by: Sakari Ailus <[email protected]> Fixes: d9b47449c1a1 ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs") Cc: Junaid Shahid <[email protected]> Cc: Jim Mattson <[email protected]> Cc: [email protected] Reviewed-by: Junaid Shahid <[email protected]> Tested-by: Sakari Ailus <[email protected]> Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
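A sketch of the fix as described: build a mask for the unshifted low chunk of the GFN with GENMASK_ULL() and apply it when recovering the GFN. Names follow kvm's mmu code, but treat this as illustrative:

    /* in kvm_mmu_reset_all_pte_masks() (sketch): mask covering the GFN bits
     * that are NOT shifted by the L1TF mitigation */
    shadow_nonpresent_or_rsvd_lower_gfn_mask =
            GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);

    static gfn_t get_mmio_spte_gfn(u64 spte)
    {
            u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;

            /* restore the high chunk that was shifted into the reserved bits */
            gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
                   & shadow_nonpresent_or_rsvd_mask;

            return gpa >> PAGE_SHIFT;
    }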
2018-10-01KVM: nVMX: Fix emulation of VM_ENTRY_LOAD_BNDCFGSLiran Alon1-2/+11
L2 IA32_BNDCFGS should be updated with vmcs12->guest_bndcfgs only when VM_ENTRY_LOAD_BNDCFGS is specified in vmcs12->vm_entry_controls. Otherwise, L2 IA32_BNDCFGS should be set to vmcs01->guest_bndcfgs which is L1 IA32_BNDCFGS. Reviewed-by: Nikita Leshchenko <[email protected]> Reviewed-by: Darren Kenny <[email protected]> Signed-off-by: Liran Alon <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
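A sketch of the intended logic in prepare_vmcs02(); field names follow the nested VMX code, but this is illustrative rather than the exact diff:

    if (kvm_mpx_supported()) {
            /* Load L2's BNDCFGS only when the entry control asks for it;
             * otherwise keep L1's value, saved from vmcs01. */
            if (vmx->nested.nested_run_pending &&
                (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
                    vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
            else
                    vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
    }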
2018-10-01KVM: x86: Do not use kvm_x86_ops->mpx_supported() directlyLiran Alon2-2/+2
Commit a87036add092 ("KVM: x86: disable MPX if host did not enable MPX XSAVE features") introduced kvm_mpx_supported() to return true iff MPX is enabled in the host. However, that commit seems to have missed replacing some calls to kvm_x86_ops->mpx_supported() with kvm_mpx_supported(). Complete the original commit by replacing the remaining kvm_x86_ops->mpx_supported() calls with kvm_mpx_supported(). Fixes: a87036add092 ("KVM: x86: disable MPX if host did not enable MPX XSAVE features") Suggested-by: Sean Christopherson <[email protected]> Signed-off-by: Liran Alon <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-01KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabledLiran Alon1-6/+20
Before this commit, KVM exposes MPX VMX controls to the L1 guest based only on whether KVM and the host processor support MPX virtualization. However, these controls should be exposed to the guest only if the guest vCPU supports MPX. Without this change, an L1 guest running a kernel which doesn't have commit 691bd4340bef ("kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS") asserts in QEMU on the following: qemu-kvm: error: failed to set MSR 0xd90 to 0x0 qemu-kvm: .../qemu-2.10.0/target/i386/kvm.c:1801 kvm_put_msrs: Assertion 'ret == cpu->kvm_msr_buf->nmsrs' failed This is because L1 KVM's kvm_init_msr_list() will see that vmx_mpx_supported() returns true (as it only checks MPX VMX controls support) and therefore the KVM_GET_MSR_INDEX_LIST IOCTL will include MSR_IA32_BNDCFGS. However, later when L1 attempts to set this MSR via the KVM_SET_MSRS IOCTL, it will fail because !guest_cpuid_has_mpx(vcpu). Therefore, fix the issue by exposing MPX VMX controls to the L1 guest only when the vCPU supports MPX. Fixes: 36be0b9deb23 ("KVM: x86: Add nested virtualization support for MPX") Reported-by: Eyal Moscovici <[email protected]> Reviewed-by: Nikita Leshchenko <[email protected]> Reviewed-by: Darren Kenny <[email protected]> Signed-off-by: Liran Alon <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-01x86/build: Remove unused CONFIG_AS_CRC32Masahiro Yamada1-1/+0
CONFIG_AS_CRC32 is not used anywhere. Its last user was removed by 0cb6c969ed9d ("net, lib: kill arch_fast_hash library bits") Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-10-01x86/asm: Use CC_SET()/CC_OUT() in __cmpxchg_double()Uros Bizjak1-4/+6
Replace open-coded use of the SETcc instruction with CC_SET()/CC_OUT() in __cmpxchg_double(). Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Link: https://lkml.kernel.org/r/CAFULd4YdvwwhXWHqqPsGk5+TLG71ozgSscTZNsqmrm+Jzg941w@mail.gmail.com
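CC_SET()/CC_OUT() let newer compilers use flag-output constraints instead of an explicit SETcc instruction; a generic illustration of the pattern (not the exact __cmpxchg_double() body):

    #include <asm/asm.h>    /* CC_SET() / CC_OUT() */

    bool ok;

    /* On compilers with flag outputs CC_OUT(z) becomes "=@ccz"; on older
     * compilers CC_SET(z) emits a SETcc into a named output operand. */
    asm volatile("cmpxchg %[new], %[ptr]"
                 CC_SET(z)
                 : CC_OUT(z) (ok), [ptr] "+m" (*ptr), "+a" (old)
                 : [new] "r" (new)
                 : "memory");
    /* ok == true if *ptr matched old and was replaced by new */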
2018-09-29Merge branch 'x86-urgent-for-linus' of ↵Greg Kroah-Hartman1-19/+0
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Thomas writes: "A single fix for the AMD memory encryption boot code so it does not read random garbage instead of the cached encryption bit when a kexec kernel is allocated above the 32bit address limit." * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/boot: Fix kexec booting failure in the SEV bit detection code
2018-09-28x86/intel_rdt: Use perf infrastructure for measurementsReinette Chatre1-115/+189
The success of a cache pseudo-locked region is measured using performance monitoring events that are programmed directly at the time the user requests a measurement. Modifying the performance event registers directly is not appropriate since it circumvents the in-kernel perf infrastructure that exists to manage these resources and provide resource arbitration to the performance monitoring hardware. The cache pseudo-locking measurements are modified to use the in-kernel perf infrastructure. Performance events are created and validated with the appropriate perf API. The performance counters are still read as directly as possible to avoid the additional cache hits. This is done safely by first ensuring with the perf API that the counters have been programmed correctly and only accessing the counters in an interrupt disabled section where they are not able to be moved. As part of the transition to the in-kernel perf infrastructure the L2 and L3 measurements are split into two separate measurements that can be triggered independently. This separation prevents additional cache misses incurred during the extra testing code used to decide if a L2 and/or L3 measurement should be made. Signed-off-by: Reinette Chatre <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/fc24e728b446404f42c78573c506e98cd0599873.1537468643.git.reinette.chatre@intel.com
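A hedged sketch of the pattern the message describes: create and validate the event through the perf API, then read it with interrupts disabled so it cannot be moved mid-measurement. The perf_miss_attr structure refers to the attribute definitions added by the companion commit in this series:

    struct perf_event *miss_event;
    u64 miss = 0;

    /* create and validate the counter through the perf API */
    miss_event = perf_event_create_kernel_counter(&perf_miss_attr, cpu,
                                                  NULL, NULL, NULL);
    if (IS_ERR(miss_event))
            return PTR_ERR(miss_event);

    local_irq_disable();
    /* the event cannot be moved while interrupts are off, so reading it
     * "locally" is safe and avoids additional cache misses */
    perf_event_read_local(miss_event, &miss, NULL, NULL);
    local_irq_enable();

    perf_event_release_kernel(miss_event);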
2018-09-28x86/intel_rdt: Create required perf event attributesReinette Chatre1-0/+26
A perf event has many attributes that are maintained in a separate structure that should be provided when a new perf_event is created. In preparation for the transition to perf_events, the required attribute structures are created for all the events that may be used in the measurements. Most attributes for all the events are identical. The actual configuration, which specifies what needs to be measured, is what will differ between the events used. This configuration needs to be done with X86_CONFIG, which cannot be used as part of the designated initializers used here; it will be introduced later. Although they do look identical at this time, the attribute structures need to be maintained separately since a perf_event will maintain a pointer to its unique attributes. In support of patch testing the new structs are given the unused attribute until their use in later patches. Signed-off-by: Reinette Chatre <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/1822f6164e221a497648d108913d056ab675d5d0.1537377064.git.reinette.chatre@intel.com
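A sketch of one such attribute structure; only the common, identical fields described above are shown, and the actual cache hit/miss configuration is filled in later with X86_CONFIG():

    static struct perf_event_attr perf_miss_attr = {
            .type           = PERF_TYPE_RAW,
            .size           = sizeof(struct perf_event_attr),
            .pinned         = 1,    /* fail rather than be multiplexed */
            .disabled       = 0,
            .exclude_user   = 1,    /* measure the kernel-side loop only */
            /* .config is set later via X86_CONFIG(...) */
    };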
2018-09-28x86/intel_rdt: Remove local register variablesReinette Chatre1-44/+9
Local register variables were used in an effort to improve the accuracy of the measurement of cache residency of a pseudo-locked region. This was done to ensure that only the cache residency of the memory is measured and not the cache residency of the variables used to perform the measurement. While local register variables do accomplish the goal they do require significant care since different architectures have different registers available. Local register variables also cannot be used with valuable developer tools like KASAN. Significant testing has shown that similar accuracy in measurement results can be obtained by replacing local register variables with regular local variables. Make use of local variables in the critical code but do so with READ_ONCE() to prevent the compiler from merging or refetching reads. Ensure these variables are initialized before the measurement starts, and ensure it is only the local variables that are accessed during the measurement. With the removal of the local register variables and using READ_ONCE() there is no longer a motivation for using a direct wrmsr call (that avoids the additional tracing code that may clobber the local register variables). Signed-off-by: Reinette Chatre <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/f430f57347414e0691765d92b144758ab93d8407.1537377064.git.reinette.chatre@intel.com
2018-09-28perf/x86: Add helper to obtain performance counter indexReinette Chatre2-0/+22
perf_event_read_local() is the safest way to obtain measurements associated with performance events. In some cases the overhead introduced by perf_event_read_local() affects the measurements and the use of rdpmcl() is needed. rdpmcl() requires the index of the performance counter used so a helper is introduced to determine the index used by a provided performance event. The index used by a performance event may change when interrupts are enabled. A check is added to ensure that the index is only accessed with interrupts disabled. Even with this check the use of this counter needs to be done with care to ensure it is queried and used within the same disabled interrupts section. This change introduces a new checkpatch warning: CHECK: extern prototypes should be avoided in .h files +extern int x86_perf_rdpmc_index(struct perf_event *event); This warning was discussed and designated as a false positive in http://lkml.kernel.org/r/[email protected] Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Reinette Chatre <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/b277ffa78a51254f5414f7b1bc1923826874566e.1537377064.git.reinette.chatre@intel.com
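The helper is essentially an accessor guarded against use with interrupts enabled; a sketch of its likely shape:

    /* arch/x86/events/core.c (sketch) */
    int x86_perf_rdpmc_index(struct perf_event *event)
    {
            /* the counter index is only stable while interrupts are disabled */
            lockdep_assert_irqs_disabled();

            return event->hw.event_base_rdpmc;
    }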
2018-09-28x86: DT: use for_each_of_cpu_node iteratorRob Herring1-1/+1
Use the for_each_of_cpu_node iterator to iterate over cpu nodes. This has the side effect of defaulting to iterating using "cpu" node names in preference to the deprecated (for FDT) device_type == "cpu". Cc: Ingo Molnar <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Reviewed-by: Thomas Gleixner <[email protected]> Signed-off-by: Rob Herring <[email protected]>
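A sketch of the iterator change in the x86 devicetree code; the APIC-id property read is illustrative of what the original per-"cpu"-node loop did:

    #include <linux/of.h>

    struct device_node *dn;
    u32 apic_id;

    for_each_of_cpu_node(dn) {
            if (of_property_read_u32(dn, "reg", &apic_id))
                    continue;
            /* register this CPU's APIC id, as the previous
             * device_type == "cpu" based loop did */
    }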
2018-09-28x86/fpu: Remove VLA usage of skcipherKees Cook1-14/+16
In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: [email protected] Signed-off-by: Kees Cook <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
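A sketch of the replacement pattern (generic sync-skcipher usage, not the exact fpu.c diff); crypto_sync_skcipher has a bounded request size, so the on-stack request is no longer a VLA. The buffer variables (src_sg, dst_sg, nbytes, iv) are placeholders:

    struct crypto_sync_skcipher *tfm;
    int err;

    tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
    if (IS_ERR(tfm))
            return PTR_ERR(tfm);

    {
            SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);   /* fixed size, no VLA */

            skcipher_request_set_sync_tfm(req, tfm);
            skcipher_request_set_callback(req, 0, NULL, NULL);
            skcipher_request_set_crypt(req, src_sg, dst_sg, nbytes, iv);
            err = crypto_skcipher_encrypt(req);
            skcipher_request_zero(req);
    }

    crypto_free_sync_skcipher(tfm);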
2018-09-27x86/hyperv: Remove unused includeYueHaibing1-1/+0
Remove including <linux/version.h>. It's not needed. Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Michael Kelley <[email protected]> Cc: "K. Y. Srinivasan" <[email protected]> Cc: Haiyang Zhang <[email protected]> Cc: Stephen Hemminger <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-09-27x86/hyperv: Suppress "PCI: Fatal: No config space access function found"Dexuan Cui1-0/+19
A Generation-2 Linux VM on Hyper-V doesn't have the legacy PCI bus, and users always see the scary warning, which is actually harmless. Suppress it. Signed-off-by: Dexuan Cui <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Michael Kelley <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: KY Srinivasan <[email protected]> Cc: Haiyang Zhang <[email protected]> Cc: Stephen Hemminger <[email protected]> Cc: "[email protected]" <[email protected]> Cc: Olaf Aepfle <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Jason Wang <[email protected]> Cc: Vitaly Kuznetsov <[email protected]> Cc: Marcelo Cerri <[email protected]> Cc: Josh Poulson <[email protected]> Link: https://lkml.kernel.org/r/ <KU1P153MB0166D977DC930996C4BF538ABF1D0@KU1P153MB0166.APCP153.PROD.OUTLOOK.COM
2018-09-27x86/mm/cpa: Optimize __cpa_flush_range()Peter Zijlstra1-1/+1
If we IPI for WBINVD, then we might as well kill the entire TLB too. But if we don't have to invalidate the cache, there is no reason not to use a range TLB flush. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-09-27x86/mm/cpa: Factor common code between cpa_flush_*()Peter Zijlstra1-16/+13
The start of cpa_flush_range() and cpa_flush_array() is the same, use a common function. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-09-27x86/mm/cpa: Move CLFLUSH test into cpa_flush_array()Peter Zijlstra1-11/+16
Rather than guarding cpa_flush_array() users with a CLFLUSH test, put it inside. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-09-27x86/mm/cpa: Move CLFLUSH test into cpa_flush_range()Peter Zijlstra1-8/+7
Rather than guarding all cpa_flush_range() uses with a CLFLUSH test, put it inside. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2018-09-27x86/mm/cpa: Use flush_tlb_kernel_range()Peter Zijlstra1-4/+5
Both cpa_flush_range() and cpa_flush_array() have a well specified range, use that to do a range based TLB invalidate. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
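A hedged sketch of the shared flush path after these changes; the helper name __cpa_flush_range_check() is hypothetical, standing in for the common code factored out in the commits above:

    static void cpa_flush_range(unsigned long start, int numpages, int cache)
    {
            if (__cpa_flush_range_check(cache))     /* hypothetical common helper */
                    return;                         /* fell back to a full flush  */

            /* a well-specified range: use a range-based TLB invalidate
             * instead of flushing everything */
            flush_tlb_kernel_range(start, start + PAGE_SIZE * numpages);

            if (!cache)
                    return;

            /* ... CLFLUSH the affected lines as before ... */
    }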
2018-09-27x86/mm/cpa: Unconditionally avoid WBINVD when we canPeter Zijlstra1-16/+2
CAT has happened; WBINVD is bad (even before CAT, blowing away the entire cache on a multi-core platform wasn't nice), so try not to use it ever. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Cc: Bin Yang <[email protected]> Cc: Mark Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]