path: root/arch/x86
2020-08-22x86/msr: Make source of unrecognised MSR writes unambiguousChris Down1-2/+2
In many cases, task_struct.comm isn't enough to distinguish the offender, since for interpreted languages it's likely just going to be "python3" or whatever. Add the pid to make it unambiguous. [ bp: Make the printk string a single line for easier grepping. ] Signed-off-by: Chris Down <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/6f6fbd0ee6c99bc5e47910db700a6642159db01b.1598011595.git.chris@chrisdown.name
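A rough sketch of the resulting message format (the exact string and the 'reg' variable are illustrative, not the patch's literal code):

    /* identify the writer by both comm and pid, e.g. "python3 (pid: 1234)" */
    pr_err("msr: Write to unrecognized MSR 0x%x by %s (pid: %d)\n",
           reg, current->comm, current->pid);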
2020-08-22x86/msr: Prevent userspace MSR access from dominating the consoleChris Down1-3/+15
Applications which manipulate MSRs from userspace often do so infrequently, and all at once. As such, the default printk ratelimit architecture supplied by pr_err_ratelimited() doesn't do enough to prevent kmsg becoming completely overwhelmed with their messages and pushing other salient information out of the circular buffer. In one case, I saw over 80% of kmsg being filled with these messages, and the default kmsg buffer being completely filled less than 5 minutes after boot(!). Make things much less aggressive, while still achieving the original goal of filter_write(). Operators will still get warnings that MSRs are being manipulated from userspace, but they won't have other potentially useful messages pushed out of the kmsg buffer. Of course, one can boot with `allow_writes=1` to avoid these messages altogether, but that then has the downside that one doesn't get _any_ notification at all about these problems in the first place, and so is much less likely to fix it. One might rather it was less binary: it is still logged, just less often, so that application developers _do_ have the incentive to improve their current methods, without the kernel having to push other useful stuff out of the kmsg buffer. This one example isn't the point, of course: I'm sure there are plenty of other non-ideal-but-pragmatic cases where people are writing to MSRs from userspace right now, and it will take time for those people to find other solutions. Overall, keep the intent of the original patch, while mitigating its sometimes heavy effects on kmsg composition. [ bp: Massage a bit. ] Signed-off-by: Chris Down <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/563994ef132ce6cffd28fc659254ca37d032b5ef.1598011595.git.chris@chrisdown.name
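One way to express the less aggressive limit is a dedicated ratelimit state instead of pr_err_ratelimited(); the interval and burst below are illustrative, not the values the patch actually chose:

    #include <linux/ratelimit.h>

    /* illustrative: at most 10 messages every 10 minutes */
    static DEFINE_RATELIMIT_STATE(msr_write_rs, 600 * HZ, 10);

    if (__ratelimit(&msr_write_rs))
        pr_warn("msr: Write to unrecognized MSR 0x%x by %s (pid: %d)\n",
                reg, current->comm, current->pid);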
2020-08-21KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()Will Deacon2-2/+4
The 'flags' field of 'struct mmu_notifier_range' is used to indicate whether invalidate_range_{start,end}() are permitted to block. In the case of kvm_mmu_notifier_invalidate_range_start(), this field is not forwarded on to the architecture-specific implementation of kvm_unmap_hva_range() and therefore the backend cannot sensibly decide whether or not to block. Add an extra 'flags' parameter to kvm_unmap_hva_range() so that architectures are aware as to whether or not they are permitted to block. Cc: <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Suzuki K Poulose <[email protected]> Cc: James Morse <[email protected]> Signed-off-by: Will Deacon <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
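A sketch of the resulting interface (parameter names assumed); a backend can then test the blockable flag before deciding whether it may sleep:

    /* architecture hook now receives the mmu_notifier range flags */
    int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start,
                            unsigned long end, unsigned flags);

    /* inside an implementation: */
    bool blockable = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;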
2020-08-21Merge tag 'for-linus-5.9-rc2-tag' of ↵Linus Torvalds1-0/+1
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull xen fixes from Juergen Gross: "One build fix and a minor fix for suppressing a useless warning when booting a Xen dom0 via UEFI" * tag 'for-linus-5.9-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: Fix build error when CONFIG_ACPI is not set/enabled: efi: avoid error message when booting under Xen
2020-08-21x86/entry/64: Do not use RDPID in paranoid entry to accommodate KVMSean Christopherson1-4/+6
KVM has an optimization to avoid expensive MSR reads/writes on VMENTER/EXIT. It caches the MSR values and restores them either when leaving the run loop, on preemption or when going out to user space. The affected MSRs are not required for kernel context operations. This changed with the recently introduced mechanism to handle FSGSBASE in the paranoid entry code which has to retrieve the kernel GSBASE value by accessing per CPU memory. The mechanism needs to retrieve the CPU number and uses either LSL or RDPID if the processor supports it. Unfortunately RDPID uses MSR_TSC_AUX which is in the list of cached and lazily restored MSRs, which means between the point where the guest value is written and the point of restore, MSR_TSC_AUX contains a random number. If an NMI or any other exception which uses the paranoid entry path happens in such a context, then RDPID returns the random guest MSR_TSC_AUX value. As a consequence this reads from the wrong memory location to retrieve the kernel GSBASE value. Kernel GS is used for all regular this_cpu_*() operations. If the GSBASE in the exception handler points to the per CPU memory of a different CPU then this has the obvious consequences of data corruption and crashes. As the paranoid entry path is the only place which accesses MSR_TSC_AUX (via RDPID) and the fallback via LSL is not significantly slower, remove the RDPID alternative from the entry path and always use LSL. The alternative would be to write MSR_TSC_AUX on every VMENTER and VMEXIT which would be inflicting massive overhead on that code path. [ tglx: Rewrote changelog ] Fixes: eaad981291ee3 ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro") Reported-by: Tom Lendacky <[email protected]> Debugged-by: Tom Lendacky <[email protected]> Suggested-by: Andy Lutomirski <[email protected]> Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
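For illustration, the LSL fallback reads the CPU number from the limit of the per-CPU GDT segment, roughly as the vDSO does; this is a sketch modelled on that path, not the actual entry-code macro:

    unsigned int p, cpu;

    /* LSL pulls the CPU number out of the segment limit and, unlike RDPID,
     * does not depend on MSR_TSC_AUX holding a sane value. */
    asm("lsl %1, %0" : "=r" (p) : "r" ((unsigned int)__CPUNODE_SEG));
    cpu = p & VDSO_CPUNODE_MASK;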
2020-08-21arm64/x86: KVM: Introduce steal-time capAndrew Jones1-0/+3
arm64 requires a vcpu fd (KVM_HAS_DEVICE_ATTR vcpu ioctl) to probe support for steal-time. However this is unnecessary, as only a KVM fd is required, and it complicates userspace (userspace may prefer delaying vcpu creation until after feature probing). Introduce a cap that can be checked instead. While x86 can already probe steal-time support with a kvm fd (KVM_GET_SUPPORTED_CPUID), we add the cap there too for consistency. Signed-off-by: Andrew Jones <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Steven Price <[email protected]> Link: https://lore.kernel.org/r/[email protected]
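A hypothetical userspace probe using the new capability on the KVM fd (the helper name is made up; it assumes the KVM_CAP_STEAL_TIME constant from this series is present in the installed headers):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int have_steal_time(int kvm_fd)
    {
            /* no vcpu needed: the system ioctl answers the question */
            return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_STEAL_TIME) > 0;
    }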
2020-08-21crypto: x86/crc32c-intel - Use CRC32 mnemonicUros Bizjak1-12/+6
Current minimum required version of binutils is 2.23, which supports CRC32 instruction mnemonic. Replace the byte-wise specification of CRC32 with this proper mnemonic. The compiler is now able to pass memory operand to the instruction, so there is no need for a temporary register anymore. Some examples of the improvement: 12a: 48 8b 08 mov (%rax),%rcx 12d: f2 48 0f 38 f1 f1 crc32q %rcx,%rsi 133: 48 83 c0 08 add $0x8,%rax 137: 48 39 d0 cmp %rdx,%rax 13a: 75 ee jne 12a <crc32c_intel_update+0x1a> to: 125: f2 48 0f 38 f1 06 crc32q (%rsi),%rax 12b: 48 83 c6 08 add $0x8,%rsi 12f: 48 39 d6 cmp %rdx,%rsi 132: 75 f1 jne 125 <crc32c_intel_update+0x15> and: 146: 0f b6 08 movzbl (%rax),%ecx 149: f2 0f 38 f0 f1 crc32b %cl,%esi 14e: 48 83 c0 01 add $0x1,%rax 152: 48 39 d0 cmp %rdx,%rax 155: 75 ef jne 146 <crc32c_intel_update+0x36> to: 13b: f2 0f 38 f0 02 crc32b (%rdx),%eax 140: 48 83 c2 01 add $0x1,%rdx 144: 48 39 ca cmp %rcx,%rdx 147: 75 f2 jne 13b <crc32c_intel_update+0x2b> As the compiler has some more freedom w.r.t. register allocation, there is also a couple of reg-reg moves removed. There are no hidden states for CRC32 insn, so there is no need to mark assembly as volatile. v2: Introduce CRC32_INST define. Signed-off-by: Uros Bizjak <[email protected]> CC: Herbert Xu <[email protected]> CC: "David S. Miller" <[email protected]> CC: Thomas Gleixner <[email protected]> CC: Ingo Molnar <[email protected]> CC: Borislav Petkov <[email protected]> CC: "H. Peter Anvin" <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
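The shape of the change, as a hedged sketch (function name is illustrative): with the mnemonic, the "rm" constraint lets the compiler feed the value straight from memory, so no temporary register is needed.

    static inline u64 crc32c_qword(u64 crc, u64 data)
    {
            /* compiler may choose a register or a memory operand for %1 */
            asm("crc32q %1, %0" : "+r" (crc) : "rm" (data));
            return crc;
    }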
2020-08-20amd64: switch csum_partial_copy_generic() to new calling conventionsAl Viro3-123/+94
... and fold handling of misaligned case into it. Implementation note: we stash the "will we need to rol8 the sum in the end" flag into the MSB of %rcx (the lower 32 bits are used for length); the rest is pretty straightforward. Signed-off-by: Al Viro <[email protected]>
2020-08-20i386: propagate the calling conventions change down to ↵Al Viro2-88/+47
csum_partial_copy_generic() ... and don't bother zeroing destination on error Signed-off-by: Al Viro <[email protected]>
2020-08-20saner calling conventions for csum_and_copy_..._user()Al Viro4-70/+32
All callers of these primitives will * discard anything we might've copied in case of error * ignore the csum value in case of error * always pass 0xffffffff as the initial sum, so the resulting csum value (in case of success, that is) will never be 0. That suggests the following calling conventions: * don't pass err_ptr - just return 0 on error. * don't bother with zeroing destination, etc. in case of error * don't pass the initial sum - just use 0xffffffff. This commit does the minimal conversion in the instances of csum_and_copy_...(); the changes of actual asm code behind them are done later in the series. Note that this asm code is often shared with csum_partial_copy_nocheck(); the difference is that csum_partial_copy_nocheck() passes 0 for initial sum while csum_and_copy_..._user() pass 0xffffffff. Fortunately, we are free to pass 0xffffffff in all cases and subsequent patches will use that freedom without any special comments. A part that could be split off: parisc and uml/i386 claimed to have csum_and_copy_to_user() instances of their own, but those were identical to the generic one, so we simply drop them. Not sure if it's worth a separate commit... Signed-off-by: Al Viro <[email protected]>
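A sketch of the resulting shape of these primitives (exact prototypes vary per architecture as the series progresses):

    /* returns the checksum of the copied data, or 0 on fault */
    __wsum csum_and_copy_from_user(const void __user *src, void *dst, int len);
    __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len);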
2020-08-20csum_partial_copy_nocheck(): drop the last argumentAl Viro3-7/+5
It's always 0. Note that we theoretically could use ~0U as well - result will be the same modulo 0xffff, _if_ the damn thing did the right thing for any value of initial sum; later we'll make use of that when convenient. However, unlike csum_and_copy_..._user(), there are instances that did not work for arbitrary initial sums; c6x is one such. Signed-off-by: Al Viro <[email protected]>
2020-08-20unify generic instances of csum_partial_copy_nocheck()Al Viro2-16/+1
quite a few architectures have the same csum_partial_copy_nocheck() - simply memcpy() the data and then return the csum of the copy. hexagon, parisc, ia64, s390, um: explicitly spelled out that way. arc, arm64, csky, h8300, m68k/nommu, microblaze, mips/GENERIC_CSUM, nds32, nios2, openrisc, riscv, unicore32: end up picking the same thing spelled out in lib/checksum.h (with varying amounts of perversions along the way). everybody else (alpha, arm, c6x, m68k/mmu, mips/!GENERIC_CSUM, powerpc, sh, sparc, x86, xtensa) have non-generic variants. For all except c6x the declaration is in their asm/checksum.h. c6x uses the wrapper from asm-generic/checksum.h that would normally lead to the lib/checksum.h instance, but in case of c6x we end up using an asm function from arch/c6x instead. Screw that mess - have architectures with private instances define _HAVE_ARCH_CSUM_AND_COPY in their asm/checksum.h and have the default one right in net/checksum.h conditional on _HAVE_ARCH_CSUM_AND_COPY *not* defined. Signed-off-by: Al Viro <[email protected]>
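The generic instance described boils down to the following (a sketch of the default now living behind the new macro):

    #ifndef _HAVE_ARCH_CSUM_AND_COPY
    static inline __wsum
    csum_partial_copy_nocheck(const void *src, void *dst, int len)
    {
            memcpy(dst, src, len);
            return csum_partial(dst, len, 0);
    }
    #endif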
2020-08-20x86/umip: Add emulation/spoofing for SLDT and STR instructionsBrendan Shanks1-13/+27
Add emulation/spoofing of SLDT and STR for both 32- and 64-bit processes. Wine users have found a small number of Windows apps using SLDT that were crashing when run on UMIP-enabled systems. Originally-by: Ricardo Neri <[email protected]> Reported-by: Andreas Rammhold <[email protected]> Signed-off-by: Brendan Shanks <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Reviewed-by: Ricardo Neri <[email protected]> Tested-by: Ricardo Neri <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-20x86: switch to kernel_clone()Christian Brauner1-1/+1
The old _do_fork() helper is removed in favor of the new kernel_clone() helper. The latter adheres to naming conventions for kernel internal syscall helpers. Signed-off-by: Christian Brauner <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected]
2020-08-20efi/x86: Move 32-bit code into efi_32.cArd Biesheuvel3-86/+37
Now that the old memmap code has been removed, some code that was left behind in arch/x86/platform/efi/efi.c is only used for 32-bit builds, which means it can live in efi_32.c as well. So move it over. Signed-off-by: Ard Biesheuvel <[email protected]>
2020-08-20efi/x86: Mark kernel rodata non-executable for mixed modeArvind Sankar1-0/+2
When remapping the kernel rodata section RO in the EFI pagetables, the protection flags that were used for the text section are being reused, but the rodata section should not be marked executable. Cc: <[email protected]> Signed-off-by: Arvind Sankar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Ard Biesheuvel <[email protected]>
2020-08-20x86/MCE/AMD, EDAC/mce_amd: Remove struct smca_hwid.xec_bitmapYazen Ghannam2-23/+22
The Extended Error Code Bitmap (xec_bitmap) for a Scalable MCA bank type was intended to be used by the kernel to filter out invalid error codes on a system. However, this is unnecessary after a few product releases because the hardware will only report valid error codes. Thus, there's no need for it with future systems. Remove the xec_bitmap field and all references to it. Signed-off-by: Yazen Ghannam <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-20x86/build: Declutter the build outputIngo Molnar1-4/+0
We have some really ancient debug printouts in the x86 boot image build code: Setup is 14108 bytes (padded to 14336 bytes). System is 8802 kB CRC 27e909d4 None of these ever helped debug any sort of breakage that I know of, and they clutter the build output. Remove them - if anyone needs to see the various interim stages of this to debug an obscure bug, they can add these printfs and more. We still keep this one: Kernel: arch/x86/boot/bzImage is ready (#19) As a sentimental leftover, plus the '#19' build count tag is mildly useful. Signed-off-by: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected]
2020-08-20Fix build error when CONFIG_ACPI is not set/enabled:Randy Dunlap1-0/+1
../arch/x86/pci/xen.c: In function ‘pci_xen_init’: ../arch/x86/pci/xen.c:410:2: error: implicit declaration of function ‘acpi_noirq_set’; did you mean ‘acpi_irq_get’? [-Werror=implicit-function-declaration] acpi_noirq_set(); Fixes: 88e9ca161c13 ("xen/pci: Use acpi_noirq_set() helper to avoid #ifdef") Signed-off-by: Randy Dunlap <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Cc: Andy Shevchenko <[email protected]> Cc: Bjorn Helgaas <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Juergen Gross <[email protected]>
2020-08-20crypto: algapi - Remove skbuff.h inclusionHerbert Xu6-0/+6
The header file algapi.h includes skbuff.h unnecessarily since all we need is a forward declaration for struct sk_buff. This patch removes that inclusion. Unfortunately skbuff.h pulls in a lot of things and drivers over the years have come to rely on it so this patch adds a lot of missing inclusions that result from this. Signed-off-by: Herbert Xu <[email protected]>
2020-08-19x86/boot/compressed: Use builtin mem functions for decompressorArvind Sankar2-9/+3
Since commits c041b5ad8640 ("x86, boot: Create a separate string.h file to provide standard string functions") and fb4cac573ef6 ("x86, boot: Move memcmp() into string.h and string.c") the decompressor stub has been using the compiler's builtin memcpy, memset and memcmp functions, _except_ where it would likely have the largest impact, in the decompression code itself. Remove the #undef's of memcpy and memset in misc.c so that the decompressor code also uses the compiler builtins. The rationale given in the comment doesn't really apply: just because some functions use the out-of-line version is no reason to not use the builtin version in the rest. Replace the comment with an explanation of why memzero and memmove are being #define'd. Drop the suggestion to #undef in boot/string.h as well: the out-of-line versions are not really optimized versions, they're generic code that's good enough for the preboot environment. The compiler will likely generate better code for constant-size memcpy/memset/memcmp if it is allowed to. Most decompressors' performance is unchanged, with the exception of LZ4 and 64-bit ZSTD: LZ4 (32-bit) 73ms before, 10ms after; LZ4 (64-bit) 120ms before, 10ms after; ZSTD (64-bit) 90ms before, 74ms after. Measurements on QEMU on 2.2GHz Broadwell Xeon, using defconfig kernels. Decompressor code size has small differences, with the largest being that 64-bit ZSTD decreases just over 2k. The largest code size increase was on 64-bit XZ, of about 400 bytes. Signed-off-by: Arvind Sankar <[email protected]> Suggested-by: Nick Terrell <[email protected]> Tested-by: Nick Terrell <[email protected]> Acked-by: Kees Cook <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-08-19cacheinfo: Move resctrl's get_cache_id() to the cacheinfo header fileJames Morse1-15/+2
resctrl/core.c defines get_cache_id() for use in its cpu-hotplug callbacks. This gets the id attribute of the cache at the corresponding level of a CPU. Later rework means this private function needs to be shared. Move it to the header file. The name conflicts with a different definition in intel_cacheinfo.c, so name it get_cpu_cacheinfo_id() to show its relation with get_cpu_cacheinfo(). Now that this is visible on other architectures, check that the id attribute has actually been set. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Babu Moger <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
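A hedged sketch of the shared helper after the move; since other architectures may never set the id attribute, it has to check the CACHE_ID flag before trusting the value:

    int get_cpu_cacheinfo_id(int cpu, int level)
    {
            struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
            int i;

            for (i = 0; i < ci->num_leaves; i++) {
                    if (ci->info_list[i].level == level) {
                            if (ci->info_list[i].attributes & CACHE_ID)
                                    return ci->info_list[i].id;
                            return -1;
                    }
            }
            return -1;
    }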
2020-08-19x86/resctrl: Add struct rdt_cache::arch_has_{sparse, empty}_bitmapsJames Morse3-39/+22
Intel CPUs expect the cache bitmap provided by user-space to have only a single span of 1s, whereas AMD can support bitmaps like 0xf00f. Arm's MPAM support also allows sparse bitmaps. Similarly, Intel CPUs require at least one bit to be set, whereas AMD CPUs are quite happy with an empty bitmap. Arm's MPAM allows an empty bitmap. To move resctrl out to /fs/, platform differences like this need to be explained. Add two resource properties arch_has_{empty,sparse}_bitmaps. Test these around the relevant parts of cbm_validate(). Merging the validate calls causes AMD to gain the min_cbm_bits test needed for Haswell, but as it always sets this value to 1, it will never match. [ bp: Massage commit message. ] Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Babu Moger <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-19x86/cpu: Fix typos and improve the comments in sync_core()Ingo Molnar1-8/+8
- Fix typos. - Move the compiler barrier comment to the top, because it's valid for the whole function, not just the legacy branch. Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Ricardo Neri <[email protected]>
2020-08-19x86/resctrl: Merge AMD/Intel parse_bw() callsJames Morse3-61/+5
Now that arch_needs_linear has been added, the parse_bw() calls are almost the same between AMD and Intel. The difference is '!is_mba_sc()', which is not checked on AMD. This will always be true on AMD CPUs as mba_sc cannot be enabled because is_mba_linear() is false. Removing this duplication means user-space visible behaviour and error messages are not validated or generated in different places. Reviewed-by: Babu Moger <[email protected]> Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-19x86/resctrl: Add struct rdt_membw::arch_needs_linear to explain AMD/Intel ↵James Morse3-1/+12
MBA difference The configuration values user-space provides to the resctrl filesystem are ABI. To make this work on another architecture, all the ABI bits should be moved out of /arch/x86 and under /fs. To do this, the differences between AMD and Intel CPUs need to be explained to resctrl via resource properties, instead of function pointers that let the arch code accept subtly different values on different platforms/architectures. For MBA, Intel CPUs reject configuration attempts for non-linear resources, whereas AMD ignores this field as its MBA resource is never linear. To merge the parse/validate functions, this difference needs to be explained. Add struct rdt_membw::arch_needs_linear to indicate the arch code needs the linear property to be true to configure this resource. AMD can set this and delay_linear to false. Intel can set arch_needs_linear to true to keep the existing "No support for non-linear MB domains" error message for affected platforms. [ bp: convert "we" etc to passive voice. ] Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Reviewed-by: Babu Moger <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-19x86/resctrl: Use is_closid_match() in more placesJames Morse1-16/+14
rdtgroup_tasks_assigned() and show_rdt_tasks() loop over threads testing for a CTRL/MON group match by closid/rmid with the provided rdtgrp. Further down the file are helpers to do this, move these further up and make use of them here. These helpers additionally check for alloc/mon capable. This is harmless as rdtgroup_mkdir() tests these capable flags before allowing the config directories to be created. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18x86/resctrl: Use container_of() in delayed_work handlersJames Morse1-11/+2
mbm_handle_overflow() and cqm_handle_limbo() are both provided with the domain's work_struct when called, but use get_domain_from_cpu() to find the domain, along with the appropriate error handling. container_of() saves some list walking and bitmap testing, use that instead. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
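The pattern described, sketched for the overflow handler; the helper name is hypothetical, and mbm_over is assumed to be the delayed_work embedded in struct rdt_domain for this handler:

    static struct rdt_domain *mbm_work_to_domain(struct work_struct *work)
    {
            /* the delayed_work lives inside the domain: no list walk needed */
            return container_of(to_delayed_work(work), struct rdt_domain, mbm_over);
    }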
2020-08-18x86/resctrl: Fix stale commentJames Morse1-1/+1
The comment in rdtgroup_init() refers to the non-existent function rdt_mount(), which has now been renamed to rdt_get_tree(). Fix the comment. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18x86/resctrl: Remove struct rdt_membw::max_delayJames Morse2-7/+4
max_delay is only used by x86's __get_mem_config_intel() as a temporary value. Remove the struct member and replace it with a local variable. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18x86/resctrl: Remove unused struct mbm_state::chunks_bwJames Morse2-4/+1
Nothing reads struct mbm_state's chunks_bw value; it's a copy of chunks. Remove it. Signed-off-by: James Morse <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Support per-thread RDPMC TopDown metricsKan Liang2-12/+83
Starting from Ice Lake, the TopDown metrics are directly available as fixed counters and do not require generic counters. Also, the TopDown metrics can be collected per thread. Extend the RDPMC usage to support per-thread TopDown metrics. The RDPMC index of the PERF_METRICS will be output if RDPMC users ask for the RDPMC index of the metrics events. To support per-thread RDPMC TopDown, the metrics and slots counters have to be saved/restored during the context switching. The last_period and period_left are not used in the counting mode. Use the fields for saved_metric and saved_slots. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Support TopDown metrics on Ice LakeKan Liang4-0/+135
Ice Lake supports the hardware TopDown metrics feature, which can free up the scarce GP counters. Update the event constraints for the metrics events. The metric counters do not exist physically; they are mapped to a dummy offset. The sharing between multiple users of the same metric without multiplexing is not allowed. Implement set_topdown_event_period for Ice Lake. The values in the PERF_METRICS MSR are derived from the fixed counter 3. Both registers should start from zero. Implement update_topdown_event for Ice Lake. The metric is reported by multiplying the metric (fraction) with slots. To maintain accurate measurements, both registers are cleared for each update. The fixed counter 3 should always be cleared before the PERF_METRICS. Implement td_attr for the new metrics events and the new slots fixed counter. Make them visible to the perf user tools. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86: Add a macro for RDPMC offset of fixed countersKan Liang2-1/+5
The RDPMC base offset of fixed counters is hard-coded. Use a meaningful name to replace the magic number to improve the readability of the code. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Generic support for hardware TopDown metricsKan Liang5-15/+257
Intro ===== The TopDown Microarchitecture Analysis (TMA) Method is a structured analysis methodology to identify critical performance bottlenecks in out-of-order processors. Current perf already supports the method. The method works well, but there is one problem. To collect the TopDown events, several GP counters have to be used. If a user wants to collect other events at the same time, multiplexing will probably be triggered, which impacts the accuracy. To free up the scarce GP counters, the hardware TopDown metrics feature is introduced from Ice Lake. The hardware implements an additional "metrics" register and a new Fixed Counter 3 that measures pipeline "slots". The TopDown events can be calculated from them instead. Events ====== The level 1 TopDown has four metrics. There is no event-code assigned to the TopDown metrics. Four metric events are exported as separate perf events, which map to the internal "metrics" counter register. Those events do not exist in hardware, but can be allocated by the scheduler. For the event mapping, a special 0x00 event code is used, which is reserved for fake events. The metric events start from umask 0x10. When setting up the metric events, they point to the Fixed Counter 3. They have to be specially handled. - Add the update_topdown_event() callback to read the additional metrics MSR and generate the metrics. - Add the set_topdown_event_period() callback to initialize metrics MSR and the fixed counter 3. - Add a variable n_metric_event to track the number of the accepted metrics events. The sharing between multiple users of the same metric without multiplexing is not allowed. - Only enable/disable the fixed counter 3 when there are no other active TopDown events, which avoids unnecessary writing of the fixed control register. - Disable the PMU when reading the metrics event. The metrics MSR and the fixed counter 3 are read separately. The values may be modified by an NMI. None of the four metric events supports sampling. Since they will be handled specially for event update, a flag PERF_X86_EVENT_TOPDOWN is introduced to indicate this case. The slots event can support both sampling and counting. For counting, the flag is also applied. For sampling, it will be handled normally like other events. Groups ====== The slots event is required in a Topdown group. To avoid reading the METRICS register multiple times, the metrics and slots value can only be updated by the slots event in a group. All active slots and metrics events will be updated one time. Therefore, the slots event must be before any metric events in a Topdown group. NMI ====== The METRICS-related register may overflow. Bit 48 of the STATUS register will be set. If so, PERF_METRICS and Fixed counter 3 are required to be reset. The patch also updates all active slots and metrics events in the NMI handler. The update_topdown_event() has to read two registers separately. The values may be modified by an NMI. PMU has to be disabled before calling the function. RDPMC ====== RDPMC is temporarily disabled. A later patch will enable it. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
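For illustration, a level-1 metric value is derived from the two registers roughly as follows (the MSR names are the ones this series adds; the 8-bit fraction scaling is as described above):

    u64 metrics, slots, retiring;

    rdmsrl(MSR_PERF_METRICS, metrics);
    rdmsrl(MSR_CORE_PERF_FIXED_CTR3, slots);

    /* each metric is an 8-bit fraction (x/0xff) of the total slots */
    retiring = mul_u64_u32_div(slots, metrics & 0xff, 0xff);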
2020-08-18perf/x86/intel: Use switch in intel_pmu_disable/enable_eventKan Liang1-8/+28
Currently, the if-else is used in the intel_pmu_disable/enable_event to check the type of an event. It works well, but with more and more types added later, e.g., perf metrics, compared to the switch statement, the if-else may impair the readability of the code. There is no harm in using the switch statement to replace the if-else here. Also, some optimizing compilers may compile a switch statement into a jump-table which is more efficient than if-else for a large number of cases. The performance gain may not be observed for now, because the number of cases is only 5, but the benefits may be observed with more and more types added in the future. Use switch to replace the if-else in the intel_pmu_disable/enable_event. If the idx is invalid, print a warning. For the case INTEL_PMC_IDX_FIXED_BTS in intel_pmu_disable_event, there is no need to check event->attr.precise_ip; simply return for that case. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Fix the name of perf METRICSKan Liang1-1/+1
Bit 15 of the PERF_CAPABILITIES MSR indicates that the perf METRICS feature is supported. The perf METRICS is not a PEBS feature. Rename pebs_metrics_available to perf_metrics. The bit is not used in the current code. It will be used in a later patch. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Move BTS index to 47Kan Liang1-2/+2
Bit 48 in the PERF_GLOBAL_STATUS is used to indicate the overflow status of the PERF_METRICS counters. Move the BTS index to bit 47. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Introduce the fourth fixed counterKan Liang1-3/+20
The fourth fixed counter, TOPDOWN.SLOTS, is introduced in Ice Lake to measure the level 1 TopDown events. Add MSR address and macros for the new fixed counter, which will be used in a later patch. Add comments to explain the event encoding rules for the fixed counters. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86/intel: Name the global status bit in NMI handlerKan Liang2-12/+14
Magic numbers are used in the current NMI handler for the global status bit. Use a meaningful name to replace the magic numbers to improve the readability of the code. Remove a Tab for all GLOBAL_STATUS_* and INTEL_PMC_IDX_FIXED_BTS macros to reduce the length of the line. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18perf/x86: Use event_base_rdpmc for the RDPMC userspace supportKan Liang1-8/+3
The RDPMC index is always re-calculated for the RDPMC userspace support, which is unnecessary. The RDPMC index value is stored in the variable event_base_rdpmc for the kernel usage, which can be used for RDPMC userspace support as well. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-18x86/cpu: Use XGETBV and XSETBV mnemonics in fpu/internal.hUros Bizjak1-5/+2
Current minimum required version of binutils is 2.23, which supports XGETBV and XSETBV instruction mnemonics. Replace the byte-wise specification of XGETBV and XSETBV with these proper mnemonics. Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
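Sketch of the mnemonic-based form (the previous code spelled the same instruction as .byte sequences):

    static inline u64 xgetbv(u32 index)
    {
            u32 eax, edx;

            asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index));
            return eax + ((u64)edx << 32);
    }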
2020-08-17kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE modeJim Mattson1-1/+1
See the SDM, volume 3, section 4.4.1: If PAE paging would be in use following an execution of MOV to CR0 or MOV to CR4 (see Section 4.1.1) and the instruction is modifying any of CR0.CD, CR0.NW, CR0.PG, CR4.PAE, CR4.PGE, CR4.PSE, or CR4.SMEP; then the PDPTEs are loaded from the address in CR3. Fixes: b9baba8614890 ("KVM, pkeys: expose CPUID/CR4 to guest") Cc: Huaitong Han <[email protected]> Signed-off-by: Jim Mattson <[email protected]> Reviewed-by: Peter Shier <[email protected]> Reviewed-by: Oliver Upton <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2020-08-17kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE modeJim Mattson1-1/+1
See the SDM, volume 3, section 4.4.1: If PAE paging would be in use following an execution of MOV to CR0 or MOV to CR4 (see Section 4.1.1) and the instruction is modifying any of CR0.CD, CR0.NW, CR0.PG, CR4.PAE, CR4.PGE, CR4.PSE, or CR4.SMEP; then the PDPTEs are loaded from the address in CR3. Fixes: 0be0226f07d14 ("KVM: MMU: fix SMAP virtualization") Cc: Xiao Guangrong <[email protected]> Signed-off-by: Jim Mattson <[email protected]> Reviewed-by: Peter Shier <[email protected]> Reviewed-by: Oliver Upton <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2020-08-17KVM: x86: fix access code passed to gva_to_gpaPaolo Bonzini1-1/+3
The PK bit of the error code is computed dynamically in permission_fault and therefore need not be passed to gva_to_gpa: only the access bits (fetch, user, write) need to be passed down. Not doing so causes a splat in the pku test: WARNING: CPU: 25 PID: 5465 at arch/x86/kvm/mmu.h:197 paging64_walk_addr_generic+0x594/0x750 [kvm] Hardware name: Intel Corporation WilsonCity/WilsonCity, BIOS WLYDCRB1.SYS.0014.D62.2001092233 01/09/2020 RIP: 0010:paging64_walk_addr_generic+0x594/0x750 [kvm] Code: <0f> 0b e9 db fe ff ff 44 8b 43 04 4c 89 6c 24 30 8b 13 41 39 d0 89 RSP: 0018:ff53778fc623fb60 EFLAGS: 00010202 RAX: 0000000000000001 RBX: ff53778fc623fbf0 RCX: 0000000000000007 RDX: 0000000000000001 RSI: 0000000000000002 RDI: ff4501efba818000 RBP: 0000000000000020 R08: 0000000000000005 R09: 00000000004000e7 R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000007 R13: ff4501efba818388 R14: 10000000004000e7 R15: 0000000000000000 FS: 00007f2dcf31a700(0000) GS:ff4501f1c8040000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 0000001dea475005 CR4: 0000000000763ee0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: paging64_gva_to_gpa+0x3f/0xb0 [kvm] kvm_fixup_and_inject_pf_error+0x48/0xa0 [kvm] handle_exception_nmi+0x4fc/0x5b0 [kvm_intel] kvm_arch_vcpu_ioctl_run+0x911/0x1c10 [kvm] kvm_vcpu_ioctl+0x23e/0x5d0 [kvm] ksys_ioctl+0x92/0xb0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x3e/0xb0 entry_SYSCALL_64_after_hwframe+0x44/0xa9 ---[ end trace d17eb998aee991da ]--- Reported-by: Sean Christopherson <[email protected]> Fixes: 897861479c064 ("KVM: x86: Add helper functions for illegal GPA checking and page fault injection") Signed-off-by: Paolo Bonzini <[email protected]>
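The fix amounts to masking the error code down to the access bits before handing it to gva_to_gpa, roughly as below (mask macros as already used by the MMU code; variable name illustrative):

    /* fetch/user/write only; PK is recomputed in permission_fault() */
    u32 access = error_code & (PFERR_WRITE_MASK | PFERR_FETCH_MASK |
                               PFERR_USER_MASK);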
2020-08-17x86/cpu: Use SERIALIZE in sync_core() when availableRicardo Neri2-8/+24
The SERIALIZE instruction gives software a way to force the processor to complete all modifications to flags, registers and memory from previous instructions and drain all buffered writes to memory before the next instruction is fetched and executed. Thus, it serves the purpose of sync_core(). Use it when available. Suggested-by: Andy Lutomirski <[email protected]> Signed-off-by: Ricardo Neri <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Tony Luck <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
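A hedged sketch of the resulting sync_core() logic; the instruction is shown as raw bytes on the assumption that the minimum binutils version does not yet know the SERIALIZE mnemonic, and iret_to_self() stands in for the existing fallback:

    static inline void sync_core(void)
    {
            if (static_cpu_has(X86_FEATURE_SERIALIZE)) {
                    /* SERIALIZE: opcode 0F 01 E8 */
                    asm volatile(".byte 0x0f, 0x01, 0xe8" ::: "memory");
                    return;
            }
            iret_to_self();         /* existing serializing fallback */
    }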
2020-08-15perf/x86/intel/uncore: Add BW counters for GT, IA and IO breakdownVaibhav Shankar1-3/+49
Linux only has support to read total DDR reads and writes. Here we add support to enable bandwidth breakdown for GT, IA and IO. Breakdown of BW is important to debug and optimize memory access. This can also be used for telemetry and improving the system software. The offsets for GT, IA and IO are added and these free running counters can be accessed via MMIO space. The BW breakdown can be measured using the following cmd: perf stat -e uncore_imc/gt_requests/,uncore_imc/ia_requests/,uncore_imc/io_requests/ 30.57 MiB uncore_imc/gt_requests/ 1346.13 MiB uncore_imc/ia_requests/ 190.97 MiB uncore_imc/io_requests/ 5.984572733 seconds time elapsed BW/s = <gt,ia,io>_requests/time elapsed Signed-off-by: Vaibhav Shankar <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-08-15Merge tag 'x86-urgent-2020-08-15' of ↵Linus Torvalds7-17/+58
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Ingo Molnar: "Misc fixes and small updates all around the place: - Fix mitigation state sysfs output - Fix an FPU xstate/xsave code assumption bug triggered by Architectural LBR support - Fix Lightning Mountain SoC TSC frequency enumeration bug - Fix kexec debug output - Fix kexec memory range assumption bug - Fix a boundary condition in the crash kernel code - Optimize purgatory.ro generation a bit - Enable ACRN guests to use X2APIC mode - Reduce a __text_poke() IRQs-off critical section for the benefit of PREEMPT_RT" * tag 'x86-urgent-2020-08-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/alternatives: Acquire pte lock with interrupts enabled x86/bugs/multihit: Fix mitigation reporting when VMX is not in use x86/fpu/xstate: Fix an xstate size check warning with architectural LBRs x86/purgatory: Don't generate debug info for purgatory.ro x86/tsr: Fix tsc frequency enumeration bug on Lightning Mountain SoC kexec_file: Correctly output debugging information for the PT_LOAD ELF header kexec: Improve & fix crash_exclude_mem_range() to handle overlapping ranges x86/crash: Correct the address boundary of function parameters x86/acrn: Remove redundant chars from ACRN signature x86/acrn: Allow ACRN guest to use X2APIC mode
2020-08-15Merge tag 'perf-urgent-2020-08-15' of ↵Linus Torvalds1-10/+36
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf fixes from Ingo Molnar: "Misc fixes, an expansion of perf syscall access to CAP_PERFMON privileged tools, plus a RAPL HW-enablement for Intel SPR platforms" * tag 'perf-urgent-2020-08-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/rapl: Add support for Intel SPR platform perf/x86/rapl: Support multiple RAPL unit quirks perf/x86/rapl: Fix missing psys sysfs attributes hw_breakpoint: Remove unused __register_perf_hw_breakpoint() declaration kprobes: Remove show_registers() function prototype perf/core: Take over CAP_SYS_PTRACE creds to CAP_PERFMON capability
2020-08-15x86/mm/64: Update comment in preallocate_vmalloc_pages()Joerg Roedel1-5/+10
The comment explaining why 4-level systems only need to allocate on the P4D level caused some confusion. Update it to better explain why on 4-level systems the allocation on PUD level is necessary. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]