path: root/arch/x86
Age  Commit message  Author  Files  Lines
2020-10-06  perf/x86: Fix n_metric for cancelled txn  (Peter Zijlstra, 2 files, -0/+4)
When a group that has TopDown members fails to be scheduled, any later TopDown groups will not return valid values.

Here is an example. A background perf that occupies all the GP counters and the fixed counter 1:

  $perf stat -e "{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:D" -a

A user monitors a TopDown group. It works well, because the fixed counter 3 and the PERF_METRICS are available:

  $perf stat -x, --topdown -- ./workload
  retiring,bad speculation,frontend bound,backend bound,
  18.0,16.1,40.4,25.5,

Then the user tries to monitor a group that has TopDown members. Because of the cycles event, the group fails to be scheduled:

  $perf stat -x, -e '{slots,topdown-retiring,topdown-be-bound,topdown-fe-bound,topdown-bad-spec,cycles}' -- ./workload
  <not counted>,,slots,0,0.00,,
  <not counted>,,topdown-retiring,0,0.00,,
  <not counted>,,topdown-be-bound,0,0.00,,
  <not counted>,,topdown-fe-bound,0,0.00,,
  <not counted>,,topdown-bad-spec,0,0.00,,
  <not counted>,,cycles,0,0.00,,

The user tries to monitor a TopDown group again. It doesn't work anymore:

  $perf stat -x, --topdown -- ./workload
  ,,,,,

In a txn, cancel_txn() is supposed to truncate the event_list for a cancelled group and update the number of events added in this transaction. However, the number of TopDown events added in this transaction is not updated. The kernel will probably fail to add new TopDown events.

Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics")
Reported-by: Andi Kleen <[email protected]>
Reported-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Kan Liang <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-10-06  perf/x86: Fix n_pair for cancelled txn  (Peter Zijlstra, 2 files, -1/+6)
Kan reported that n_metric gets corrupted for cancelled transactions; a similar issue exists for n_pair for AMD's Large Increment thing.

The problem was confirmed and confirmed fixed by Kim using:

  sudo perf stat -e "{cycles,cycles,cycles,cycles}:D" -a sleep 10 &

  # should succeed:
  sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload

  # should fail:
  sudo perf stat -e "{fp_ret_sse_avx_ops.all,fp_ret_sse_avx_ops.all,cycles}:D" -a workload

  # previously failed, now succeeds with this patch:
  sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload

Fixes: 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events")
Reported-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Kim Phillips <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-10-06  x86/copy_mc: Introduce copy_mc_enhanced_fast_string()  (Dan Williams, 2 files, -9/+59)
The motivations to go rework memcpy_mcsafe() are that the benefit of doing slow and careful copies is obviated on newer CPUs, and that the current opt-in list of CPUs to instrument recovery is broken relative to those CPUs. There is no need to keep an opt-in list up to date on an ongoing basis if pmem/dax operations are instrumented for recovery by default. With recovery enabled by default the old "mcsafe_key" opt-in to careful copying can be made a "fragile" opt-out, where the "fragile" list takes steps to not consume poison across cachelines.

The discussion with Linus made clear that the current "_mcsafe" suffix was imprecise to a fault. The operations that are needed by pmem/dax are to copy from a source address that might throw #MC to a destination that may write-fault, if it is a user page. So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate the separate precautions taken on source and destination. copy_mc_to_kernel() is introduced as a non-SMAP version that does not expect write-faults on the destination, but is still prepared to abort with an error code upon taking #MC.

The original copy_mc_fragile() implementation had negative performance implications since it did not use the fast-string instruction sequence to perform copies. For this reason copy_mc_to_kernel() fell back to plain memcpy() to preserve performance on platforms that did not indicate the capability to recover from machine check exceptions. However, that capability detection was not architectural, and now that some platforms can recover from fast-string consumption of memory errors the memcpy() fallback causes these more capable platforms to fail.

Introduce copy_mc_enhanced_fast_string() as the fast default implementation of copy_mc_to_kernel() and finalize the transition of copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'. With this in place, copy_mc_to_kernel() is fast and recovery-ready by default regardless of hardware capability.

Thanks to Vivek for identifying that copy_user_generic() is not suitable as the copy_mc_to_user() backend since the #MC handler explicitly checks ex_has_fault_handler(). Thanks to the 0day robot for catching a performance bug in the x86/copy_mc_to_user implementation.

[ bp: Add the "why" for this change from the 0/2th message, massage. ]

Fixes: 92b0729c34ca ("x86/mm, x86/mce: Add memcpy_mcsafe()")
Reported-by: Erwin Tsaur <[email protected]>
Reported-by: 0day robot <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Tested-by: Erwin Tsaur <[email protected]>
Cc: <[email protected]>
Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com
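A minimal sketch of the resulting dispatch described above, assuming the helper names from this series (copy_mc_fragile() as the opt-in quirk, copy_mc_enhanced_fast_string() as the fast default); the exact guards in the kernel may differ:

  unsigned long __must_check
  copy_mc_to_kernel(void *dst, const void *src, unsigned len)
  {
          /* Platform quirk: take care not to consume poison across cachelines. */
          if (copy_mc_fragile_enabled)
                  return copy_mc_fragile(dst, src, len);
          /* Fast-string copy that can still return a residue count on #MC. */
          if (static_cpu_has(X86_FEATURE_ERMS))
                  return copy_mc_enhanced_fast_string(dst, src, len);
          /* No recovery support indicated: plain memcpy keeps performance. */
          memcpy(dst, src, len);
          return 0;
  }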
2020-10-06  x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()  (Dan Williams, 15 files, -278/+310)
In reaction to a proposal to introduce a memcpy_mcsafe_fast() implementation Linus points out that memcpy_mcsafe() is poorly named relative to communicating the scope of the interface. Specifically what addresses are valid to pass as source, destination, and what faults / exceptions are handled.

Of particular concern is that even though x86 might be able to handle the semantics of copy_mc_to_user() with its common copy_user_generic() implementation other archs likely need / want an explicit path for this case:

  On Fri, May 1, 2020 at 11:28 AM Linus Torvalds <[email protected]> wrote:
  >
  > On Thu, Apr 30, 2020 at 6:21 PM Dan Williams <[email protected]> wrote:
  > >
  > > However now I see that copy_user_generic() works for the wrong reason.
  > > It works because the exception on the source address due to poison
  > > looks no different than a write fault on the user address to the
  > > caller, it's still just a short copy. So it makes copy_to_user() work
  > > for the wrong reason relative to the name.
  >
  > Right.
  >
  > And it won't work that way on other architectures. On x86, we have a
  > generic function that can take faults on either side, and we use it
  > for both cases (and for the "in_user" case too), but that's an
  > artifact of the architecture oddity.
  >
  > In fact, it's probably wrong even on x86 - because it can hide bugs -
  > but writing those things is painful enough that everybody prefers
  > having just one function.

Replace a single top-level memcpy_mcsafe() with either copy_mc_to_user(), or copy_mc_to_kernel().

Introduce an x86 copy_mc_fragile() name as the rename for the low-level x86 implementation formerly named memcpy_mcsafe(). It is used as the slow / careful backend that is supplanted by a fast copy_mc_generic() in a follow-on patch.

One side-effect of this reorganization is that separating copy_mc_64.S to its own file means that perf no longer needs to track dependencies for its memcpy_64.S benchmarks.

[ bp: Massage a bit. ]

Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Cc: <[email protected]>
Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com
Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com
2020-10-06  dma-mapping: move dma-debug.h to kernel/dma/  (Christoph Hellwig, 2 files, -2/+0)
Most of dma-debug.h is not required by anything outside of kernel/dma. Move the four declarations needed by dma-mapping.h or dma-ops providers into dma-mapping.h and dma-map-ops.h, and move the remainder of the file to kernel/dma/debug.h. Signed-off-by: Christoph Hellwig <[email protected]>
2020-10-06  dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>  (Christoph Hellwig, 2 files, -2/+1)
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c. Signed-off-by: Christoph Hellwig <[email protected]>
2020-10-06  dma-mapping: split <linux/dma-mapping.h>  (Christoph Hellwig, 4 files, -1/+5)
Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h> any more, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers. Signed-off-by: Christoph Hellwig <[email protected]>
2020-10-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller, 12 files, -34/+120)
Rejecting non-native endian BTF overlapped with the addition of support for it. The rest were more simple overlapping changes, except the renesas ravb binding update, which had to follow a file move as well as a YAML conversion. Signed-off-by: David S. Miller <[email protected]>
2020-10-04  x86/xen: Fix typo in xen_pagetable_p2m_free()  (Hui Su, 1 file, -1/+1)
Fix typo in xen_pagetable_p2m_free(): s/Fortunatly/Fortunately Signed-off-by: Hui Su <[email protected]> Link: https://lore.kernel.org/r/20200927172836.GA7423@rlk [boris: reword commit message slightly] Signed-off-by: Boris Ostrovsky <[email protected]>
2020-10-04  x86/xen: disable Firmware First mode for correctable memory errors  (Juergen Gross, 1 file, -0/+9)
When running as Xen dom0 the kernel isn't responsible for selecting the error handling mode; this should be handled by the hypervisor. So disable setting FF mode when running as Xen pv guest. Not doing so might result in boot splats like:

  [ 7.509696] HEST: Enabling Firmware First mode for corrected errors.
  [ 7.510382] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 2.
  [ 7.510383] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 3.
  [ 7.510384] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 4.
  [ 7.510384] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 5.
  [ 7.510385] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 6.
  [ 7.510386] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 7.
  [ 7.510386] mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 8.

The reason is that the HEST ACPI table contains the real number of MCA banks, while the hypervisor is emulating only 2 banks for guests.

Cc: [email protected]
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Boris Ostrovsky <[email protected]>
2020-10-03  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 1 file, -10/+12)
Pull kvm fixes from Paolo Bonzini: "Two bugfixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: VMX: update PFEC_MASK/PFEC_MATCH together with PF intercept
  KVM: arm64: Restore missing ISB on nVHE __tlb_switch_to_guest
2020-10-03  x86/events/amd/iommu: Fix sizeof mismatch  (Colin Ian King, 1 file, -1/+1)
An incorrect sizeof is being used: struct attribute ** is not correct, it should be struct attribute *. Note that since ** is the same size as * this is not causing any issues. Improve the fix by using sizeof(*attrs), as this allows us to not even reference the type of the pointer.

Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)")
Fixes: 51686546304f ("x86/events/amd/iommu: Fix sysfs perf attribute groups")
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
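For illustration, the mismatch and the dereference-based fix look roughly like this (the allocation call itself is shown as an assumption, not the exact driver code):

  struct attribute **attrs;

  /* Wrong: sizeof(struct attribute **) only happens to equal
   * sizeof(struct attribute *), so this works by accident. */
  attrs = kcalloc(count, sizeof(struct attribute **), GFP_KERNEL);

  /* Preferred: derive the element size from the pointer itself,
   * so the type can never drift out of sync. */
  attrs = kcalloc(count, sizeof(*attrs), GFP_KERNEL);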
2020-10-03  perf/x86/intel: Check perf metrics feature for each CPU  (Kan Liang, 1 file, -0/+11)
It might be possible that different CPUs have different CPU metrics on a platform. In this case, writing the GLOBAL_CTRL_EN_PERF_METRICS bit to the GLOBAL_CTRL register of a CPU which doesn't support the TopDown perf metrics feature causes an MSR access error.

Currently, the TopDown perf metrics feature is enumerated from the boot CPU's PERF_CAPABILITIES MSR. The MSR only indicates that the boot CPU supports this feature.

Check the PERF_CAPABILITIES MSR for each CPU. If any CPU doesn't support the perf metrics feature, disable the feature globally.

Fixes: 59a854e2f3b9 ("perf/x86/intel: Support TopDown metrics on Ice Lake")
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-10-03  KVM: VMX: update PFEC_MASK/PFEC_MATCH together with PF intercept  (Paolo Bonzini, 1 file, -10/+12)
The PFEC_MASK and PFEC_MATCH fields in the VMCS reverse the meaning of the #PF intercept bit in the exception bitmap when they do not match. This means that, if PFEC_MASK and/or PFEC_MATCH are set, the hypervisor can get a vmexit for #PF exceptions even when the corresponding bit is clear in the exception bitmap. This is unexpected and is promptly detected by a WARN_ON_ONCE.

To fix it, reset PFEC_MASK and PFEC_MATCH when the #PF intercept is disabled (as is common with enable_ept && !allow_smaller_maxphyaddr).

Reported-by: Qian Cai <[email protected]>
Reported-by: Naresh Kamboju <[email protected]>
Tested-by: Naresh Kamboju <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
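A sketch of the fix described above, using the standard VMCS field names; the exact placement inside the exception-bitmap update path is paraphrased, not the literal patch:

  /* If the #PF intercept bit is clear, make sure PFEC_MASK/PFEC_MATCH
   * cannot invert its meaning and surprise us with a #PF vmexit. */
  if (!(eb & (1u << PF_VECTOR))) {
          vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, 0);
          vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, 0);
  }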
2020-10-03  mm: remove compat_process_vm_{readv,writev}  (Christoph Hellwig, 3 files, -4/+6)
Now that import_iovec handles compat iovecs, the native syscalls can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03  fs: remove compat_sys_vmsplice  (Christoph Hellwig, 3 files, -2/+3)
Now that import_iovec handles compat iovecs, the native vmsplice syscall can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03  fs: remove the compat readv/writev syscalls  (Christoph Hellwig, 3 files, -4/+6)
Now that import_iovec handles compat iovecs, the native readv and writev syscalls can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-02  x86: Support Generic Initiator only proximity domains  (Jonathan Cameron, 3 files, -0/+24)
In common with memoryless domains, only register GI domains if the proximity node is not already online. If a domain is already a memory-containing domain, or a memoryless domain, there is nothing to do just because it also contains a Generic Initiator.

Signed-off-by: Jonathan Cameron <[email protected]>
Acked-by: Borislav Petkov <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
2020-10-02  Merge branch 'for-next/mte' into for-next/core  (Will Deacon, 1 file, -1/+1)
Add userspace support for the Memory Tagging Extension introduced by Armv8.5. (Catalin Marinas and others)

* for-next/mte: (30 commits)
  arm64: mte: Fix typo in memory tagging ABI documentation
  arm64: mte: Add Memory Tagging Extension documentation
  arm64: mte: Kconfig entry
  arm64: mte: Save tags when hibernating
  arm64: mte: Enable swap of tagged pages
  mm: Add arch hooks for saving/restoring tags
  fs: Handle intra-page faults in copy_mount_options()
  arm64: mte: ptrace: Add NT_ARM_TAGGED_ADDR_CTRL regset
  arm64: mte: ptrace: Add PTRACE_{PEEK,POKE}MTETAGS support
  arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks
  arm64: mte: Restore the GCR_EL1 register after a suspend
  arm64: mte: Allow user control of the generated random tags via prctl()
  arm64: mte: Allow user control of the tag check mode via prctl()
  mm: Allow arm64 mmap(PROT_MTE) on RAM-based files
  arm64: mte: Validate the PROT_MTE request via arch_validate_flags()
  mm: Introduce arch_validate_flags()
  arm64: mte: Add PROT_MTE support to mmap() and mprotect()
  mm: Introduce arch_calc_vm_flag_bits()
  arm64: mte: Tags-aware aware memcmp_pages() implementation
  arm64: Avoid unnecessary clear_user_page() indirection
  ...
2020-10-02  x86/dumpstack: Fix misleading instruction pointer error message  (Mark Mossberg, 1 file, -1/+2)
Printing "Bad RIP value" if copy_code() fails can be misleading for userspace pointers, since copy_code() can fail if the instruction pointer is valid but the code is paged out. This is because copy_code() calls copy_from_user_nmi() for userspace pointers, which disables page fault handling. This is reproducible in OOM situations, where it's plausible that the code may be reclaimed in the time between entry into the kernel and when this message is printed. This leaves a misleading log in dmesg that suggests instruction pointer corruption has occurred, which may alarm users. Change the message to state the error condition more precisely. [ bp: Massage a bit. ] Signed-off-by: Mark Mossberg <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-10-02  crypto: x86/poly1305 - Remove assignments with no effect  (Herbert Xu, 1 file, -3/+0)
This patch removes a few ineffectual assignments from the function crypto_poly1305_setdctxkey. Reported-by: kernel test robot <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2020-10-01  x86/uv/time: Use a flexible array in struct uv_rtc_timer_head  (Gustavo A. R. Silva, 1 file, -4/+3)
There is a regular need in the kernel to provide a way to declare having a dynamically sized set of trailing elements in a structure. Kernel code should always use “flexible array members”[1] for these cases. The older style of one-element or zero-length arrays should no longer be used[2].

struct uv_rtc_timer_head contains a one-element array cpu[1]. Switch it to a flexible array and use the struct_size() helper to calculate the allocation size. Also, save some heap space in the process[3].

[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays
[3] https://lore.kernel.org/lkml/20200518190114.GA7757@embeddedor/

[ bp: Massage a bit. ]

Signed-off-by: Gustavo A. R. Silva <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Steve Wahl <[email protected]>
Link: https://lkml.kernel.org/r/20201001145608.GA10204@embeddedor
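The general before/after shape of such a conversion (member names and the allocation call are illustrative, not necessarily the exact uv_rtc_timer_head layout):

  struct uv_rtc_timer_head {
          spinlock_t lock;
          int next_cpu;
          int ncpus;
          struct {
                  int lcpu;
                  u64 expires;
          } cpu[];                /* was: cpu[1] */
  };

  /* struct_size() replaces open-coded arithmetic such as
   * sizeof(*head) + (ncpus - 1) * sizeof(head->cpu[0]), and avoids
   * over-allocating the one element already counted in sizeof(*head). */
  head = kmalloc_node(struct_size(head, cpu, ncpus), GFP_KERNEL, nid);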
2020-10-01  x86/nmi: Fix nmi_handle() duration miscalculation  (Libing Zhou, 1 file, -3/+2)
When nmi_check_duration() is checking the time an NMI handler took to execute, the whole_msecs value used should be read from the @duration argument, not from the ->max_duration, the latter being used to store the current maximal duration. [ bp: Rewrite commit message. ] Fixes: 248ed51048c4 ("x86/nmi: Remove irq_work from the long duration NMI handler") Suggested-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Libing Zhou <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Changbin Du <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-10-01  x86/asm: Replace __force_order with a memory clobber  (Arvind Sankar, 3 files, -24/+17)
The CRn accessor functions use __force_order as a dummy operand to prevent the compiler from reordering CRn reads/writes with respect to each other.

The fact that the asm is volatile should be enough to prevent this: volatile asm statements should be executed in program order. However GCC 4.9.x and 5.x have a bug that might result in reordering. This was fixed in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x, may reorder volatile asm statements with respect to each other.

There are some issues with __force_order as implemented:
- It is used only as an input operand for the write functions, and hence doesn't do anything additional to prevent reordering writes.
- It allows memory accesses to be cached/reordered across write functions, but CRn writes affect the semantics of memory accesses, so this could be dangerous.
- __force_order is not actually defined in the kernel proper, but the LLVM toolchain can in some cases require a definition: LLVM (as well as GCC 4.9) requires it for PIE code, which is why the compressed kernel has a definition, but also the clang integrated assembler may consider the address of __force_order to be significant, resulting in a reference that requires a definition.

Fix this by:
- Using a memory clobber for the write functions to additionally prevent caching/reordering memory accesses across CRn writes.
- Using a dummy input operand with an arbitrary constant address for the read functions, instead of a global variable. This will prevent reads from being reordered across writes, while allowing memory loads to be cached/reordered across CRn reads, which should be safe.

Signed-off-by: Arvind Sankar <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Miguel Ojeda <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Tested-by: Sedat Dilek <[email protected]>
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
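A sketch of the resulting accessors, mirroring the two bullet points above; treat it as an illustration of the approach rather than the exact header contents:

  /* A dummy "m" input at an arbitrary constant address orders CRn reads
   * against writes without requiring a real variable to be defined. */
  #define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

  static inline unsigned long native_read_cr0(void)
  {
          unsigned long val;
          asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
          return val;
  }

  static inline void native_write_cr0(unsigned long val)
  {
          /* "memory" clobber: CRn writes change the semantics of memory
           * accesses, so nothing may be cached or moved across this point. */
          asm volatile("mov %0,%%cr0" : : "r" (val) : "memory");
  }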
2020-09-30  x86/mce: Use idtentry_nmi_enter/exit()  (Thomas Gleixner, 1 file, -2/+4)
The recent fix for NMI vs. IRQ state tracking missed to apply the cure to the MCE handler. Fixes: ba1f2b2eaa2a ("x86/entry: Fix NMI vs IRQ state tracking") Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-30  x86/mce: Drop AMD-specific "DEFERRED" case from Intel severity rule list  (Tony Luck, 1 file, -4/+0)
Way back in v3.19 Intel and AMD shared the same machine check severity grading code, so it made sense to add a case for AMD DEFERRED errors in commit

  e3480271f592 ("x86, mce, severity: Extend the the mce_severity mechanism to handle UCNA/DEFERRED error")

But later, in v4.2, AMD switched to a separate grading function in commit

  bf80bbd7dcf5 ("x86/mce: Add an AMD severities-grading function")

Belatedly drop the DEFERRED case from the Intel rule list.

Signed-off-by: Tony Luck <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-30  x86/mce: Add Skylake quirk for patrol scrub reported errors  (Borislav Petkov, 1 file, -2/+26)
The patrol scrubber in Skylake and Cascade Lake systems can be configured to report uncorrected errors using a special signature in the machine check bank and to signal using CMCI instead of machine check. Update the severity calculation mechanism to allow specifying the model, minimum stepping and range of machine check bank numbers. Add a new rule to detect the special signature (on model 0x55, stepping >=4 in any of the memory controller banks). [ bp: Rewrite it. aegl: Productize it. ] Suggested-by: Youquan Song <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Co-developed-by: Tony Luck <[email protected]> Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  bpf: x64: Do not emit sub/add 0, %rsp when !stack_depth  (Maciej Fijalkowski, 1 file, -10/+23)
There is no particular reason for keeping the "sub 0, %rsp" insn within the BPF's x64 JIT prologue. When tail call code was skipping the whole prologue section these 7 bytes that represent the rsp subtraction could not be simply discarded as the jump target address would be broken. An option to address that would be to substitute it with nop7. Right now tail call is skipping only first 11 bytes of target program's prologue and "sub X, %rsp" is the first insn that is processed, so if stack depth is zero then this insn could be omitted without the need for nop7 swap. Therefore, do not emit the "sub 0, %rsp" in prologue when program is not making use of R10 register. Also, make the emission of "add X, %rsp" conditional in tail call code logic and take into account the presence of mentioned insn when calculating the jump offsets. Signed-off-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
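Conceptually, the prologue and the matching tail-call/epilogue adjustment become conditional on stack_depth, along these lines (a simplified sketch of the JIT emission, not the literal code; EMIT3_off32() and stack_depth follow the existing JIT naming):

  /* Prologue: only reserve stack when the program actually uses it. */
  if (stack_depth)
          EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8)); /* sub rsp, x */

  /* Tail-call / epilogue path: emit the matching adjustment under the
   * same condition; the precomputed jump offsets take into account
   * whether these 7 bytes are present at all. */
  if (stack_depth)
          EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8)); /* add rsp, x */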
2020-09-29  bpf, x64: Drop "pop %rcx" instruction on BPF JIT epilogue  (Maciej Fijalkowski, 1 file, -2/+0)
Back when all of the callee-saved registers were always pushed to stack in the x64 JIT prologue, the tail call counter was placed at the bottom of the BPF program's stack frame, which had the following layout:

  +-------------+
  | ret addr    |
  +-------------+
  | rbp         | <- rbp
  +-------------+
  |             |
  | free space  |
  | from:       |
  | sub $x,%rsp |
  |             |
  +-------------+
  | rbx         |
  +-------------+
  | r13         |
  +-------------+
  | r14         |
  +-------------+
  | r15         |
  +-------------+
  | tail call   | <- rsp
  | counter     |
  +-------------+

In order to restore the callee saved registers, the epilogue needed to explicitly toss away the tail call counter via a "pop %rbx" insn, so that %rsp would be back at the place where %r15 was stored.

Currently, the tail call counter is placed on stack *before* the callee saved registers (parentheses on rbx through r15 mean that they are now pushed to stack only if they are used):

  +-------------+
  | ret addr    |
  +-------------+
  | rbp         | <- rbp
  +-------------+
  |             |
  | free space  |
  | from:       |
  | sub $x,%rsp |
  |             |
  +-------------+
  | tail call   |
  | counter     |
  +-------------+
  (  rbx        )
  +-------------+
  (  r13        )
  +-------------+
  (  r14        )
  +-------------+
  (  r15        ) <- rsp
  +-------------+

For the record, the epilogue insns consist of (assuming all of the callee saved registers are used by the program):

  pop %r15
  pop %r14
  pop %r13
  pop %rbx
  pop %rcx
  leaveq
  retq

"pop %rbx" for getting rid of the tail call counter was not an option anymore, as it would overwrite the restored value of the %rbx register, so it was changed to use the %rcx register.

Since the epilogue can start popping the callee saved registers right away without any additional work, the "pop %rcx" can be dropped altogether, as the "leave" insn will simply move %rbp to %rsp. IOW, the tail call counter does not need explicit handling.

Having in mind the explanation above and the actual reason for that, let's piggy back on the "leave" insn for discarding the tail call counter from stack and remove the "pop %rcx" from the epilogue.

Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2020-09-29  PCI/PM: Rename pci_dev.d3_delay to d3hot_delay  (Krzysztof Wilczyński, 2 files, -2/+2)
PCI devices support two variants of the D3 power state: D3hot (main power present) and D3cold (main power removed). Previously struct pci_dev contained:

  unsigned int d3_delay;      /* D3->D0 transition time in ms */
  unsigned int d3cold_delay;  /* D3cold->D0 transition time in ms */

"d3_delay" refers specifically to the D3hot state. Rename it to "d3hot_delay" to avoid ambiguity and align with the ACPI "_DSM for Specifying Device Readiness Durations" in the PCI Firmware spec r3.2, sec 4.6.9. There is no change to the functionality.

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Krzysztof Wilczyński <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
2020-09-29  KVM: VMX: vmx_uret_msrs_list[] can be static  (kernel test robot, 1 file, -1/+1)
Fixes: 14a61b642de9 ("KVM: VMX: Rename "vmx_msr_index" to "vmx_uret_msrs_list"") Signed-off-by: kernel test robot <[email protected]> Message-Id: <20200928153714.GA6285@a3a878002045> Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-29  perf/x86/intel: Fix Ice Lake event constraint table  (Kan Liang, 1 file, -1/+1)
An error occurs when sampling the non-PEBS INST_RETIRED.PREC_DIST (0x01c0) event:

  perf record -e cpu/event=0xc0,umask=0x01/ -- sleep 1
  Error:
  The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (cpu/event=0xc0,umask=0x01/).
  /bin/dmesg | grep -i perf may provide additional information.

The idxmsk64 of the event is set to 0, so the event can never be successfully scheduled. The event should be limited to the fixed counter 0.

Fixes: 6017608936c1 ("perf/x86/intel: Add Icelake support")
Reported-by: Yi, Ammy <[email protected]>
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel/uncore: Fix the scale of the IMC free-running events  (Kan Liang, 1 file, -6/+6)
The "MiB" results of the IMC free-running bandwidth events, uncore_imc_free_running/read/ and uncore_imc_free_running/write/, are 16 times too small. The "MiB" value equals the raw IMC free-running bandwidth counter value times a "scale" which is inaccurate.

The IMC free-running bandwidth events should be incremented per 64B cache line, not per DW (4 bytes). The "scale" should be 6.103515625e-5. Fix the "scale" for both Snow Ridge and Ice Lake.

Fixes: 2b3b76b5ec67 ("perf/x86/intel/uncore: Add Ice Lake server uncore support")
Fixes: ee49532b38dd ("perf/x86/intel/uncore: Add IMC uncore support for Snow Ridge")
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel/uncore: Fix for iio mapping on Skylake Server  (Alexander Antonov, 1 file, -2/+5)
The recently introduced attributes /sys/devices/uncore_iio_<pmu_idx>/die* are initialized by skx_iio_set_mapping(). However, on multiple-segment platforms for example, skx_iio_get_topology() returns -EPERM before the list of attributes in skx_iio_mapping_group has been initialized. As a result the list is NULL, the warning "sysfs: (bin_)attrs not set by subsystem for group: uncore_iio_*/" appears, and the uncore_iio pmus are not available in sysfs.

Clear the IIO attr_update to properly handle the cases when topology information cannot be retrieved.

Fixes: bb42b3d39781 ("perf/x86/intel/uncore: Expose an Uncore unit to IIO PMON mapping")
Reported-by: Kyle Meyer <[email protected]>
Suggested-by: Kan Liang <[email protected]>
Signed-off-by: Alexander Antonov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Alexei Budankov <[email protected]>
Reviewed-by: Kan Liang <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/msr: Add Jasper Lake support  (Kan Liang, 1 file, -0/+1)
The Jasper Lake processor is also a Tremont microarchitecture. From the perspective of perf MSR, there is nothing changed compared with Elkhart Lake. Share the code path with Elkhart Lake. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel: Add Jasper Lake support  (Kan Liang, 1 file, -0/+1)
The Jasper Lake processor is also a Tremont microarchitecture. From the perspective of Intel PMU, there is nothing changed compared with Elkhart Lake. Share the perf code with Elkhart Lake. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel/uncore: Reduce the number of CBOX counters  (Kan Liang, 1 file, -1/+1)
An oops is triggered by fuzz testing:

  [ 327.853081] unchecked MSR access error: RDMSR from 0x70c at rIP: 0xffffffffc082c820 (uncore_msr_read_counter+0x10/0x50 [intel_uncore])
  [ 327.853083] Call Trace:
  [ 327.853085]  <IRQ>
  [ 327.853089]  uncore_pmu_event_start+0x85/0x170 [intel_uncore]
  [ 327.853093]  uncore_pmu_event_add+0x1a4/0x410 [intel_uncore]
  [ 327.853097]  ? event_sched_in.isra.118+0xca/0x240

There are 2 GP counters for each CBOX, but the current code claims 4 counters. Accessing the invalid registers triggers the oops.

Fixes: 6e394376ee89 ("perf/x86/intel/uncore: Add Intel Icelake uncore support")
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel/uncore: Update Ice Lake uncore units  (Kan Liang, 1 file, -4/+25)
There are some updates for the Ice Lake model-specific uncore performance monitors. (The update can be found in the 10th generation Intel Core processor families specification update, Revision 004, ICL068.)

1) Counter 0 of the ARB uncore unit is not available for software use.
2) The global 'enable bit' (bit 29) and 'freeze bit' (bit 31) of MSR_UNC_PERF_GLOBAL_CTRL cannot be used to control counter behavior. The local enable in the event select MSR needs to be used instead. Accessing the modified bit/registers will be ignored by HW. Users may observe inaccurate results with the current code.

The changes to MSR_UNC_PERF_GLOBAL_CTRL imply that groups cannot be read atomically anymore. Although the error of the result for a group becomes a bit bigger, it is still far lower than not using a group. The group support is therefore kept; only the *_box() related implementation is removed.

Since counter 0 of the ARB uncore unit is not available, update the MSR address for the ARB uncore unit.

There is no change for the IMC uncore unit, which only includes free-running counters.

Fixes: 6e394376ee89 ("perf/x86/intel/uncore: Add Intel Icelake uncore support")
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-29  perf/x86/intel/uncore: Split the Ice Lake and Tiger Lake MSR uncore support  (Kan Liang, 3 files, -2/+19)
Previously, the MSR uncore for the Ice Lake and Tiger Lake are identical. The code path is shared. However, with recent update, the global MSR_UNC_PERF_GLOBAL_CTRL register and ARB uncore unit are changed for the Ice Lake. Split the Ice Lake and Tiger Lake MSR uncore support. The changes only impact the MSR ops() and the ARB uncore unit. Other codes can still be shared between the Ice Lake and the Tiger Lake. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-28  x86: Use tracepoint_enabled() for msr tracepoints instead of open coding it  (Steven Rostedt (VMware), 1 file, -11/+9)
7f47d8cc039f ("x86, tracing, perf: Add trace point for MSR accesses") added tracing of msr read and write, but because of complexity in having tracepoints in headers, and even more so for a core header like msr.h, not to mention the bloat a tracepoint adds to inline functions, a helper function is needed to be called from the header. Use the new tracepoint_enabled() macro in tracepoint-defs.h to test if the tracepoint is active before calling the helper function, instead of open coding the same logic, which requires knowing the internals of a tracepoint. Cc: Andi Kleen <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
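A sketch of the pattern this enables in the msr.h accessors (helper and tracepoint names follow the existing msr trace plumbing; shown as an illustration):

  #include <linux/tracepoint-defs.h>

  /* Test the tracepoint's static key without pulling the full tracepoint
   * machinery into the header; only call the out-of-line helper when the
   * tracepoint is actually enabled. */
  static __always_inline unsigned long long native_read_msr(unsigned int msr)
  {
          unsigned long long val = __rdmsr(msr);

          if (tracepoint_enabled(read_msr))
                  do_trace_read_msr(msr, val, 0);
          return val;
  }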
2020-09-28  KVM: x86: do not attempt TSC synchronization on guest writes  (Paolo Bonzini, 1 file, -20/+10)
KVM special-cases writes to MSR_IA32_TSC so that all CPUs have the same base for the TSC. This logic is complicated, and we do not want it to have any effect once the VM is started. In particular, if any guest started to synchronize its TSCs with writes to MSR_IA32_TSC rather than MSR_IA32_TSC_ADJUST, the additional effect of kvm_write_tsc code would be uncharted territory. Therefore, this patch makes writes to MSR_IA32_TSC behave essentially the same as writes to MSR_IA32_TSC_ADJUST when they come from the guest. A new selftest (which passes both before and after the patch) checks the current semantics of writes to MSR_IA32_TSC and MSR_IA32_TSC_ADJUST originating from both the host and the guest. Upcoming work to remove the special side effects of host-initiated writes to MSR_IA32_TSC and MSR_IA32_TSC_ADJUST will be able to build onto this test, adjusting the host side to use the new APIs and achieve the same effect. Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: nSVM: delay MSR permission processing to first nested VM run  (Paolo Bonzini, 1 file, -3/+18)
Allow userspace to set up the memory map after KVM_SET_NESTED_STATE; to do so, move the call to nested_svm_vmrun_msrpm inside the KVM_REQ_GET_NESTED_STATE_PAGES handler (which is currently not used by nSVM). This is similar to what VMX does already. Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: rename KVM_REQ_GET_VMCS12_PAGES  (Paolo Bonzini, 3 files, -8/+8)
We are going to use it for SVM too, so use a more generic name. Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: Introduce MSR filtering  (Alexander Graf, 3 files, -1/+176)
It's not desirable to have all MSRs always handled by KVM kernel space. Some MSRs would be useful to handle in user space to either emulate behavior (like uCode updates) or differentiate whether they are valid based on the CPU model.

To allow user space to specify which MSRs it wants to see handled by KVM, this patch introduces a new ioctl to push filter rules with bitmaps into KVM. Based on these bitmaps, KVM can then decide whether to reject MSR access. With the addition of KVM_CAP_X86_USER_SPACE_MSR it can also deflect the denied MSR events to user space to operate on.

If no filter is populated, MSR handling stays identical to before.

Signed-off-by: Alexander Graf <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
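A userspace sketch of driving such a filter (structure, flag and ioctl names follow the x86 KVM uAPI this series adds; treat the details as illustrative): deny everything by default, then allow one range of MSRs via a bitmap.

  __u8 allow_bitmap[0x2000 / 8];
  memset(allow_bitmap, 0xff, sizeof(allow_bitmap));     /* 1 bit = allowed */

  struct kvm_msr_filter filter = {
          .flags = KVM_MSR_FILTER_DEFAULT_DENY,         /* reject anything unlisted */
          .ranges[0] = {
                  .flags  = KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE,
                  .base   = 0x0,                         /* first MSR index covered */
                  .nmsrs  = 0x2000,                      /* one bit per MSR in the range */
                  .bitmap = allow_bitmap,
          },
  };
  ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);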
2020-09-28  KVM: x86: VMX: Prevent MSR passthrough when MSR access is denied  (Alexander Graf, 2 files, -52/+181)
We will introduce the concept of MSRs that may not be handled in kernel space soon. Some MSRs are directly passed through to the guest, effectively making them handled by KVM from user space's point of view. This patch introduces all logic required to ensure that MSRs that user space wants trapped are not marked as direct access for guests. Signed-off-by: Alexander Graf <[email protected]> Message-Id: <[email protected]> [Replace "_idx" with "_slot". - Paolo] Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: SVM: Prevent MSR passthrough when MSR access is denied  (Alexander Graf, 2 files, -8/+76)
We will introduce the concept of MSRs that may not be handled in kernel space soon. Some MSRs are directly passed through to the guest, effectively making them handled by KVM from user space's point of view. This patch introduces all logic required to ensure that MSRs that user space wants trapped are not marked as direct access for guests. Signed-off-by: Alexander Graf <[email protected]> Message-Id: <[email protected]> [Make terminology a bit more similar to VMX. - Paolo] Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: Prepare MSR bitmaps for userspace tracked MSRs  (Aaron Lewis, 4 files, -70/+77)
Prepare vmx and svm for a subsequent change that ensures the MSR permission bitmap is set to allow an MSR that userspace is tracking to force a vmx_vmexit in the guest. Signed-off-by: Aaron Lewis <[email protected]> Reviewed-by: Oliver Upton <[email protected]> [agraf: rebase, adapt SVM scheme to nested changes that came in between] Signed-off-by: Alexander Graf <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: Add infrastructure for MSR filtering  (Alexander Graf, 4 files, -0/+10)
In the following commits we will add pieces of MSR filtering. To ensure that code compiles even with the feature half-merged, let's add a few stubs and struct definitions before the real patches start. Signed-off-by: Alexander Graf <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2020-09-28  KVM: x86: Allow deflecting unknown MSR accesses to user space  (Alexander Graf, 3 files, -6/+135)
MSRs are weird. Some of them are normal control registers, such as EFER. Some however are registers that really are model specific, not very interesting to virtualization workloads, and not performance critical. Others again are really just windows into package configuration.

Out of these MSRs, only the first category is necessary to implement in kernel space. Rarely accessed MSRs, MSRs that should be fine-tuned against certain CPU models, and MSRs that contain information on the package level are much better suited for user space to process. However, over time we have accumulated a lot of MSRs that are not in the first category, but are still handled by in-kernel KVM code.

This patch adds a generic interface to handle WRMSR and RDMSR from user space. With this, any future MSR that is part of the latter categories can be handled in user space.

Furthermore, it allows us to replace the existing "ignore_msrs" logic with something that applies per-VM rather than on the full system. That way you can run productive VMs in parallel to experimental ones where you don't care about proper MSR handling.

Signed-off-by: Alexander Graf <[email protected]>
Reviewed-by: Jim Mattson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
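A userspace sketch of servicing a deflected access (capability, exit and field names follow the uAPI this series introduces; emulate_rdmsr()/emulate_wrmsr() are hypothetical VMM helpers): enable deflection of unknown MSRs on the VM, then handle the new exit reasons in the run loop.

  struct kvm_enable_cap cap = {
          .cap     = KVM_CAP_X86_USER_SPACE_MSR,
          .args[0] = KVM_MSR_EXIT_REASON_UNKNOWN,    /* deflect unhandled MSRs */
  };
  ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

  switch (run->exit_reason) {
  case KVM_EXIT_X86_RDMSR:
          run->msr.data  = emulate_rdmsr(run->msr.index);
          run->msr.error = 0;        /* 0 = handled, no #GP injected */
          break;
  case KVM_EXIT_X86_WRMSR:
          run->msr.error = emulate_wrmsr(run->msr.index, run->msr.data);
          break;
  }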
2020-09-28  KVM: x86: Return -ENOENT on unimplemented MSRs  (Alexander Graf, 1 file, -1/+1)
When we find an MSR that we can not handle, bubble up that error code as MSR error return code. Follow up patches will use that to expose the fact that an MSR is not handled by KVM to user space. Suggested-by: Aaron Lewis <[email protected]> Signed-off-by: Alexander Graf <[email protected]> Message-Id: <[email protected]> Reviewed-by: Jim Mattson <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>