path: root/arch/x86/kernel/cpu
2023-01-23  x86/resctrl: Add interface to read mbm_total_bytes_config  (Babu Moger; 3 files, -1/+129)

The event configuration can be viewed by reading the configuration file
/sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config. The event configuration
settings are domain specific and affect all the CPUs in the domain.

The following event types are supported:

  ==== ===========================================================
  Bits Description
  ==== ===========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ===========================================================

By default, mbm_total_bytes_config is set to 0x7f to count all the event
types. For example:

  $ cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
  0=0x7f;1=0x7f;2=0x7f;3=0x7f

In this case, the event mbm_total_bytes is configured with 0x7f on domains
0 to 3.

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
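For illustration, a minimal userspace sketch that reads this file and decodes
each domain's configuration against the bit table above; the file path and
"<domain>=<hex>;" format come from the commit text, everything else is
illustrative:

  /* decode mbm_total_bytes_config: "<domain>=<hex>;..." pairs */
  #include <stdio.h>

  static const char *evt[] = {
          "Reads to memory in the local NUMA domain",          /* bit 0 */
          "Reads to memory in the non-local NUMA domain",      /* bit 1 */
          "Non-temporal writes to local NUMA domain",          /* bit 2 */
          "Non-temporal writes to non-local NUMA domain",      /* bit 3 */
          "Reads to slow memory in the local NUMA domain",     /* bit 4 */
          "Reads to slow memory in the non-local NUMA domain", /* bit 5 */
          "Dirty Victims to all types of memory",              /* bit 6 */
  };

  int main(void)
  {
          FILE *f = fopen("/sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config", "r");
          unsigned int dom, cfg;

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          while (fscanf(f, "%u=%x;", &dom, &cfg) == 2) {
                  printf("domain %u: 0x%02x\n", dom, cfg);
                  for (int bit = 0; bit < 7; bit++)
                          if (cfg & (1u << bit))
                                  printf("  bit %d: %s\n", bit, evt[bit]);
          }
          fclose(f);
          return 0;
  }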
2023-01-23  x86/resctrl: Support monitor configuration  (Babu Moger; 3 files, -1/+13)

Add a new field in struct mon_evt to support Bandwidth Monitoring Event
Configuration (BMEC) and also update the "mon_features" display.

The resctrl file "mon_features" will display the supported events and the
files that can be used to configure those events if monitor configuration
is supported.

Before the change:

  $ cat /sys/fs/resctrl/info/L3_MON/mon_features
  llc_occupancy
  mbm_total_bytes
  mbm_local_bytes

After the change, when BMEC is supported:

  $ cat /sys/fs/resctrl/info/L3_MON/mon_features
  llc_occupancy
  mbm_total_bytes
  mbm_total_bytes_config
  mbm_local_bytes
  mbm_local_bytes_config

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-23  x86/resctrl: Add __init attribute to rdt_get_mon_l3_config()  (Babu Moger; 3 files, -2/+3)

In an upcoming change, rdt_get_mon_l3_config() needs to call rdt_cpu_has()
to query the monitor related features. It cannot be called right now
because rdt_cpu_has() has the __init attribute but rdt_get_mon_l3_config()
doesn't.

Add the __init attribute to rdt_get_mon_l3_config(), which is only called
by get_rdt_mon_resources() which already has the __init attribute. Also
make rdt_cpu_has() available to rdt_get_mon_l3_config() via the internal
header file.

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
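A sketch of the constraint being satisfied here (the struct member and the
helper's details are simplified, not the exact upstream code): a function in
.init.text is freed after boot, so it may only be called from other __init
code.

  #include <linux/init.h>

  static bool __init rdt_cpu_has(int flag);       /* already __init */

  /* Safe to mark __init: the only caller, get_rdt_mon_resources(),
   * is itself __init, so both live in .init.text and are discarded
   * together once boot finishes. */
  int __init rdt_get_mon_l3_config(struct rdt_resource *r)
  {
          if (rdt_cpu_has(X86_FEATURE_BMEC))      /* now a legal call */
                  r->mon_capable = true;
          return 0;
  }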
2023-01-23  x86/resctrl: Detect and configure Slow Memory Bandwidth Allocation  (Babu Moger; 3 files, -6/+40)

The QoS slow memory configuration details are available via
CPUID_Fn80000020_EDX_x02. Detect the available details and initialize the
rest to defaults.

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
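A hedged detection sketch: cpuid_count() with leaf 0x80000020, subleaf 2,
reads the SMBA details per the commit text. The field extraction below is an
assumption for illustration; the authoritative layout is in the AMD PQoS
extensions specification.

  #include <linux/printk.h>
  #include <asm/processor.h>

  static void __init smba_probe(void)
  {
          unsigned int eax, ebx, ecx, edx;

          cpuid_count(0x80000020, 2, &eax, &ebx, &ecx, &edx);
          /* Assumed layout, mirroring the MBA subleaf: EDX carries
           * the maximum COS value, EAX the bandwidth grain info. */
          pr_info("SMBA: cos_max %u, bw info 0x%x\n", edx & 0xffff, eax);
  }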
2023-01-23  x86/resctrl: Include new features in command line options  (Babu Moger; 1 file, -0/+4)

Add the command line options to enable or disable the new resctrl features:

  smba: Slow Memory Bandwidth Allocation
  bmec: Bandwidth Monitoring Event Configuration

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-23  x86/cpufeatures: Add Bandwidth Monitoring Event Configuration feature flag  (Babu Moger; 2 files, -0/+3)

Newer AMD processors support the new feature Bandwidth Monitoring Event
Configuration (BMEC). The feature support is identified via CPUID
Fn8000_0020_EBX_x0[3]: EVT_CFG - Bandwidth Monitoring Event Configuration
(BMEC).

The bandwidth monitoring events mbm_total_bytes and mbm_local_bytes are
set to count all the total and local reads/writes, respectively. With the
introduction of slow memory, the two counters are not enough to count all
the different types of memory events. Therefore, BMEC provides the option
to configure mbm_total_bytes and mbm_local_bytes to count the specific
type of events.

Each BMEC event has a configuration MSR which contains one field for each
bandwidth type that can be used to configure the bandwidth event to track
any combination of supported bandwidth types. The event will count
requests from every bandwidth type bit that is set in the corresponding
configuration register.

The following event types are supported:

  ==== ========================================================
  Bits Description
  ==== ========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ========================================================

By default, the mbm_total_bytes configuration is set to 0x7F to count all
the event types and the mbm_local_bytes configuration is set to 0x15 to
count all the local memory events.

The feature description is available in the specification, "AMD64
Technology Platform Quality of Service Extensions, Revision: 1.03
Publication", at https://bugzilla.kernel.org/attachment.cgi?id=301365

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
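To make the "configuration MSR per event" concrete, a hedged sketch of
programming one event to count only local reads and local non-temporal
writes (bits 0 and 2 of the table above). The MSR base address and the
per-event index are assumptions for illustration; the kernel keeps the
real definitions in the resctrl internals.

  #include <linux/bits.h>
  #include <asm/msr.h>

  #define EVT_CFG_MSR_BASE        0xc0000400      /* assumed base */
  #define MBM_TOTAL_EVT_IDX       0               /* assumed index */

  static void bmec_config_total_event(void)
  {
          u32 config = BIT(0) | BIT(2);   /* local reads + local NT writes */

          /* Only the low 7 bits are defined; the rest stay zero. */
          wrmsr(EVT_CFG_MSR_BASE + MBM_TOTAL_EVT_IDX, config, 0);
  }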
2023-01-23  x86/resctrl: Add a new resource type RDT_RESOURCE_SMBA  (Babu Moger; 2 files, -0/+13)

Add a new resource type RDT_RESOURCE_SMBA to handle the QoS enforcement
policies on the external slow memory. This is mostly initialization of
the essentials.

Setting fflags to RFTYPE_RES_MB configures the SMBA resource to have the
same resctrl files as the existing MBA resource. The SMBA resource has
identical properties to the existing MBA resource. These properties will
be enumerated in an upcoming change and exposed via resctrl because of
this flag.

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
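A trimmed sketch of what "initialization of the essentials" looks like;
the initializer follows the existing MBA entry's shape, so treat the exact
member names as assumptions rather than the upstream code:

  enum resctrl_res_level {
          RDT_RESOURCE_L3,
          RDT_RESOURCE_L2,
          RDT_RESOURCE_MBA,
          RDT_RESOURCE_SMBA,      /* new: slow memory bandwidth allocation */
          RDT_NUM_RESOURCES,
  };

  /* In the rdt_resources_all[] array: */
  [RDT_RESOURCE_SMBA] = {
          .r_resctrl = {
                  .rid    = RDT_RESOURCE_SMBA,
                  .name   = "SMBA",
                  .fflags = RFTYPE_RES_MB,  /* reuse the MBA resctrl files */
          },
  },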
2023-01-23  x86/cpufeatures: Add Slow Memory Bandwidth Allocation feature flag  (Babu Moger; 1 file, -0/+1)

Add the new AMD feature X86_FEATURE_SMBA. With it, the QOS enforcement
policies can be applied to external slow memory connected to the host. QOS
enforcement is accomplished by assigning a Class Of Service (COS) to a
processor and specifying allocations or limits for that COS for each
resource to be allocated.

This feature is identified by the CPUID function 0x8000_0020_EBX_x0[2]:
L3SBE - L3 external slow memory bandwidth enforcement.

CXL.memory is the only supported "slow" memory device. With SMBA, the
hardware enables bandwidth allocation on the slow memory devices. If there
are multiple slow memory devices in the system, then the throttling logic
groups all the slow sources together and applies the limit on them as a
whole.

The presence of the SMBA feature (with CXL.memory) is independent of
whether a slow memory device is actually present in the system. If there
is no slow memory in the system, then setting a SMBA limit will have no
impact on the performance of the system.

The presence of CXL memory can be identified by the numactl command:

  $ numactl -H
  available: 2 nodes (0-1)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
  node 0 size: 63678 MB
  node 0 free: 59542 MB
  node 1 cpus:
  node 1 size: 16122 MB
  node 1 free: 15627 MB
  node distances:
  node   0   1
    0:  10  50
    1:  50  10

The CPU list for CXL memory will be empty, and the CPU-to-CXL node
distance is greater than the CPU-to-CPU distances. Node 1 has the CXL
memory in this case. CXL memory can also be identified using the ACPI SRAT
table and memory maps.

The feature description is available in the specification, "AMD64
Technology Platform Quality of Service Extensions, Publication # 56375,
Revision: 1.03, Issue Date: February 2022" at
https://bugzilla.kernel.org/attachment.cgi?id=301365

See also
https://www.amd.com/en/support/tech-docs/amd64-technology-platform-quality-service-extensions

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-23  x86/resctrl: Replace smp_call_function_many() with on_each_cpu_mask()  (Babu Moger; 2 files, -29/+11)

on_each_cpu_mask() runs the function on each CPU specified by cpumask,
which may include the local processor. Replace smp_call_function_many()
with on_each_cpu_mask() to simplify the code.

Signed-off-by: Babu Moger <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
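A sketch of the simplification (function names are illustrative):
on_each_cpu_mask() also covers the calling CPU when it is in the mask, so
the explicit local-call fallback disappears.

  #include <linux/smp.h>

  static void rdt_ctrl_update(void *arg);

  static void update_domains(const struct cpumask *cpu_mask, void *arg)
  {
          /* Before:
           *      smp_call_function_many(cpu_mask, rdt_ctrl_update, arg, 1);
           *      if (cpumask_test_cpu(smp_processor_id(), cpu_mask))
           *              rdt_ctrl_update(arg);
           * After: */
          on_each_cpu_mask(cpu_mask, rdt_ctrl_update, arg, 1);
  }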
2023-01-22  Merge tag 'sched_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -0/+9)

Pull scheduler fixes from Borislav Petkov:

 - Make sure the scheduler doesn't use stale frequency scaling values
   when the latter get disabled due to a value error

 - Fix a NULL pointer access on UP configs

 - Use the proper locking when updating CPU capacity

* tag 'sched_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/aperfmperf: Erase stale arch_freq_scale values when disabling frequency invariance readings
  sched/core: Fix NULL pointer access fault in sched_setaffinity() with non-SMP configs
  sched/fair: Fixes for capacity inversion detection
  sched/uclamp: Fix a uninitialized variable warnings
2023-01-21  x86/microcode/intel: Print old and new revision during early boot  (Ashok Raj; 1 file, -12/+16)

Make the early loading message match the late loading message and print
both old and new revisions. This is helpful to know what the BIOS-loaded
revision was before an early update.

Cache the early BIOS revision before the microcode update and have
print_ucode_info() print both the old and new revision in the same format
as microcode_reload_late().

[ bp: Massage, remove useless comment. ]

Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-21  x86/microcode/intel: Pass the microcode revision to print_ucode_info() directly  (Ashok Raj; 1 file, -21/+9)

print_ucode_info() takes a struct ucode_cpu_info pointer as parameter. Its
sole purpose is to print the microcode revision.

The only available ucode_cpu_info always describes the currently loaded
microcode revision. After a microcode update is successful, this is the
new revision, or on failure it is the original revision.

In preparation for future changes, replace the struct ucode_cpu_info
pointer parameter with a plain integer which contains the revision number
and adjust the call sites accordingly.

No functional change.

[ bp:
  - Fix + cleanup commit message.
  - Revert arbitrary, unrelated change. ]

Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
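A before/after sketch of the interface change (other parameters and the
surrounding code are elided, so this is illustrative rather than the exact
upstream diff):

  /* Before: a struct pointer just to reach one field. */
  static void print_ucode_info(struct ucode_cpu_info *uci)
  {
          pr_info("microcode updated early to revision 0x%x\n",
                  uci->cpu_sig.rev);
  }

  /* After: callers pass the revision they already hold. */
  static void print_ucode_info(u32 new_rev)
  {
          pr_info("microcode updated early to revision 0x%x\n", new_rev);
  }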
2023-01-21  x86/microcode: Adjust late loading result reporting message  (Ashok Raj; 1 file, -4/+7)

During late microcode loading, the "Reload completed" message is issued
unconditionally, regardless of success or failure.

Adjust the message to report the result of the update.

[ bp: Massage. ]

Fixes: 9bd681251b7c ("x86/microcode: Announce reload operation's completion")
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Link: https://lore.kernel.org/lkml/874judpqqd.ffs@tglx/
2023-01-21  x86/microcode: Check CPU capabilities after late microcode update correctly  (Ashok Raj; 2 files, -13/+29)

The kernel caches each CPU's feature bits at boot in an x86_capability[]
structure. However, the capabilities in the BSP's copy can be turned off
as a result of certain command line parameters or configuration
restrictions, for example the SGX bit. This can cause a mismatch when
comparing the values before and after the microcode update.

Another example is X86_FEATURE_SRBDS_CTRL which gets added only after a
microcode update:

  --- cpuid.before 2023-01-21 14:54:15.652000747 +0100
  +++ cpuid.after  2023-01-21 14:54:26.632001024 +0100
  @@ -10,7 +10,7 @@ CPU:
     0x00000004 0x04: eax=0x00000000 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
     0x00000005 0x00: eax=0x00000040 ebx=0x00000040 ecx=0x00000003 edx=0x11142120
     0x00000006 0x00: eax=0x000027f7 ebx=0x00000002 ecx=0x00000001 edx=0x00000000
  -  0x00000007 0x00: eax=0x00000000 ebx=0x029c6fbf ecx=0x40000000 edx=0xbc002400
  +  0x00000007 0x00: eax=0x00000000 ebx=0x029c6fbf ecx=0x40000000 edx=0xbc002e00
                                                                          ^^^

and which proves for a gazillionth time that late loading is a bad bad
idea.

microcode_check() is called after an update to report any previously
cached CPUID bits which might have changed due to the update.

Therefore, store the cached CPU caps before the update and compare them
with the CPU caps after the microcode update has succeeded. Thus, the
comparison is done between the CPUID *hardware* bits before and after the
upgrade instead of using the cached, possibly runtime modified values in
the BSP's boot_cpu_data copy. As a result, false warnings about CPUID bit
changes are avoided.

[ bp:
  - Massage.
  - Add SRBDS_CTRL example.
  - Add kernel-doc.
  - Incorporate forgotten review feedback from dhansen. ]

Fixes: 1008c52c09dc ("x86/CPU: Add a microcode loader callback")
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
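A sketch of the before/after capability comparison described above; the
helper names mirror the description but may not match upstream exactly.

  #include <asm/processor.h>

  /* Re-read the capability bits straight from CPUID, bypassing the
   * runtime-modified boot_cpu_data copy. */
  static void store_cpu_caps(struct cpuinfo_x86 *info)
  {
          info->cpuid_level = cpuid_eax(0);
          get_cpu_cap(info);
  }

  static void check_caps_after_update(void)
  {
          struct cpuinfo_x86 prev = { 0 }, curr = { 0 };

          store_cpu_caps(&prev);
          /* ... late microcode update happens here ... */
          store_cpu_caps(&curr);

          if (memcmp(prev.x86_capability, curr.x86_capability,
                     sizeof(prev.x86_capability)))
                  pr_warn("CPU capability bits changed after microcode update\n");
  }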
2023-01-20  x86/microcode: Add a parameter to microcode_check() to store CPU capabilities  (Ashok Raj; 2 files, -9/+15)

Add a parameter to store CPU capabilities before performing a microcode
update so that CPU capabilities can be compared before and after the
update.

[ bp: Massage. ]

Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-18  x86/microcode: Use the DEVICE_ATTR_RO() macro  (Guangju Wang[baidu]; 1 file, -3/+3)

Use the DEVICE_ATTR_RO() helper instead of open-coded DEVICE_ATTR(), which
makes the code a bit shorter and easier to read.

No change in functionality.

Signed-off-by: Guangju Wang[baidu] <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
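The helper relies on the <name>_show naming convention; a minimal
before/after sketch with an illustrative attribute name:

  #include <linux/device.h>
  #include <linux/sysfs.h>

  static ssize_t version_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
  {
          return sysfs_emit(buf, "0x%x\n", 0x42);
  }

  /* Before:
   *      static DEVICE_ATTR(version, 0444, version_show, NULL);
   * After: expands to the same attribute, with 0444 permissions and
   * version_show() as the show routine. */
  static DEVICE_ATTR_RO(version);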
2023-01-17  Drivers: hv: Setup synic registers in case of nested root partition  (Jinank Jain; 1 file, -0/+65)

Child partitions are free to allocate the SynIC message and event pages,
but the root partition must use the pages allocated by the Microsoft
Hypervisor (MSHV). The base addresses for these pages can be found using
synthetic MSRs exposed by MSHV. There is a slight difference in those MSRs
for nested vs non-nested root partitions.

Signed-off-by: Jinank Jain <[email protected]>
Reviewed-by: Nuno Das Neves <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Link: https://lore.kernel.org/r/cb951fb1ad6814996fc54f4a255c5841a20a151f.1672639707.git.jinankjain@linux.microsoft.com
Signed-off-by: Wei Liu <[email protected]>
2023-01-16  x86/aperfmperf: Erase stale arch_freq_scale values when disabling frequency invariance readings  (Yair Podemsky; 1 file, -0/+9)

Once disable_freq_invariance_work is called, the scale_freq_tick()
function will no longer compute or update the arch_freq_scale values.
However, the scheduler will still read these values and use them. The
result is that the scheduler might make unfair decisions based on stale
values.

Set the arch_freq_scale values for all CPUs to the default (max) value
SCHED_CAPACITY_SCALE. Once all CPUs have the same arch_freq_scale value,
the scaling is meaningless.

Signed-off-by: Yair Podemsky <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
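A sketch of the fix (the function name is illustrative; the work happens on
the disable path):

  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <linux/sched.h>

  DECLARE_PER_CPU(unsigned long, arch_freq_scale);

  static void reset_freq_scale_all(void)
  {
          int cpu;

          /* Park every CPU at the neutral scale so the scheduler no
           * longer acts on values scale_freq_tick() stopped updating. */
          for_each_possible_cpu(cpu)
                  per_cpu(arch_freq_scale, cpu) = SCHED_CAPACITY_SCALE;
  }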
2023-01-13  x86/cpu: Remove misleading comment  (Juergen Gross; 1 file, -1/+1)

The comment on the "#endif" after setup_disable_pku() is wrong. As the
related #ifdef is only a few lines above, just remove the comment.

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-13  cpuidle, intel_idle: Fix CPUIDLE_FLAG_IBRS  (Peter Zijlstra; 1 file, -1/+1)

objtool to the rescue:

  vmlinux.o: warning: objtool: intel_idle_ibrs+0x17: call to spec_ctrl_current() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_ibrs+0x27: call to wrmsrl.constprop.0() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Tony Lindgren <[email protected]>
Tested-by: Ulf Hansson <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-13  x86/gsseg: Use the LKGS instruction if available for load_gs_index()  (H. Peter Anvin (Intel); 1 file, -0/+1)

The LKGS instruction atomically loads a segment descriptor into the %gs
descriptor registers, *except* that %gs.base is unchanged and the base is
instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly what we want
this function to do.

Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Cc: Andy Lutomirski <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Linus Torvalds <[email protected]>
2023-01-12  x86/hyperv: Add support for detecting nested hypervisor  (Jinank Jain; 1 file, -0/+7)

Detect if Linux is running as a nested hypervisor in the root partition
for Microsoft Hypervisor, using flags provided by MSHV. Expose a new
variable hv_nested that is used later for decisions specific to the nested
use case.

Signed-off-by: Jinank Jain <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Link: https://lore.kernel.org/r/8e3e7112806e81d2292a66a56fe547162754ecea.1672639707.git.jinankjain@linux.microsoft.com
Signed-off-by: Wei Liu <[email protected]>
2023-01-12  x86/bugs: Reset speculation control settings on init  (Breno Leitao; 1 file, -1/+9)

Currently, x86_spec_ctrl_base is read at boot time and speculative bits
are set if Kconfig items are enabled. For example, IBRS is enabled if
CONFIG_CPU_IBRS_ENTRY is configured, etc. These MSR bits are not cleared
if the mitigations are disabled.

This is a problem when kexec-ing a kernel that has the mitigation disabled
from a kernel that has the mitigation enabled. In this case, the MSR bits
are not cleared during the new kernel boot. As a result, this might cause
some performance degradation that is hard to pinpoint.

This problem does not happen if the machine is (hard) rebooted, because
the bit will be cleared by default.

[ bp: Massage. ]

Suggested-by: Pawan Gupta <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
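A hedged sketch of the reset: drop stale mitigation bits inherited over
kexec. The exact set of bits cleared upstream may differ; the MSR and bit
names below are the standard msr-index.h definitions.

  #include <asm/msr.h>
  #include <asm/msr-index.h>

  static void __init reset_spec_ctrl(void)
  {
          u64 msr;

          if (rdmsrl_safe(MSR_IA32_SPEC_CTRL, &msr))
                  return;         /* MSR not present on this CPU */

          /* Clear bits a previous kernel may have left enabled. */
          msr &= ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD);
          wrmsrl(MSR_IA32_SPEC_CTRL, msr);
  }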
2023-01-10  x86/resctrl: Fix event counts regression in reused RMIDs  (Peter Newman; 1 file, -16/+33)

When creating a new monitoring group, the RMID allocated for it may have
been used by a group which was previously removed. In this case, the
hardware counters will have non-zero values which should be deducted from
what is reported in the new group's counts.

resctrl_arch_reset_rmid() initializes the prev_msr value for counters to
0, causing the initial count to be charged to the new group. Resurrect
__rmid_read() and use it to initialize prev_msr correctly. Unlike before,
__rmid_read() checks for error bits in the MSR read so that callers don't
need to.

Fixes: 1d81d15db39c ("x86/resctrl: Move mbm_overflow_count() into resctrl_arch_rmid_read()")
Signed-off-by: Peter Newman <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Tested-by: Babu Moger <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
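A sketch of the resurrected helper with the error-bit checks folded in;
this follows the commit description, using MSR addresses and bit names as
they appear elsewhere in the resctrl code:

  #include <linux/bits.h>
  #include <linux/errno.h>
  #include <asm/msr.h>

  #define MSR_IA32_QM_EVTSEL      0x00000c8d
  #define MSR_IA32_QM_CTR         0x00000c8e

  #define RMID_VAL_ERROR          BIT_ULL(63)
  #define RMID_VAL_UNAVAIL        BIT_ULL(62)

  static int __rmid_read(u32 rmid, u32 eventid, u64 *val)
  {
          u64 msr_val;

          /* Select the event for this RMID, then read the counter. */
          wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
          rdmsrl(MSR_IA32_QM_CTR, msr_val);

          if (msr_val & RMID_VAL_ERROR)
                  return -EIO;
          if (msr_val & RMID_VAL_UNAVAIL)
                  return -EINVAL;

          *val = msr_val;
          return 0;
  }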
2023-01-10  x86/resctrl: Fix task CLOSID/RMID update race  (Peter Newman; 1 file, -1/+11)

When the user moves a running task to a new rdtgroup using the task's file
interface or by deleting its rdtgroup, the resulting change in CLOSID/RMID
must be immediately propagated to the PQR_ASSOC MSR on the task(s) CPUs.

x86 allows reordering loads with prior stores, so if the task starts
running between a task_curr() check that the CPU hoisted before the stores
in the CLOSID/RMID update, then it can start running with the old
CLOSID/RMID until it is switched again because __rdtgroup_move_task()
failed to determine that it needs to be interrupted to obtain the new
CLOSID/RMID.

Refer to the diagram below:

  CPU 0                                   CPU 1
  -----                                   -----
  __rdtgroup_move_task():
    curr <- t1->cpu->rq->curr
                                          __schedule():
                                            rq->curr <- t1
                                          resctrl_sched_in():
                                            t1->{closid,rmid} -> {1,1}
    t1->{closid,rmid} <- {2,2}
    if (curr == t1) // false
      IPI(t1->cpu)

A similar race impacts rdt_move_group_tasks(), which updates tasks in a
deleted rdtgroup.

In both cases, use smp_mb() to order the task_struct::{closid,rmid} stores
before the loads in task_curr(). In particular, in the
rdt_move_group_tasks() case, simply execute an smp_mb() on every iteration
with a matching task.

It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
this would require two passes and a means of remembering which
task_structs were updated in the first loop. However, the benchmarking
results below showed too little performance impact in the simple approach
to justify implementing the two-pass approach.

Times below were collected using `perf stat` to measure the time to remove
a group containing a 1600-task, parallel workload.

CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)

  # mkdir /sys/fs/resctrl/test
  # echo $$ > /sys/fs/resctrl/test/tasks
  # perf bench sched messaging -g 40 -l 100000

task-clock time ranges collected using:

  # perf stat rmdir /sys/fs/resctrl/test

  Baseline:                     1.54 - 1.60 ms
  smp_mb() every matching task: 1.57 - 1.67 ms

[ bp: Massage commit message. ]

Fixes: ae28d1aae48a ("x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR")
Fixes: 0efc89be9471 ("x86/intel_rdt: Update task closid immediately on CPU in rmdir and unmount")
Signed-off-by: Peter Newman <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Reviewed-by: Babu Moger <[email protected]>
Cc: <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
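A sketch of the ordering fix in __rdtgroup_move_task() terms; the helper
update_pqr_assoc() is illustrative, the barrier placement follows the
commit description:

  #include <linux/sched.h>
  #include <linux/smp.h>

  static void update_pqr_assoc(void *info);

  static void move_task_ids(struct task_struct *t, u32 closid, u32 rmid)
  {
          WRITE_ONCE(t->closid, closid);
          WRITE_ONCE(t->rmid, rmid);

          /* Order the ID stores above before the task_curr() load
           * below; pairs with the ordering in the scheduler's context
           * switch on the remote CPU. */
          smp_mb();

          if (task_curr(t))
                  smp_call_function_single(task_cpu(t),
                                           update_pqr_assoc, t, 1);
  }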
2023-01-10  x86/cpu: Remove redundant extern x86_read_arch_cap_msr()  (Ashok Raj; 3 files, -2/+2)

The prototype for the x86_read_arch_cap_msr() function has moved to
arch/x86/include/asm/cpu.h - kill the redundant definition in
arch/x86/kernel/cpu.h and include the header.

Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Daniel Sneddon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-01-10  x86/mce: Mask out non-address bits from machine check bank  (Tony Luck; 1 file, -5/+9)

Systems that support various memory encryption schemes (MKTME, TDX, SEV)
use high order physical address bits to indicate which key should be used
for a specific memory location. When a memory error is reported, some
systems may report those key bits in the IA32_MCi_ADDR machine check MSR.

The Intel SDM has a footnote for the contents of the address register that
says: "Useful bits in this field depend on the address methodology in use
when the register state is saved."

The AMD Processor Programming Reference has a more explicit description of
the MCA_ADDR register: "For physical addresses, the most significant bit
is given by Core::X86::Cpuid::LongModeInfo[PhysAddrSize]."

Add a new #define MCI_ADDR_PHYSADDR for the mask of valid physical address
bits within the machine check bank address register. Use this mask for
recoverable machine check handling and in the EDAC driver to ignore any
key bits that may be present.

[ Tony: Based on independent fixes proposed by Fan Du and Isaku Yamahata ]

Reported-by: Isaku Yamahata <[email protected]>
Reported-by: Fan Du <[email protected]>
Signed-off-by: Tony Luck <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Yazen Ghannam <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
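The mask reduces to the CPU's enumerated physical address width; a minimal
sketch (the helper is illustrative):

  #include <linux/bits.h>
  #include <asm/processor.h>

  /* Valid physical address bits in a machine check bank address
   * register; anything above x86_phys_bits may be key bits. */
  #define MCI_ADDR_PHYSADDR  GENMASK_ULL(boot_cpu_data.x86_phys_bits - 1, 0)

  static inline u64 mce_address_bits(u64 mci_addr)
  {
          return mci_addr & MCI_ADDR_PHYSADDR;
  }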
2023-01-07  x86/mce/dev-mcelog: Use strscpy() instead of strncpy()  (Xu Panda; 1 file, -2/+1)

The implementation of strscpy() is more robust and safer. It is now the
recommended way to copy NUL-terminated strings.

Signed-off-by: Xu Panda <[email protected]>
Signed-off-by: Yang Yang <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
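The difference in one sketch: strncpy() leaves the destination
unterminated when the source fills the buffer, while strscpy() always
NUL-terminates and reports truncation.

  #include <linux/string.h>

  static void copy_name(char *dst, const char *src, size_t len)
  {
          /* Before:
           *      strncpy(dst, src, len);
           *      dst[len - 1] = '\0';    // manual termination needed
           * After: always terminated, returns -E2BIG on truncation. */
          strscpy(dst, src, len);
  }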
2023-01-04  x86/bugs: Flush IBP in ib_prctl_set()  (Rodrigo Branco; 1 file, -0/+2)

We missed the window between the TIF flag update and the next reschedule.

Signed-off-by: Rodrigo Branco <[email protected]>
Reviewed-by: Borislav Petkov (AMD) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: <[email protected]>
2022-12-29  Merge branch 'kvm-late-6.1' into HEAD  (Paolo Bonzini; 1 file, -1/+1)

x86:

 * Change tdp_mmu to a read-only parameter
 * Separate TDP and shadow MMU page fault paths
 * Enable Hyper-V invariant TSC control

selftests:

 * Use TAP interface for kvm_binary_stats_test and tsc_msrs_test

Signed-off-by: Paolo Bonzini <[email protected]>
2022-12-29  x86/hyperv: Add HV_EXPOSE_INVARIANT_TSC define  (Vitaly Kuznetsov; 1 file, -1/+1)

Avoid open coding BIT(0) of HV_X64_MSR_TSC_INVARIANT_CONTROL by adding a
dedicated define. While there's only one user at this moment, the upcoming
KVM implementation of the Hyper-V Invariant TSC feature will need to use
it as well.

Reviewed-by: Michael Kelley <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Reviewed-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
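A sketch of the define and its single current user; the MSR name matches
the Hyper-V headers, while the exact bit-width macro is an assumption:

  #include <linux/bits.h>
  #include <asm/msr.h>

  #define HV_EXPOSE_INVARIANT_TSC         BIT_ULL(0)

  static void hv_enable_invariant_tsc(void)
  {
          /* Previously open-coded as BIT(0): */
          wrmsrl(HV_X64_MSR_TSC_INVARIANT_CONTROL, HV_EXPOSE_INVARIANT_TSC);
  }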
2022-12-28  x86/mce: Add support for Extended Physical Address MCA changes  (Smita Koralahalli; 3 files, -8/+33)

Newer AMD CPUs support more physical address bits. That is, the MCA_ADDR
registers on Scalable MCA systems contain the ErrorAddr in bits [56:0]
instead of [55:0]. Hence, the existing LSB field from bits [61:56] in
MCA_ADDR must be moved around to accommodate the larger ErrorAddr size.

MCA_CONFIG[McaLsbInStatusSupported] indicates this change. If set, the LSB
field will be found in MCA_STATUS rather than MCA_ADDR.

Each logical CPU has a unique MCA bank in hardware which is not shared
with other logical CPUs. Additionally, on SMCA systems, each feature bit
may be different for each bank within the same logical CPU. Check for
MCA_CONFIG[McaLsbInStatusSupported] for each MCA bank and for each CPU.

Additionally, not all MCA banks support the maximum ErrorAddr bits in
MCA_ADDR. Some banks might support fewer bits, with the remaining bits
marked as reserved.

[ Yazen: Rebased and fixed up formatting. bp: Massage comments. ]

Signed-off-by: Smita Koralahalli <[email protected]>
Signed-off-by: Yazen Ghannam <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-12-28  x86/mce: Define a function to extract ErrorAddr from MCA_ADDR  (Smita Koralahalli; 3 files, -18/+17)

Move the MCA_ADDR[ErrorAddr] extraction into a separate helper function.
This will be further refactored to support extended ErrorAddr bits in
MCA_ADDR in newer AMD CPUs.

[ bp: Massage. ]

Signed-off-by: Smita Koralahalli <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Yazen Ghannam <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
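A combined sketch of the helper after both commits above: take the LSB
field from MCA_STATUS when MCA_CONFIG[McaLsbInStatusSupported] is set
(57-bit ErrorAddr), otherwise from MCA_ADDR[61:56], then mask the address.
The STATUS field position is an assumption for illustration.

  #include <linux/bits.h>

  static u64 smca_extract_err_addr(u64 status, u64 addr, bool lsb_in_status)
  {
          u8 lsb;

          if (lsb_in_status) {
                  lsb = (status >> 24) & 0x3f;    /* assumed STATUS field */
                  return addr & GENMASK_ULL(56, lsb);  /* ErrorAddr[56:0] */
          }

          lsb = (addr >> 56) & 0x3f;              /* MCA_ADDR[61:56] */
          return addr & GENMASK_ULL(55, lsb);     /* ErrorAddr[55:0] */
  }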
2022-12-26  x86/microcode/AMD: Handle multiple glued containers properly  (Borislav Petkov; 1 file, -3/+4)

It can happen that - especially during testing - the microcode blobs of
all families are glued together in the initrd. The current code doesn't
check whether the current container matched a microcode patch and
continues to the next one, which leads save_microcode_in_initrd_amd() to
look at the next, and thus wrong, one:

  microcode: parse_container: ucode: 0xffff88807e9d9082
  microcode: verify_patch: buf: 0xffff88807e9d90ce, buf_size: 26428
  microcode: verify_patch: proc_id: 0x8082, patch_fam: 0x17, this family: 0x17
  microcode: verify_patch: buf: 0xffff88807e9d9d56, buf_size: 23220
  microcode: verify_patch: proc_id: 0x8012, patch_fam: 0x17, this family: 0x17
  microcode: parse_container: MATCH: eq_id: 0x8012, patch proc_rev_id: 0x8012

  <-- matching patch found

  microcode: verify_patch: buf: 0xffff88807e9da9de, buf_size: 20012
  microcode: verify_patch: proc_id: 0x8310, patch_fam: 0x17, this family: 0x17
  microcode: verify_patch: buf: 0xffff88807e9db666, buf_size: 16804
  microcode: Invalid type field (0x414d44) in container file section header.
  microcode: Patch section fail

  <-- checking chokes on the microcode magic value of the next container.

  microcode: parse_container: saving container 0xffff88807e9d9082
  microcode: save_microcode_in_initrd_amd: scanned containers, data: 0xffff88807e9d9082, size: 9700a

and now, if there's a next (and last) container,
save_microcode_in_initrd_amd() will use that one and not find a proper
patch, of course.

Fix that by moving the "out:" label up, before the desc->mc check which
jots down the pointer of the matching patch and is used to signal to the
caller that it has found a matching patch in the current container.

Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-12-26  x86/microcode/AMD: Rename a couple of functions  (Borislav Petkov; 1 file, -7/+7)

- Rename apply_microcode_early_amd() to early_apply_microcode(): simplify
  the name so that it is clear what it does and when it does it.

- Rename __load_ucode_amd() to find_blobs_in_containers(): the new name
  actually explains what it does.

Document some. No functional changes.

Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-12-16  Merge tag 'driver-core-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core  (Linus Torvalds; 1 file, -2/+2)

Pull driver core updates from Greg KH:
 "Here is the set of driver core and kernfs changes for 6.2-rc1.

  The "big" change in here is the addition of a new macro,
  container_of_const(), that will preserve the "const-ness" of a pointer
  passed into it.

  The "problem" of the current container_of() macro is that if you pass
  in a "const *", out of it can come a non-const pointer unless you
  specifically ask for it. For many usages, we want to preserve the
  "const" attribute by using the same call. For a specific example, this
  series changes the kobj_to_dev() macro to use it, allowing it to be
  used no matter what the const value is. This prevents every subsystem
  from having to declare 2 different individual macros (i.e.
  kobj_const_to_dev() and kobj_to_dev()) and having the compiler enforce
  the const value at build time, which having 2 macros would not do
  either.

  The driver for all of this has been discussions with the Rust kernel
  developers as to how to properly mark driver core, and kobject,
  objects as being "non-mutable". The changes to the kobject and driver
  core in this pull request are the result of that, as there are lots of
  paths where kobjects and device pointers are not modified at all, so
  marking them as "const" allows the compiler to enforce this.

  So, a nice side effect of the Rust development effort has already been
  to clean up the driver core code to be more obvious about object
  rules.

  All of this has been bike-shedded in quite a lot of detail on lkml
  with different names and implementations resulting in the tiny version
  we have in here, much better than my original proposal. Lots of
  subsystem maintainers have acked the changes as well.

  Other than this change, included in here are smaller stuff like:

   - kernfs fixes and updates to handle lock contention better

   - vmlinux.lds.h fixes and updates

   - sysfs and debugfs documentation updates

   - device property updates

  All of these have been in the linux-next tree for quite a while with
  no problems"

* tag 'driver-core-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (58 commits)
  device property: Fix documentation for fwnode_get_next_parent()
  firmware_loader: fix up to_fw_sysfs() to preserve const
  usb.h: take advantage of container_of_const()
  device.h: move kobj_to_dev() to use container_of_const()
  container_of: add container_of_const() that preserves const-ness of the pointer
  driver core: fix up missed drivers/s390/char/hmcdrv_dev.c class.devnode() conversion.
  driver core: fix up missed scsi/cxlflash class.devnode() conversion.
  driver core: fix up some missing class.devnode() conversions.
  driver core: make struct class.devnode() take a const *
  driver core: make struct class.dev_uevent() take a const *
  cacheinfo: Remove of_node_put() for fw_token
  device property: Add a blank line in Kconfig of tests
  device property: Rename goto label to be more precise
  device property: Move PROPERTY_ENTRY_BOOL() a bit down
  device property: Get rid of __PROPERTY_ENTRY_ARRAY_EL*SIZE*()
  kernfs: fix all kernel-doc warnings and multiple typos
  driver core: pass a const * into of_device_uevent()
  kobject: kset_uevent_ops: make name() callback take a const *
  kobject: kset_uevent_ops: make filter() callback take a const *
  kobject: make kobject_namespace take a const *
  ...
2022-12-14  Merge tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -58/+76)

Pull x86 core updates from Borislav Petkov:

 - Add the call depth tracking mitigation for Retbleed which has been
   long in the making. It is a lighter-weight software-only fix for
   Skylake-based cores where enabling IBRS is a big hammer and causes a
   significant performance impact.

   What it basically does is, it aligns all kernel functions to a
   16-byte boundary and adds a 16-byte padding before each function,
   objtool collects all functions' locations and, when the mitigation
   gets applied, it patches a call accounting thunk which is used to
   track the call depth of the stack at any time.

   When that call depth reaches a magical, microarchitecture-specific
   value for the Return Stack Buffer, the code stuffs that RSB and
   avoids its underflow which could otherwise lead to the Intel variant
   of Retbleed.

   This software-only solution brings a lot of the lost performance
   back, as benchmarks suggest:

     https://lore.kernel.org/all/[email protected]/

   That page above also contains a lot more detailed explanation of the
   whole mechanism

 - Implement a new control flow integrity scheme called FineIBT which is
   based on the software kCFI implementation and uses hardware IBT
   support where present to annotate and track indirect branches using a
   hash to validate them

 - Other misc fixes and cleanups

* tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
  x86/paravirt: Use common macro for creating simple asm paravirt functions
  x86/paravirt: Remove clobber bitmask from .parainstructions
  x86/debug: Include percpu.h in debugreg.h to get DECLARE_PER_CPU() et al
  x86/cpufeatures: Move X86_FEATURE_CALL_DEPTH from bit 18 to bit 19 of word 11, to leave space for WIP X86_FEATURE_SGX_EDECCSSA bit
  x86/Kconfig: Enable kernel IBT by default
  x86,pm: Force out-of-line memcpy()
  objtool: Fix weak hole vs prefix symbol
  objtool: Optimize elf_dirty_reloc_sym()
  x86/cfi: Add boot time hash randomization
  x86/cfi: Boot time selection of CFI scheme
  x86/ibt: Implement FineIBT
  objtool: Add --cfi to generate the .cfi_sites section
  x86: Add prefix symbols for function padding
  objtool: Add option to generate prefix symbols
  objtool: Avoid O(bloody terrible) behaviour -- an ode to libelf
  objtool: Slice up elf_create_section_symbol()
  kallsyms: Revert "Take callthunks into account"
  x86: Unconfuse CONFIG_ and X86_FEATURE_ namespaces
  x86/retpoline: Fix crash printing warning
  x86/paravirt: Fix a !PARAVIRT build warning
  ...
2022-12-13  Merge tag 'mm-stable-2022-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds; 1 file, -2/+2)

Pull MM updates from Andrew Morton:

 - More userfaultfd work from Peter Xu

 - Several convert-to-folios series from Sidhartha Kumar and Huang Ying

 - Some filemap cleanups from Vishal Moola

 - David Hildenbrand added the ability to selftest anon memory COW
   handling

 - Some cpuset simplifications from Liu Shixin

 - Addition of vmalloc tracing support by Uladzislau Rezki

 - Some pagecache folioifications and simplifications from Matthew
   Wilcox

 - A pagemap cleanup from Kefeng Wang: we have VM_ACCESS_FLAGS, so use
   it

 - Miguel Ojeda contributed some cleanups for our use of the
   __no_sanitize_thread__ gcc keyword. This series should have been in
   the non-MM tree, my bad

 - Naoya Horiguchi improved the interaction between memory poisoning and
   memory section removal for huge pages

 - DAMON cleanups and tuneups from SeongJae Park

 - Tony Luck fixed the handling of COW faults against poisoned pages

 - Peter Xu utilized the PTE marker code for handling swapin errors

 - Hugh Dickins reworked compound page mapcount handling, simplifying it
   and making it more efficient

 - Removal of the autonuma savedwrite infrastructure from Nadav Amit and
   David Hildenbrand

 - zram support for multiple compression streams from Sergey Senozhatsky

 - David Hildenbrand reworked the GUP code's R/O long-term pinning so
   that drivers no longer need to use the FOLL_FORCE workaround which
   didn't work very well anyway

 - Mel Gorman altered the page allocator so that local IRQs can remain
   enabled during per-cpu page allocations

 - Vishal Moola removed the try_to_release_page() wrapper

 - Stefan Roesch added some per-BDI sysfs tunables which are used to
   prevent network block devices from dirtying excessive amounts of
   pagecache

 - David Hildenbrand did some cleanup and repair work on KSM COW
   breaking

 - Nhat Pham and Johannes Weiner have implemented writeback in zswap's
   zsmalloc backend

 - Brian Foster has fixed a longstanding corner-case oddity in
   file[map]_write_and_wait_range()

 - sparse-vmemmap changes for MIPS, LoongArch and NIOS2 from Feiyang
   Chen

 - Shiyang Ruan has done some work on fsdax, to make its reflink mode
   work better under xfstests. Better, but still not perfect

 - Christoph Hellwig has removed the .writepage() method from several
   filesystems. They only need .writepages()

 - Yosry Ahmed wrote a series which fixes the memcg reclaim target
   beancounting

 - David Hildenbrand has fixed some of our MM selftests for 32-bit
   machines

 - Many singleton patches, as usual

* tag 'mm-stable-2022-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (313 commits)
  mm/hugetlb: set head flag before setting compound_order in __prep_compound_gigantic_folio
  mm: mmu_gather: allow more than one batch of delayed rmaps
  mm: fix typo in struct pglist_data code comment
  kmsan: fix memcpy tests
  mm: add cond_resched() in swapin_walk_pmd_entry()
  mm: do not show fs mm pc for VM_LOCKONFAULT pages
  selftests/vm: ksm_functional_tests: fixes for 32bit
  selftests/vm: cow: fix compile warning on 32bit
  selftests/vm: madv_populate: fix missing MADV_POPULATE_(READ|WRITE) definitions
  mm/gup_test: fix PIN_LONGTERM_TEST_READ with highmem
  mm,thp,rmap: fix races between updates of subpages_mapcount
  mm: memcg: fix swapcached stat accounting
  mm: add nodes= arg to memory.reclaim
  mm: disable top-tier fallback to reclaim on proactive reclaim
  selftests: cgroup: make sure reclaim target memcg is unprotected
  selftests: cgroup: refactor proactive reclaim code to reclaim_until()
  mm: memcg: fix stale protection of reclaim target memcg
  mm/mmap: properly unaccount memory on mas_preallocate() failure
  omfs: remove ->writepage
  jfs: remove ->writepage
  ...
2022-12-13  Merge tag 'x86_microcode_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 4 files, -323/+196)

Pull x86 microcode and IFS updates from Borislav Petkov:
 "The IFS (In-Field Scan) stuff goes through tip because the IFS driver
  uses the same structures and similar functionality as the microcode
  loader and it made sense to route it all through this branch so that
  there are no conflicts.

   - Add support for multiple testing sequences to the Intel In-Field
     Scan driver in order to be able to run multiple different test
     patterns. Rework things and remove the BROKEN dependency so that
     the driver can be enabled (Jithu Joseph)

   - Remove the subsys interface usage in the microcode loader because
     it is not really needed

   - A couple of smaller fixes and cleanups"

* tag 'x86_microcode_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  x86/microcode/intel: Do not retry microcode reloading on the APs
  x86/microcode/intel: Do not print microcode revision and processor flags
  platform/x86/intel/ifs: Add missing kernel-doc entry
  Revert "platform/x86/intel/ifs: Mark as BROKEN"
  Documentation/ABI: Update IFS ABI doc
  platform/x86/intel/ifs: Add current_batch sysfs entry
  platform/x86/intel/ifs: Remove reload sysfs entry
  platform/x86/intel/ifs: Add metadata validation
  platform/x86/intel/ifs: Use generic microcode headers and functions
  platform/x86/intel/ifs: Add metadata support
  x86/microcode/intel: Use a reserved field for metasize
  x86/microcode/intel: Add hdr_type to intel_microcode_sanity_check()
  x86/microcode/intel: Reuse microcode_sanity_check()
  x86/microcode/intel: Use appropriate type in microcode_sanity_check()
  x86/microcode/intel: Reuse find_matching_signature()
  platform/x86/intel/ifs: Remove memory allocation from load path
  platform/x86/intel/ifs: Remove image loading during init
  platform/x86/intel/ifs: Return a more appropriate error code
  platform/x86/intel/ifs: Remove unused selection
  x86/microcode: Drop struct ucode_cpu_info.valid
  ...
2022-12-13  Merge tag 'x86_cpu_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 12 files, -372/+278)

Pull x86 cpu updates from Borislav Petkov:

 - Split MTRR and PAT init code to accommodate at least Xen PV and TDX
   guests which do not get MTRRs exposed but only PAT. (TDX guests do
   not support the cache disabling dance when setting up MTRRs so they
   fall under the same category.) This is a cleanup work to remove all
   the ugly workarounds for such guests and init things separately
   (Juergen Gross)

 - Add two new Intel CPUs to the list of CPUs with "normal" Energy
   Performance Bias, leading to power savings

 - Do not do bus master arbitration in C3 (ARB_DISABLE) on modern
   Centaur CPUs

* tag 'x86_cpu_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (26 commits)
  x86/mtrr: Make message for disabled MTRRs more descriptive
  x86/pat: Handle TDX guest PAT initialization
  x86/cpuid: Carve out all CPUID functionality
  x86/cpu: Switch to cpu_feature_enabled() for X86_FEATURE_XENPV
  x86/cpu: Remove X86_FEATURE_XENPV usage in setup_cpu_entry_area()
  x86/cpu: Drop 32-bit Xen PV guest code in update_task_stack()
  x86/cpu: Remove unneeded 64-bit dependency in arch_enter_from_user_mode()
  x86/cpufeatures: Add X86_FEATURE_XENPV to disabled-features.h
  x86/acpi/cstate: Optimize ARB_DISABLE on Centaur CPUs
  x86/mtrr: Simplify mtrr_ops initialization
  x86/cacheinfo: Switch cache_ap_init() to hotplug callback
  x86: Decouple PAT and MTRR handling
  x86/mtrr: Add a stop_machine() handler calling only cache_cpu_init()
  x86/mtrr: Let cache_aps_delayed_init replace mtrr_aps_delayed_init
  x86/mtrr: Get rid of __mtrr_enabled bool
  x86/mtrr: Simplify mtrr_bp_init()
  x86/mtrr: Remove set_all callback from struct mtrr_ops
  x86/mtrr: Disentangle MTRR init from PAT init
  x86/mtrr: Move cache control code to cacheinfo.c
  x86/mtrr: Split MTRR-specific handling from cache dis/enabling
  ...
2022-12-12  Merge tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 1 file, -1/+1)

Pull iov_iter updates from Al Viro:
 "iov_iter work; most of that is about getting rid of direction
  misannotations and (hopefully) preventing more of the same for the
  future"

* tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  use less confusing names for iov_iter direction initializers
  iov_iter: saner checks for attempt to copy to/from iterator
  [xen] fix "direction" argument of iov_iter_kvec()
  [vhost] fix 'direction' argument of iov_iter_{init,bvec}()
  [target] fix iov_iter_bvec() "direction" argument
  [s390] memcpy_real(): WRITE is "data source", not destination...
  [s390] zcore: WRITE is "data source", not destination...
  [infiniband] READ is "data destination", not source...
  [fsi] WRITE is "data source", not destination...
  [s390] copy_oldmem_kernel() - WRITE is "data source", not destination
  csum_and_copy_to_iter(): handle ITER_DISCARD
  get rid of unlikely() on page_copy_sane() calls
2022-12-12  Merge tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random  (Linus Torvalds; 1 file, -1/+1)

Pull random number generator updates from Jason Donenfeld:

 - Replace prandom_u32_max() and various open-coded variants of it.
   There is now a new family of functions that uses fast rejection
   sampling to choose properly uniformly random numbers within an
   interval:

     get_random_u32_below(ceil) - [0, ceil)
     get_random_u32_above(floor) - (floor, U32_MAX]
     get_random_u32_inclusive(floor, ceil) - [floor, ceil]

   Coccinelle was used to convert all current users of
   prandom_u32_max(), as well as many open-coded patterns, resulting in
   improvements throughout the tree.

   I'll have a "late" 6.1-rc1 pull for you that removes the now unused
   prandom_u32_max() function, just in case any other trees add a new
   use case of it that needs to be converted. According to linux-next,
   there may be two trivial cases of prandom_u32_max() reintroductions
   that are fixable with a 's/.../.../'. So I'll have for you a final
   conversion patch doing that alongside the removal patch during the
   second week.

   This is a treewide change that touches many files throughout.

 - More consistent use of get_random_canary().

 - Updates to comments, documentation, tests, headers, and
   simplification in configuration.

 - The arch_get_random*_early() abstraction was only used by arm64 and
   wasn't entirely useful, so this has been replaced by code that works
   in all relevant contexts.

 - The kernel will use and manage random seeds in non-volatile EFI
   variables, refreshing a variable with a fresh seed when the RNG is
   initialized. The RNG GUID namespace is then hidden from efivarfs to
   prevent accidental leakage.

   These changes are split into random.c infrastructure code used in
   the EFI subsystem, in this pull request, and related support inside
   of EFISTUB, in Ard's EFI tree. These are co-dependent for full
   functionality, but the order of merging doesn't matter.

 - Part of the infrastructure added for the EFI support is also used
   for an improvement to the way vsprintf initializes its siphash key,
   replacing a sleep loop wart.

 - The hardware RNG framework now always calls its correct random.c
   input function, add_hwgenerator_randomness(), rather than sometimes
   going through helpers better suited for other cases.

 - The add_latent_entropy() function has long been called from the fork
   handler, but is a no-op when the latent entropy gcc plugin isn't
   used, which is fine for the purposes of latent entropy. But it was
   missing out on the cycle counter that was also being mixed in beside
   the latent entropy variable. So now, if the latent entropy gcc
   plugin isn't enabled, add_latent_entropy() will expand to a call to
   add_device_randomness(NULL, 0), which adds a cycle counter, without
   the absent latent entropy variable.

 - The RNG is now reseeded from a delayed worker, rather than on demand
   when used. Always running from a worker allows it to make use of the
   CPU RNG on platforms like S390x, whose instructions are too slow to
   do so from interrupts. It also has the effect of adding in new
   inputs more frequently with more regularity, amounting to a long
   term transcript of random values. Plus, it helps a bit with the
   upcoming vDSO implementation (which isn't yet ready for 6.2).

 - The jitter entropy algorithm now tries to execute on many different
   CPUs, round-robining, in hopes of hitting even more memory latencies
   and other unpredictable effects. It also will mix in a cycle counter
   when the entropy timer fires, in addition to being mixed in from the
   main loop, to account more explicitly for fluctuations in that timer
   firing. And the state it touches is now kept within the same cache
   line, so that it's assured that the different execution contexts
   will cause latencies.

* tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (23 commits)
  random: include <linux/once.h> in the right header
  random: align entropy_timer_state to cache line
  random: mix in cycle counter when jitter timer fires
  random: spread out jitter callback to different CPUs
  random: remove extraneous period and add a missing one in comments
  efi: random: refresh non-volatile random seed when RNG is initialized
  vsprintf: initialize siphash key using notifier
  random: add back async readiness notifier
  random: reseed in delayed work rather than on-demand
  random: always mix cycle counter in add_latent_entropy()
  hw_random: use add_hwgenerator_randomness() for early entropy
  random: modernize documentation comment on get_random_bytes()
  random: adjust comment to account for removed function
  random: remove early archrandom abstraction
  random: use random.trust_{bootloader,cpu} command line option only
  stackprotector: actually use get_random_canary()
  stackprotector: move get_random_canary() into stackprotector.h
  treewide: use get_random_u32_inclusive() when possible
  treewide: use get_random_u32_{above,below}() instead of manual loop
  treewide: use get_random_u32_below() instead of deprecated function
  ...
2022-12-12  Merge tag 'ras_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -16/+25)

Pull x86 RAS updates from Borislav Petkov:

 - Fix confusing output from /sys/kernel/debug/ras/daemon_active

 - Add another MCE severity error case to the Intel error severity
   table to promote UC and AR errors to panic severity and remove the
   corresponding code condition doing that

 - Make sure the thresholding and deferred error interrupts on AMD SMCA
   systems clear all registers reporting an error so that there are no
   multiple errors logged for the same event

* tag 'ras_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  RAS: Fix return value from show_trace()
  x86/mce: Use severity table to handle uncorrected errors in kernel
  x86/MCE/AMD: Clear DFR errors found in THR handler
2022-12-12  Merge tag 'x86_splitlock_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -10/+53)

Pull x86 splitlock updates from Dave Hansen:
 "Add a sysctl to control the split lock misery mode.

  This enables users to reduce the penalty inflicted on split lock
  users. There are some proprietary, binary-only games which became
  entirely unplayable with the old penalty.

  Anyone opting into the new mode is, of course, more exposed to the DoS
  nastiness inherent with split locks, but they can play their games
  again"

* tag 'x86_splitlock_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/split_lock: Add sysctl to control the misery mode
2022-12-12  Merge tag 'x86_cache_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 4 files, -17/+4)

Pull x86 cache resource control updates from Dave Hansen:
 "These declare the resource control (resctrl) MSRs a bit more normally
  and clean up an unnecessary structure member:

   - Remove the unnecessary arch_has_empty_bitmaps structure member

   - Move resctrl MSR defines into msr-index.h, like normal MSRs"

* tag 'x86_cache_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/resctrl: Move MSR defines into msr-index.h
  x86/resctrl: Remove arch_has_empty_bitmaps
2022-12-12  Merge tag 'x86_sgx_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 5 files, -17/+34)

Pull x86 sgx updates from Dave Hansen:
 "The biggest deal in this series is support for a new hardware feature
  that allows enclaves to detect and mitigate single-stepping attacks.

  There's also a minor performance tweak and a little piece of the
  kmap_atomic() -> kmap_local() transition.

  Summary:

   - Introduce a new SGX feature (Asynchronous Exit Notification) for
     bare-metal enclaves and KVM guests to mitigate single-step attacks

   - Increase batching to speed up enclave release

   - Replace kmap/kunmap_atomic() calls"

* tag 'x86_sgx_for_6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sgx: Replace kmap/kunmap_atomic() calls
  KVM/VMX: Allow exposing EDECCSSA user leaf function to KVM guest
  x86/sgx: Allow enclaves to use Asynchrounous Exit Notification
  x86/sgx: Reduce delay and interference of enclave release
2022-12-12  Merge tag 'hyperv-next-signed-20221208' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux  (Linus Torvalds; 1 file, -0/+6)

Pull hyperv updates from Wei Liu:

 - Drop unregister syscore from hyperv_cleanup to avoid hang (Gaurav
   Kohli)

 - Clean up panic path for Hyper-V framebuffer (Guilherme G. Piccoli)

 - Allow IRQ remapping to work without x2apic (Nuno Das Neves)

 - Fix comments (Olaf Hering)

 - Expand hv_vp_assist_page definition (Saurabh Sengar)

 - Improvement to page reporting (Shradha Gupta)

 - Make sure TSC clocksource works when Linux runs as the root partition
   (Stanislav Kinsburskiy)

* tag 'hyperv-next-signed-20221208' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
  x86/hyperv: Remove unregister syscore call from Hyper-V cleanup
  iommu/hyper-v: Allow hyperv irq remapping without x2apic
  clocksource: hyper-v: Add TSC page support for root partition
  clocksource: hyper-v: Use TSC PFN getter to map vvar page
  clocksource: hyper-v: Introduce TSC PFN getter
  clocksource: hyper-v: Introduce a pointer to TSC page
  x86/hyperv: Expand definition of struct hv_vp_assist_page
  PCI: hv: update comment in x86 specific hv_arch_irq_unmask
  hv: fix comment typo in vmbus_channel/low_latency
  drivers: hv, hyperv_fb: Untangle and refactor Hyper-V panic notifiers
  video: hyperv_fb: Avoid taking busy spinlock on panic path
  hv_balloon: Add support for configurable order free page reporting
  mm/page_reporting: Add checks for page_reporting_order param
2022-12-05  x86/microcode/intel: Do not retry microcode reloading on the APs  (Ashok Raj; 1 file, -7/+1)

The retries in load_ucode_intel_ap() were in place to support systems with
mixed steppings. Mixed steppings are no longer supported and there is only
one microcode image at a time. Any retries will simply reattempt to apply
the same image over and over without making progress.

[ bp: Zap the circumstantial reasoning from the commit message. ]

Fixes: 06b8534cb728 ("x86/microcode: Rework microcode loading")
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
2022-12-05  x86/mtrr: Make message for disabled MTRRs more descriptive  (Juergen Gross; 1 file, -1/+3)

Instead of just saying "Disabled" when MTRRs are disabled for any reason,
tell what is disabled and why.

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-12-03  x86/microcode/intel: Do not print microcode revision and processor flags  (Ashok Raj; 1 file, -8/+0)

collect_cpu_info() is used to collect the current microcode revision and
processor flags on every CPU. It had a weird mechanism to try to mimic a
"once" functionality, in the sense that the information should be issued
only when it differs from the previous CPU's.

However (1): the new calling sequence started doing that in parallel:

  microcode_init()
  |-> schedule_on_each_cpu(setup_online_cpu)
      |-> collect_cpu_info()

resulting in multiple redundant prints:

  microcode: sig=0x50654, pf=0x80, revision=0x2006e05
  microcode: sig=0x50654, pf=0x80, revision=0x2006e05
  microcode: sig=0x50654, pf=0x80, revision=0x2006e05

However (2): dumping this here is not that important because the kernel
does not support mixed silicon steppings microcode. Finally!

Besides, there is already a pr_info() in microcode_reload_late() that
shows both the old and new revisions.

What is more, the CPU signature (sig=0x50654) and Processor Flags
(pf=0x80) above aren't that useful to the end user; they are available
via /proc/cpuinfo and they don't change anyway.

Remove the redundant pr_info().

[ bp: Heavily massage. ]

Fixes: b6f86689d5b7 ("x86/microcode: Rip out the subsys interface gunk")
Reported-by: Tony Luck <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]