Age  Commit message  (Author; files changed, -removed/+added lines)
2018-11-28  x86/speculation: Prepare for conditional IBPB in switch_mm()  (Thomas Gleixner; 4 files, -36/+118)
The IBPB speculation barrier is issued from switch_mm() when the kernel switches to a user space task with a different mm than the user space task which ran last on the same CPU. An additional optimization is to avoid IBPB when the incoming task can be ptraced by the outgoing task. This optimization only works when switching directly between two user space tasks. When switching from a kernel task to a user space task the optimization fails because the previous task cannot be accessed anymore. So in quite a few scenarios the optimization just adds overhead. The upcoming conditional IBPB support will issue IBPB only for user space tasks which have the TIF_SPEC_IB bit set. This requires handling the following cases: 1) Switch from a user space task (potential attacker) which has TIF_SPEC_IB set to a user space task (potential victim) which has TIF_SPEC_IB not set. 2) Switch from a user space task (potential attacker) which has TIF_SPEC_IB not set to a user space task (potential victim) which has TIF_SPEC_IB set. This needs to be optimized for the case where the IBPB can be avoided when only kernel threads ran in between user space tasks which belong to the same process. The current check whether two tasks belong to the same context uses the tasks' context id. While correct, it's simpler to use the mm pointer because it allows the TIF_SPEC_IB bit to be folded into it. The context id based mechanism requires extra storage, which creates worse code. When a task is scheduled out, its TIF_SPEC_IB bit is folded as bit 0 into the per-CPU storage which is used to track the last user space mm which was running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of the incoming task to decide whether IBPB needs to be issued, covering the two cases above. As conditional IBPB is going to be the default, remove the dubious ptrace check for the IBPB always case and simply issue IBPB always when the process changes. Move the storage to a different place in the struct as the original one created a hole. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
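For illustration, a minimal sketch of the switch_mm() side decision described above. The static key, the per-CPU field and the helper names are illustrative approximations, not necessarily the identifiers the series ends up using:

    #define LAST_USER_MM_IBPB   0x1UL

    /* Fold the task's TIF_SPEC_IB state into bit 0 of its mm pointer */
    static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
    {
        unsigned long next_tif = task_thread_info(next)->flags;
        unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;

        return (unsigned long)next->mm | ibpb;
    }

    static void cond_ibpb(struct task_struct *next)
    {
        unsigned long prev_mm, next_mm;

        if (!next || !next->mm)
            return;

        prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
        next_mm = mm_mangle_tif_spec_ib(next);

        /* Barrier only if the mm differs and either side set TIF_SPEC_IB;
         * same process with only kernel threads in between: no IBPB. */
        if (next_mm != prev_mm && (next_mm | prev_mm) & LAST_USER_MM_IBPB)
            indirect_branch_prediction_barrier();

        this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
    }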
2018-11-28  x86/speculation: Avoid __switch_to_xtra() calls  (Thomas Gleixner; 2 files, -2/+26)
The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke __switch_to_xtra() when: - CONFIG_SMP is disabled - The conditional STIBP mode is disabled The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the only set bit in the work masks. Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at run time when the static key controlling the conditional STIBP mode is disabled. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
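A sketch of the masking, as it might look at the top of the shared helper introduced two entries below (the static key name is illustrative):

    unsigned long next_tif = task_thread_info(next_p)->flags;
    unsigned long prev_tif = task_thread_info(prev_p)->flags;

    if (IS_ENABLED(CONFIG_SMP)) {
        /* Avoid __switch_to_xtra() when conditional STIBP is off and
         * TIF_SPEC_IB would be the only differing bit. For CONFIG_SMP=n
         * the bit is left out of the _TIF_WORK_CTXSW masks at compile
         * time instead. */
        if (!static_branch_likely(&switch_to_cond_stibp)) {
            prev_tif &= ~_TIF_SPEC_IB;
            next_tif &= ~_TIF_SPEC_IB;
        }
    }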
2018-11-28  x86/process: Consolidate and simplify switch_to_xtra() code  (Thomas Gleixner; 5 files, -22/+37)
Move the conditional invocation of __switch_to_xtra() into an inline function so the logic can be shared between 32 and 64 bit. Remove the passing through of the TSS pointer and retrieve it directly in the bitmap handling function. Use this_cpu_ptr() instead of the per_cpu() indirection. This is a preparatory change so the integration of the conditional indirect branch speculation optimization happens in only one place. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
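The shape of the shared inline, sketched from the description (treat as an approximation of the actual x86 code):

    static inline void switch_to_extra(struct task_struct *prev,
                                       struct task_struct *next)
    {
        unsigned long next_tif = task_thread_info(next)->flags;
        unsigned long prev_tif = task_thread_info(prev)->flags;

        /* One call site shared by the 32-bit and 64-bit __switch_to() */
        if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
                     prev_tif & _TIF_WORK_CTXSW_PREV))
            __switch_to_xtra(prev, next);
    }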
2018-11-28  x86/speculation: Prepare for per task indirect branch speculation control  (Tim Chen; 5 files, -5/+41)
To avoid the overhead of STIBP always on, it's necessary to allow per task control of STIBP. Add a new task flag TIF_SPEC_IB and evaluate it during context switch if SMT is active and flag evaluation is enabled by the speculation control code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the guest/host switch works properly. This has no effect because TIF_SPEC_IB cannot be set yet and the static key which controls evaluation is off. Preparatory patch for adding the control code. [ tglx: Simplify the context switch logic and make the TIF evaluation depend on SMP=y and on the static key controlling the conditional update. Rename it to TIF_SPEC_IB because it controls both STIBP and IBPB ] Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
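For the SPEC_CTRL update itself, the TIF bit can be translated into the MSR's STIBP bit with a shift; a sketch assuming the usual bit layout (STIBP is bit 1 of MSR_IA32_SPEC_CTRL):

    static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
    {
        BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
        return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
    }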
2018-11-28  x86/speculation: Add command line control for indirect branch speculation  (Thomas Gleixner; 3 files, -19/+156)
Add command line control for user space indirect branch speculation mitigations. The new option is: spectre_v2_user= The initial options are: - on: Unconditionally enabled - off: Unconditionally disabled - auto: Kernel selects the mitigation (default off for now) When the spectre_v2= command line argument is either 'on' or 'off', this implies that the application-to-application control follows that state even if a contradicting spectre_v2_user= argument is supplied. Originally-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
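A sketch of how such an option table is typically parsed in bugs.c; the enum values and helper names are illustrative (match_option() is the usual local helper there):

    enum spectre_v2_user_cmd {
        SPECTRE_V2_USER_CMD_NONE,
        SPECTRE_V2_USER_CMD_AUTO,
        SPECTRE_V2_USER_CMD_FORCE,
    };

    static const struct {
        const char                  *option;
        enum spectre_v2_user_cmd    cmd;
    } v2_user_options[] __initconst = {
        { "auto", SPECTRE_V2_USER_CMD_AUTO  },
        { "off",  SPECTRE_V2_USER_CMD_NONE  },
        { "on",   SPECTRE_V2_USER_CMD_FORCE },
    };

    static enum spectre_v2_user_cmd __init spectre_v2_parse_user_cmdline(void)
    {
        char arg[20];
        int ret, i;

        ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
                                  arg, sizeof(arg));
        if (ret < 0)
            return SPECTRE_V2_USER_CMD_AUTO;

        for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
            if (match_option(arg, ret, v2_user_options[i].option))
                return v2_user_options[i].cmd;
        }
        return SPECTRE_V2_USER_CMD_AUTO;
    }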
2018-11-28  x86/speculation: Unify conditional spectre v2 print functions  (Thomas Gleixner; 1 file, -13/+4)
There is no point in having two functions and a conditional at the call site. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Mark command line parser data __initdata  (Thomas Gleixner; 1 file, -2/+2)
No point in keeping that around. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Mark string arrays const correctly  (Thomas Gleixner; 1 file, -3/+3)
checkpatch.pl muttered when reshuffling the code: WARNING: static const char * array should probably be static const char * const Fix up all the string arrays. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Reorder the spec_v2 code  (Thomas Gleixner; 1 file, -84/+84)
Reorder the code so it is better grouped. No functional change. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/l1tf: Show actual SMT state  (Thomas Gleixner; 1 file, -2/+3)
Use the now exposed real SMT state, not the SMT sysfs control knob state. This reflects the state of the system when the mitigation status is queried. This does not change the warning in the VMX launch code. There the dependency on the control knob makes sense because siblings could be brought online anytime after launching the VM. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
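The reporting now keys off sched_smt_active() rather than the cpu_smt_control knob; a sketch of the resulting sysfs output line (macro and array names as found in bugs.c, treat details as approximate):

    return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
                   l1tf_vmx_states[l1tf_vmx_mitigation],
                   sched_smt_active() ? "vulnerable" : "disabled");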
2018-11-28  x86/speculation: Rework SMT state change  (Thomas Gleixner; 3 files, -12/+16)
arch_smt_update() is only called when the sysfs SMT control knob is changed. This means that when SMT is enabled in the sysfs control knob the system is considered to have SMT active even if all siblings are offline. To allow fine-grained control of the speculation mitigations, the actual SMT state is more interesting than the fact that siblings could be enabled. Rework the code so that arch_smt_update() is invoked from each individual CPU hotplug function, and simplify the update function while at it. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  sched/smt: Expose sched_smt_present static key  (Thomas Gleixner; 2 files, -3/+19)
Make the scheduler's 'sched_smt_present' static key globally available, so it can be used in the x86 speculation control code. Provide a query function and a stub for the CONFIG_SMP=n case. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
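The interface, roughly as described (a sketch; guarded here by CONFIG_SCHED_SMT, which the entry below ties to CONFIG_SMP):

    #ifdef CONFIG_SCHED_SMT
    extern struct static_key_false sched_smt_present;

    static __always_inline bool sched_smt_active(void)
    {
        return static_branch_likely(&sched_smt_present);
    }
    #else
    static inline bool sched_smt_active(void) { return false; }
    #endif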
2018-11-28  x86/Kconfig: Select SCHED_SMT if SMP enabled  (Thomas Gleixner; 1 file, -7/+1)
CONFIG_SCHED_SMT is enabled by all distros, so there is no real point in keeping it configurable. The runtime overhead in the core scheduler code is minimal because the actual SMT scheduling parts are conditional on a static key. This allows the scheduler's SMT state static key to be exposed to the speculation control code. Alternatively the scheduler's static key could be made always available when CONFIG_SMP is enabled, but that's just adding an unused static key to every other architecture for nothing. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  sched/smt: Make sched_smt_present track topology  (Peter Zijlstra (Intel); 1 file, -8/+11)
Currently the 'sched_smt_present' static key is enabled when SMT topology is observed at CPU bringup, but it is never disabled. However there is demand to also disable the key when the topology changes such that there is no SMT present anymore. Implement this by making the key count the number of cores that have SMT enabled. In particular, the SMT topology bits are set before interrupts are enabled and, similarly, are cleared after interrupts are disabled for the last time and the CPU dies. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Tim Chen <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
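Sketched against the CPU hotplug callbacks (fragments; placement and names are approximate):

    int sched_cpu_activate(unsigned int cpu)
    {
    #ifdef CONFIG_SCHED_SMT
        /* Going up: this CPU turns its core into an active SMT core */
        if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
            static_branch_inc_cpuslocked(&sched_smt_present);
    #endif
        /* ... remainder of CPU activation ... */
        return 0;
    }

    int sched_cpu_deactivate(unsigned int cpu)
    {
    #ifdef CONFIG_SCHED_SMT
        /* Going down: the last sibling pair on this core breaks up */
        if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
            static_branch_dec_cpuslocked(&sched_smt_present);
    #endif
        /* ... remainder of CPU deactivation ... */
        return 0;
    }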
2018-11-28  x86/speculation: Reorganize speculation control MSRs update  (Tim Chen; 1 file, -17/+29)
The logic to detect whether the previous and next tasks' flags relevant to the speculation control MSRs have changed is spread out across multiple functions. Consolidate all checks needed for updating speculation control MSRs into the new __speculation_ctrl_update() helper function. This makes it easy to pick the right speculation control MSR and the bits in MSR_IA32_SPEC_CTRL that need updating based on TIF flags changes. Originally-by: Thomas Lendacky <[email protected]> Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
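A simplified sketch of the consolidated helper, showing only the SSBD-via-SPEC_CTRL path; the real version also has to handle CPUs where SSBD lives elsewhere (e.g. the AMD LS_CFG and virtualized variants):

    static __always_inline void __speculation_ctrl_update(unsigned long tifp,
                                                          unsigned long tifn)
    {
        u64 msr = x86_spec_ctrl_base;
        bool updmsr = false;

        /* Fold the incoming task's SSBD state into the MSR image */
        if (static_cpu_has(X86_FEATURE_SSBD)) {
            msr |= ssbd_tif_to_spec_ctrl(tifn);
            updmsr |= !!((tifp ^ tifn) & _TIF_SSBD);
        }

        /* Write only when a relevant bit actually changed */
        if (updmsr)
            wrmsrl(MSR_IA32_SPEC_CTRL, msr);
    }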
2018-11-28  x86/speculation: Rename SSBD update functions  (Thomas Gleixner; 3 files, -11/+11)
During context switch, the SSBD bit in SPEC_CTRL MSR is updated according to changes of the TIF_SSBD flag in the current and next running task. Currently, only the bit controlling speculative store bypass disable in SPEC_CTRL MSR is updated and the related update functions all have "speculative_store" or "ssb" in their names. For enhanced mitigation control other bits in SPEC_CTRL MSR need to be updated as well, which makes the SSB names inadequate. Rename the "speculative_store*" functions to a more generic name. No functional change. Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Disable STIBP when enhanced IBRS is in use  (Tim Chen; 1 file, -0/+7)
If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2 user space exploits from a hyperthread sibling. Disable STIBP when enhanced IBRS is used. Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Move STIBP/IBPB string conditionals out of cpu_show_common()  (Tim Chen; 1 file, -2/+18)
The Spectre V2 printout in cpu_show_common() handles conditionals for the various mitigation methods directly in the sprintf() argument list. That's hard to read and will become unreadable if more complex decisions need to be made for a particular method. Move the conditionals for STIBP and IBPB string selection into helper functions, so they can be extended later on. Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Remove unnecessary ret variable in cpu_show_common()  (Tim Chen; 1 file, -4/+1)
Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Clean up spectre_v2_parse_cmdline()  (Tim Chen; 1 file, -14/+13)
Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline() to save an indentation level. Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/speculation: Update the TIF_SSBD comment  (Tim Chen; 1 file, -1/+1)
"Reduced Data Speculation" is an obsolete term. The correct new name is "Speculative store bypass disable" - which is abbreviated into SSBD. Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: Asit Mallick <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Jon Masters <[email protected]> Cc: Waiman Long <[email protected]> Cc: Greg KH <[email protected]> Cc: Dave Stewart <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2018-11-28  x86/retpoline: Remove minimal retpoline support  (Zhenzhong Duan; 2 files, -13/+2)
Now that CONFIG_RETPOLINE hard depends on compiler support, there is no reason to keep the minimal retpoline support around which only provided basic protection in the assembly files. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Zhenzhong Duan <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/f06f0a89-5587-45db-8ed2-0a9d6638d5c0@default
2018-11-28  x86/retpoline: Make CONFIG_RETPOLINE depend on compiler support  (Zhenzhong Duan; 5 files, -13/+10)
Since retpoline-capable compilers are widely available, make CONFIG_RETPOLINE hard depend on the compiler capability. Break the build when CONFIG_RETPOLINE is enabled and the compiler does not support it. Emit an error message in that case: "arch/x86/Makefile:226: *** You are building kernel with non-retpoline compiler, please update your compiler.. Stop." [dwmw: Fail the build with non-retpoline compiler] Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Zhenzhong Duan <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Michal Marek <[email protected]> Cc: <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/cca0cb20-f9e2-4094-840b-fb0f8810cd34@default
2018-11-28  ALSA: usb-audio: Add vendor and product name for Dell WD19 Dock  (Hui Wang; 1 file, -0/+10)
Like the Dell WD15 Dock, the WD19 Dock (0bda:402e) doesn't provide useful strings for the vendor and product names either. In order to share the UCM with WD15, here we keep the profile_name the same as the WD15. Signed-off-by: Hui Wang <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
2018-11-28  netfilter: nf_tables: deactivate expressions in rule replacement routine  (Taehee Yoo; 1 file, -11/+4)
There is no expression deactivation call from the rule replacement path, hence the chain counter is not decremented. A few steps to reproduce the problem: %nft add table ip filter %nft add chain ip filter c1 %nft add chain ip filter c2 %nft add rule ip filter c1 jump c2 %nft replace rule ip filter c1 handle 3 accept %nft flush ruleset <jump c2> expression means immediate NFT_JUMP to chain c2. Reference count of chain c2 is increased when the rule is added. When a rule is deleted or replaced, the reference counter of c2 should be decreased via nft_rule_expr_deactivate() which calls nft_immediate_deactivate(). Splat looks like: [ 214.396453] WARNING: CPU: 1 PID: 21 at net/netfilter/nf_tables_api.c:1432 nf_tables_chain_destroy.isra.38+0x2f9/0x3a0 [nf_tables] [ 214.398983] Modules linked in: nf_tables nfnetlink [ 214.398983] CPU: 1 PID: 21 Comm: kworker/1:1 Not tainted 4.20.0-rc2+ #44 [ 214.398983] Workqueue: events nf_tables_trans_destroy_work [nf_tables] [ 214.398983] RIP: 0010:nf_tables_chain_destroy.isra.38+0x2f9/0x3a0 [nf_tables] [ 214.398983] Code: 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 8e 00 00 00 48 8b 7b 58 e8 e1 2c 4e c6 48 89 df e8 d9 2c 4e c6 eb 9a <0f> 0b eb 96 0f 0b e9 7e fe ff ff e8 a7 7e 4e c6 e9 a4 fe ff ff e8 [ 214.398983] RSP: 0018:ffff8881152874e8 EFLAGS: 00010202 [ 214.398983] RAX: 0000000000000001 RBX: ffff88810ef9fc28 RCX: ffff8881152876f0 [ 214.398983] RDX: dffffc0000000000 RSI: 1ffff11022a50ede RDI: ffff88810ef9fc78 [ 214.398983] RBP: 1ffff11022a50e9d R08: 0000000080000000 R09: 0000000000000000 [ 214.398983] R10: 0000000000000000 R11: 0000000000000000 R12: 1ffff11022a50eba [ 214.398983] R13: ffff888114446e08 R14: ffff8881152876f0 R15: ffffed1022a50ed6 [ 214.398983] FS: 0000000000000000(0000) GS:ffff888116400000(0000) knlGS:0000000000000000 [ 214.398983] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 214.398983] CR2: 00007fab9bb5f868 CR3: 000000012aa16000 CR4: 00000000001006e0 [ 214.398983] Call Trace: [ 214.398983] ? nf_tables_table_destroy.isra.37+0x100/0x100 [nf_tables] [ 214.398983] ? __kasan_slab_free+0x145/0x180 [ 214.398983] ? nf_tables_trans_destroy_work+0x439/0x830 [nf_tables] [ 214.398983] ? kfree+0xdb/0x280 [ 214.398983] nf_tables_trans_destroy_work+0x5f5/0x830 [nf_tables] [ ... ] Fixes: bb7b40aecbf7 ("netfilter: nf_tables: bogus EBUSY in chain deletions") Reported-by: Christoph Anton Mitterer <[email protected]> Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=914505 Link: https://bugzilla.kernel.org/show_bug.cgi?id=201791 Signed-off-by: Taehee Yoo <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
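The shape of the fix in the replacement path, sketched from the description; it mirrors what the deletion path already does (error labels and exact call details are approximate):

    if (nft_is_active_next(net, old_rule)) {
        trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, old_rule);
        if (trans == NULL) {
            err = -ENOMEM;
            goto err2;
        }
        /* drop references held by the old rule's expressions,
         * e.g. the use counter of a jump target chain */
        nft_rule_expr_deactivate(&ctx, old_rule);
        nft_deactivate_next(net, old_rule);
    }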
2018-11-28  x86/MCE/AMD: Fix the thresholding machinery initialization order  (Borislav Petkov; 1 file, -13/+6)
Currently, the code sets up the thresholding interrupt vector and only then goes about initializing the thresholding banks. That is wrong, because an early thresholding interrupt would cause a NULL pointer dereference when accessing those banks and prevent the machine from booting. Therefore, set the thresholding interrupt vector only *after* having initialized the banks successfully. Fixes: 18807ddb7f88 ("x86/mce/AMD: Reset Threshold Limit after logging error") Reported-by: Rafał Miłecki <[email protected]> Reported-by: John Clemens <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Tested-by: Rafał Miłecki <[email protected]> Tested-by: John Clemens <[email protected]> Cc: Aravind Gopalakrishnan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Tony Luck <[email protected]> Cc: [email protected] Cc: Yazen Ghannam <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Link: https://bugzilla.kernel.org/show_bug.cgi?id=201291
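In sketch form (the init_threshold_banks() helper name is illustrative, not the literal diff; mce_threshold_vector and amd_threshold_interrupt are the existing symbols):

    /* wrong: the handler can fire before any bank exists */
    mce_threshold_vector = amd_threshold_interrupt;
    err = init_threshold_banks(cpu);

    /* right: install the vector only once the banks are set up */
    err = init_threshold_banks(cpu);
    if (!err)
        mce_threshold_vector = amd_threshold_interrupt;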
2018-11-28  mtd: spi-nor: fix erase_type array to indicate current map conf  (Tudor Ambarus; 1 file, -2/+29)
BFPT advertises all the erase types supported by all the possible map configurations. Mask out the erase types that are not supported by the current map configuration. Backward compatibility test done on sst26vf064b. Fixes: b038e8e3be72 ("mtd: spi-nor: parse SFDP Sector Map Parameter Table") Reported-by: Alexander Sverdlin <[email protected]> Signed-off-by: Tudor Ambarus <[email protected]> Tested-by: Alexander Sverdlin <[email protected]> Signed-off-by: Boris Brezillon <[email protected]>
2018-11-28  usb: gadget: u_ether: fix unsafe list iteration  (Marek Szyprowski; 1 file, -5/+6)
list_for_each_entry_safe() is not safe for deleting entries from the list if the spin lock, which protects it, is released and reacquired during the list iteration. Fix this issue by replacing this construction with a simple check if the list is empty and removing the first entry in each iteration. This is almost equivalent to a revert of the commit mentioned in the Fixes: tag. This patch fixes the following issue: --->8--- Unable to handle kernel NULL pointer dereference at virtual address 00000104 pgd = (ptrval) [00000104] *pgd=00000000 Internal error: Oops: 817 [#1] PREEMPT SMP ARM Modules linked in: CPU: 1 PID: 84 Comm: kworker/1:1 Not tainted 4.20.0-rc2-next-20181114-00009-g8266b35ec404 #1061 Hardware name: SAMSUNG EXYNOS (Flattened Device Tree) Workqueue: events eth_work PC is at rx_fill+0x60/0xac LR is at _raw_spin_lock_irqsave+0x50/0x5c pc : [<c065fee0>] lr : [<c0a056b8>] psr: 80000093 sp : ee7fbee8 ip : 00000100 fp : 00000000 r10: 006000c0 r9 : c10b0ab0 r8 : ee7eb5c0 r7 : ee7eb614 r6 : ee7eb5ec r5 : 000000dc r4 : ee12ac00 r3 : ee12ac24 r2 : 00000200 r1 : 60000013 r0 : ee7eb5ec Flags: Nzcv IRQs off FIQs on Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: 6d5dc04a DAC: 00000051 Process kworker/1:1 (pid: 84, stack limit = 0x(ptrval)) Stack: (0xee7fbee8 to 0xee7fc000) ... [<c065fee0>] (rx_fill) from [<c0143b7c>] (process_one_work+0x200/0x738) [<c0143b7c>] (process_one_work) from [<c0144118>] (worker_thread+0x2c/0x4c8) [<c0144118>] (worker_thread) from [<c014a8a4>] (kthread+0x128/0x164) [<c014a8a4>] (kthread) from [<c01010b4>] (ret_from_fork+0x14/0x20) Exception stack(0xee7fbfb0 to 0xee7fbff8) ... ---[ end trace 64480bc835eba7d6 ]--- Fixes: fea14e68ff5e ("usb: gadget: u_ether: use better list accessors") Signed-off-by: Marek Szyprowski <[email protected]> Signed-off-by: Felipe Balbi <[email protected]>
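The safe pattern, sketched against the driver's request lists (field names as in u_ether, but treat the fragment as approximate):

    spin_lock_irqsave(&dev->req_lock, flags);
    while (!list_empty(&dev->tx_reqs)) {
        req = list_first_entry(&dev->tx_reqs, struct usb_request, list);
        list_del(&req->list);

        /* dropping the lock invalidates any cached 'next' pointer,
         * which is what broke list_for_each_entry_safe() here */
        spin_unlock_irqrestore(&dev->req_lock, flags);
        usb_ep_free_request(link->in_ep, req);
        spin_lock_irqsave(&dev->req_lock, flags);
    }
    spin_unlock_irqrestore(&dev->req_lock, flags);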
2018-11-27  sched, trace: Fix prev_state output in sched_switch tracepoint  (Pavankumar Kondeti; 1 file, -1/+11)
commit 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout") tried to fix the problem introduced by a previous commit efb40f588b43 ("sched/tracing: Fix trace_sched_switch task-state printing"). However the prev_state output in sched_switch is still broken. task_state_index() uses fls() which considers the LSB as 1. Left shifting 1 by this value gives an incorrect mapping to the task state. Fix this by decrementing the value returned by __get_task_state() before shifting. Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout") Signed-off-by: Pavankumar Kondeti <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
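A worked example: TASK_INTERRUPTIBLE is 0x1 and fls(0x1) is 1, so __get_task_state() returns 1 and the old '1 << index' arithmetic reported 0x2, i.e. TASK_UNINTERRUPTIBLE. The corrected mapping, sketched (state_idx is an illustrative local):

    /* fls() counts the LSB as bit 1; undo that before shifting,
     * and keep index 0 (TASK_RUNNING) as-is */
    prev_state = state_idx ? 1 << (state_idx - 1) : TASK_RUNNING;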
2018-11-27  function_graph: Have profiler use curr_ret_stack and not depth  (Steven Rostedt (VMware); 1 file, -2/+2)
The profiler uses trace->depth to find its entry on the ret_stack, but the depth may not match the actual location of where its entry is (if an interrupt were to preempt the processing of the profiler for another function, the depth and the curr_ret_stack will be different). Have it use the curr_ret_stack as the index to find its ret_stack entry instead of using the depth variable, as that is no longer guaranteed to be the same. Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  function_graph: Reverse the order of pushing the ret_stack and the callback  (Steven Rostedt (VMware); 1 file, -4/+6)
The function graph profiler uses the ret_stack to store the "subtime" and reuse it by nested functions and also on the return. But the current logic has the profiler callback called before the ret_stack is updated, and it is just modifying the ret_stack that will later be allocated (it's just lucky that the "subtime" is not touched when it is allocated). This could also cause a crash if we are at the end of the ret_stack when this happens. By reversing the order, allocating the ret_stack entry first and then calling the callbacks attached to the function being traced, the ret_stack entry is no longer used before it is allocated. Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  function_graph: Move return callback before update of curr_ret_stack  (Steven Rostedt (VMware); 1 file, -9/+13)
In the past, curr_ret_stack had two functions. One was to denote the depth of the call graph, the other is to keep track of where on the ret_stack the data is used. Although they may be slightly related, there are two cases where they need to be used differently. The one case is that it keeps the ret_stack data from being corrupted by an interrupt coming in and overwriting the data still in use. The other is just to know where the depth of the stack currently is. The function profiler uses the ret_stack to save a "subtime" variable that is part of the data on the ret_stack. If curr_ret_stack is modified too early, then this variable can be corrupted. The "max_depth" option, when set to 1, will record the first functions going into the kernel. To see all top functions (when dealing with timings), the depth variable needs to be lowered before calling the return hook. But by lowering the curr_ret_stack, it makes the data on the ret_stack still being used by the return hook susceptible to being overwritten. Now that there are two variables to handle both cases (with the addition of curr_ret_depth), we can move them to the locations where they can handle both cases. Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack  (Steven Rostedt (VMware); 3 files, -8/+17)
Currently, the depth of the ret_stack is determined by curr_ret_stack index. The issue is that there's a race between setting of the curr_ret_stack and calling of the callback attached to the return of the function. Commit 03274a3ffb44 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") moved the calling of the callback to after the setting of the curr_ret_stack, even stating that it was safe to do so, when in fact, it was the reason there was a barrier() there (yes, I should have commented that barrier()). Not only does the curr_ret_stack keep track of the current call graph depth, it also keeps the ret_stack content from being overwritten by new data. The function profiler uses the "subtime" variable of the ret_stack structure and by moving the curr_ret_stack, it allows for interrupts to use the same structure it was using, corrupting the data, and breaking the profiler. To fix this, there needs to be two variables to handle the call stack depth and the pointer to where the ret_stack is being used, as they need to change at two different locations. Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  function_graph: Make ftrace_push_return_trace() static  (Steven Rostedt (VMware); 2 files, -4/+1)
As all architectures now call function_graph_enter() to do the entry work, no architecture should ever call ftrace_push_return_trace(). Make it static. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
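For reference across the per-architecture conversions below, a sketch of roughly what function_graph_enter() looks like once this series is applied (approximate; see the tree for the final form):

    int function_graph_enter(unsigned long ret, unsigned long func,
                             unsigned long frame_pointer, unsigned long *retp)
    {
        struct ftrace_graph_ent trace;

        trace.func = func;
        trace.depth = ++current->curr_ret_depth;

        /* allocate the ret_stack entry before any callback touches it */
        if (ftrace_push_return_trace(ret, func, frame_pointer, retp))
            goto out;

        /* the entry hook may still reject tracing this function */
        if (!ftrace_graph_entry(&trace))
            goto out_ret;

        return 0;
     out_ret:
        current->curr_ret_stack--;
     out:
        current->curr_ret_depth--;
        return -EBUSY;
    }

Each architecture's prepare_ftrace_return() then reduces to calling this and, only on success, redirecting the return address to return_to_handler.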
2018-11-27  sparc/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -10/+1)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have sparc use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: "David S. Miller" <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  sh/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -14/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have superh use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  s390/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -11/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have s390 use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Acked-by: Martin Schwidefsky <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Julian Wiedmann <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  riscv/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -12/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have riscv use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Greentime Hu <[email protected]> Cc: Alan Kao <[email protected]> Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Palmer Dabbelt <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  powerpc/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -13/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have powerpc use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  parisc: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -14/+3)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have parisc use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: "James E.J. Bottomley" <[email protected]> Cc: Helge Deller <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  nds32: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -16/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have nds32 use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Greentime Hu <[email protected]> Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  MIPS: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -12/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have MIPS use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  microblaze: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -13/+2)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have microblaze use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Michal Simek <[email protected]> Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  arm64: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -14/+1)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have arm64 use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Catalin Marinas <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Acked-by: Will Deacon <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  ARM: function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -16/+1)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have ARM use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Russell King <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  x86/function_graph: Simplify with function_graph_enter()  (Steven Rostedt (VMware); 1 file, -14/+1)
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have x86 use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-11-27  lan743x: Enable driver to work with LAN7431  (Bryan Whitehead; 2 files, -0/+2)
This driver was designed to work with both LAN7430 and LAN7431. The only difference between the two is that the LAN7431 has support for an external phy. This change adds LAN7431 to the list of recognized devices supported by this driver. Updates for v2: changed 'Fixes' tag to match the defined format. Fixes: 23f0703c125b ("lan743x: Add main source files for new lan743x driver") Signed-off-by: Bryan Whitehead <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-11-27  tipc: fix lockdep warning during node delete  (Jon Maloy; 1 file, -2/+5)
We see the following lockdep warning: [ 2284.078521] ====================================================== [ 2284.078604] WARNING: possible circular locking dependency detected [ 2284.078604] 4.19.0+ #42 Tainted: G E [ 2284.078604] ------------------------------------------------------ [ 2284.078604] rmmod/254 is trying to acquire lock: [ 2284.078604] 00000000acd94e28 ((&n->timer)#2){+.-.}, at: del_timer_sync+0x5/0xa0 [ 2284.078604] [ 2284.078604] but task is already holding lock: [ 2284.078604] 00000000f997afc0 (&(&tn->node_list_lock)->rlock){+.-.}, at: tipc_node_stop+0xac/0x190 [tipc] [ 2284.078604] [ 2284.078604] which lock already depends on the new lock. [ 2284.078604] [ 2284.078604] [ 2284.078604] the existing dependency chain (in reverse order) is: [ 2284.078604] [ 2284.078604] -> #1 (&(&tn->node_list_lock)->rlock){+.-.}: [ 2284.078604] tipc_node_timeout+0x20a/0x330 [tipc] [ 2284.078604] call_timer_fn+0xa1/0x280 [ 2284.078604] run_timer_softirq+0x1f2/0x4d0 [ 2284.078604] __do_softirq+0xfc/0x413 [ 2284.078604] irq_exit+0xb5/0xc0 [ 2284.078604] smp_apic_timer_interrupt+0xac/0x210 [ 2284.078604] apic_timer_interrupt+0xf/0x20 [ 2284.078604] default_idle+0x1c/0x140 [ 2284.078604] do_idle+0x1bc/0x280 [ 2284.078604] cpu_startup_entry+0x19/0x20 [ 2284.078604] start_secondary+0x187/0x1c0 [ 2284.078604] secondary_startup_64+0xa4/0xb0 [ 2284.078604] [ 2284.078604] -> #0 ((&n->timer)#2){+.-.}: [ 2284.078604] del_timer_sync+0x34/0xa0 [ 2284.078604] tipc_node_delete+0x1a/0x40 [tipc] [ 2284.078604] tipc_node_stop+0xcb/0x190 [tipc] [ 2284.078604] tipc_net_stop+0x154/0x170 [tipc] [ 2284.078604] tipc_exit_net+0x16/0x30 [tipc] [ 2284.078604] ops_exit_list.isra.8+0x36/0x70 [ 2284.078604] unregister_pernet_operations+0x87/0xd0 [ 2284.078604] unregister_pernet_subsys+0x1d/0x30 [ 2284.078604] tipc_exit+0x11/0x6f2 [tipc] [ 2284.078604] __x64_sys_delete_module+0x1df/0x240 [ 2284.078604] do_syscall_64+0x66/0x460 [ 2284.078604] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 2284.078604] [ 2284.078604] other info that might help us debug this: [ 2284.078604] [ 2284.078604] Possible unsafe locking scenario: [ 2284.078604] [ 2284.078604] CPU0 CPU1 [ 2284.078604] ---- ---- [ 2284.078604] lock(&(&tn->node_list_lock)->rlock); [ 2284.078604] lock((&n->timer)#2); [ 2284.078604] lock(&(&tn->node_list_lock)->rlock); [ 2284.078604] lock((&n->timer)#2); [ 2284.078604] [ 2284.078604] *** DEADLOCK *** [ 2284.078604] [ 2284.078604] 3 locks held by rmmod/254: [ 2284.078604] #0: 000000003368be9b (pernet_ops_rwsem){+.+.}, at: unregister_pernet_subsys+0x15/0x30 [ 2284.078604] #1: 0000000046ed9c86 (rtnl_mutex){+.+.}, at: tipc_net_stop+0x144/0x170 [tipc] [ 2284.078604] #2: 00000000f997afc0 (&(&tn->node_list_lock)->rlock){+.-.}, at: tipc_node_stop+0xac/0x19 [...] The reason is that the node timer handler sometimes needs to delete a node which has been disconnected for too long. To do this, it grabs the lock 'node_list_lock', which may at the same time be held by the generic node cleanup function, tipc_node_stop(), during module removal. Since the latter is calling del_timer_sync() inside the same lock, we have a potential deadlock. We fix this by letting the timer handler use spin_trylock() instead of just spin_lock(); when it fails to grab the lock it just returns and terminates its execution. This is safe to do, since tipc_node_stop() anyway is about to delete both the timer and the node instance.
Fixes: 6a939f365bdb ("tipc: Auto removal of peer down node instance") Acked-by: Ying Xue <[email protected]> Signed-off-by: Jon Maloy <[email protected]> Signed-off-by: David S. Miller <[email protected]>
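A sketch of the trylock in the timer handler; the staleness check and the delete helper names are illustrative, only the locking pattern is the point:

    static void tipc_node_timeout(struct timer_list *t)
    {
        struct tipc_node *n = from_timer(n, t, timer);
        struct tipc_net *tn = tipc_net(n->net);

        if (node_is_stale(n)) {
            /* tipc_node_stop() may hold node_list_lock while it waits
             * in del_timer_sync() for this timer; never spin on it */
            if (spin_trylock_bh(&tn->node_list_lock)) {
                tipc_node_delete_from_list(n);
                spin_unlock_bh(&tn->node_list_lock);
            }
            return;
        }
        /* ... normal periodic link supervision continues here ... */
    }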
2018-11-27  lan743x: fix return value for lan743x_tx_napi_poll  (Bryan Whitehead; 1 file, -5/+5)
The lan743x driver, when under heavy traffic load, has been noticed to sometimes hang or cause a kernel panic. Debugging reveals that the TX napi poll routine was returning the wrong value, 'weight'; most other drivers return 0 and call napi_complete instead of napi_complete_done. Additionally, when creating the tx napi poll routine, netif_napi_add was changed to netif_tx_napi_add. Updates for v3: changed 'Fixes' tag to match the defined format. Updates for v2: use napi_complete instead of napi_complete_done in lan743x_tx_napi_poll; use netif_tx_napi_add instead of netif_napi_add for registration of the tx napi poll routine. Fixes: 23f0703c125b ("lan743x: Add main source files for new lan743x driver") Signed-off-by: Bryan Whitehead <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-11-27  net: via: via-velocity: fix spelling mistake "alignement" -> "alignment"  (Colin Ian King; 1 file, -1/+1)
The text in array velocity_gstrings contains a spelling mistake, rename rx_frame_alignement_errors to rx_frame_alignment_errors. Signed-off-by: Colin Ian King <[email protected]> Signed-off-by: David S. Miller <[email protected]>