path: root/arch/mips
Age  Commit message  Author  Files  Lines
2017-03-28KVM: MIPS/VZ: Support guest hardware page table walkerJames Hogan2-0/+88
Add support for VZ guest CP0_PWBase, CP0_PWField, CP0_PWSize, and CP0_PWCtl registers for controlling the guest hardware page table walker (HTW) present on P5600 and P6600 cores. These guest registers need initialising on R6, context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/VZ: Support guest segmentation controlJames Hogan2-1/+247
Add support for VZ guest CP0_SegCtl0, CP0_SegCtl1, and CP0_SegCtl2 registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present. They also require the GVA -> GPA translation code for handling a GVA root exception to be updated to interpret the segmentation registers and decode the faulting instruction enough to detect EVA memory access instructions. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/VZ: Support guest CP0_[X]ContextConfigJames Hogan2-2/+64
Add support for VZ guest CP0_ContextConfig and CP0_XContextConfig (MIPS64 only) registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/VZ: Support guest CP0_BadInstr[P]James Hogan2-0/+50
Add support for VZ guest CP0_BadInstr and CP0_BadInstrP registers, as found on most VZ capable cores. These guest registers need context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Add VZ support to build systemJames Hogan2-3/+32
Add support for the MIPS Virtualization (VZ) ASE to the MIPS KVM build system. For now KVM can only be configured for T&E or VZ and not both, but the design of the user facing APIs supports the possibility of having both available, so this could change in future. Note that support for various optional guest features (some of which can't be turned off) is implemented in immediately following commits, so although it should now be possible to build VZ support, it may not work yet on your hardware. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Implement VZ supportJames Hogan9-0/+2477
Add the main support for the MIPS Virtualization ASE (A.K.A. VZ) to MIPS KVM. The bulk of this work is in vz.c, with various new state and definitions elsewhere. Enough is implemented to be able to run on a minimal VZ core. Further patches will fill out support for guest features which are optional or can be disabled. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Update exit handler for VZJames Hogan1-13/+18
The general guest exit handler needs a few tweaks for VZ compared to trap & emulate, which for now are made directly depending on CONFIG_KVM_MIPS_VZ:
- There is no need to re-enable the hardware page table walker (HTW), as it can be left enabled during guest mode operation with VZ.
- There is no need to perform a privilege check, as any guest privilege violations should have already been detected by the hardware and triggered the appropriate guest exception.
Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/Emulate: Drop CACHE emulation for VZJames Hogan1-0/+2
Ifdef out the trap & emulate CACHE instruction emulation functions for VZ. We will provide separate CACHE instruction emulation in vz.c, and we need to avoid linker errors due to the use of T&E specific MMU helpers. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/Emulate: Update CP0_Compare emulation for VZJames Hogan1-1/+42
Update emulation of guest writes to CP0_Compare for VZ. There are two main differences compared to trap & emulate:
- Writing to CP0_Compare in the VZ hardware guest context acks any pending timer, clearing CP0_Cause.TI. If we don't want an ack to take place we must carefully restore the TI bit if it was previously set.
- Even with guest timer access disabled in CP0_GuestCtl0.GT, if the guest CP0_Count reaches the guest CP0_Compare the timer interrupt will assert. To prevent this we must set CP0_GTOffset to move the guest CP0_Count out of the way of the new guest CP0_Compare, either before or after depending on whether it is a forwards or backwards change.
Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
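A minimal sketch of the second point, assuming VZ accessor helpers along the lines of write_c0_gtoffset() and write_gc0_compare() (names taken from the description above rather than verified against the final code):

	/* Illustrative only, not the real kvm_mips_write_compare(): order the
	 * GTOffset update around the Compare write so guest Count never
	 * crosses the new Compare while guest timer access is disabled.
	 * Which side the adjustment goes on depends on the direction of the
	 * change, as described above. */
	static void vz_set_guest_compare(u32 new_compare, u32 new_gtoffset,
					 bool adjust_before)
	{
		if (adjust_before)
			write_c0_gtoffset(new_gtoffset);
		write_gc0_compare(new_compare);	/* may ack a pending CP0_Cause.TI */
		if (!adjust_before)
			write_c0_gtoffset(new_gtoffset);
	}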
2017-03-28KVM: MIPS/TLB: Add VZ TLB managementJames Hogan3-0/+417
Add functions for MIPS VZ TLB management to tlb.c. kvm_vz_host_tlb_inv() will be used for invalidating root TLB entries after GPA page tables have been modified due to a KVM page fault. It arranges for a root GPA mapping to be flushed from the TLB, using the gpa_mm ASID or the current GuestID to do the probe. kvm_vz_local_flush_roottlb_all_guests() and kvm_vz_local_flush_guesttlb_all() flush all TLB entries in the corresponding TLB for guest mappings (GPA->RPA for root TLB with GuestID, and all entries for guest TLB). They will be used when starting a new GuestID cycle, when VZ hardware is enabled/disabled, and also when switching to a guest when the guest TLB contents may be stale or belong to a different VM. kvm_vz_guest_tlb_lookup() converts a guest virtual address to a guest physical address using the guest TLB. This will be used to decode guest virtual addresses which are sometimes provided by VZ hardware in CP0_BadVAddr for certain exceptions when the guest physical address is unavailable. kvm_vz_save_guesttlb() and kvm_vz_load_guesttlb() will be used to preserve wired guest VTLB entries while a guest isn't running. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/Entry: Update entry code to support VZJames Hogan3-10/+131
Update MIPS KVM entry code to support VZ:
- We need to set GuestCtl0.GM while in guest mode.
- For cores supporting GuestID, we need to set the root GuestID to match the main GuestID while in guest mode so that the root TLB refill handler writes the correct GuestID into the TLB.
- For cores without GuestID where the root ASID dealiases RVA/GPA mappings, we need to load that ASID from the gpa_mm rather than the per-VCPU guest_kernel_mm or guest_user_mm, since the root TLB maps guest physical addresses. We also need to restore the normal process ASID on exit.
- The normal linux process pgd needs restoring on exit, as we can't leave the GPA mappings active for kernel code.
- GuestCtl0 needs saving on exit for the GExcCode field, as it may be clobbered if a preemption occurs.
We also need to move the TLB refill handler to the XTLB vector at offset 0x80 on 64-bit VZ kernels, as hardware will use Root.Status.KX to determine whether a TLB refill or XTLB Refill exception is to be taken on a root TLB miss from guest mode, and KX needs to be set for kernel code to be able to access the 64-bit segments.
Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Abstract guest CP0 register access for VZJames Hogan4-95/+264
Abstract the MIPS KVM guest CP0 register access macros into inline functions which are generated by macros. This allows them to be generated differently for VZ, where they will usually need to access the hardware guest CP0 context rather than the saved values in RAM. Accessors for each individual register are generated using these macros:
- __BUILD_KVM_*_SW() for registers which are not present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the saved value in RAM regardless of whether VZ is enabled.
- __BUILD_KVM_*_HW() for registers which are present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the hardware register when VZ is enabled.
These build the underlying accessors using further macros:
- __BUILD_KVM_*_SAVED() builds e.g. kvm_{read,write}_sw_gc0_##name() functions for accessing the saved versions of the registers in RAM. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with T&E where registers are always stored in RAM, but are also available with VZ HW registers to allow them to be accessed while saved.
- __BUILD_KVM_*_VZ() builds e.g. kvm_{read,write}_vz_gc0_##name() functions for accessing the VZ hardware guest context registers directly. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with VZ.
- __BUILD_KVM_*_WRAP() builds wrappers with different names, which allows the common kvm_{read,write}_c0_guest_##name() functions to be implemented using the VZ accessors while still having the SAVED accessors available too.
- __BUILD_KVM_SAVE_VZ() builds functions for saving and restoring VZ hardware guest context register state to RAM, improving conciseness of VZ context saving and restoring.
Similar macros exist for generating modifiers (set, clear, change), either with a normal unlocked read/modify/write, or using atomic LL/SC sequences. These changes alter the types of 32-bit registers to u32 instead of unsigned long, which requires some changes to printk() functions in MIPS KVM.
Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
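As a rough illustration of the pattern (a simplified guess at the shape, not the actual macros in asm/kvm_host.h), a SAVED accessor might expand to something like the following, indexing the software copy in struct mips_coproc; the VZ variants would instead wrap the hardware guest-context read/write helpers:

	/* Hypothetical simplification of __BUILD_KVM_*_SAVED(): generate
	 * kvm_{read,write}_sw_gc0_<name>() accessors backed by the in-RAM
	 * copy of the guest register at (reg, sel). */
	#define __BUILD_KVM_RW_SAVED_SKETCH(name, _reg, sel)			\
	static inline u32 kvm_read_sw_gc0_##name(struct mips_coproc *cop0)	\
	{									\
		return cop0->reg[(_reg)][(sel)];				\
	}									\
	static inline void kvm_write_sw_gc0_##name(struct mips_coproc *cop0,	\
						   u32 val)			\
	{									\
		cop0->reg[(_reg)][(sel)] = val;					\
	}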
2017-03-28KVM: MIPS: Add guest exit exception callbackJames Hogan4-0/+32
Add a callback for MIPS KVM implementations to handle the VZ guest exit exception. Currently the trap & emulate implementation contains a stub which reports an internal error, but the callback will be used properly by the VZ implementation. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Add hardware_{enable,disable} callbackJames Hogan3-2/+19
Add an implementation callback for the kvm_arch_hardware_enable() and kvm_arch_hardware_disable() architecture functions, with simple stubs for trap & emulate. This is in preparation for VZ which will make use of them. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Add callback to check extensionJames Hogan3-2/+19
Add an implementation callback for checking presence of KVM extensions. This allows implementation specific extensions to be provided without ifdefs in mips.c. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Init timer frequency from callbackJames Hogan4-11/+10
Currently the software emulated timer is initialised to a frequency of 100MHz by kvm_mips_init_count(), but this isn't suitable for VZ where the frequency of the guest timer matches that of the host. Add a count_hz argument so the caller can specify the default frequency, and move the call from kvm_arch_vcpu_create() to the implementation specific vcpu_setup() callback, so that VZ can specify a different frequency. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
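A hedged sketch of the resulting call site in the trap & emulate vcpu_setup() callback; the signature of kvm_mips_init_count() after this change is assumed from the description above:

	static int example_vcpu_setup(struct kvm_vcpu *vcpu)
	{
		/* T&E keeps the historical 100 MHz software timer; a VZ
		 * implementation would pass the host timer frequency here. */
		kvm_mips_init_count(vcpu, 100 * 1000 * 1000);
		return 0;
	}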
2017-03-28KVM: MIPS: Add VZ & TE capabilitiesJames Hogan1-0/+9
Add new KVM_CAP_MIPS_VZ and KVM_CAP_MIPS_TE capabilities, and in order to allow MIPS KVM to support VZ without confusing old users (which expect the trap & emulate implementation), define and start checking KVM_CREATE_VM type codes. The codes available are:
- KVM_VM_MIPS_TE = 0: This is the current value expected from the user, and will create a VM using trap & emulate in user mode, confined to the user mode address space. This may in future become unavailable if the kernel is only configured to support VZ, in which case the EINVAL error will be returned and KVM_CAP_MIPS_TE won't be available even though KVM_CAP_MIPS_VZ is.
- KVM_VM_MIPS_VZ = 1: This can be provided when the KVM_CAP_MIPS_VZ capability is available to create a VM using VZ, with a fully virtualized guest virtual address space. If VZ support is unavailable in the kernel, the EINVAL error will be returned (although old kernels without the KVM_CAP_MIPS_VZ capability may well succeed and create a trap & emulate VM).
This is designed to allow the desired implementation (T&E vs VZ) to be potentially chosen at runtime rather than being fixed in the kernel configuration.
Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
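A minimal userspace sketch of the VM-type selection this enables; error handling is elided, and the constants come from <linux/kvm.h> on kernels that carry this series:

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int create_mips_vm(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);
		int type = KVM_VM_MIPS_TE;	/* 0: trap & emulate, the old default */

		/* Prefer full virtualization when the kernel advertises it. */
		if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MIPS_VZ) > 0)
			type = KVM_VM_MIPS_VZ;	/* 1: VZ, fully virtualized guest address space */

		return ioctl(kvm, KVM_CREATE_VM, type);	/* EINVAL if the type is unsupported */
	}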
2017-03-28KVM: MIPS: Extend counters & events for VZ GExcCodesJames Hogan3-1/+37
Extend MIPS KVM stats counters and kvm_transition trace event codes to cover hypervisor exceptions, which have their own GExcCode field in CP0_GuestCtl0 with up to 32 hypervisor exception cause codes. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Update kvm_lose_fpu() for VZJames Hogan1-8/+12
Update the implementation of kvm_lose_fpu() for VZ, where there is no need to enable the FPU/MSA in the root context if the FPU/MSA state is loaded but disabled in the guest context. The trap & emulate implementation needs to disable FPU/MSA in the root context when the guest disables them in order to catch the COP1 unusable or MSA disabled exception when they're used and pass it on to the guest. For VZ however as long as the context is loaded and enabled in the root context, the guest can enable and disable it in the guest context without the hypervisor having to do much, and will take guest exceptions without hypervisor intervention if used without being enabled in the guest context. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/Emulate: Implement 64-bit MMIO emulationJames Hogan1-1/+28
Implement additional MMIO emulation for MIPS64, including 64-bit loads/stores, and 32-bit unsigned loads. These are only exposed on 64-bit VZ hosts. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS/Emulate: De-duplicate MMIO emulationJames Hogan1-156/+50
Refactor MIPS KVM MMIO load/store emulation to reduce code duplication. Each duplicate differed slightly anyway, and it will simplify adding 64-bit MMIO support for VZ. kvm_mips_emulate_store() and kvm_mips_emulate_load() can now return EMULATE_DO_MMIO (as possibly originally intended). We therefore stop calling either of these from kvm_mips_emulate_inst(), which is now only used by kvm_trap_emul_handle_cop_unusable() which is picky about return values. Signed-off-by: James Hogan <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28KVM: MIPS: Implement HYPCALL emulationJames Hogan6-1/+69
Emulate the HYPCALL instruction added in the VZ ASE and used by the MIPS paravirtualised guest support that is already merged. The new hypcall.c handles arguments and the return value. No actual hypercalls are yet supported, but this still allows us to safely step over hypercalls and set an error code in the return value for forward compatibility. Non-zero HYPCALL codes are not handled. We also document the hypercall ABI which asm/kvm_para.h uses. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: Andreas Herrmann <[email protected]> Cc: David Daney <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
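A guest-side sketch assuming the conventional MIPS KVM hypercall ABI (call number in $2/v0, up to four arguments in $4-$7/a0-a3, result returned in v0); the .word encoding of HYPCALL is an assumption here, and asm/kvm_para.h carries the real helpers:

	static inline unsigned long mips_hypcall1(unsigned long num,
						  unsigned long arg0)
	{
		register unsigned long n asm("$2") = num;	/* v0: hypercall number / result */
		register unsigned long a asm("$4") = arg0;	/* a0: first argument */

		/* HYPCALL with code 0; unsupported numbers get an error code back */
		asm volatile(".word 0x42000028"
			     : "+r" (n)
			     : "r" (a)
			     : "memory");
		return n;
	}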
2017-03-28MIPS: asm/tlb.h: Add UNIQUE_GUEST_ENTRYHI() macroJames Hogan1-2/+4
Add a distinct UNIQUE_GUEST_ENTRYHI() macro for invalidation of guest TLB entries by KVM, using addresses in KSeg1 rather than KSeg0. This avoids conflicts with guest invalidation routines when there is no EHINV bit to mark the whole entry as invalid, avoiding guest machine check exceptions on Cavium Octeon III. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
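A sketch of what such a macro could look like, assuming the same one-entry-per-index scheme as the existing UNIQUE_ENTRYHI() but based in KSeg1 (the exact definition lives in asm/tlb.h and may differ):

	/* Assumed form: a unique EntryHi per TLB index, placed in KSeg1 so it
	 * cannot collide with guest KSeg0 invalidation addresses on cores
	 * without an EHINV bit. */
	#define UNIQUE_GUEST_ENTRYHI(idx)	(CKSEG1 + ((idx) << (PAGE_SHIFT + 1)))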
2017-03-28MIPS: Add some missing guest CP0 accessors & defsJames Hogan1-2/+14
Add some missing guest accessors and register field definitions for KVM for MIPS VZ to make use of. Guest CP0_LLAddr register accessors and definitions for the LLB field allow KVM to clear the guest LLB to cancel in-progress LL/SC atomics on restore, and to emulate accesses by the guest to the CP0_LLAddr register. Bitwise modifiers and definitions for the guest CP0_Wired and CP0_Config1 registers allow KVM to modify fields within the CP0_Wired and CP0_Config1 registers. Finally a definition for the CP0_Config5.SBRI bit allows KVM to initialise and allow modification of the guest version of the SBRI bit. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28MIPS: Probe guest MVHJames Hogan2-1/+7
Probe for availability of M{T,F}HC0 instructions used with e.g. XPA in the VZ guest context, and make it available via cpu_guest_has_mvh. This will be helpful in properly emulating the MAAR registers in KVM for MIPS VZ. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28MIPS: Probe guest CP0_UserLocalJames Hogan2-1/+8
Probe for presence of guest CP0_UserLocal register and expose via cpu_guest_has_userlocal. This register is optional pre-r6, so this will allow KVM to only save/restore/expose the guest CP0_UserLocal register if it exists. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-28MIPS: Separate MAAR V bit into VL and VH for XPAJames Hogan3-7/+13
The MAAR V bit has been renamed VL since another bit called VH is added at the top of the register when it is extended to 64-bits on a 32-bit processor with XPA. Rename the V definition, fix the various users, and add definitions for the VH bit. Also add a definition for the MAARI Index field. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
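A hedged sketch of the resulting definitions (field positions are assumptions based on the description; asm/mipsregs.h is authoritative):

	#define MIPS_MAAR_VL		(_ULCAST_(1) << 0)	/* was MIPS_MAAR_V */
	/* VH sits at the top (bit 63) of the XPA-extended 64-bit register. */
	#define MIPS_MAARI_INDEX	(_ULCAST_(0x3f) << 0)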
2017-03-28MIPS: Add defs & probing of UFRJames Hogan3-0/+7
Add definitions and probing of the UFR bit in Config5. This bit allows user mode control of the FR bit (floating point register mode). It is present if the UFRP bit is set in the floating point implementation register. This is a capability KVM may want to expose to guest kernels, even though Linux is unlikely to ever use it due to the implications for multi-threaded programs. Signed-off-by: James Hogan <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: "Radim Krčmář" <[email protected]> Cc: [email protected] Cc: [email protected]
2017-03-24net: Introduce SO_INCOMING_NAPI_IDSridhar Samudrala1-0/+1
This socket option returns the NAPI ID associated with the queue on which the last frame is received. This information can be used by the apps to split the incoming flows among the threads based on the Rx queue on which they are received. If the NAPI ID actually represents a sender_cpu then the value is ignored and 0 is returned. Signed-off-by: Sridhar Samudrala <[email protected]> Signed-off-by: Alexander Duyck <[email protected]> Acked-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
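A userspace sketch of querying the option on a connected socket; SO_INCOMING_NAPI_ID comes from the uapi socket headers on kernels with this patch:

	#include <sys/socket.h>

	static unsigned int rx_napi_id(int fd)
	{
		unsigned int napi_id = 0;
		socklen_t len = sizeof(napi_id);

		if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
			return 0;	/* 0 also means "no real NAPI ID", per the description */
		return napi_id;
	}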
2017-03-22sock: introduce SO_MEMINFO getsockoptJosh Hunt1-0/+3
Allows reading of SK_MEMINFO_VARS via socket option. This way an application can get all meminfo related information in a single socket option call instead of multiple calls. Adds helper function, sk_get_meminfo(), and uses that for both getsockopt and sock_diag_put_meminfo(). Suggested by Eric Dumazet. Signed-off-by: Josh Hunt <[email protected]> Reviewed-by: Jason Baron <[email protected]> Acked-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
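A sketch of reading the whole SK_MEMINFO_* array in one call, per the description above; the index names come from <linux/sock_diag.h>:

	#include <stdio.h>
	#include <sys/socket.h>
	#include <linux/sock_diag.h>

	static void dump_sk_meminfo(int fd)
	{
		__u32 mem[SK_MEMINFO_VARS] = { 0 };
		socklen_t len = sizeof(mem);

		if (getsockopt(fd, SOL_SOCKET, SO_MEMINFO, mem, &len) == 0)
			printf("rmem_alloc=%u rcvbuf=%u wmem_alloc=%u sndbuf=%u\n",
			       mem[SK_MEMINFO_RMEM_ALLOC], mem[SK_MEMINFO_RCVBUF],
			       mem[SK_MEMINFO_WMEM_ALLOC], mem[SK_MEMINFO_SNDBUF]);
	}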
2017-03-22MIPS: IRQ Stack: Unwind IRQ stack onto task stackMatt Redfearn4-20/+60
When the separate IRQ stack was introduced, stack unwinding only proceeded as far as the top of the IRQ stack, leading to kernel backtraces being less useful, lacking the trace of what was interrupted. Fix this by providing a means for the kernel to unwind the IRQ stack onto the interrupted task stack. The processor state is saved to the kernel task stack on interrupt. The IRQ_STACK_START macro reserves an unsigned long at the top of the IRQ stack where the interrupted task's stack pointer can be saved. After the active stack is switched to the IRQ stack, save the interrupted task's stack pointer to the reserved location. Fix the stack unwinding code to look for the frame being the top of the IRQ stack and, if so, get the next frame from the saved location. The existing test does not work with the separate stack since the ra is no longer pointed at ret_from_{irq,exception}. The test to stop unwinding the stack 32 bytes from the top of a stack must be modified to allow unwinding to continue up to the location of the saved task stack pointer when on the IRQ stack. The low / high marks of the stack are set depending on whether the sp is on an IRQ stack or not. Signed-off-by: Matt Redfearn <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Marcin Nowakowski <[email protected]> Cc: Masanari Iida <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jason A. Donenfeld <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15788/ Signed-off-by: Ralf Baechle <[email protected]>
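A hedged sketch of the hand-off described above; irq_stack[] and IRQ_STACK_START follow the naming in this entry, and the rest is illustrative rather than the exact unwind_stack() change:

	/* If sp has reached the reserved slot at the top of this CPU's IRQ
	 * stack, continue unwinding from the task stack pointer that was
	 * saved there on interrupt entry; otherwise keep the current sp. */
	static unsigned long continue_on_task_stack(int cpu, unsigned long sp)
	{
		unsigned long top = (unsigned long)irq_stack[cpu] + IRQ_STACK_START;

		if (sp == top)
			return *(unsigned long *)top;
		return sp;
	}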
2017-03-21MIPS: c-r4k: Fix Loongson-3's vcache/scache waysize calculationHuacai Chen1-0/+2
If scache.waysize is 0, r4k___flush_cache_all() will do nothing and then cause bugs. Though vcache.waysize isn't being used yet, we also fix its calculation. Signed-off-by: Huacai Chen <[email protected]> Cc: John Crispin <[email protected]> Cc: Steven J . Hill <[email protected]> Cc: Fuxin Zhang <[email protected]> Cc: Zhangjin Wu <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15756/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-21MIPS: Flush wrong invalid FTLB entry for huge pageHuacai Chen1-4/+21
On VTLB+FTLB platforms (such as Loongson-3A R2), the FTLB's pagesize is usually configured the same as PAGE_SIZE. In such a case, a huge page entry is not suitable to be written into the FTLB. Unfortunately, when a huge page is created, its page table entries are not created immediately. The TLB refill handler will then fetch an invalid page table entry which has no "HUGE" bit, and this entry may be written to the FTLB. Since it is invalid, the TLB load/store handler will then use tlbwi to write the valid entry at the same place. However, the valid entry is a huge page entry which isn't suitable for the FTLB. Our solution is to modify build_huge_handler_tail: flush the invalid old entry (whether it is in the FTLB or the VTLB, in order to reduce branches) and use tlbwr to write the valid new entry. Signed-off-by: Rui Wang <[email protected]> Signed-off-by: Huacai Chen <[email protected]> Cc: John Crispin <[email protected]> Cc: Steven J . Hill <[email protected]> Cc: Fuxin Zhang <[email protected]> Cc: Zhangjin Wu <[email protected]> Cc: Huacai Chen <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15754/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-21MIPS: Check TLB before handle_ri_rdhwr() for Loongson-3Huacai Chen2-6/+15
Loongson-3's micro TLB (ITLB) is not strictly a subset of the JTLB. That means: when a JTLB entry is replaced by hardware, an old valid entry may still exist in the ITLB. So, a TLB miss exception may occur while handle_ri_rdhwr() is running because it tries to access EPC's content. However, handle_ri_rdhwr() doesn't clear EXL, which makes a TLB Refill exception be treated as a TLB Invalid exception, and tlbp may fail. In this case, if the FTLB (which is set-associative rather than fully-associative) is enabled, a tlbp failure will cause an invalid tlbwi, which will hang the whole system. This patch renames handle_ri_rdhwr_vivt to handle_ri_rdhwr_tlbp and uses it for Loongson-3. It tries to solve the same problem described below, but more straightforwardly. https://patchwork.linux-mips.org/patch/12591/ I think Loongson-2 has the same problem, but it has no FTLB, so we just keep it as is. Signed-off-by: Huacai Chen <[email protected]> Cc: Rui Wang <[email protected]> Cc: John Crispin <[email protected]> Cc: Steven J . Hill <[email protected]> Cc: Fuxin Zhang <[email protected]> Cc: Zhangjin Wu <[email protected]> Cc: Huacai Chen <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15753/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-21MIPS: Add MIPS_CPU_FTLB for Loongson-3A R2Huacai Chen1-1/+1
Loongson-3A R2 and newer CPUs have an FTLB, but Config0.MT is 1, so add MIPS_CPU_FTLB to the CPU options. Signed-off-by: Huacai Chen <[email protected]> Cc: John Crispin <[email protected]> Cc: Steven J . Hill <[email protected]> Cc: Fuxin Zhang <[email protected]> Cc: Zhangjin Wu <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15752/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-21MIPS: Lantiq: fix missing xbar kernel panicHauke Mehrtens1-1/+1
Commit 08b3c894e565 ("MIPS: lantiq: Disable xbar fpi burst mode") accidentally requested the resources from the pmu address region instead of the xbar registers region, but the check for the return value of request_mem_region() was wrong. Commit 98ea51cb0c8c ("MIPS: Lantiq: Fix another request_mem_region() return code check") fixed the check of the return value of request_mem_region(), which made the kernel panic. This patch now makes use of the correct memory region for the cross bar. Fixes: 08b3c894e565 ("MIPS: lantiq: Disable xbar fpi burst mode") Signed-off-by: Hauke Mehrtens <[email protected]> Cc: John Crispin <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: <[email protected]> # 4.4.x- Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15751 Signed-off-by: Ralf Baechle <[email protected]>
2017-03-21MIPS: smp-cps: Fix retrieval of VPE mask on big endian CPUsMatt Redfearn1-1/+1
The vpe_mask member of struct core_boot_config is of type atomic_t, which is a 32bit type. In cps-vec.S this member was being retrieved by a PTR_L macro, which on 64bit systems is a 64bit load. On little endian systems this is OK, since the double word that is retrieved will have the required less significant word in the correct position. However, on big endian systems the less significant word of the load is retrieved from address+4, and the more significant from address+0. The destination register therefore ends up with the required word in the more significant word, e.g. when starting the second VP of a big endian 64bit system, the load PTR_L ta2, COREBOOTCFG_VPEMASK(a0) ends up setting register ta2 to 0x0000000300000000. When this value is written to the CPC it is ignored, since it is invalid to write anything larger than 4 bits. This results in any VP other than VP0 in a core failing to start in 64bit big endian systems. Change the load to a 32bit load word instruction to fix the bug. Fixes: f12401d7219f ("MIPS: smp-cps: Pull boot config retrieval out of mips_cps_boot_vpes") Signed-off-by: Matt Redfearn <[email protected]> Cc: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15787/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-09arch, mm: convert all architectures to use 5level-fixup.hKirill A. Shutemov2-0/+2
If an architecture uses 4level-fixup.h we don't need to do anything as it includes 5level-fixup.h. If an architecture uses pgtable-nop*d.h, define __ARCH_USE_5LEVEL_HACK before inclusion of the header. It makes asm-generic code to use 5level-fixup.h. If an architecture has 4-level paging or folds levels on its own, include 5level-fixup.h directly. Signed-off-by: Kirill A. Shutemov <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
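For architectures like MIPS that pull in the pgtable-nop*d.h headers, the prescribed pattern amounts to something like the following (a sketch of the rule stated above, not a quote of the diff):

	#define __ARCH_USE_5LEVEL_HACK
	#include <asm-generic/pgtable-nopmd.h>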
2017-03-08MIPS: Wire up statx system callJames Hogan5-6/+13
Wire up the statx system call for MIPS, which was introduced in commit a528d35e8bfc ("statx: Add a system call to make enhanced file info available"). Signed-off-by: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15387/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-08MIPS: Include asm/ptrace.h now linux/sched.h doesn'tJames Hogan1-0/+1
Use of the task_pt_regs() based macros in MIPS' asm/processor.h for accessing the user context on the kernel stack need the definition of struct pt_regs from asm/ptrace.h. __own_fpu() in asm/fpu.h uses these macros but implicitly depended on linux/sched.h to include asm/ptrace.h. Since commit f780d89a0e82 ("sched/headers: Remove <asm/ptrace.h> from <linux/sched.h>") however linux/sched.h no longer includes asm/ptrace.h, so include it explicitly from asm/fpu.h where it is needed instead. This fixes build errors such as: ./arch/mips/include/asm/fpu.h: In function '__own_fpu': ./arch/mips/include/asm/processor.h:385:31: error: invalid application of 'sizeof' to incomplete type 'struct pt_regs' THREAD_SIZE - 32 - sizeof(struct pt_regs)) ^ Fixes: f780d89a0e82 ("sched/headers: Remove <asm/ptrace.h> from <linux/sched.h>") Signed-off-by: James Hogan <[email protected]> Acked-by: Ingo Molnar <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15386/ Signed-off-by: Ralf Baechle <[email protected]>
2017-03-08MIPS: ralink: Fix typos in rt3883 pinctrlJohn Crispin1-2/+2
There are two copy & paste errors in the definition of the 5GHz LNA and second ethernet pinmux. Fixes: f576fb6a0700 ("MIPS: ralink: cleanup the soc specific pinmux data") Signed-off-by: John Crispin <[email protected]> Signed-off-by: Daniel Golle <[email protected]> Cc: [email protected] Cc: <[email protected]> # 3.19.x- Patchwork: https://patchwork.linux-mips.org/patch/15328/ Signed-off-by: James Hogan <[email protected]>
2017-03-08MIPS: End spinlocks with .insnPaul Burton1-4/+4
When building for microMIPS we need to ensure that the assembler always knows that there is code at the target of a branch or jump. Recent toolchains will fail to link a microMIPS kernel when this isn't the case due to what it thinks is a branch to non-microMIPS code:
mips-mti-linux-gnu-ld kernel/built-in.o: .spinlock.text+0x2fc: Unsupported branch between ISA modes.
mips-mti-linux-gnu-ld final link failed: Bad value
This is due to inline assembly labels in spinlock.h not being followed by an instruction mnemonic, either due to a .subsection pseudo-op or the end of the inline asm block. Fix this with a .insn directive after such labels. Signed-off-by: Paul Burton <[email protected]> Signed-off-by: James Hogan <[email protected]> Reviewed-by: Maciej W. Rozycki <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected] Cc: <[email protected]> Patchwork: https://patchwork.linux-mips.org/patch/15325/ Signed-off-by: James Hogan <[email protected]>
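To illustrate the failure mode and the fix (a contrived example, not the actual spinlock.h code): a label that is the last thing in an inline asm block, or that is only followed by pseudo-ops, gets a .insn directive so the assembler still treats it as (micro)MIPS code:

	static inline int spin_until_zero(int *p)
	{
		int val;

		__asm__ __volatile__(
		"	.set	push			\n"
		"	.set	noreorder		\n"
		"1:	lw	%0, %1			\n"
		"	bnez	%0, 1b			\n"
		"	 nop				\n"
		"2:	.insn				\n"	/* label ends the block: mark it as code */
		"	.set	pop			\n"
		: "=&r" (val)
		: "m" (*p));

		return val;
	}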
2017-03-08MIPS: Force o32 fp64 support on 32bit MIPS64r6 kernelsJames Hogan1-1/+1
When a 32-bit kernel is configured to support MIPS64r6 (CPU_MIPS64_R6), MIPS_O32_FP64_SUPPORT won't be selected as it should be because MIPS32_O32 is disabled (o32 is already the default ABI available on 32-bit kernels). This results in userland FP breakage as CP0_Status.FR is read-only 1 since r6 (when an FPU is present) so __enable_fpu() will fail to clear FR. This causes the FPU emulator to get used which will incorrectly emulate 32-bit FPU registers. Force o32 fp64 support in this case by also selecting MIPS_O32_FP64_SUPPORT from CPU_MIPS64_R6 if 32BIT. Fixes: 4e9d324d4288 ("MIPS: Require O32 FP64 support for MIPS64 with O32 compat") Signed-off-by: James Hogan <[email protected]> Reviewed-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Cc: <[email protected]> # 4.0.x- Patchwork: https://patchwork.linux-mips.org/patch/15310/ Signed-off-by: James Hogan <[email protected]>
2017-03-08MIPS: Add missing include filesArnd Bergmann14-0/+20
After the split of linux/sched.h, several platforms in arch/mips stopped building. Add the respective additional #include statements to fix the problem. I first tried adding these into asm/processor.h, but ran into circular header dependencies that I could not figure out. The commit I listed as causing the problem is the branch merge, as there is likely a combination of multiple patches in that branch. Signed-off-by: Arnd Bergmann <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 1827adb11ad2 ("Merge branch 'WIP.sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-03-05uaccess: drop duplicate includes from asm/uaccess.hAl Viro1-2/+0
Signed-off-by: Al Viro <[email protected]>
2017-03-05uaccess: move VERIFY_{READ,WRITE} definitions to linux/uaccess.hAl Viro1-3/+0
Signed-off-by: Al Viro <[email protected]>
2017-03-03sched/headers: Remove the <linux/topology.h> include from <linux/sched.h>Ingo Molnar1-0/+1
It's used only by a single (rarely used) inline function (task_node(p)), which we can move to <linux/sched/topology.h>. ( Add <linux/nodemask.h>, because we rely on that. ) Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2017-03-03sched/headers: Move task-stack related APIs from <linux/sched.h> to <linux/sched/task_stack.h>Ingo Molnar2-2/+2
Split out the task->stack related functionality, which is not really part of the core scheduler APIs. Only keep task_thread_info() because it's used by sched.h. Update the code that uses those facilities. Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-03-03sched/headers: Move task->mm handling methods to <linux/sched/mm.h>Ingo Molnar1-1/+1
Move the following task->mm helper APIs into a new header file, <linux/sched/mm.h>, to further reduce the size and complexity of <linux/sched.h>. Here are how the APIs are used in various kernel files: # mm_alloc(): arch/arm/mach-rpc/ecard.c fs/exec.c include/linux/sched/mm.h kernel/fork.c # __mmdrop(): arch/arc/include/asm/mmu_context.h include/linux/sched/mm.h kernel/fork.c # mmdrop(): arch/arm/mach-rpc/ecard.c arch/m68k/sun3/mmu_emu.c arch/x86/mm/tlb.c drivers/gpu/drm/amd/amdkfd/kfd_process.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/hw/hfi1/file_ops.c drivers/vfio/vfio_iommu_spapr_tce.c fs/exec.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/mmu_notifier.h include/linux/sched/mm.h kernel/fork.c kernel/futex.c kernel/sched/core.c mm/khugepaged.c mm/ksm.c mm/mmu_context.c mm/mmu_notifier.c mm/oom_kill.c virt/kvm/kvm_main.c # mmdrop_async_fn(): include/linux/sched/mm.h # mmdrop_async(): include/linux/sched/mm.h kernel/fork.c # mmget_not_zero(): fs/userfaultfd.c include/linux/sched/mm.h mm/oom_kill.c # mmput(): arch/arc/include/asm/mmu_context.h arch/arc/kernel/troubleshoot.c arch/frv/mm/mmu-context.c arch/powerpc/platforms/cell/spufs/context.c arch/sparc/include/asm/mmu_context_32.h drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/core/uverbs_main.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/exec.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/events/uprobes.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/oom_kill.c mm/process_vm_access.c mm/rmap.c mm/swapfile.c mm/util.c virt/kvm/async_pf.c # mmput_async(): include/linux/sched/mm.h kernel/fork.c mm/oom_kill.c # get_task_mm(): arch/arc/kernel/troubleshoot.c arch/powerpc/platforms/cell/spufs/context.c drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/util.c # mm_access(): fs/proc/base.c include/linux/sched/mm.h kernel/fork.c mm/process_vm_access.c # mm_release(): arch/arc/include/asm/mmu_context.h fs/exec.c include/linux/sched/mm.h include/uapi/linux/sched.h kernel/exit.c kernel/fork.c Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email 
protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-03-02sched/headers: Prepare to move the task_lock()/unlock() APIs to <linux/sched/task.h>Ingo Molnar1-0/+1
But first update the code that uses these facilities with the new header. Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>