path: root/arch
Age | Commit message | Author | Files | Lines
2018-10-19 | parisc: Add PDC PAT cell_info() and pd_get_pdc_revisions() functions | Helge Deller | 2 | -13/+98
Add wrappers for the PDC_PAT_CELL_GET_INFO and PDC_PAT_PD_GET_PDC_INTERF_REV PAT PDC subfunctions. Both provide access to the PAT capability bitfield, which indicates whether simultaneous PTLBs are allowed on the bus and whether firmware will rendezvous all processors within PDCE_Check in case of an HPMC. Signed-off-by: Helge Deller <[email protected]>
2018-10-19 | parisc: Drop two instructions from pte lookup code | Helge Deller | 1 | -3/+1
Remove two instructions from the hot path. The temporary move to %r9 is unnecessary, and the zero-initialization of pte happens twice. Signed-off-by: Helge Deller <[email protected]>
2018-10-19 | parisc: Use zdep for shlw macro on PA1.1 and PA2.0 | Helge Deller | 1 | -8/+1
The zdep and depw,z mnemonics generate the same code. However, the assembler will not accept the depw,z mnemonic when generating PA 1.x code, while the zdep mnemonic is also accepted when generating PA 2.0 code. This patch therefore changes depw,z to zdep in the current shlw macro; the generated binary code stays the same. Signed-off-by: Helge Deller <[email protected]> Signed-off-by: John David Anglin <[email protected]>
2018-10-19 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 20 | -102/+147
net/sched/cls_api.c has overlapping changes to a call to nlmsg_parse(): one (from 'net') added rtm_tca_policy instead of NULL to the 5th argument, and another (from 'net-next') added cb->extack instead of NULL to the 6th argument. net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being done to code which moved (to mr_table_dump) in 'net-next'. Thanks to David Ahern for the heads up. Signed-off-by: David S. Miller <[email protected]>
2018-10-19 | sparc: Fix parport build warnings. | David S. Miller | 1 | -0/+2
If PARPORT_PC_FIFO is not enabled, do not provide the dma lock macros and lock definition. Otherwise:
./arch/sparc/include/asm/parport.h:24:24: warning: ‘dma_spin_lock’ defined but not used [-Wunused-variable]
 static DEFINE_SPINLOCK(dma_spin_lock);
                        ^~~~~~~~~~~~~
./include/linux/spinlock_types.h:81:39: note: in definition of macro ‘DEFINE_SPINLOCK’
 #define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
Signed-off-by: David S. Miller <[email protected]>
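A minimal sketch of the kind of guard described above, assuming the dma_spin_lock and claim_dma_lock()/release_dma_lock() helpers live in arch/sparc/include/asm/parport.h as the warning suggests (illustrative, not the exact diff):

```c
#ifdef CONFIG_PARPORT_PC_FIFO
/* Only parport_pc's FIFO/DMA path needs the lock, so only define it then */
static DEFINE_SPINLOCK(dma_spin_lock);

#define claim_dma_lock() \
({	unsigned long flags; \
	spin_lock_irqsave(&dma_spin_lock, flags); \
	flags; \
})

#define release_dma_lock(__flags) \
	spin_unlock_irqrestore(&dma_spin_lock, __flags);
#endif /* CONFIG_PARPORT_PC_FIFO */
```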
2018-10-19 | x86/kvm/nVMX: tweak shadow fields | Vitaly Kuznetsov | 2 | -9/+6
It seems we have some leftovers from times when 'unrestricted guest' wasn't exposed to L1. Stop shadowing GUEST_CS_{BASE,LIMIT,AR_SELECTOR} and GUEST_ES_BASE, shadow GUEST_SS_AR_BYTES as it was found that some hypervisors (e.g. Hyper-V without Enlightened VMCS) access it pretty often. Suggested-by: Paolo Bonzini <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-19 | arm64: KVM: Guests can skip __install_bp_hardening_cb()s HYP work | James Morse | 1 | -0/+9
enable_smccc_arch_workaround_1() passes NULL as the hyp_vecs start and end if the HVC conduit is in use, and ARM_SMCCC_ARCH_WORKAROUND_1 is detected. If the guest kernel happened to be built with KVM_INDIRECT_VECTORS, we go on to allocate a slot, memcpy() the empty workaround in and do the appropriate cache maintenance. This works as we always tell memcpy() the range is 0, so it never accesses the NULL src pointer, but we still do the cache maintenance. If hyp_vecs_start is NULL we know we're a guest, just update the fn like the !KVM_INDIRECT_VECTORS version. Reviewed-by: Julien Thierry <[email protected]> Acked-by: Marc Zyngier <[email protected]> Signed-off-by: James Morse <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
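The shape of the early return described above is roughly the following (a sketch; the function signature and the bp_hardening_data per-CPU structure are written from memory of arch/arm64/kernel/cpu_errata.c and may differ from the actual patch):

```c
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
				      const char *hyp_vecs_start,
				      const char *hyp_vecs_end)
{
	/* Guest using the HVC conduit: no HYP vectors to copy, just record fn */
	if (!hyp_vecs_start) {
		__this_cpu_write(bp_hardening_data.fn, fn);
		return;
	}

	/* ... existing slot allocation, memcpy() and cache maintenance ... */
}
```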
2018-10-19 | Merge tag 'kvmarm-for-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD | Paolo Bonzini | 39 | -369/+814
KVM/arm updates for 4.20:
- Improved guest IPA space support (32 to 52 bits)
- RAS event delivery for 32bit
- PMU fixes
- Guest entry hardening
- Various cleanups
2018-10-19 | x86/intel_rdt: Prevent pseudo-locking from using stale pointers | Jithu Joseph | 4 | -12/+55
When the last CPU in an rdt_domain goes offline, its rdt_domain struct gets freed. Current pseudo-locking code is unaware of this scenario and tries to dereference the freed structure in a few places. Add checks to prevent pseudo-locking code from doing this. While further work is needed to seamlessly restore resource groups (not just pseudo-locking) to their configuration when the domain is brought back online, the immediate issue of invalid pointers is addressed here. Fixes: f4e80d67a5274 ("x86/intel_rdt: Resctrl files reflect pseudo-locked information") Fixes: 443810fe61605 ("x86/intel_rdt: Create debugfs files for pseudo-locking testing") Fixes: 746e08590b864 ("x86/intel_rdt: Create character device exposing pseudo-locked region") Fixes: 33dc3e410a0d9 ("x86/intel_rdt: Make CPU information accessible for pseudo-locked regions") Signed-off-by: Jithu Joseph <[email protected]> Signed-off-by: Reinette Chatre <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/231f742dbb7b00a31cc104416860e27dba6b072d.1539384145.git.reinette.chatre@intel.com
2018-10-19 | KVM: arm64: Safety check PSTATE when entering guest and handle IL | Christoffer Dall | 5 | -2/+44
This commit adds a paranoid check when entering the guest to make sure we don't attempt to run guest code in a mode that is as privileged as, or more privileged than, the hypervisor. We also catch other accidental programming of the SPSR_EL2 which results in an illegal exception return and report this safely back to the user. Signed-off-by: Christoffer Dall <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2018-10-19 | KVM: PPC: Book3S HV: Don't use streamlined entry path on early POWER9 chips | Paul Mackerras | 1 | -2/+11
This disables the use of the streamlined entry path for radix guests on early POWER9 chips that need the workaround added in commit a25bd72badfa ("powerpc/mm/radix: Workaround prefetch issue with KVM", 2017-07-24), because the streamlined entry path does not include that workaround. This also means that we can't do nested HV-KVM on those chips. Since the chips that need that workaround are the same ones that can't run both radix and HPT guests at the same time on different threads of a core, we use the existing 'no_mixing_hpt_and_radix' variable that identifies those chips to identify when we can't use the new guest entry path, and when we can't do nested virtualization. Signed-off-by: Paul Mackerras <[email protected]>
2018-10-19 | arm64: use the generic swiotlb_dma_ops | Christoph Hellwig | 4 | -210/+55
Now that the generic swiotlb code supports non-coherent DMA we can switch to it for arm64. For that we need to refactor the existing alloc/free/mmap/pgprot helpers to be used as the architecture hooks, and implement the standard arch_sync_dma_for_{device,cpu} hooks for cache maintenance in the streaming dma hooks, which also implies using the generic dma_coherent flag in struct device. Note that we need to keep the old is_device_dma_coherent function around for now, so that the shared arm/arm64 Xen code keeps working. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Catalin Marinas <[email protected]>
2018-10-19 | swiotlb: don't dip into swiotlb pool for coherent allocations | Christoph Hellwig | 1 | -3/+3
All architectures that support swiotlb also have a zone that backs up these less than full addressing allocations (usually ZONE_DMA32). Because of that it is rather pointless to fall back to the global swiotlb buffer if the normal dma direct allocation failed - the only thing this will do is to eat up bounce buffers that would be more useful to serve streaming mappings. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Konrad Rzeszutek Wilk <[email protected]>
2018-10-19 | swiotlb: remove the overflow buffer | Christoph Hellwig | 2 | -3/+3
Like all other dma mapping drivers just return an error code instead of an actual memory buffer. The reason for the overflow buffer was that at the time swiotlb was invented there was no way to check for dma mapping errors, but this has long been fixed. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Catalin Marinas <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
2018-10-19 | s390/perf: Return error when debug_register fails | Thomas Richter | 1 | -1/+5
Return an error when the function debug_register() fails allocating the debug handle. Also remove the registered debug handle when the initialization fails later on. Signed-off-by: Thomas Richter <[email protected]> Reviewed-by: Hendrik Brueckner <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
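In outline, the error handling described above looks like this (a sketch with an illustrative init function and a hypothetical later init step; only debug_register()/debug_unregister() are real s390 debug-facility calls):

```c
#include <linux/errno.h>
#include <linux/init.h>
#include <asm/debug.h>

static debug_info_t *cpum_dbg;		/* illustrative name */

static int later_init_step(void)	/* stands in for the rest of the init path */
{
	return 0;
}

static int __init cpum_perf_debug_init(void)
{
	int rc;

	cpum_dbg = debug_register("cpum_example", 1, 1, 128);
	if (!cpum_dbg)
		return -ENOMEM;		/* propagate the allocation failure */

	rc = later_init_step();
	if (rc) {
		debug_unregister(cpum_dbg);	/* drop the handle on late failure */
		return rc;
	}
	return 0;
}
```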
2018-10-19 | x86/swiotlb: Enable swiotlb for > 4GiB RAM on 32-bit kernels | Christoph Hellwig | 1 | -2/+0
We already build the swiotlb code for 32-bit kernels with PAE support, but the code to actually use swiotlb has only been enabled for 64-bit kernels for an unknown reason. Before Linux v4.18 we papered over this fact because the networking code, the SCSI layer and some random block drivers implemented their own bounce buffering scheme. [ mingo: Changelog fixes. ] Fixes: 21e07dba9fb1 ("scsi: reduce use of block bounce buffers") Fixes: ab74cfebafa3 ("net: remove the PCI_DMA_BUS_IS_PHYS check in illegal_highdma") Reported-by: Matthew Whitehead <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Matthew Whitehead <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2018-10-19 | powerpc/time: Fix clockevent_decrementer initialisation for PR KVM | Michael Ellerman | 1 | -0/+4
In the recent commit 8b78fdb045de ("powerpc/time: Use clockevents_register_device(), fixing an issue with large decrementer") we changed the way we initialise the decrementer clockevent(s). We no longer initialise the mult & shift values of decrementer_clockevent itself. This has the effect of breaking PR KVM, because it uses those values in kvmppc_emulate_dec(). The symptom is that guest kernels spin forever mid-way through boot. For now, fix it by assigning the mult and shift values back to decrementer_clockevent. Fixes: 8b78fdb045de ("powerpc/time: Use clockevents_register_device(), fixing an issue with large decrementer") Signed-off-by: Michael Ellerman <[email protected]>
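A rough sketch of the fix as described (assuming the decrementer setup path in arch/powerpc/kernel/time.c; the function name and surrounding call are illustrative, the copy-back of mult/shift is the point):

```c
static void __init init_decrementer_clockevent(void)
{
	struct clock_event_device *dec = this_cpu_ptr(&decrementers);

	clockevents_config_and_register(dec, ppc_tb_freq, 2, decrementer_max);

	/*
	 * PR KVM's kvmppc_emulate_dec() still reads
	 * decrementer_clockevent.mult/shift, so copy back the values the
	 * clockevents core just computed for the per-cpu device.
	 */
	decrementer_clockevent.mult = dec->mult;
	decrementer_clockevent.shift = dec->shift;
}
```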
2018-10-19 | powerpc/aout: Fix struct user definition to use user_pt_regs | Michael Ellerman | 1 | -1/+1
I'm pretty sure this is dead code, it's only used by the a.out core dump code, and we don't support a.out. We should remove it. But while it's in the tree it should be using the ABI version of pt_regs which is called user_pt_regs in the kernel, because the whole struct is written to the core dump and so its size shouldn't change. Note this isn't a uapi header so we don't need an ifdef. Fixes: 002af9391bfb ("powerpc: Split user/kernel definitions of struct pt_regs") Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/uapi: Fix sigcontext definition to use user_pt_regs | Michael Ellerman | 1 | -1/+5
My recent patch to split pt_regs between user and kernel missed the usage in struct sigcontext. Because this is a user visible struct it should be using the user visible definition, which when we're building for the kernel is called struct user_pt_regs. As far as I can see this hasn't actually caused a bug (yet), because we don't use sizeof() on sigcontext->regs anywhere. But we should still fix it to avoid confusion and future bugs. Fixes: 002af9391bfb ("powerpc: Split user/kernel definitions of struct pt_regs") Reported-by: Madhavan Srinivasan <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
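Conceptually, the end result is something like the following (a sketch of the powerpc struct sigcontext from memory, trimmed to the relevant field; the #ifdef placement is illustrative):

```c
struct sigcontext {
	unsigned long		_unused[4];
	int			signal;
	int			_pad0;
	unsigned long		handler;
	unsigned long		oldmask;
#ifdef __KERNEL__
	struct user_pt_regs __user *regs;	/* ABI-sized regs when built into the kernel */
#else
	struct pt_regs __user	*regs;		/* userspace continues to see pt_regs */
#endif
	/* ... gp_regs / fp_regs and vector state on 64-bit ... */
};
```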
2018-10-18 | Merge branches 'clk-imx6-mmdc', 'clk-qcom-krait', 'clk-rockchip' and 'clk-smp2s11-match' into clk-next | Stephen Boyd | 4 | -0/+61
- iMX6 MMDC clks
- Qualcomm Krait CPU clk support

* clk-imx6-mmdc:
  clk: imx6q: add mmdc0 ipg clock
  clk: imx6sl: add mmdc ipg clocks
  clk: imx6sll: add mmdc1 ipg clock
  clk: imx6sx: add mmdc1 ipg clock
  clk: imx6ul: add mmdc1 ipg clock

* clk-qcom-krait:
  clk: qcom: Add safe switch hook for krait mux clocks
  dt-bindings: clock: Document qcom,krait-cc
  clk: qcom: Add Krait clock controller driver
  dt-bindings: arm: Document qcom,kpss-gcc
  clk: qcom: Add KPSS ACC/GCC driver
  clk: qcom: Add support for Krait clocks
  clk: qcom: Add IPQ806X's HFPLLs
  clk: qcom: Add MSM8960/APQ8064's HFPLLs
  dt-bindings: clock: Document qcom,hfpll
  clk: qcom: Add HFPLL driver
  clk: qcom: Add support for High-Frequency PLLs (HFPLLs)
  ARM: Add Krait L2 register accessor functions

* clk-rockchip:
  clk: rockchip: Fix static checker warning in rockchip_ddrclk_get_parent call
  clk: rockchip: use the newly added clock-id for hdmi on RK3066
  clk: rockchip: add clock-id for HCLK_HDMI on rk3066
  clk: rockchip: fix wrong mmc sample phase shift for rk3328
  clk: rockchip: improve rk3288 pll rates for better hdmi output

* clk-smp2s11-match:
  clk: s2mps11: Add used attribute to s2mps11_dt_match
  clk: s2mps11: Fix matching when built as module and DT node contains compatible
2018-10-18 | ubd: remove use of blk_rq_map_sg | Christoph Hellwig | 1 | -104/+54
There is no good reason to create a scatterlist in the ubd driver; it can just iterate the request directly. Signed-off-by: Christoph Hellwig <[email protected]> [rw: Folded in improvements as discussed with hch and jens] Signed-off-by: Richard Weinberger <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
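A minimal sketch of what iterating the request directly looks like with the standard block-layer helper (illustrative driver function, not the actual ubd diff):

```c
#include <linux/blkdev.h>
#include <linux/mm.h>

static void ubd_example_handle_rq(struct request *req)
{
	struct req_iterator iter;
	struct bio_vec bvec;

	rq_for_each_segment(bvec, req, iter) {
		void *buf = page_address(bvec.bv_page) + bvec.bv_offset;

		/* submit one I/O per segment: buf is bvec.bv_len bytes long */
		(void)buf;
	}
}
```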
2018-10-19 | powerpc/io: remove old GCC version implementation | Christophe Leroy | 1 | -20/+0
GCC 4.6 is the minimum supported now. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc: Add -Werror at arch/powerpc level | Michael Ellerman | 13 | -21/+2
Back when I added -Werror in commit ba55bd74360e ("powerpc: Add configurable -Werror for arch/powerpc") I did it by adding it to most of the arch Makefiles. At the time we excluded math-emu, because apparently it didn't build cleanly. But that seems to have been fixed somewhere in the interim. So move the -Werror addition to the top level of the arch. This saves us from repeating it in every Makefile and means we won't forget to add it to any new sub-dirs. Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc: Move core kernel logic into arch/powerpc/Kbuild | Michael Ellerman | 2 | -12/+16
This is a nice cleanup; arch/powerpc/Makefile is long and messy, so moving this out helps a little. It also allows us to do:
  $ make arch/powerpc
which can be helpful if you just want to compile test some changes to arch code and not link everything. Finally it also gives us a single place to do subdir-cc-flags assignments which affect the whole of arch/powerpc, which we will do in a future patch. Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/traps: remove redundant in_interrupt panic in die() | Christophe Leroy | 1 | -2/+0
do_exit() already includes a test to panic() if in_interrupt(). This patch removes the powerpc one, which is redundant. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Generate "phandle" instead of "linux,phandle" | Benjamin Herrenschmidt | 1 | -8/+5
When creating the boot-time FDT from an actual Open Firmware live tree, let's generate "phandle" properties for the phandles instead of the old deprecated "linux,phandle". Signed-off-by: Benjamin Herrenschmidt <[email protected]> [mpe: Unsplit warning printf()] Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc: Check prom_init for disallowed sections | Benjamin Herrenschmidt | 1 | -0/+16
prom_init.c must not modify the kernel image outside of the .bss.prominit section. Thus make sure that prom_init.o doesn't have anything in any of these sections: .data, .bss, .init.data. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Move __prombss to its own section and store it in .bss | Benjamin Herrenschmidt | 2 | -1/+4
This makes __prombss its own section, and for now stores it in .bss. This will give us the ability later to store it elsewhere and/or free it after boot (it's about 8KB). Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Move a few remaining statics to appropriate sections | Benjamin Herrenschmidt | 1 | -5/+5
Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Move const structures to __initconst | Benjamin Herrenschmidt | 1 | -3/+3
As they are no longer used past the end of prom_init. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Move ibm_arch_vec to __prombss | Benjamin Herrenschmidt | 1 | -1/+7
Make the existing initialized definition constant, and copy it at runtime into a __prombss copy. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Move prom_radix_disable to __prombss | Benjamin Herrenschmidt | 1 | -1/+6
Initialize it dynamically instead of statically. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Remove support for OPAL v2 | Benjamin Herrenschmidt | 1 | -115/+10
We removed support for running under any OPAL version earlier than v3 in 2015 (they never saw the light of day anyway), but we kept some leftovers of this support in prom_init.c, so let's take it out. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/prom_init: Replace __initdata with __prombss when applicable | Benjamin Herrenschmidt | 1 | -26/+29
This replaces all occurrences of __initdata for uninitialized data with a new __prombss. Currently __prombss is defined to be __initdata, but we'll eventually change that. Signed-off-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/pseries: Add driver for PAPR SCM regions | Oliver O'Halloran | 3 | -0/+353
Adds a driver that implements support for enabling and accessing PAPR SCM regions. Unfortunately due to how the PAPR interface works we can't use the existing of_pmem driver (yet) because: a) The guest is required to use the H_SCM_BIND_MEM h-call to add the SCM region to its physical address space, and b) There is currently no mechanism for relating a bare of_pmem region to the backing DIMM (or not-a-DIMM for our case). Both of these are easily handled by rolling the functionality into a separate driver so here we are... Acked-by: Dan Williams <[email protected]> Signed-off-by: Oliver O'Halloran <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/pseries: PAPR persistent memory support | Oliver O'Halloran | 10 | -4/+193
This patch implements support for discovering storage class memory devices at boot and for handling hotplug of new regions via RTAS hotplug events. Signed-off-by: Oliver O'Halloran <[email protected]> [mpe: Fix CONFIG_MEMORY_HOTPLUG=n build] Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19 | powerpc/traps: fix machine check handlers to use pr_cont() | Christophe Leroy | 2 | -41/+41
When printing the machine check cause, the cause appears on the following line due to bad use of printk without \n:
[ 33.663993] Machine check in kernel mode.
[ 33.664011] Caused by (from SRR1=9032):
[ 33.664036] Data access error at address c90c8000
This patch fixes it by using pr_cont() for the second part:
[ 133.258131] Machine check in kernel mode.
[ 133.258146] Caused by (from SRR1=9032): Data access error at address c90c8000
Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
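A generic illustration of the printk pattern involved (not the actual traps.c hunk; the message text and variables are just examples):

```c
#include <linux/printk.h>

static void report_mce_cause(unsigned long srr1, unsigned long addr)
{
	/*
	 * A second bare printk() here would start a new message and hence a
	 * new line; pr_cont() marks it as a continuation so both parts stay
	 * on the same line.
	 */
	printk(KERN_ERR "Caused by (from SRR1=%lx): ", srr1);
	pr_cont("Data access error at address %lx\n", addr);
}
```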
2018-10-19 | powerpc/book3e: redefine pte_mkprivileged() and pte_mkuser() | Christophe Leroy | 1 | -0/+16
Book3e defines both _PAGE_USER and _PAGE_PRIVILEGED, so the nohash default pte_mkprivileged() and pte_mkuser() are not usable. This patch redefines them for book3e. In theory, only pte_mkprivileged() needs to be redefined because _PAGE_USER includes _PAGE_PRIVILEGED, but it is less confusing to redefine both. Fixes: a0da4bc166f2 ("powerpc/mm: Allow platforms to redefine some helpers") Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
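The redefinitions presumably follow the usual powerpc pgtable-header pattern, roughly (a sketch, not the committed hunk):

```c
static inline pte_t pte_mkprivileged(pte_t pte)
{
	/* clear the user bit and set the privileged bit */
	return __pte((pte_val(pte) & ~_PAGE_USER) | _PAGE_PRIVILEGED);
}
#define pte_mkprivileged pte_mkprivileged

static inline pte_t pte_mkuser(pte_t pte)
{
	/* clear the privileged bit and set the user bit */
	return __pte((pte_val(pte) & ~_PAGE_PRIVILEGED) | _PAGE_USER);
}
#define pte_mkuser pte_mkuser
```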
2018-10-19 | powerpc/mm: Make pte_pgprot return all pte bits | Aneesh Kumar K.V | 9 | -40/+10
Other archs do the same. Instead of adding the required pte bits (which got masked out) back in __ioremap_at(), make sure we filter only the pfn bits out. Fixes: 26973fa5ac0e ("powerpc/mm: use pte helpers in generic code") Reviewed-by: Christophe Leroy <[email protected]> Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
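The idea reduces to keeping every flag bit and stripping only the physical-frame bits, roughly (a sketch; PTE_RPN_MASK is the book3s64 name for the pfn field, other powerpc variants use an equivalent mask):

```c
static inline pgprot_t pte_pgprot(pte_t pte)
{
	/* return all protection/flag bits, dropping only the pfn */
	return __pgprot(pte_val(pte) & ~PTE_RPN_MASK);
}
```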
2018-10-18 | Merge branches 'acpi-pm' and 'pm-sleep' | Rafael J. Wysocki | 10 | -247/+334
* acpi-pm:
  ACPI / PM: LPIT: Register sysfs attributes based on FADT

* pm-sleep:
  x86-32, hibernate: Adjust in_suspend after resumed on 32bit system
  x86-32, hibernate: Set up temporary text mapping for 32bit system
  x86-32, hibernate: Switch to relocated restore code during resume on 32bit system
  x86-32, hibernate: Switch to original page table after resumed
  x86-32, hibernate: Use the page size macro instead of constant value
  x86-32, hibernate: Use temp_pgt as the temporary page table
  x86, hibernate: Rename temp_level4_pgt to temp_pgt
  x86-32, hibernate: Enable CONFIG_ARCH_HIBERNATION_HEADER on 32bit system
  x86, hibernate: Extract the common code of 64/32 bit system
  x86-32/asm/power: Create stack frames in hibernate_asm_32.S
  PM / hibernate: Check the success of generating md5 digest before hibernation
  x86, hibernate: Fix nosave_regions setup for hibernation
  PM / sleep: Show freezing tasks that caused a suspend abort
  PM / hibernate: Documentation: fix image_size default value
2018-10-18 | arm/arm64: KVM: Enable 32 bits kvm vcpu events support | Dongjiu Geng | 1 | -1/+0
The commit 539aee0edb9f ("KVM: arm64: Share the parts of get/set events useful to 32bit") shares the get/set events helper for arm64 and arm32, but forgot to share the cap extension code. User space will check whether KVM supports vcpu events by checking the KVM_CAP_VCPU_EVENTS extension. Acked-by: James Morse <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Dongjiu Geng <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
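After the change, the capability is presumably reported from the shared arm/arm64 extension handler, along these lines (a sketch; the exact function and the surrounding cases are from memory and elided):

```c
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
	int r;

	switch (ext) {
	case KVM_CAP_VCPU_EVENTS:
		r = 1;		/* now visible to 32-bit user space as well */
		break;
	/* ... other capabilities ... */
	default:
		r = kvm_arch_vm_ioctl_check_extension(kvm, ext);
		break;
	}

	return r;
}
```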
2018-10-18 | arm/arm64: KVM: Rename function kvm_arch_dev_ioctl_check_extension() | Dongjiu Geng | 3 | -4/+4
Rename kvm_arch_dev_ioctl_check_extension() to kvm_arch_vm_ioctl_check_extension(), because it does not have any relationship with a device. Renaming it makes the code more readable. Cc: James Morse <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Dongjiu Geng <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2018-10-18 | kprobes, x86/ptrace.h: Make regs_get_kernel_stack_nth() not fault on bad stack | Steven Rostedt (VMware) | 1 | -7/+35
Andy had some concerns about using regs_get_kernel_stack_nth() in a new function regs_get_kernel_argument(): if there's any error in the stack code, it could cause a bad memory access. To be on the safe side, call probe_kernel_read() on the stack address to be extra careful in accessing the memory. A helper function, regs_get_kernel_stack_nth_addr(), was added to just return the stack address (or NULL if not on the stack); it will be used to find the address (and could be used by other functions) and read the address with probe_kernel_read(). Requested-by: Andy Lutomirski <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Reviewed-by: Joel Fernandes (Google) <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
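Pieced together from the description, the pair of helpers looks roughly like this (a sketch against arch/x86/include/asm/ptrace.h; details of the real patch may differ):

```c
static inline unsigned long *
regs_get_kernel_stack_nth_addr(struct pt_regs *regs, unsigned int n)
{
	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);

	addr += n;
	if (regs_within_kernel_stack(regs, (unsigned long)addr))
		return addr;

	return NULL;	/* not on the kernel stack */
}

static inline unsigned long
regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
{
	unsigned long *addr;
	unsigned long val;

	addr = regs_get_kernel_stack_nth_addr(regs, n);
	if (addr && !probe_kernel_read(&val, addr, sizeof(val)))
		return val;	/* only trust the value if the read succeeded */

	return 0;
}
```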
2018-10-17 | sparc: vDSO: Silence an uninitialized variable warning | Dan Carpenter | 1 | -1/+3
Smatch complains that "val" would be uninitialized if kstrtoul() fails. Fixes: 9a08862a5d2e ("vDSO for sparc") Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: David S. Miller <[email protected]>
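A generic illustration of the pattern Smatch flags and its fix (not the actual vDSO code): only consume the value once kstrtoul() has reported success.

```c
#include <linux/kernel.h>

static int parse_ulong_param(const char *arg, unsigned long *out)
{
	unsigned long val;
	int err;

	err = kstrtoul(arg, 0, &val);
	if (err)
		return err;	/* "val" is left untouched on failure, so don't use it */

	*out = val;
	return 0;
}
```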
2018-10-17 | sparc: Fix syscall fallback bugs in VDSO. | David S. Miller | 1 | -1/+11
First, the trap number for 32-bit syscalls is 0x10. Also, only negate the return value when syscall error is indicated by the carry bit being set. Signed-off-by: David S. Miller <[email protected]>
2018-10-18 | x86/mcelog: Remove one mce_helper definition | Sebastian Andrzej Siewior | 1 | -3/+0
Commit 5de97c9f6d85f ("x86/mce: Factor out and deprecate the /dev/mcelog driver") moved the old interface into one file, which ended up carrying both a static and an "extern" definition of mce_helper. Remove one of them. Fixes: 5de97c9f6d85f ("x86/mce: Factor out and deprecate the /dev/mcelog driver") Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> CC: "H. Peter Anvin" <[email protected]> CC: Ingo Molnar <[email protected]> CC: Thomas Gleixner <[email protected]> CC: Tony Luck <[email protected]> CC: linux-edac <[email protected]> CC: x86-ml <[email protected]> Link: http://lkml.kernel.org/r/[email protected]
2018-10-17 | ARM: Add Krait L2 register accessor functions | Stephen Boyd | 4 | -0/+61
Krait CPUs have a handful of L2 cache controller registers that live behind a cp15 based indirection register. First you program the indirection register (l2cpselr) to point the L2 'window' register (l2cpdr) at what you want to read/write. Then you read/write the 'window' register to do what you want. The l2cpselr register is not banked per-cpu so we must lock around accesses to it to prevent other CPUs from re-pointing l2cpdr underneath us. Cc: Mark Rutland <[email protected]> Cc: Russell King <[email protected]> Acked-by: Bjorn Andersson <[email protected]> Signed-off-by: Stephen Boyd <[email protected]> Signed-off-by: Sricharan R <[email protected]> Tested-by: Craig Tatlor <[email protected]> Signed-off-by: Stephen Boyd <[email protected]>
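The accessor pattern described above comes down to something like the following (a sketch: the locking is the point, while the cp15 encodings for l2cpselr/l2cpdr are written from memory and should be checked against arch/arm/common/krait-l2-accessors.c):

```c
static DEFINE_RAW_SPINLOCK(krait_l2_lock);

void krait_set_l2_indirect_reg(u32 addr, u32 val)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&krait_l2_lock, flags);

	/* Point the l2cpdr window at the wanted register by writing l2cpselr */
	asm volatile ("mcr p15, 3, %0, c15, c0, 6" : : "r" (addr));
	isb();

	/* Write the selected L2 register through the l2cpdr window */
	asm volatile ("mcr p15, 3, %0, c15, c0, 7" : : "r" (val));
	isb();

	raw_spin_unlock_irqrestore(&krait_l2_lock, flags);
}
```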
2018-10-17 | KVM: VMX: enable nested virtualization by default | Paolo Bonzini | 1 | -1/+1
With live migration support and finally a good solution for exception event injection, nested VMX should be ready for having a stable userspace ABI. The results of syzkaller fuzzing are not perfect but not horrible either (and might be partially due to running on GCE, so that effectively we're testing three-level nesting on a fork of upstream KVM!). Enabling it by default seems like a nice way to conclude the 4.20 pull request. :) Unfortunately, enabling nested SVM in 2009 (commit 4b6e4dca701) was a bit premature. However, until live migration support is in place we can reasonably expect that it does not offer much in terms of ABI guarantees. Therefore we are still in time to break things and conform as much as possible to the interface used for VMX. Suggested-by: Jim Mattson <[email protected]> Suggested-by: Liran Alon <[email protected]> Reviewed-by: Liran Alon <[email protected]> Celebrated-by: Liran Alon <[email protected]> Celebrated-by: Wanpeng Li <[email protected]> Celebrated-by: Wincy Van <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-17 | KVM/x86: Use 32bit xor to clear registers in svm.c | Uros Bizjak | 2 | -16/+18
On x86_64, a 32-bit xor operation is zero-extended to the full 64-bit register, so it is sufficient for clearing registers. Also add a comment and remove an unnecessary instruction suffix in vmx.c. Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-17 | kvm: x86: Introduce KVM_CAP_EXCEPTION_PAYLOAD | Jim Mattson | 1 | -0/+5
This is a per-VM capability which can be enabled by userspace so that the faulting linear address will be included with the information about a pending #PF in L2, and the "new DR6 bits" will be included with the information about a pending #DB in L2. With this capability enabled, the L1 hypervisor can now intercept #PF before CR2 is modified. Under VMX, the L1 hypervisor can now intercept #DB before DR6 and DR7 are modified. When userspace has enabled KVM_CAP_EXCEPTION_PAYLOAD, it should generally provide an appropriate payload when injecting a #PF or #DB exception via KVM_SET_VCPU_EVENTS. However, to support restoring old checkpoints, this payload is not required. Note that bit 16 of the "new DR6 bits" is set to indicate that a debug exception (#DB) or a breakpoint exception (#BP) occurred inside an RTM region while advanced debugging of RTM transactional regions was enabled. This is the reverse of DR6.RTM, which is cleared in this scenario. This capability also enables exception.pending in struct kvm_vcpu_events, which allows userspace to distinguish between pending and injected exceptions. Reported-by: Jim Mattson <[email protected]> Suggested-by: Paolo Bonzini <[email protected]> Signed-off-by: Jim Mattson <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
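From the userspace side, opting in is an ordinary per-VM capability enable, roughly (a minimal sketch; error handling and VM setup elided):

```c
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int enable_exception_payload(int vm_fd)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_EXCEPTION_PAYLOAD;

	/* Per-VM capability, so this is issued on the VM fd, not a vCPU fd */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```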