path: root/arch
2017-09-01  powerpc/eeh: Delete an error out of memory message at init time  (Markus Elfring, 1 file, -4/+1)
Omit an extra message for a memory allocation failure in eeh_dev_init(). This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <[email protected]>
[mpe: Do not drop the message that can happen at runtime and lead to an event not being handled]
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc/mm: Use seq_putc() in two functions  (Markus Elfring, 2 files, -2/+2)
Two single characters (line breaks) should be put into a sequence. Thus use the corresponding function "seq_putc".

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
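The typical shape of such a conversion (a sketch; the exact call sites are not visible in this log, and the "before" form could equally be seq_printf):

    /* before: a heavier call emitting a single character */
    seq_puts(m, "\n");

    /* after: the dedicated single-character helper */
    seq_putc(m, '\n');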
2017-09-01  crypto/nx: Add P9 NX specific error codes for 842 engine  (Haren Myneni, 1 file, -0/+3)
This patch adds changes for checking P9 specific 842 engine error codes. These errors are reported in the coprocessor status block (CSB) for failures.

Signed-off-by: Haren Myneni <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc/32: remove a NOP from memset()  (Christophe Leroy, 2 files, -3/+11)
memset() is patched after initialisation to activate the optimised part which uses cache instructions. Today we have a 'b 2f' to skip the optimised part, which then gets replaced by a NOP, implying a useless cycle consumption. As we have a 'bne 2f' just before, we can use that instruction for the live patching, hence removing the need for a dedicated 'b 2f' to be replaced by a NOP.

This patch replaces the 'bne 2f' with a 'b 2f'; during init, that 'b 2f' is then patched back to a 'bne 2f'.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc/32: optimise memset()  (Christophe Leroy, 1 file, -7/+14)
There is no need to extend the set value to an int when the length is lower than 4, as in that case we only do byte stores. We can therefore immediately branch to the part handling it. By separating it from the normal case, we are able to eliminate a few actions on the destination pointer.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: fix location of two EXPORT_SYMBOL  (Christophe Leroy, 2 files, -2/+2)
Commit 9445aa1a3062a ("ppc: move exports to definitions") added EXPORT_SYMBOL() for memset() and flush_hash_pages() in the middle of the functions. This patch moves them to the end of the two functions.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc/32: add memset16()  (Christophe Leroy, 2 files, -1/+17)
Commit 694fc88ce271f ("powerpc/string: Implement optimized memset variants") added memset16(), memset32() and memset64() for 64-bit PPC. On 32-bit, memset64() is not relevant, and as shown below, the generic version of memset32() generates good code, so only memset16() is a candidate for an optimised version.

    000009c0 <memset32>:
     9c0:   2c 05 00 00     cmpwi   r5,0
     9c4:   39 23 ff fc     addi    r9,r3,-4
     9c8:   4d 82 00 20     beqlr
     9cc:   7c a9 03 a6     mtctr   r5
     9d0:   94 89 00 04     stwu    r4,4(r9)
     9d4:   42 00 ff fc     bdnz    9d0 <memset32+0x10>
     9d8:   4e 80 00 20     blr

The last part of memset(), which handles lengths that are not multiples of 4 bytes, operates on bytes, making it unsuitable for handling 16-bit words without modification. As that would increase memset() complexity, it is better to implement memset16() from scratch. In addition, this has the advantage of allowing a more optimised memset16() than what we would get by reusing the memset() code.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
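For reference, a minimal user-space sketch of the generic C memset16() that an optimised assembly version would override (the in-kernel generic implementation has this general shape):

    #include <stddef.h>
    #include <stdint.h>

    /* Fill 'count' 16-bit slots starting at 's' with the value 'v'. */
    void *memset16_sketch(uint16_t *s, uint16_t v, size_t count)
    {
        uint16_t *p = s;

        while (count--)
            *p++ = v;
        return s;
    }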
2017-09-01  powerpc: Wrap register number correctly for string load/store instructions  (Paul Mackerras, 1 file, -2/+4)
Michael Ellerman reported that emulate_loadstore() was trying to access element 32 of regs->gpr[], which doesn't exist, when emulating a string store instruction. This is because the string load and store instructions (lswi, lswx, stswi and stswx) are defined to wrap around from register 31 to register 0 if the number of bytes being loaded or stored is sufficiently large. This wrapping was not implemented in the emulation code. To fix it, we mask the register number after incrementing it.

Reported-by: Michael Ellerman <[email protected]>
Fixes: c9f6f4ed95d4 ("powerpc: Implement emulation of string loads and stores")
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
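A minimal user-space sketch of the wrapping rule (GPR numbering assumed to run 0-31, as on PPC):

    #include <stdio.h>

    int main(void)
    {
        unsigned int reg = 30;

        /* A string store of 12 bytes starting at r30 touches r30, r31
         * and then r0: the register number wraps modulo 32.
         */
        for (int bytes = 12; bytes > 0; bytes -= 4) {
            printf("store word from r%u\n", reg);
            reg = (reg + 1) & 0x1f;   /* mask after incrementing */
        }
        return 0;
    }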
2017-09-01  powerpc: Emulate load/store floating point as integer word instructions  (Paul Mackerras, 2 files, -17/+48)
This adds emulation for the lfiwax, lfiwzx and stfiwx instructions. This necessitated adding a new flag to indicate whether a floating point or an integer conversion was needed for LOAD_FP and STORE_FP, so this moves the size field in op->type up 4 bits.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Use instruction emulation infrastructure to handle alignment faults  (Paul Mackerras, 3 files, -777/+34)
This replaces almost all of the instruction emulation code in fix_alignment() with calls to analyse_instr(), emulate_loadstore() and emulate_dcbz(). The only emulation code left is the SPE emulation code; analyse_instr() etc. do not handle SPE instructions at present.

One result of this is that we can now handle alignment faults on all the new VSX load and store instructions that were added in POWER9. VSX loads/stores will take alignment faults for unaligned accesses to cache-inhibited memory. Another effect is that we no longer rely on the DAR and DSISR values set by the processor.

With this, we now need to include the instruction emulation code unconditionally.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
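A rough sketch of the resulting fix_alignment() flow; the signatures and constants here are assumed from context, not quoted from the patch:

    /* Sketch only: SPE special-casing and error details omitted. */
    static int fix_alignment_sketch(struct pt_regs *regs)
    {
        struct instruction_op op;
        unsigned int instr;
        int r;

        if (__get_user(instr, (unsigned int __user *)regs->nip))
            return -EFAULT;

        r = analyse_instr(&op, regs, instr);
        if (r < 0)
            return -EINVAL;            /* not an emulable load/store */

        if ((op.type & INSTR_TYPE_MASK) == CACHEOP)
            return emulate_dcbz(op.ea, regs);

        return emulate_loadstore(regs, &op);
    }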
2017-09-01  powerpc: Separate out load/store emulation into its own function  (Paul Mackerras, 2 files, -113/+154)
This moves the parts of emulate_step() that deal with emulating load and store instructions into a new function called emulate_loadstore(). This is to make it possible to reuse this code in the alignment handler.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Handle opposite-endian processes in emulation code  (Paul Mackerras, 2 files, -60/+131)
This adds code to the load and store emulation code to byte-swap the data appropriately when the process being emulated is set to the opposite endianness to that of the kernel. This also enables the emulation for the multiple-register loads and stores (lmw, stmw, lswi, stswi, lswx, stswx) to work for little-endian. In little-endian mode, the partial word at the end of a transfer for lsw*/stsw* (when the byte count is not a multiple of 4) is loaded/stored at the least-significant end of the register. Additionally, this fixes a bug in the previous code in that it could call read_mem/write_mem with a byte count that was not 1, 2, 4 or 8.

Note that this only works correctly on processors with "true" little-endian mode, such as IBM POWER processors from POWER6 on, not the so-called "PowerPC" little-endian mode that uses address swizzling as implemented on the old 32-bit 603, 604, 740/750, 74xx CPUs.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
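A minimal user-space sketch of the byte-swap step (how the cross-endian condition is derived from the MSR is not shown here and is assumed):

    #include <stdint.h>

    /* Swap a loaded value when the emulated process runs with the
     * opposite endianness to the kernel; 'nb' is the access size.
     */
    static uint64_t maybe_swap(uint64_t val, int nb, int cross_endian)
    {
        if (!cross_endian)
            return val;
        switch (nb) {
        case 2: return __builtin_bswap16((uint16_t)val);
        case 4: return __builtin_bswap32((uint32_t)val);
        case 8: return __builtin_bswap64(val);
        }
        return val;   /* single bytes need no swapping */
    }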
2017-09-01  powerpc: Set regs->dar if memory access fails in emulate_step()  (Paul Mackerras, 1 file, -22/+52)
This adds code to the instruction emulation code to set regs->dar to the address of any memory access that fails. This address is not necessarily the same as the effective address of the instruction, because if the memory access is unaligned, it might cross a page boundary and fault on the second page.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
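To see why the failing address can differ from the instruction's effective address, a small user-space sketch (a 4 KiB page size is assumed):

    #include <stdio.h>

    int main(void)
    {
        unsigned long ea  = 0x10000ffcUL;  /* EA of the unaligned access */
        unsigned long nb  = 8;             /* access size in bytes */
        unsigned long psz = 0x1000UL;      /* page size */

        if ((ea & (psz - 1)) + nb > psz) {
            /* The access spills into the next page; if that page faults,
             * the faulting address is its start, not the EA itself.
             */
            printf("may fault at 0x%lx rather than 0x%lx\n",
                   (ea + psz) & ~(psz - 1), ea);
        }
        return 0;
    }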
2017-09-01  powerpc: Emulate the dcbz instruction  (Paul Mackerras, 2 files, -0/+34)
This adds code to analyse_instr() and emulate_step() to understand the dcbz (data cache block zero) instruction. The emulate_dcbz() function is made public so it can be used by the alignment handler in future. (The apparently unnecessary cropping of the address to 32 bits is there because it will be needed in that situation.)

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
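In outline, dcbz zeroes the naturally aligned cache block containing the effective address; a user-space sketch of that semantic (a 128-byte block size is assumed here, the real size is per-CPU):

    #include <string.h>

    #define CACHE_BLOCK_BYTES 128   /* assumed block size */

    static void dcbz_sketch(unsigned long ea)
    {
        ea &= ~(unsigned long)(CACHE_BLOCK_BYTES - 1);  /* round down */
        memset((void *)ea, 0, CACHE_BLOCK_BYTES);
    }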
2017-09-01  powerpc: Emulate load/store floating double pair instructions  (Paul Mackerras, 1 file, -16/+52)
This adds lfdp[x] and stfdp[x] to the set of instructions that analyse_instr() and emulate_step() understand.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Emulate vector element load/store instructions  (Paul Mackerras, 1 file, -2/+36)
This adds code to analyse_instr() and emulate_step() to handle the vector element loads and stores: lvebx, lvehx, lvewx, stvebx, stvehx, stvewx.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Emulate FP/vector/VSX loads/stores correctly when regs not live  (Paul Mackerras, 3 files, -267/+203)
At present, the analyse_instr/emulate_step code checks for the relevant MSR_FP/VEC/VSX bit being set when a FP/VMX/VSX load or store is decoded, but doesn't recheck the bit before reading or writing the relevant FP/VMX/VSX register in emulate_step(). Since we don't have preemption disabled, it is possible that we get preempted between checking the MSR bit and doing the register access. If that happened, then the registers would have been saved to the thread_struct for the current process. Accesses to the CPU registers would then potentially read stale values, or write values that would never be seen by the user process.

Another way that the registers can become non-live is if a page fault occurs when accessing user memory, and the page fault code calls a copy routine that wants to use the VMX or VSX registers.

To fix this, the code for all the FP/VMX/VSX loads gets restructured so that it forms an image in a local variable of the desired register contents, then disables preemption, checks the MSR bit and either sets the CPU register or writes the value to the thread struct. Similarly, the code for stores checks the MSR bit, copies either the CPU register or the thread struct to a local variable, then reenables preemption and then copies the register image to memory.

If the instruction being emulated is in the kernel, then we must not use the register values in the thread_struct. In this case, if the relevant MSR enable bit is not set, then emulate_step refuses to emulate the instruction.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
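The restructured load path follows roughly this pattern (a sketch; the two register-writing helpers named here are hypothetical stand-ins, not the real kernel functions):

    u8 image[16];
    int err;

    err = copy_mem_in(image, ea, nb);            /* 1. form the image first */
    if (err)
        return err;

    preempt_disable();                           /* 2. no preemption from here */
    if (regs->msr & MSR_FP)
        write_cpu_fp_reg(rn, image);             /* live: set the CPU register */
    else
        write_thread_fp_reg(current, rn, image); /* saved: update thread_struct */
    preempt_enable();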
2017-09-01  powerpc: Make load/store emulation use larger memory accesses  (Paul Mackerras, 1 file, -129/+106)
At the moment, emulation of loads and stores of up to 8 bytes to unaligned addresses on a little-endian system uses a sequence of single-byte loads or stores to memory. This is rather inefficient, and the code is hard to follow because it has many ifdefs. In addition, the Power ISA has requirements on how unaligned accesses are performed, which are not met by doing all accesses as sequences of single-byte accesses.

Emulation of VSX loads and stores uses __copy_{to,from}_user, which means the emulation code has no control on the size of accesses.

To simplify this, we add new copy_mem_in() and copy_mem_out() functions for accessing memory. These use a sequence of the largest possible aligned accesses, up to 8 bytes (or 4 on 32-bit systems), to copy memory between a local buffer and user memory. We then rewrite {read,write}_mem_unaligned and the VSX load/store emulation using these new functions.

These new functions also simplify the code in do_fp_load() and do_fp_store() for the unaligned cases.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
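A minimal user-space sketch of the chunking idea (not the kernel code; user-access primitives and endian handling are omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy nb bytes using the largest naturally aligned chunks possible,
     * up to 8 bytes at a time.
     */
    static void copy_chunked(unsigned char *dst, const unsigned char *src,
                             size_t nb)
    {
        while (nb) {
            size_t c = 8;

            /* shrink until the chunk fits and the destination is aligned */
            while (c > nb || ((uintptr_t)dst & (c - 1)))
                c >>= 1;
            memcpy(dst, src, c);
            dst += c;
            src += c;
            nb -= c;
        }
    }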
2017-09-01  powerpc: Add emulation for the addpcis instruction  (Paul Mackerras, 1 file, -3/+11)
The addpcis instruction puts the sum of the next instruction address plus a constant into a register. Since the result depends on the address of the instruction, it will give an incorrect result if it is single-stepped out of line, which is what the *probes subsystem will currently do if a probe is placed on an addpcis instruction. This fixes the problem by adding emulation of it to analyse_instr().

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
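Conceptually the emulation computes the following (a sketch; the decoding of the split immediate field from the instruction word is omitted):

    #include <stdint.h>

    /* addpcis RT,D: RT = address of the next instruction + (D << 16),
     * where D is treated as a sign-extended 16-bit immediate.
     */
    static uint64_t addpcis_sketch(uint64_t nip, int16_t d)
    {
        return nip + 4 + ((int64_t)d << 16);
    }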
2017-09-01  powerpc: Don't update CR0 in emulation of popcnt, prty, bpermd instructions  (Paul Mackerras, 1 file, -6/+6)
The architecture shows the least-significant bit of the instruction word as reserved for the popcnt[bwd], prty[wd] and bpermd instructions, that is, these instructions never update CR0. Therefore this changes the emulation of these instructions to skip the CR0 update.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Fix emulation of the isel instruction  (Paul Mackerras, 1 file, -8/+10)
The case added for the isel instruction was added inside a switch statement which uses the 10-bit minor opcode field in the 0x7fe bits of the instruction word. However, for the isel instruction, the minor opcode field is only the 0x3e bits, and the 0x7c0 bits are used for the "BC" field, which indicates which CR bit to use to select the result. Therefore, for the isel emulation to work correctly when BC != 0, we need to match on ((instr >> 1) & 0x1f) == 15. To do this, we pull the isel case out of the switch statement and put it in an if statement of its own.

Fixes: e27f71e5ff3c ("powerpc/lib/sstep: Add isel instruction emulation")
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
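In code, the field split described above looks like this (a sketch of the decoding, using the bit positions given in the text):

    unsigned int xop10 = (instr >> 1) & 0x3ff; /* 10-bit minor opcode, 0x7fe bits */
    unsigned int xop5  = (instr >> 1) & 0x1f;  /* 5-bit minor opcode, 0x3e bits */
    unsigned int bc    = (instr >> 6) & 0x1f;  /* BC field, 0x7c0 bits */

    if (xop5 == 15) {
        /* isel: pick RA or RB depending on CR bit 'bc' */
    }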
2017-09-01  powerpc/64: Fix update forms of loads and stores to write 64-bit EA  (Paul Mackerras, 2 files, -55/+58)
When a 64-bit processor is executing in 32-bit mode, the update forms of load and store instructions are required by the architecture to write the full 64-bit effective address into the RA register, though only the bottom 32 bits are used to address memory. Currently, the instruction emulation code writes the truncated address to the RA register. This fixes it by keeping the full 64-bit EA in the instruction_op structure, truncating the address in emulate_step() where it is used to address memory, rather than in the address computations in analyse_instr().

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
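The use-time truncation can be pictured as a helper of this shape (a sketch; the MSR bit definition is an assumption matching the powerpc convention that MSR_SF selects 64-bit mode):

    #define MSR_64BIT (1UL << 63)   /* assumed: MSR_SF, the 64-bit-mode bit */

    /* Keep the full 64-bit EA; clip it only where it addresses memory. */
    static unsigned long truncate_if_32bit(unsigned long msr, unsigned long ea)
    {
        if (!(msr & MSR_64BIT))
            ea &= 0xffffffffUL;
        return ea;
    }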
2017-09-01  powerpc: Handle most loads and stores in instruction emulation code  (Paul Mackerras, 6 files, -62/+710)
This extends the instruction emulation infrastructure in sstep.c to handle all the load and store instructions defined in the Power ISA v3.0, except for the atomic memory operations, ldmx (which was never implemented), lfdp/stfdp, and the vector element load/stores.

The instructions added are:

Integer loads and stores: lbarx, lharx, lqarx, stbcx., sthcx., stqcx., lq, stq.

VSX loads and stores: lxsiwzx, lxsiwax, stxsiwx, lxvx, lxvl, lxvll, lxvdsx, lxvwsx, stxvx, stxvl, stxvll, lxsspx, lxsdx, stxsspx, stxsdx, lxvw4x, lxsibzx, lxvh8x, lxsihzx, lxvb16x, stxvw4x, stxsibx, stxvh8x, stxsihx, stxvb16x, lxsd, lxssp, lxv, stxsd, stxssp, stxv.

These instructions are handled both in the analyse_instr phase and in the emulate_step phase.

The code for lxvd2ux and stxvd2ux has been taken out, as those instructions were never implemented in any processor and have been taken out of the architecture, and their opcodes have been reused for other instructions in POWER9 (lxvb16x and stxvb16x).

The emulation for the VSX loads and stores uses helper functions which don't access registers or memory directly, which can hopefully be reused by KVM later.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Don't check MSR FP/VMX/VSX enable bits in analyse_instr()  (Paul Mackerras, 1 file, -42/+12)
This removes the checks for the FP/VMX/VSX enable bits in the MSR from analyse_instr() and adds them to emulate_step() instead.

The reason for this is that we may want to use analyse_instr() in a situation where the FP/VMX/VSX register values are stored in the current thread_struct and the FP/VMX/VSX enable bits in the MSR image in the pt_regs are zero. Since analyse_instr() doesn't make any changes to register state, it is reasonable for it to indicate what the effect of an instruction would be even though the relevant enable bit is off.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-09-01  powerpc: Change analyse_instr so it doesn't modify *regs  (Paul Mackerras, 2 files, -257/+396)
The analyse_instr function currently doesn't just work out what an instruction does, it also executes those instructions whose effect is only to update CPU registers that are stored in struct pt_regs. This is undesirable because optprobes uses analyse_instr to work out if an instruction could be successfully emulated in future.

This changes analyse_instr so it doesn't modify *regs; instead it stores information in the instruction_op structure to indicate what registers (GPRs, CR, XER, LR) would be set and what value they would be set to. A companion function called emulate_update_regs() can then use that information to update a pt_regs struct appropriately.

As a minor cleanup, this replaces inline asm using the cntlzw and cntlzd instructions with calls to __builtin_clz() and __builtin_clzl().

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
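One subtlety of that cleanup: cntlzw is defined for a zero input (it returns 32), while __builtin_clz(0) is undefined behaviour, so a faithful replacement has to special-case zero. A minimal sketch:

    /* cntlzw semantics expressed with the compiler builtin */
    static int cntlzw_sketch(unsigned int x)
    {
        return x ? __builtin_clz(x) : 32;
    }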
2017-09-01  KVM: PPC: Book3S HV: Fix memory leak in kvm_vm_ioctl_get_htab_fd  (nixiaoming, 1 file, -0/+1)
We do ctx = kzalloc(sizeof(*ctx), GFP_KERNEL) and then later on call anon_inode_getfd(), but if that fails we don't free ctx, so that memory gets leaked. To fix it, this adds kfree(ctx) in the failure path.

Signed-off-by: nixiaoming <[email protected]>
Reviewed-by: Paolo Bonzini <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
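The fixed failure path is roughly of this shape (a sketch; the anon-inode name, fops symbol and flags are assumptions, not quoted from the patch):

    ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
    if (!ctx)
        return -ENOMEM;
    /* ... fill in ctx ... */
    ret = anon_inode_getfd("kvm-htab", &kvm_htab_fops, ctx, O_RDONLY);
    if (ret < 0)
        kfree(ctx);   /* the added fix: don't leak ctx when fd setup fails */
    return ret;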
2017-08-31  Merge branch 'for-4.14/fs' into libnvdimm-for-next  (Dan Williams, 209 files, -1260/+3870)
2017-08-31  KVM: update to new mmu_notifier semantic v2  (Jérôme Glisse, 6 files, -35/+0)
Calls to mmu_notifier_invalidate_page() were replaced by calls to mmu_notifier_invalidate_range() and are now bracketed by calls to mmu_notifier_invalidate_range_start()/end(). Remove now useless invalidate_page callback.

Changed since v1 (Linus Torvalds):
 - remove now useless kvm_arch_mmu_notifier_invalidate_page()

Signed-off-by: Jérôme Glisse <[email protected]>
Tested-by: Mike Galbraith <[email protected]>
Tested-by: Adam Borowski <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: [email protected]
Cc: Kirill A. Shutemov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2017-08-31  powerpc/powernv: update to new mmu_notifier semantic  (Jérôme Glisse, 1 file, -10/+0)
Calls to mmu_notifier_invalidate_page() were replaced by calls to mmu_notifier_invalidate_range() and now are bracketed by calls to mmu_notifier_invalidate_range_start()/end(). Remove now useless invalidate_page callback.

Signed-off-by: Jérôme Glisse <[email protected]>
Cc: [email protected]
Cc: Alistair Popple <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2017-08-31  libnvdimm, nd_blk: remove mmio_flush_range()  (Robin Murphy, 2 files, -3/+0)
mmio_flush_range() suffers from a lack of clearly-defined semantics, and is somewhat ambiguous to port to other architectures where the scope of the writeback implied by "flush" and ordering might matter, but MMIO would tend to imply non-cacheable anyway. Per the rationale in 67a3e8fe9015 ("nd_blk: change aperture mapping from WC to WB"), the only existing use is actually to invalidate clean cache lines for ARCH_MEMREMAP_PMEM type mappings *without* writeback. Since the recent cleanup of the pmem API, that also now happens to be the exact purpose of arch_invalidate_pmem(), which would be a far more well-defined tool for the job.

Rather than risk potentially inconsistent implementations of mmio_flush_range() for the sake of one callsite, streamline things by removing it entirely and instead move the ARCH_MEMREMAP_PMEM related definitions up to the libnvdimm level, so they can be shared by NFIT as well. This allows NFIT to be enabled for arm64.

Signed-off-by: Robin Murphy <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
2017-08-31  binfmt_flat: fix arch/m32r and arch/microblaze flat_put_addr_at_rp()  (Randy Dunlap, 2 files, -2/+3)
Change the m32r flat_put_addr_at_rp() function to return int and always return 0. The microblaze function already returned 0, so just change its return type from void to int. Seven other architectures already have this function returning an int.

Fixes: 468138d78510 ("binfmt_flat: flat_{get,put}_addr_from_rp() should be able to fail")
Signed-off-by: Randy Dunlap <[email protected]>
Cc: Al Viro <[email protected]>
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Al Viro <[email protected]>
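For m32r the change is presumably of this shape (a sketch; the real argument types and body are not visible in this log):

    /* before: no way to report failure */
    static inline void flat_put_addr_at_rp(u32 *rp, u32 addr, u32 rel)
    {
        *rp = addr;
    }

    /* after: same behaviour, but callers can check the result */
    static inline int flat_put_addr_at_rp(u32 *rp, u32 addr, u32 rel)
    {
        *rp = addr;
        return 0;
    }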
2017-08-31  teach SYSCALL_DEFINE/COMPAT_SYSCALL_DEFINE to handle __bitwise arguments  (Al Viro, 1 file, -2/+3)
Signed-off-by: Al Viro <[email protected]>
2017-08-31  x86/xen: Get rid of paravirt op adjust_exception_frame  (Juergen Gross, 12 files, -77/+133)
When running as a Xen pv-guest, the exception frame on the stack contains %r11 and %rcx in addition to the other data pushed by the processor. Instead of having a paravirt op called for each exception type, prepend the Xen-specific code to each exception entry. When running as a Xen pv-guest, just use the exception entry with prepended instructions; otherwise use the entry without the Xen-specific code.

[ tglx: Merged through tip to avoid ugly merge conflict ]

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
2017-08-31  x86/eisa: Add missing include  (Thomas Gleixner, 1 file, -0/+1)
The separation of the EISA init missed including linux/io.h, which breaks the build with some special configurations.

Reported-by: Ingo Molnar <[email protected]>
Fixes: f7eaf6e00fd5 ("x86/boot: Move EISA setup to a separate file")
Signed-off-by: Thomas Gleixner <[email protected]>
2017-08-31  x86: bpf_jit: small optimization in emit_bpf_tail_call()  (Eric Dumazet, 1 file, -5/+4)
Saves 4 bytes by replacing the following instructions:

    lea rax, [rsi + rdx * 8 + offsetof(...)]
    mov rax, qword ptr [rax]
    cmp rax, 0

with:

    mov rax, [rsi + rdx * 8 + offsetof(...)]
    test rax, rax

Signed-off-by: Eric Dumazet <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2017-08-31  Merge tag 'irqchip-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core  (Thomas Gleixner, 5 files, -10/+82)
Pull irqchip updates for 4.14 from Marc Zyngier:

 - irqchip-specific part of the monster GICv4 series
 - new UniPhier AIDET irqchip driver
 - new variants of some Freescale MSI widget
 - blanket removal of of_node->full_name in printk
 - random collection of fixes
2017-08-31  arm64: dts: ls1046a: Add MSI dts node  (Minghuan Lian, 1 file, -0/+31)
LS1046a includes 3 MSI controllers. Each controller supports 128 interrupts.

Acked-by: Rob Herring <[email protected]>
Signed-off-by: Minghuan Lian <[email protected]>
Signed-off-by: Hou Zhiqiang <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  arm64: dts: ls1043a: Share all MSIs  (Minghuan Lian, 1 file, -3/+3)
In order to maximize the use of MSI, a PCIe controller will share all MSI controllers. The patch changes "msi-parent" to refer to all MSI controller dts nodes.

Signed-off-by: Minghuan Lian <[email protected]>
Signed-off-by: Hou Zhiqiang <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  arm: dts: ls1021a: Share all MSIs  (Minghuan Lian, 1 file, -2/+2)
In order to maximize the use of MSI, a PCIe controller will share all MSI controllers. The patch changes "msi-parent" to refer to all MSI controller dts nodes.

Signed-off-by: Minghuan Lian <[email protected]>
Signed-off-by: Hou Zhiqiang <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  arm64: dts: ls1043a: Fix typo of MSI compatible string  (Minghuan Lian, 1 file, -3/+3)
The digit "1" should be replaced by the letter "l" in the MSI compatible string; it was a typo, which this patch fixes.

Signed-off-by: Minghuan Lian <[email protected]>
Signed-off-by: Hou Zhiqiang <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  arm: dts: ls1021a: Fix typo of MSI compatible string  (Minghuan Lian, 1 file, -2/+2)
The digit "1" should be replaced by the letter "l" in the MSI compatible string; it was a typo, which this patch fixes.

Signed-off-by: Minghuan Lian <[email protected]>
Signed-off-by: Hou Zhiqiang <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  irqchip/gic-v3-its: Add VPE interrupt masking  (Marc Zyngier, 2 files, -0/+8)
When masking/unmasking a doorbell interrupt, it is necessary to issue an invalidation to the corresponding redistributor. We use the DirectLPI feature by writing directly to the corresponding redistributor.

Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2017-08-31  irqchip/gic-v3-its: Add VPENDBASER/VPROPBASER accessors  (Marc Zyngier, 2 files, -0/+33)
V{PEND,PROP}BASER being 64-bit registers, they need some ad-hoc accessors on 32-bit, especially given that VPENDBASER contains a Valid bit, making the access a bit convoluted.

Reviewed-by: Thomas Gleixner <[email protected]>
Reviewed-by: Eric Auger <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
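The convolution comes from ordering the two 32-bit halves; a sketch of the idea (not the kernel accessors; the function name and the Valid-bit position are assumptions):

    #include <stdint.h>

    /* Write a 64-bit register as two 32-bit halves on a 32-bit CPU.
     * If the Valid bit lives in the upper half (bit 63 assumed), writing
     * the low half first means the register is never observed as valid
     * with a half-updated body.
     */
    static void write64_split(volatile uint32_t *reg, uint64_t val)
    {
        reg[0] = (uint32_t)val;          /* low half first */
        reg[1] = (uint32_t)(val >> 32);  /* half with the Valid bit last */
    }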
2017-08-31  x86/idt: Remove superfluous ALIGNment  (Jiri Slaby, 1 file, -1/+1)
Commit 87e81786b13b ("x86/idt: Move early IDT setup out of 32-bit asm") switched early_ignore_irq to use ENTRY. ENTRY aligns the code, so there is no need for one more ALIGN right before the function. Also add a newline after the function to separate it from the data.

Signed-off-by: Jiri Slaby <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
2017-08-31  xen/mmu: set MMU_NORMAL_PT_UPDATE in remap_area_mfn_pte_fn  (Wei Liu, 1 file, -1/+1)
No functional change because MMU_NORMAL_PT_UPDATE is in fact 0. Set it to make the code consistent with similar code in mmu_pv.c.

Signed-off-by: Wei Liu <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Boris Ostrovsky <[email protected]>
2017-08-31  xen: remove unused function xen_set_domain_pte()  (Juergen Gross, 2 files, -22/+0)
The function xen_set_domain_pte() is used nowhere in the kernel. Remove it.

Signed-off-by: Juergen Gross <[email protected]>
Acked-by: Steven Rostedt (VMware) <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Boris Ostrovsky <[email protected]>
2017-08-31  xen: remove tests for pvh mode in pure pv paths  (Juergen Gross, 3 files, -31/+2)
Remove the last tests for XENFEAT_auto_translated_physmap in pure PV-domain specific paths. PVH V1 is gone and the feature will always be "false" in PV guests.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Boris Ostrovsky <[email protected]>
2017-08-31  tracing/hyper-v: Trace hyperv_mmu_flush_tlb_others()  (Vitaly Kuznetsov, 2 files, -0/+47)
Add Hyper-V tracing subsystem and trace hyperv_mmu_flush_tlb_others(). Tracing is done the same way we do xen_mmu_flush_tlb_others().

Signed-off-by: Vitaly Kuznetsov <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Stephen Hemminger <[email protected]>
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Jork Loeser <[email protected]>
Cc: K. Y. Srinivasan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Simon Xiao <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2017-08-31  x86/hyper-v: Support extended CPU ranges for TLB flush hypercalls  (Vitaly Kuznetsov, 2 files, -3/+140)
Hyper-V hosts may support more than 64 vCPUs; in that case we need to use the HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX/LIST_EX hypercalls.

Signed-off-by: Vitaly Kuznetsov <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Stephen Hemminger <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Jork Loeser <[email protected]>
Cc: K. Y. Srinivasan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Simon Xiao <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2017-08-31  Merge branch 'x86/mm' into x86/platform, to pick up TLB flush dependency  (Ingo Molnar, 185 files, -822/+3423)
Signed-off-by: Ingo Molnar <[email protected]>