path: root/arch/powerpc/include
Age  Commit message  (Author; files changed, lines -removed/+added)
2019-07-08  Merge tag 's390-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds; 1 file, -2/+0)
Pull s390 updates from Vasily Gorbik:

 - Improve stop_machine wait logic: replace the cpu_relax_yield call in the generic stop_machine function with a weak stop_machine_yield function. This is overridden on s390, which yields the current cpu to the neighbouring cpu after a couple of retries, instead of blindly giving up the cpu to the hypervisor. This significantly improves stop_machine performance on s390 in overcommitted scenarios. This includes common code changes which have been Acked by Peter Zijlstra and Thomas Gleixner.

 - Improve jump label transformation speed: transform jump labels without using stop_machine.

 - Refactoring of the vfio-ccw cp handling, simplifying the code and avoiding unneeded allocating/copying.

 - Various vfio-ccw fixes (ccw translation, state machine).

 - Add support for vfio-ap queue interrupt control in the guest. This includes s390 kvm changes which have been Acked by Christian Borntraeger.

 - Add protected virtualization support for virtio-ccw.

 - Enforce both CONFIG_SMP and CONFIG_HOTPLUG_CPU, which allows removing some code which most likely isn't working at all; besides that, s390 didn't even compile for !CONFIG_SMP.

 - Support for special flagged EP11 CPRBs for zcrypt.

 - Handle PCI devices with no support for the new MIO instructions.

 - Avoid KASAN false positives in the reworked stack unwinder.

 - Couple of fixes for the QDIO layer.

 - Convert s390 specific documentation to ReST format.

 - Let s390 crypto modules return -ENODEV instead of -EOPNOTSUPP if hardware is missing. This way our modules behave like most other modules, which is also what systemd's systemd-modules-load.service expects.

 - Replace defconfig with performance_defconfig, so there is one config file less to maintain.

 - Remove the SCLP call home device driver, which was never useful.

 - Cleanups all over the place.

* tag 's390-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (83 commits)
  docs: s390: s390dbf: typos and formatting, update crash command
  docs: s390: unify and update s390dbf kdocs at debug.c
  docs: s390: restore important non-kdoc parts of s390dbf.rst
  vfio-ccw: Fix the conversion of Format-0 CCWs to Format-1
  s390/pci: correctly handle MIO opt-out
  s390/pci: deal with devices that have no support for MIO instructions
  s390: ap: kvm: Enable PQAP/AQIC facility for the guest
  s390: ap: implement PAPQ AQIC interception in kernel
  vfio: ap: register IOMMU VFIO notifier
  s390: ap: kvm: add PQAP interception for AQIC
  s390/unwind: cleanup unused READ_ONCE_TASK_STACK
  s390/kasan: avoid false positives during stack unwind
  s390/qdio: don't touch the dsci in tiqdio_add_input_queues()
  s390/qdio: (re-)initialize tiqdio list entries
  s390/dasd: Fix a precision vs width bug in dasd_feature_list()
  s390/cio: introduce driver_override on the css bus
  vfio-ccw: make convert_ccw0_to_ccw1 static
  vfio-ccw: Remove copy_ccw_from_iova()
  vfio-ccw: Factor out the ccw0-to-ccw1 transition
  vfio-ccw: Copy CCW data outside length calculation
  ...
2019-07-06  powerpc: Move PPC_HA() PPC_HI() and PPC_LO() to ppc-opcode.h  (Christophe Leroy; 1 file, -0/+9)
PPC_HA() PPC_HI() and PPC_LO() macros are nice macros. Move them from module64.c to ppc-opcode.h in order to use them in other places. Signed-off-by: Christophe Leroy <[email protected]> [mpe: Clean up formatting in new code, drop duplicates in ftrace.c] Signed-off-by: Michael Ellerman <[email protected]>
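For reference, the three macros as they existed in module64.c are essentially the following (a sketch of the moved definitions; formatting in ppc-opcode.h may differ):

    #define PPC_LO(v)	((v) & 0xffff)
    #define PPC_HI(v)	(((v) >> 16) & 0xffff)
    #define PPC_HA(v)	PPC_HI((v) + 0x8000)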
2019-07-05  powerpc/64: reuse PPC32 static inline flush_dcache_range()  (Christophe Leroy; 2 files, -6/+18)
This patch drops the assembly PPC64 version of flush_dcache_range() and re-uses the PPC32 static inline version.

With GCC 8.1, the following code is generated:

    void flush_test(unsigned long start, unsigned long stop)
    {
        flush_dcache_range(start, stop);
    }

    0000000000000130 <.flush_test>:
     130:  3d 22 00 00   addis   r9,r2,0
                132: R_PPC64_TOC16_HA  .data+0x8
     134:  81 09 00 00   lwz     r8,0(r9)
                136: R_PPC64_TOC16_LO  .data+0x8
     138:  3d 22 00 00   addis   r9,r2,0
                13a: R_PPC64_TOC16_HA  .data+0xc
     13c:  80 e9 00 00   lwz     r7,0(r9)
                13e: R_PPC64_TOC16_LO  .data+0xc
     140:  7d 48 00 d0   neg     r10,r8
     144:  7d 43 18 38   and     r3,r10,r3
     148:  7c 00 04 ac   hwsync
     14c:  4c 00 01 2c   isync
     150:  39 28 ff ff   addi    r9,r8,-1
     154:  7c 89 22 14   add     r4,r9,r4
     158:  7c 83 20 50   subf    r4,r3,r4
     15c:  7c 89 3c 37   srd.    r9,r4,r7
     160:  41 82 00 1c   beq     17c <.flush_test+0x4c>
     164:  7d 29 03 a6   mtctr   r9
     168:  60 00 00 00   nop
     16c:  60 00 00 00   nop
     170:  7c 00 18 ac   dcbf    0,r3
     174:  7c 63 42 14   add     r3,r3,r8
     178:  42 00 ff f8   bdnz    170 <.flush_test+0x40>
     17c:  7c 00 04 ac   hwsync
     180:  4c 00 01 2c   isync
     184:  4e 80 00 20   blr
     188:  60 00 00 00   nop
     18c:  60 00 00 00   nop

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
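A minimal sketch of the shared static inline (simplified from the cacheflush header of this era; the l1_dcache_shift()/l1_dcache_bytes() helpers come from the next entry below, and the PPC64-only barriers correspond to the hwsync/isync pairs visible in the listing above):

    static inline void flush_dcache_range(unsigned long start, unsigned long stop)
    {
        unsigned long shift = l1_dcache_shift();
        unsigned long bytes = l1_dcache_bytes();
        void *addr = (void *)(start & ~(bytes - 1));
        unsigned long size = stop - (unsigned long)addr + (bytes - 1);
        unsigned long i;

        if (IS_ENABLED(CONFIG_PPC64)) {
            mb();        /* sync: order prior stores before the flush */
            isync();
        }

        for (i = 0; i < size >> shift; i++, addr += bytes)
            dcbf(addr);  /* write back and invalidate one cache block */

        mb();            /* sync: wait for the dcbf instructions to complete */
        if (IS_ENABLED(CONFIG_PPC64))
            isync();
    }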
2019-07-05  powerpc/32: define helpers to get L1 cache sizes.  (Christophe Leroy; 2 files, -11/+29)
This patch defines C helpers to retrieve the size of cache blocks and uses them in the cacheflush functions. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
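On PPC32 these helpers can simply wrap the existing compile-time constants; a minimal sketch (assuming the usual L1_CACHE_SHIFT/L1_CACHE_BYTES definitions, with the PPC64 variants instead reading the runtime cache geometry):

    static inline u32 l1_dcache_shift(void)
    {
        return L1_CACHE_SHIFT;   /* log2 of the L1 data cache block size */
    }

    static inline u32 l1_dcache_bytes(void)
    {
        return L1_CACHE_BYTES;   /* L1 data cache block size in bytes */
    }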
2019-07-05  powerpc/64: flush_inval_dcache_range() becomes flush_dcache_range()  (Christophe Leroy; 1 file, -1/+0)
On most arches that have flush_dcache_range(), including PPC32, this function does both a writeback and an invalidation of the cache block. On PPC64, flush_dcache_range() only does a writeback, while flush_inval_dcache_range() does the invalidation in addition. It also looks like there are no PPC64 platforms within arch/powerpc/ using flush_dcache_range(). This patch drops the existing 64-bit version of flush_dcache_range() and renames flush_inval_dcache_range() to flush_dcache_range(). Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-05  powerpc: slightly improve cache helpers  (Christophe Leroy; 1 file, -4/+4)
Cache instructions (dcbz, dcbi, dcbf and dcbst) take two registers that are summed to obtain the target address. Using 'Z' constraint and '%y0' argument gives GCC the opportunity to use both registers instead of only one with the second being forced to 0. Suggested-by: Segher Boessenkool <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
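As an illustration, a cache helper written this way looks roughly like the sketch below; the "Z" memory constraint plus the "%y0" operand form lets GCC pick a reg+reg effective address for the dcbf:

    static inline void dcbf(void *addr)
    {
        /* "Z" + "%y0": let GCC choose both base and index registers */
        __asm__ __volatile__ ("dcbf %y0" : : "Z"(*(u8 *)addr) : "memory");
    }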
2019-07-05  powerpc/book3s: Use config independent helpers for page table walk  (Aneesh Kumar K.V; 3 files, -2/+71)
Even when we have HugeTLB and THP disabled, the kernel linear map can still be mapped with hugepages. This is only an issue with radix translation, because the hash MMU doesn't map the kernel linear range in the Linux page table, and other kernel map areas are not mapped using hugepages. Add config independent helpers and put a WARN_ON() where we don't expect things to be mapped via hugepages. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
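A config-independent leaf test on book3s64 can be expressed roughly as below (a sketch only; the helper name and its exact placement in the headers are illustrative):

    static inline bool pmd_is_leaf(pmd_t pmd)
    {
        /* On book3s64, a leaf (huge) PMD entry carries _PAGE_PTE */
        return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
    }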
2019-07-05  powerpc/mm: Remove unused variable declaration  (Aneesh Kumar K.V; 1 file, -1/+0)
Since commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") __kernel_virt_size is not used anymore. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-04  powerpc/pseries: Protect against hogging the cpu while setting up the stats  (Naveen N. Rao; 1 file, -1/+1)
When enabling or disabling the vcpu dispatch statistics, we do a lot of work including allocating/deallocating memory across all possible cpus for the DTL buffer. In order to guard against hogging the cpu for too long, track the time we're taking and yield the processor if necessary. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-04  powerpc/pseries: Provide vcpu dispatch statistics  (Naveen N. Rao; 1 file, -0/+6)
For Shared Processor LPARs, the POWER Hypervisor maintains a relatively static mapping of the LPAR processors (vcpus) to physical processor chips (representing the "home" node) and tries to always dispatch vcpus on their associated physical processor chip. However, under certain scenarios, vcpus may be dispatched on a different processor chip (away from its home node). The actual physical processor number on which a certain vcpu is dispatched is available to the guest in the 'processor_id' field of each DTL entry.

The guest can discover the home node of each vcpu through the H_HOME_NODE_ASSOCIATIVITY(flags=1) hcall. The guest can also discover the associativity of physical processors, as represented in the DTL entry, through the H_HOME_NODE_ASSOCIATIVITY(flags=2) hcall. These can then be compared to determine if the vcpu was dispatched on its home node or not. If the vcpu was not dispatched on the home node, it is possible to determine if the vcpu was dispatched on a different chip, socket or drawer.

Introduce a procfs file /proc/powerpc/vcpudispatch_stats that can be used to obtain these statistics. Writing '1' to this file enables collecting the statistics, while writing '0' disables the statistics. The statistics themselves are available by reading the procfs file. By default, the DTL log for each vcpu is processed 50 times a second so as not to miss any entries. This processing frequency can be changed through /proc/powerpc/vcpudispatch_stats_freq.

Signed-off-by: Naveen N. Rao <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2019-07-04  powerpc/pseries: Move mm/book3s64/vphn.c under platforms/pseries/  (Naveen N. Rao; 1 file, -0/+24)
hcall_vphn() is specific to pseries and will be used in a subsequent patch. So, move it to a more appropriate place under arch/powerpc/platforms/pseries. Also merge vphn.h into lppaca.h and update vphn selftest to use the new files. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-04  powerpc/pseries: Introduce rwlock to gatekeep DTLB usage  (Naveen N. Rao; 1 file, -0/+2)
Since we would be introducing a new user of the DTL buffer in a subsequent patch, we need a way to gatekeep use of the DTL buffer.

The current debugfs interface for DTL allows registering and opening cpu-specific DTL buffers. Cpu-specific files are exposed under the debugfs 'powerpc/dtl/' node, and changing 'dtl_event_mask' in the same directory enables controlling the event mask used when registering the DTL buffer for a particular cpu.

Subsequently, we will be introducing a user of the DTL buffers that registers access to the DTL buffers across all cpus with the same event mask. To ensure these two users do not step on each other, we introduce a rwlock to gatekeep DTL buffer access. This fits the requirement of the current debugfs interface wanting to allow multiple independent cpu-specific users (read lock), and the subsequent user wanting exclusive access (write lock).

Suggested-by: Michael Ellerman <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
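The gatekeeping pattern is roughly the following sketch (the function names here are illustrative; the debugfs per-cpu users take the read side, while the all-cpus statistics user takes the write side):

    static DEFINE_RWLOCK(dtl_access_lock);

    /* debugfs user: several independent cpu-specific buffers may be active */
    static int dtl_debugfs_start(void)
    {
        if (!read_trylock(&dtl_access_lock))
            return -EBUSY;
        /* ... register this cpu's DTL buffer ... */
        return 0;
    }

    /* vcpudispatch_stats user: needs all cpus with one event mask, exclusively */
    static int dtl_global_start(void)
    {
        if (!write_trylock(&dtl_access_lock))
            return -EBUSY;
        /* ... register DTL buffers on all online cpus ... */
        return 0;
    }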
2019-07-04  powerpc/pseries: Factor out DTL buffer allocation and registration routines  (Naveen N. Rao; 1 file, -0/+3)
Introduce new helpers for DTL buffer allocation and registration and have the existing code use those. Signed-off-by: Naveen N. Rao <[email protected]> [mpe: Don't split error messages across lines, for grepability] Signed-off-by: Michael Ellerman <[email protected]>
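A hedged sketch of what a registration helper of this kind might look like (the name, the per-cpu variable and the use of DTL_LOG_PREEMPT are illustrative; register_dtl() is the existing H_REGISTER_VPA-based wrapper, and the error message stays on one line per the note above):

    static long register_dtl_buffer(int cpu)
    {
        struct dtl_entry *buf = per_cpu(dispatch_log, cpu);
        int hwcpu = get_hard_smp_processor_id(cpu);
        long ret;

        ret = register_dtl(hwcpu, __pa(buf));
        if (ret)
            pr_err("WARNING: DTL registration of cpu %d (hw %d) failed with %ld\n", cpu, hwcpu, ret);
        else
            lppaca_of(cpu).dtl_enable_mask = DTL_LOG_PREEMPT;

        return ret;
    }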
2019-07-04  powerpc/pseries: Use macros for referring to the DTL enable mask  (Naveen N. Rao; 1 file, -0/+11)
Introduce macros to encode the DTL enable mask fields and use those instead of hardcoding numbers. Acked-by: Nathan Lynch <[email protected]> Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
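The mask selects which dispatch events the hypervisor logs into the DTL; a sketch of such macros (the values shown follow the bits the existing code already used, so treat the exact definitions as illustrative):

    /* dtl_enable_mask fields in the lppaca */
    #define DTL_LOG_CEDE      0x1
    #define DTL_LOG_PREEMPT   0x2
    #define DTL_LOG_FAULT     0x4
    #define DTL_LOG_ALL       (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)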
2019-07-03  powerpc: Use the correct style for SPDX License Identifier  (Nishad Kamdar; 1 file, -1/+1)
This patch corrects the SPDX License Identifier style in the powerpc Hardware Architecture related files. Suggested-by: Joe Perches <[email protected]> Signed-off-by: Nishad Kamdar <[email protected]> Acked-by: Andrew Donnellan <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
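For reference, the kernel's license-rules convention this patch applies: header files carry the identifier in a C-style comment on the first line, while C source files use the C99 line-comment form.

    /* In a header (.h) file the identifier uses a C-style comment: */
    /* SPDX-License-Identifier: GPL-2.0 */

    // In a C source (.c) file it uses the C99 line-comment form:
    // SPDX-License-Identifier: GPL-2.0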
2019-07-03  powerpc: Add barrier_nospec to raw_copy_in_user()  (Suraj Jitindar Singh; 1 file, -0/+1)
Commit ddf35cf3764b ("powerpc: Use barrier_nospec in copy_from_user()") added barrier_nospec before loading from user-controlled pointers. The intention was to order the load from the potentially user-controlled pointer vs a previous branch based on an access_ok() check or similar. In order to achieve the same result, add a barrier_nospec to the raw_copy_in_user() function before loading from such a user-controlled pointer. Fixes: ddf35cf3764b ("powerpc: Use barrier_nospec in copy_from_user()") Signed-off-by: Suraj Jitindar Singh <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
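A simplified sketch of where the barrier lands (not the exact diff; the real raw_copy_in_user() also wraps the low-level copy with the usual user-access enable/disable handling):

    static inline unsigned long
    raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
    {
        /* Order the user loads vs the preceding access_ok() branch */
        barrier_nospec();
        return __copy_tofrom_user(to, from, n);
    }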
2019-07-03  powerpc: remove device_to_mask()  (Christoph Hellwig; 1 file, -8/+0)
Use the dma_get_mask() helper from dma-mapping.h instead, as they are functionally identical. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
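For reference, dma_get_mask() in include/linux/dma-mapping.h is essentially the following, which is why the powerpc-private helper is redundant:

    static inline u64 dma_get_mask(struct device *dev)
    {
        if (dev->dma_mask && *dev->dma_mask)
            return *dev->dma_mask;
        return DMA_BIT_MASK(32);   /* fall back to a 32-bit mask */
    }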
2019-07-03  powerpc: Fix compile issue with force DAWR  (Michael Neuling; 1 file, -7/+14)
If you compile with KVM but without CONFIG_HAVE_HW_BREAKPOINT you fail at linking with:

  arch/powerpc/kvm/book3s_hv_rmhandlers.o:(.text+0x708): undefined reference to `dawr_force_enable'

This was caused by commit c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option"). This moves a bunch of code around to fix this. It moves a lot of the DAWR code into a new file and creates a new CONFIG_PPC_DAWR option to enable compiling it.

Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling <[email protected]>
[mpe: Minor formatting in set_dawr()]
Signed-off-by: Michael Ellerman <[email protected]>
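The header-side shape of the fix is roughly the sketch below: declarations are gated on the new symbol and stubs are provided otherwise (a sketch, not the exact hunk):

    #ifdef CONFIG_PPC_DAWR
    extern bool dawr_force_enable;

    static inline bool dawr_enabled(void)
    {
        return dawr_force_enable;
    }
    int set_dawr(struct arch_hw_breakpoint *brk);
    #else
    static inline bool dawr_enabled(void) { return false; }
    static inline int set_dawr(struct arch_hw_breakpoint *brk) { return -1; }
    #endif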
2019-07-03  powerpc/64s/radix: keep kernel ERAT over local process/guest invalidates  (Nicholas Piggin; 1 file, -0/+9)
ISA v3.0 radix modes provide SLBIA variants which can invalidate ERAT for effPID!=0 or for effLPID!=0, which allows user and guest invalidations to retain kernel/host ERAT entries. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-03  powerpc/64s: Rename PPC_INVALIDATE_ERAT to PPC_ISA_3_0_INVALIDATE_ERAT  (Nicholas Piggin; 1 file, -1/+1)
This makes it clear to the caller that it can only be used on POWER9 and later CPUs. Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Use "ISA_3_0" rather than "ARCH_300"] Signed-off-by: Michael Ellerman <[email protected]>
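After the rename, the definition in ppc-opcode.h is essentially the line below, where PPC_SLBIA() is the existing slbia encoding macro and IH=7 is the ISA v3.0 form that flushes the ERAT (shown as a sketch):

    #define PPC_ISA_3_0_INVALIDATE_ERAT    PPC_SLBIA(7)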
2019-07-02  powerpc/64s/exception: remove bad stack branch  (Nicholas Piggin; 2 files, -7/+2)
The bad stack test in interrupt handlers has a few problems. For performance it is taken in the common case, which is a fetch bubble and a waste of i-cache. For code development and maintenance, it requires yet another stack frame setup routine, and that constrains all exception handlers to follow the same register save pattern, which inhibits future optimisation.

Remove the test/branch and replace it with a trap. Teach the program check handler to use the emergency stack for this case. This does not result in quite so nice a message, however the SRR0 and SRR1 of the crashed interrupt can be seen in r11 and r12, as is the original r1 (adjusted by INT_FRAME_SIZE). These are the most important parts for debugging the issue. The original r9-r12 and cr0 are lost, which is the main downside.

  kernel BUG at linux/arch/powerpc/kernel/exceptions-64s.S:847!
  Oops: Exception in kernel mode, sig: 5 [#1]
  BE SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted
  NIP:  c000000000009108 LR: c000000000cadbcc CTR: c0000000000090f0
  REGS: c0000000fffcbd70 TRAP: 0700   Not tainted
  MSR:  9000000000021032 <SF,HV,ME,IR,DR,RI>  CR: 28222448  XER: 20040000
  CFAR: c000000000009100 IRQMASK: 0
  GPR00: 000000000000003d fffffffffffffd00 c0000000018cfb00 c0000000f02b3166
  GPR04: fffffffffffffffd 0000000000000007 fffffffffffffffb 0000000000000030
  GPR08: 0000000000000037 0000000028222448 0000000000000000 c000000000ca8de0
  GPR12: 9000000002009032 c000000001ae0000 c000000000010a00 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: c0000000f00322c0 c000000000f85200 0000000000000004 ffffffffffffffff
  GPR24: fffffffffffffffe 0000000000000000 0000000000000000 000000000000000a
  GPR28: 0000000000000000 0000000000000000 c0000000f02b391c c0000000f02b3167
  NIP [c000000000009108] decrementer_common+0x18/0x160
  LR [c000000000cadbcc] .vsnprintf+0x3ec/0x4f0
  Call Trace:
  Instruction dump:
  996d098a 994d098b 38610070 480246ed 48005518 60000000 38200000 718a4000
  7c2a0b78 3821fd00 41c20008 e82d0970 <0981fd00> f92101a0 f9610170 f9810178

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move paca save area offsets into exception-64s.S  (Nicholas Piggin; 1 file, -14/+3)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move head-64.h code to exception-64s.S where it is used  (Nicholas Piggin; 2 files, -253/+0)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move exception-64s.h code to exception-64s.S where it is used  (Nicholas Piggin; 1 file, -431/+0)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move KVM related code together  (Nicholas Piggin; 1 file, -19/+21)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: remove STD_EXCEPTION_COMMON variants  (Nicholas Piggin; 2 files, -24/+17)
These are only called in one place each. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move EXCEPTION_PROLOG_2* to a more logical place  (Nicholas Piggin; 1 file, -56/+57)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: unwind exception-64s.h macros  (Nicholas Piggin; 2 files, -120/+57)
Many of these macros just specify 1-4 lines which are only called a few times each at most, and often just once. Remove this indirection. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: Move EXCEPTION_COMMON additions into callers  (Nicholas Piggin; 2 files, -31/+15)
More cases of code insertion via macros that does not add a great deal. All the additions have to be specified in the macro arguments, so they can just as well go after the macro. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: Move EXCEPTION_COMMON handler and return branches into callers  (Nicholas Piggin; 1 file, -13/+13)
The aim is to reduce the amount of indirection it takes to get through the exception handler macros, particularly where it provides little code sharing. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: Make EXCEPTION_PROLOG_0 a gas macro for consistency with others  (Nicholas Piggin; 1 file, -12/+13)
No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: KVM handler can set the HSRR trap bit  (Nicholas Piggin; 2 files, -5/+7)
Move the KVM trap HSRR bit into the KVM handler, which can be conditionally applied when hsrr parameter is set. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: merge KVM handler and skip variants  (Nicholas Piggin; 2 files, -22/+14)
Conditionally expand the skip case if it is specified. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: consolidate maskable and non-maskable prologs  (Nicholas Piggin; 1 file, -67/+45)
Conditionally expand the soft-masking test if a mask is passed in. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: remove the "extra" macro parameter  (Nicholas Piggin; 1 file, -88/+70)
Rather than pass in the soft-masking and KVM tests via a macro that is passed to another macro to expand it, switch to using gas macros and conditionally expand the soft-masking and KVM tests. The system reset with its idle test is open coded as it is a one-off. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: move and tidy EXCEPTION_PROLOG_2 variants  (Nicholas Piggin; 1 file, -45/+42)
- Re-name the macros to _REAL and _VIRT suffixes rather than no suffix and a _RELON suffix.
- Move the macro definitions together in the file.
- Move the RELOCATABLE ifdef inside the _VIRT macro.

Further consolidation between variants does not buy much here. No generated code change.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: consolidate EXCEPTION_PROLOG_2 with _NORI variant  (Nicholas Piggin; 1 file, -32/+11)
Switch to a gas macro that conditionally expands the RI clearing instruction. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: remove H concatenation for EXC_HV variants  (Nicholas Piggin; 2 files, -143/+197)
Replace all instances of this with gas macros that test the hsrr parameter and use the appropriate register names / labels. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Remove extraneous 2nd check for 0xea0 in SOFTEN_TEST] Signed-off-by: Michael Ellerman <[email protected]>
2019-07-02  powerpc/64s/exception: Remove unused SOFTEN_VALUE_0x980  (Michael Ellerman; 1 file, -1/+0)
Remove SOFTEN_VALUE_0x980, it's been unused since commit dabe859ec636 ("powerpc: Give hypervisor decrementer interrupts their own handler") (Sep 2012). Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc: don't use asm-generic/ptrace.h  (Christoph Hellwig; 1 file, -7/+22)
Doing the indirection through macros for the regs accessors just makes them harder to read, so implement the helpers directly. Note that only the helpers actually used are implemented now. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Michael Ellerman <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
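The handful of accessors that asm-generic/ptrace.h used to generate can simply be written out; a sketch of the powerpc versions:

    static inline unsigned long instruction_pointer(struct pt_regs *regs)
    {
        return regs->nip;
    }

    static inline void instruction_pointer_set(struct pt_regs *regs,
                                               unsigned long val)
    {
        regs->nip = val;
    }

    static inline unsigned long user_stack_pointer(struct pt_regs *regs)
    {
        return regs->gpr[1];
    }

    static inline unsigned long frame_pointer(struct pt_regs *regs)
    {
        return 0;   /* powerpc has no dedicated frame pointer */
    }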
2019-07-01  powerpc/64s/exception: fix line wrap and semicolon inconsistencies in macros  (Nicholas Piggin; 2 files, -52/+52)
By convention, all lines should be separated by semicolons. The last line should have neither a semicolon nor a line wrap. No generated code change. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc/powernv: remove the unused vas_win_paste_addr and vas_win_id functions  (Christoph Hellwig; 1 file, -10/+0)
These two functions have never been used anywhere in the kernel tree since they were added to the kernel. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc/powernv: remove unused NPU DMA code  (Christoph Hellwig; 2 files, -24/+0)
None of these routines were ever used anywhere in the kernel tree since they were added to the kernel. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc/powernv: remove the unused tunneling exports  (Christoph Hellwig; 1 file, -4/+0)
These have never been used anywhere in the kernel tree since they were added to the kernel. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc/powernv: remove the unused pnv_pci_set_p2p function  (Christoph Hellwig; 2 files, -4/+0)
This function has never been used anywhere in the kernel tree since it was added to the tree. We also now have proper PCIe P2P APIs in the core kernel, and any new P2P support should be using those. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-07-01  powerpc/cacheflush: fix variable set but not used  (Qian Cai; 1 file, -2/+5)
The powerpc flush_cache_vmap() is defined as a macro and never uses either of its arguments, so it generates a compilation warning:

  lib/ioremap.c: In function 'ioremap_page_range':
  lib/ioremap.c:203:16: warning: variable 'start' set but not used [-Wunused-but-set-variable]

Fix it by making it an inline function.

Signed-off-by: Qian Cai <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
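The fix turns the no-op macro into an empty static inline so the compiler sees the arguments as used; roughly:

    /* before: #define flush_cache_vmap(start, end)  do { } while (0) */
    static inline void flush_cache_vmap(unsigned long start, unsigned long end)
    {
    }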
2019-07-01  Merge branch 'fixes' into next  (Michael Ellerman; 4 files, -0/+44)
Merge our fixes branch into next. This brings in a number of commits that fix bugs we don't want to hit in next, in particular the fix for CVE-2019-12817.
2019-06-22  Merge tag 'powerpc-5.2-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds; 1 file, -0/+7)
Pull powerpc fixes from Michael Ellerman:
 "This is a frustratingly large batch at rc5. Some of these were sent earlier but were missed by me due to being distracted by other things, and some took a while to track down due to needing manual bisection on old hardware. But still we clearly need to improve our testing of KVM, and of 32-bit, so that we catch these earlier.

  Summary: seven fixes, all for bugs introduced this cycle.

   - The commit to add KASAN support broke booting on 32-bit SMP machines, due to a refactoring that moved some setup out of the secondary CPU path.

   - A fix for another 32-bit SMP bug introduced by the fast syscall entry implementation for 32-bit BOOKE. And a build fix for the same commit.

   - Our change to allow the DAWR to be force enabled on Power9 introduced a bug in KVM, where we clobber r3 leading to a host crash.

   - The same commit also exposed a previously unreachable bug in the nested KVM handling of DAWR, which could lead to an oops in a nested host.

   - One of the DMA reworks broke the b43legacy WiFi driver on some people's powermacs, fix it by enabling a 30-bit ZONE_DMA on 32-bit.

   - A fix for TLB flushing in KVM introduced a new bug, as it neglected to also flush the ERAT, this could lead to memory corruption in the guest.

  Thanks to: Aaro Koskinen, Christoph Hellwig, Christophe Leroy, Larry Finger, Michael Neuling, Suraj Jitindar Singh"

* tag 'powerpc-5.2-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  KVM: PPC: Book3S HV: Invalidate ERAT when flushing guest TLB entries
  powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac
  KVM: PPC: Book3S HV: Only write DAWR[X] when handling h_set_dawr in real mode
  KVM: PPC: Book3S HV: Fix r3 corruption in h_set_dabr()
  powerpc/32: fix build failure on book3e with KVM
  powerpc/booke: fix fast syscall entry on SMP
  powerpc/32s: fix initial setup of segment registers on secondary CPU
2019-06-21  Merge tag 'spdx-5.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx  (Linus Torvalds; 4 files, -15/+4)
Pull still more SPDX updates from Greg KH:
 "Another round of SPDX updates for 5.2-rc6

  Here is what I am guessing is going to be the last "big" SPDX update for 5.2. It contains all of the remaining GPLv2 and GPLv2+ updates that were "easy" to determine by pattern matching. The ones after this are going to be a bit more difficult and the people on the spdx list will be discussing them on a case-by-case basis now.

  Another 5000+ files are fixed up, so our overall totals are:
    Files checked:   64545
    Files with SPDX: 45529

  Compared to the 5.1 kernel which was:
    Files checked:   63848
    Files with SPDX: 22576

  This is a huge improvement.

  Also, we deleted another 20000 lines of boilerplate license crud, always nice to see in a diffstat"

* tag 'spdx-5.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx: (65 commits)
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 507
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 506
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 505
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 504
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 503
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 502
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 501
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 499
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 498
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 497
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 496
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 495
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 491
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 490
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 489
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 488
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 487
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 486
  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 485
  ...
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner; 4 files, -15/+4)
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Enrico Weigelt <[email protected]>
Reviewed-by: Kate Stewart <[email protected]>
Reviewed-by: Allison Randal <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>