2021-03-29powerpc/40x: Prepare for enabling MMU in critical exception prologChristophe Leroy1-3/+37
In order to enable the MMU early in the exception prolog, implement CONFIG_VMAP_STACK principles in the critical exception prolog. There is no intention to use CONFIG_VMAP_STACK on 40x, but the related code will be used to enable the MMU early in exceptions in a later patch. Also address (critirq_ctx - PAGE_OFFSET) directly instead of using tophys() in order to save one instruction. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3fd75ee54c48307119acdbf66cfea966c1463bbd.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Reorder a few instructions in critical exception prologChristophe Leroy1-4/+4
In order to ease preparation for CONFIG_VMAP_STACK, reorder a few instructions; in particular, save r1 into the stack frame earlier. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c895ecf958c86d1736bdd2ff6f36626b55f35fd2.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exceptionChristophe Leroy2-9/+8
In order to be able to switch the MMU on in the exception prolog, save SRR0 and SRR1 earlier. Also save r10 and r11 into the stack earlier to better match the normal exception prolog. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/79a93f253d72dc97ac968c9c62b5066960b688ed.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macroChristophe Leroy1-35/+36
Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro to remove the ugly ; and \ on each line. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/73291fb9dc9ec58182c27a40dfc3db204e3f4024.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlersChristophe Leroy1-21/+18
SPRN_SPRG_SCRATCH5 is used to save SPRN_PID. SPRN_SPRG_SCRATCH6 is already available. SPRN_PID is only 8 bits. We have r12 that contains CR. We only need to preserve CR0, so we have space available in r12 to save PID. Keep PID in r12 and free up SPRN_SPRG_SCRATCH5. Then, in the TLB miss handlers, instead of using SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1, use SPRN_SPRG_SCRATCH5 and SPRN_SPRG_SCRATCH6 to avoid future conflicts with the normal exception prologs. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4cdaa85d38e14d594ba902424060ec55babf2c42.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/traps: Declare unrecoverable_exception() as __noreturnChristophe Leroy2-2/+5
unrecoverable_exception() is never expected to return, and most callers have an infinite loop in case it does. Ensure it really never returns by terminating it with a BUG(), and declare it __noreturn. This allows GCC to really simplify the functions calling it. In the example below, it avoids the stack frame in the likely fast path and avoids code duplication for the exit.

With this patch:

    00000348 <interrupt_exit_kernel_prepare>:
     348:  81 43 00 84  lwz     r10,132(r3)
     34c:  71 48 00 02  andi.   r8,r10,2
     350:  41 82 00 2c  beq     37c <interrupt_exit_kernel_prepare+0x34>
     354:  71 4a 40 00  andi.   r10,r10,16384
     358:  40 82 00 20  bne     378 <interrupt_exit_kernel_prepare+0x30>
     35c:  80 62 00 70  lwz     r3,112(r2)
     360:  74 63 00 01  andis.  r3,r3,1
     364:  40 82 00 28  bne     38c <interrupt_exit_kernel_prepare+0x44>
     368:  7d 40 00 a6  mfmsr   r10
     36c:  7c 11 13 a6  mtspr   81,r0
     370:  7c 12 13 a6  mtspr   82,r0
     374:  4e 80 00 20  blr
     378:  48 00 00 00  b       378 <interrupt_exit_kernel_prepare+0x30>
     37c:  94 21 ff f0  stwu    r1,-16(r1)
     380:  7c 08 02 a6  mflr    r0
     384:  90 01 00 14  stw     r0,20(r1)
     388:  48 00 00 01  bl      388 <interrupt_exit_kernel_prepare+0x40>
                        388: R_PPC_REL24  unrecoverable_exception
     38c:  38 e2 00 70  addi    r7,r2,112
     390:  3d 00 00 01  lis     r8,1
     394:  7c c0 38 28  lwarx   r6,0,r7
     398:  7c c6 40 78  andc    r6,r6,r8
     39c:  7c c0 39 2d  stwcx.  r6,0,r7
     3a0:  40 a2 ff f4  bne     394 <interrupt_exit_kernel_prepare+0x4c>
     3a4:  38 60 00 01  li      r3,1
     3a8:  4b ff ff c0  b       368 <interrupt_exit_kernel_prepare+0x20>

Without this patch:

    00000348 <interrupt_exit_kernel_prepare>:
     348:  94 21 ff f0  stwu    r1,-16(r1)
     34c:  93 e1 00 0c  stw     r31,12(r1)
     350:  7c 7f 1b 78  mr      r31,r3
     354:  81 23 00 84  lwz     r9,132(r3)
     358:  71 2a 00 02  andi.   r10,r9,2
     35c:  41 82 00 34  beq     390 <interrupt_exit_kernel_prepare+0x48>
     360:  71 29 40 00  andi.   r9,r9,16384
     364:  40 82 00 28  bne     38c <interrupt_exit_kernel_prepare+0x44>
     368:  80 62 00 70  lwz     r3,112(r2)
     36c:  74 63 00 01  andis.  r3,r3,1
     370:  40 82 00 3c  bne     3ac <interrupt_exit_kernel_prepare+0x64>
     374:  7d 20 00 a6  mfmsr   r9
     378:  7c 11 13 a6  mtspr   81,r0
     37c:  7c 12 13 a6  mtspr   82,r0
     380:  83 e1 00 0c  lwz     r31,12(r1)
     384:  38 21 00 10  addi    r1,r1,16
     388:  4e 80 00 20  blr
     38c:  48 00 00 00  b       38c <interrupt_exit_kernel_prepare+0x44>
     390:  7c 08 02 a6  mflr    r0
     394:  90 01 00 14  stw     r0,20(r1)
     398:  48 00 00 01  bl      398 <interrupt_exit_kernel_prepare+0x50>
                        398: R_PPC_REL24  unrecoverable_exception
     39c:  80 01 00 14  lwz     r0,20(r1)
     3a0:  81 3f 00 84  lwz     r9,132(r31)
     3a4:  7c 08 03 a6  mtlr    r0
     3a8:  4b ff ff b8  b       360 <interrupt_exit_kernel_prepare+0x18>
     3ac:  39 02 00 70  addi    r8,r2,112
     3b0:  3d 40 00 01  lis     r10,1
     3b4:  7c e0 40 28  lwarx   r7,0,r8
     3b8:  7c e7 50 78  andc    r7,r7,r10
     3bc:  7c e0 41 2d  stwcx.  r7,0,r8
     3c0:  40 a2 ff f4  bne     3b4 <interrupt_exit_kernel_prepare+0x6c>
     3c4:  38 60 00 01  li      r3,1
     3c8:  4b ff ff ac  b       374 <interrupt_exit_kernel_prepare+0x2c>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1e883e9d93fdb256853d1434c8ad77c257349b2d.1615552866.git.christophe.leroy@csgroup.eu
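[Editor's note: a minimal sketch of the change this message describes, assuming the usual placement of the function in traps.c; the pr_emerg() text is illustrative, not copied from the patch.]

    /* Declaration (header): tell the compiler it never returns. */
    void __noreturn unrecoverable_exception(struct pt_regs *regs);

    /* Definition: terminate with BUG() so the __noreturn promise holds
     * even if die() were ever to return. */
    void __noreturn unrecoverable_exception(struct pt_regs *regs)
    {
    	pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
    		 regs->trap, regs->nip, regs->msr);
    	die("Unrecoverable exception", regs, SIGABRT);
    	BUG();
    }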
2021-03-29cxl: don't manipulate the mm.mm_users field directlyLaurent Dufour1-1/+1
It is better to rely on the API provided by the MM layer instead of directly manipulating the mm_users field. Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.ibm.com> Acked-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210310174405.51044-1-ldufour@linux.ibm.com
2021-03-29powerpc/uprobes: Validation for prefixed instructionRavi Bangoria1-0/+7
As per ISA 3.1, a prefixed instruction must not cross a 64-byte boundary, so don't allow a uprobe on such a prefixed instruction. There are two ways a probed instruction is changed in mapped pages. First, when a uprobe is activated, it searches for all the relevant pages and replaces the instruction in them. In this case, if the probe is on a 64-byte-unaligned prefixed instruction, error out directly. Second, when a uprobe is already active and the user maps a relevant page via mmap(), the instruction is replaced via the mmap() code path. But because the uprobe is invalid, the entire mmap() operation cannot be stopped; in this case just print an error and continue. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210311091538.368590-1-ravi.bangoria@linux.ibm.com
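[Editor's note: a hedged sketch of the activation-time check; helper names and the message wording approximate arch/powerpc/kernel/uprobes.c and are not quoted from the patch.]

    int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe,
    			     struct mm_struct *mm, unsigned long addr)
    {
    	/* Instructions must be word-aligned to be probed at all. */
    	if (addr & 0x03)
    		return -EINVAL;

    	/* A prefixed instruction starting at byte 60 of a 64-byte
    	 * block would straddle the boundary: reject it. */
    	if (ppc_inst_prefixed(ppc_inst_read(&auprobe->insn)) &&
    	    (addr & 0x3f) == 60) {
    		pr_info_ratelimited("Cannot register a uprobe on 64-byte unaligned prefixed instruction\n");
    		return -EINVAL;
    	}

    	return 0;
    }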
2021-03-29powerpc/signal: Use __get_user() to copy sigset_tChristopher M. Riedl3-3/+10
Usually sigset_t is exactly 8B which is a "trivial" size and does not warrant using __copy_from_user(). Use __get_user() directly in anticipation of future work to remove the trivial size optimizations from __copy_from_user(). The ppc32 implementation of get_sigset_t() previously called copy_from_user() which, unlike __copy_from_user(), calls access_ok(). Replacing this w/ __get_user() (no access_ok()) is fine here since both callsites in signal_32.c are preceded by an earlier access_ok(). Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-11-cmr@codefail.de
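[Editor's note: a sketch of the shared helper this implies; the name and the BUILD_BUG_ON guard are assumptions, not quoted from the patch.]

    /* Copy the 8-byte sigset with a single __get_user() instead of
     * __copy_from_user(); callers have already done access_ok(). */
    static inline int __get_user_sigset(sigset_t *dst, const sigset_t __user *src)
    {
    	BUILD_BUG_ON(sizeof(sigset_t) != sizeof(u64));

    	return __get_user(dst->sig[0], (u64 __user *)&src->sig[0]);
    }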
2021-03-29powerpc/signal64: Rewrite rt_sigreturn() to minimise uaccess switchesDaniel Axtens1-4/+6
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-10-cmr@codefail.de
2021-03-29powerpc/signal64: Rewrite handle_rt_signal64() to minimise uaccess switchesDaniel Axtens1-21/+36
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. There is no 'unsafe' version of copy_siginfo_to_user, so move it slightly to allow for a "longer" uaccess block. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-9-cmr@codefail.de
2021-03-29powerpc/signal64: Replace restore_sigcontext() w/ unsafe_restore_sigcontext()Christopher M. Riedl1-27/+41
Previously restore_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite restore_sigcontext() to assume that a userspace read access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_restore_sigcontext(), and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-8-cmr@codefail.de
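[Editor's note: the open/call/close pattern described above, as a minimal sketch; the unsafe_restore_sigcontext() signature and label handling are approximated. The setup side in the next commit has the same shape with user_write_access_begin().]

    /* One KUAP read window around all the "unsafe" loads, instead of a
     * KUAP toggle per field. */
    static long restore_context(struct ucontext __user *uc)
    {
    	if (!user_read_access_begin(uc, sizeof(*uc)))
    		return -EFAULT;

    	unsafe_restore_sigcontext(current, NULL, 1, &uc->uc_mcontext,
    				  failed);

    	user_read_access_end();
    	return 0;

    failed:
    	user_read_access_end();
    	return -EFAULT;
    }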
2021-03-29powerpc/signal64: Replace setup_sigcontext() w/ unsafe_setup_sigcontext()Christopher M. Riedl1-27/+45
Previously setup_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite setup_sigcontext() to assume that a userspace write access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_setup_sigcontext() and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-7-cmr@codefail.de
2021-03-29powerpc/signal64: Remove TM ifdefery in middle of if/else blockChristopher M. Riedl2-51/+54
Both rt_sigreturn() and handle_rt_signal_64() contain TM-related ifdefs which break-up an if/else block. Provide stubs for the ifdef-guarded TM functions and remove the need for an ifdef in rt_sigreturn(). Rework the remaining TM ifdef in handle_rt_signal64() similar to commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of if/else"). Unlike in the commit for ppc32, the ifdef can't be removed entirely since uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-6-cmr@codefail.de
2021-03-29powerpc: Reference parameter in MSR_TM_ACTIVE() macroChristopher M. Riedl1-1/+1
Unlike the other MSR_TM_* macros, MSR_TM_ACTIVE does not reference or use its parameter unless CONFIG_PPC_TRANSACTIONAL_MEM is defined. This causes an 'unused variable' compile warning unless the variable is also guarded with CONFIG_PPC_TRANSACTIONAL_MEM. Reference but do nothing with the argument in the macro to avoid a potential compile warning. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-5-cmr@codefail.de
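[Editor's note: the fix, roughly, as a sketch; mask names follow the existing reg.h definitions.]

    #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
    #define MSR_TM_ACTIVE(x)	(((x) & MSR_TS_MASK) != 0)
    #else
    /* Reference the argument so callers don't trip an 'unused variable'
     * warning, but still evaluate to "not active". */
    #define MSR_TM_ACTIVE(x)	((void)(x), 0)
    #endif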
2021-03-29powerpc/signal64: Remove non-inline calls from setup_sigcontext()Christopher M. Riedl1-11/+21
The majority of setup_sigcontext() can be refactored to execute in an "unsafe" context assuming an open uaccess window except for some non-inline function calls. Move these out into a separate prepare_setup_sigcontext() function which must be called first and before opening up a uaccess window. Non-inline function calls should be avoided during a uaccess window for a few reasons: - KUAP should be enabled for as much kernel code as possible. Opening a uaccess window disables KUAP which means any code executed during this time contributes to a potential attack surface. - Non-inline functions default to traceable which means they are instrumented for ftrace. This adds more code which could run with KUAP disabled. - Powerpc does not currently support the objtool UACCESS checks. All code running with uaccess must be audited manually which means: less code -> less work -> fewer problems (in theory). A follow-up commit converts setup_sigcontext() to be "unsafe". Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-4-cmr@codefail.de
2021-03-29powerpc/signal: Add unsafe_copy_{vsx, fpr}_from_user()Christopher M. Riedl1-0/+26
Reuse the "safe" implementation from signal.c but call unsafe_get_user() directly in a loop to avoid the intermediate copy into a local buffer. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-3-cmr@codefail.de
2021-03-29powerpc/uaccess: Add unsafe_copy_from_user()Christopher M. Riedl1-0/+21
Use the same approach as unsafe_copy_to_user() but instead call unsafe_get_user() in a loop. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-2-cmr@codefail.de
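[Editor's note: a simplified sketch of such a macro; the real version presumably unrolls in 8/4/2/1-byte chunks like unsafe_copy_to_user(), byte-at-a-time here for brevity.]

    #define unsafe_copy_from_user(d, s, l, e)			\
    do {							\
    	u8 *_dst = (u8 *)(d);					\
    	const u8 __user *_src = (const u8 __user *)(s);		\
    	size_t _i, _len = (l);					\
    								\
    	/* Each unsafe_get_user() jumps to label e on fault. */	\
    	for (_i = 0; _i < _len; _i++)				\
    		unsafe_get_user(_dst[_i], &_src[_i], e);	\
    } while (0)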
2021-03-29powerpc/qspinlock: Use generic smp_cond_load_relaxedDavidlohr Bueso2-16/+7
49a7d46a06c3 (powerpc: Implement smp_cond_load_relaxed()) added busy-waiting pausing with a preferred SMT priority pattern, lowering the priority (reducing decode cycles) during the whole loop slowpath. However, data shows that while this pattern works well with simple spinlocks, queued spinlocks benefit more from being kept at medium priority, with a cpu_relax() instead, being a low+medium combo on powerpc.

Data is from three benchmarks on a Power9: 9008-22L, 64 CPUs with 2 sockets and 8 threads per core.

1. locktorture.

This is data for the lowest and most artificial/pathological level, with increasing thread counts pounding on the lock. Metrics are total ops/minute. Despite some small hits in the 4-8 range, scenarios are either neutral or favorable to this patch.

    +=========+==========+==========+=======+
    | # tasks | vanilla  | dirty    | %diff |
    +=========+==========+==========+=======+
    | 2       | 46718565 | 48751350 | 4.35  |
    +---------+----------+----------+-------+
    | 4       | 51740198 | 50369082 | -2.65 |
    +---------+----------+----------+-------+
    | 8       | 63756510 | 62568821 | -1.86 |
    +---------+----------+----------+-------+
    | 16      | 67824531 | 70966546 | 4.63  |
    +---------+----------+----------+-------+
    | 32      | 53843519 | 61155508 | 13.58 |
    +---------+----------+----------+-------+
    | 64      | 53005778 | 53104412 | 0.18  |
    +---------+----------+----------+-------+
    | 128     | 53331980 | 54606910 | 2.39  |
    +=========+==========+==========+=======+

2. sockperf (tcp throughput)

Here a client will do one-way throughput tests to a localhost server, with increasing message sizes, dealing with the sk_lock. This patch puts the performance of the qspinlock back on par with that of the simple lock:

                   simple-spinlock      vanilla               dirty
    Hmean   14      73.50 (   0.00%)     54.44 * -25.93%*     73.45 *  -0.07%*
    Hmean   100    654.47 (   0.00%)    385.61 * -41.08%*    771.43 *  17.87%*
    Hmean   300   2719.39 (   0.00%)   2181.67 * -19.77%*   2666.50 *  -1.94%*
    Hmean   500   4400.59 (   0.00%)   3390.77 * -22.95%*   4322.14 *  -1.78%*
    Hmean   850   6726.21 (   0.00%)   5264.03 * -21.74%*   6863.12 *   2.04%*

3. dbench (tmpfs)

Configured to run with up to ncpusx8 clients, it shows both latency and throughput metrics. For the latency, with the exception of the 64 case, there is really nothing to go by:

                          vanilla               dirty
    Amean   latency-1      1.67 (   0.00%)      1.67 *   0.09%*
    Amean   latency-2      2.15 (   0.00%)      2.08 *   3.36%*
    Amean   latency-4      2.50 (   0.00%)      2.56 *  -2.27%*
    Amean   latency-8      2.49 (   0.00%)      2.48 *   0.31%*
    Amean   latency-16     2.69 (   0.00%)      2.72 *  -1.37%*
    Amean   latency-32     2.96 (   0.00%)      3.04 *  -2.60%*
    Amean   latency-64     7.78 (   0.00%)      8.17 *  -5.07%*
    Amean   latency-512  186.91 (   0.00%)    186.41 *   0.27%*

For the dbench4 throughput (misleading but traditional) there's a small but rather constant improvement:

                          vanilla               dirty
    Hmean   1       849.13 (   0.00%)    851.51 *   0.28%*
    Hmean   2      1664.03 (   0.00%)   1663.94 *  -0.01%*
    Hmean   4      3073.70 (   0.00%)   3104.29 *   1.00%*
    Hmean   8      5624.02 (   0.00%)   5694.16 *   1.25%*
    Hmean   16     9169.49 (   0.00%)   9324.43 *   1.69%*
    Hmean   32    11969.37 (   0.00%)  12127.09 *   1.32%*
    Hmean   64    15021.12 (   0.00%)  15243.14 *   1.48%*
    Hmean   512   14891.27 (   0.00%)  15162.11 *   1.82%*

Measuring the dbench4 per-VFS-operation latency shows some very minor differences within the noise level, around the 0-1% range.

Fixes: 49a7d46a06c3 ("powerpc: Implement smp_cond_load_relaxed()") Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210318204702.71417-1-dave@stgolabs.net
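[Editor's note: for reference, the asm-generic fallback the qspinlock code now picks up looks roughly like this (a sketch of include/asm-generic/barrier.h; powerpc previously overrode it with HMT low/medium priority hints).]

    /* Spin with cpu_relax() until cond_expr holds, re-reading *ptr
     * each iteration; no ordering beyond READ_ONCE() is implied. */
    #ifndef smp_cond_load_relaxed
    #define smp_cond_load_relaxed(ptr, cond_expr) ({	\
    	typeof(ptr) __PTR = (ptr);			\
    	__unqual_scalar_typeof(*ptr) VAL;		\
    	for (;;) {					\
    		VAL = READ_ONCE(*__PTR);		\
    		if (cond_expr)				\
    			break;				\
    		cpu_relax();				\
    	}						\
    	(typeof(*ptr))VAL;				\
    })
    #endif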
2021-03-26powerpc/spinlock: Unserialize spin_is_lockedDavidlohr Bueso2-14/+1
c6f5d02b6a0f (locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()) made it pretty official that the call semantics do not imply any sort of barriers, and any user that gets creative must explicitly do any serialization. This creativity, however, is nowadays pretty limited:

1. spin_unlock_wait() has been removed from the kernel in favor of a lock/unlock combo. Furthermore, queued spinlocks have, for a number of years now, no longer relied on _Q_LOCKED_VAL for the call, but on any non-zero value to indicate a locked state. There were cases where the delayed locked store could lead to breaking mutual exclusion with crossed locking, with sysv ipc and netfilter being the most extreme.

2. The auditing Andrea did verified that the remaining spin_is_locked() callers no longer rely on such semantics. Most callers just use it to assert that a lock is taken, in a debug nature. The only user that gets cute is NOLOCK qdisc, as of:

96009c7d500e (sched: replace __QDISC_STATE_RUNNING bit with a spin lock)

... which ironically went in the next day after c6f5d02b6a0f. This change replaces test_bit() with spin_is_locked() to know whether to take the busylock heuristic to reduce contention on the main qdisc lock. So any races against spin_is_locked() for archs that use LL/SC for spin_lock() will be benign and not break any mutual exclusion; furthermore, both the seqlock and busylock have the same scope.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210309015950.27688-3-dave@stgolabs.net
2021-03-26powerpc/spinlock: Define smp_mb__after_spinlock only onceDavidlohr Bueso3-5/+3
Instead of both queued and simple spinlocks doing it. Move it into the arch's spinlock.h. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210309015950.27688-2-dave@stgolabs.net
2021-03-26powerpc/ptrace: Convert gpr32_set_common() to user access blockChristophe Leroy2-13/+19
Use a user access block in gpr32_set_common() instead of repetitive __get_user() calls, which imply repetitive KUAP open/close sequences. To keep it clean, force inlining of the small set of tiny functions called inside the block. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bdcb8652c3bb4ab5b8b3bfd08147434be8fc04c9.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/futex: Switch to user_access blockChristophe Leroy1-7/+5
Use user_access_begin() instead of the access_ok/allow_access sequence. This brings the missing might_fault() check. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6cd202cdc4f939d47822e4ddd3c0856210431a58.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/net: Switch csum_and_copy_{to/from}_user to user_access blockChristophe Leroy1-11/+4
Use user_access_begin() instead of the might_sleep/access_ok/allow_access sequence. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2dee286d2d6dc9a27d99e31ac564bad4fae2cb49.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/lib: Don't use __put_user_asm_goto() outside of uaccess.hChristophe Leroy1-4/+9
__put_user_asm_goto() is internal to uaccess.h Use __put_kernel_nofault() instead. The generated code is identical. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3e32c4f0361933909368b68f5ee569e5de661c1b.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/syscalls: Use sys_old_select() in ppc_select()Christophe Leroy2-10/+3
Instead of open-coding the copy of parameters, use the generic sys_old_select(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4de983ad254739da1fe6e9f273baf387b7043ae0.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/uaccess: Move copy_mc_xxx() functions downChristophe Leroy1-26/+26
The copy_mc_xxx() functions sit in the middle of the raw_copy functions. For clarity, move them out of the raw_copy functions block. They use access_ok(), so they need to be after the general functions in order to eventually allow the inclusion of asm-generic/uaccess.h in the future. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2cdecb6e5a2fcee6c158d18dd254b71ec0e0da4d.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/uaccess: Swap clear_user() and __clear_user()Christophe Leroy1-9/+8
It is clear_user() which is expected to call __clear_user(), not the reverse. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8ec01fb22f33d87321451d5e5f01cb56dacaa39.1615398498.git.christophe.leroy@csgroup.eu
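[Editor's note: the expected layering, as a sketch; the allow/prevent placement and the __arch_clear_user() primitive are assumed from the surrounding powerpc uaccess conventions.]

    /* Raw variant: no access_ok(), just open KUAP, clear, close. */
    static inline unsigned long __clear_user(void __user *addr, unsigned long size)
    {
    	unsigned long ret;

    	allow_write_to_user(addr, size);
    	ret = __arch_clear_user(addr, size);
    	prevent_write_to_user(addr, size);
    	return ret;
    }

    /* Checked variant: validates the range, then defers to __clear_user(). */
    static inline unsigned long clear_user(void __user *addr, unsigned long size)
    {
    	might_fault();
    	if (likely(access_ok(addr, size)))
    		return __clear_user(addr, size);
    	return size;
    }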
2021-03-26powerpc/uaccess: Also perform 64 bits copies in unsafe_copy_to_user() on ppc32Christophe Leroy1-3/+3
ppc32 has an efficient 64-bit __put_user(), so also use it in order to unroll loops further. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ccc08a16eea682d6fa4acc957ffe34003a8f0844.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/pseries: export LPAR security flavor in lparcfgLaurent Dufour4-0/+11
This is helpful to read the security flavor from inside the LPAR. In /sys/kernel/debug/powerpc/security_features it can be seen whether mitigations are on or off, but not the level set through the ASMI menu. Furthermore, reporting it through /proc/powerpc/lparcfg allows easy processing by the lparstat command [1]. Export it like this in /proc/powerpc/lparcfg:

    $ grep security_flavor /proc/powerpc/lparcfg
    security_flavor=1

The value follows what is documented on the IBM support page [2]:

    0  Speculative execution fully enabled
    1  Speculative execution controls to mitigate user-to-kernel attacks
    2  Speculative execution controls to mitigate user-to-kernel and user-to-user side-channel attacks

[1] https://groups.google.com/g/powerpc-utils-devel/c/NaKXvdyl_UI/m/wa2stpIDAQAJ [2] https://www.ibm.com/support/pages/node/715841 Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210305125554.5165-1-ldufour@linux.ibm.com
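[Editor's note: a hedged sketch of the export side; the variable name pseries_security_flavor and its placement in lparcfg.c are assumptions, and the value itself would be captured at boot from the hypervisor's security characteristics.]

    /* Assumed global, set during pseries security setup. */
    extern unsigned long pseries_security_flavor;

    /* Emit one more key=value line into /proc/powerpc/lparcfg. */
    static void show_security_flavor(struct seq_file *m)
    {
    	seq_printf(m, "security_flavor=%lu\n", pseries_security_flavor);
    }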
2021-03-24powerpc: Enable KFENCE for PPC32Christophe Leroy7-10/+57
Add architecture specific implementation details for KFENCE and enable KFENCE for the ppc32 architecture. In particular, this implements the required interface in <asm/kfence.h>. KFENCE requires that attributes for pages from its memory pool can individually be set. Therefore, force the Read/Write linear map to be mapped at page granularity. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Marco Elver <elver@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8dfe1bd2abde26337c1d8c1ad0acfcc82185e0d5.1614868445.git.christophe.leroy@csgroup.eu
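[Editor's note: a sketch of the required <asm/kfence.h> interface on ppc32; it assumes the generic virt_to_kpte()/pte_update() primitives and should be read as approximate, not as the exact patch.]

    /* arch/powerpc/include/asm/kfence.h (sketch) */
    static inline bool arch_kfence_init_pool(void)
    {
    	return true;
    }

    /* Toggle the present bit on a single page of the KFENCE pool;
     * this is why the linear map must be mapped at page granularity. */
    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
    	pte_t *kpte = virt_to_kpte(addr);

    	if (protect) {
    		pte_update(&init_mm, addr, kpte, _PAGE_PRESENT, 0, 0);
    		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
    	} else {
    		pte_update(&init_mm, addr, kpte, 0, _PAGE_PRESENT, 0);
    	}

    	return true;
    }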
2021-03-24powerpc/ptrace: Remove duplicate check from pt_regs_check()Denis Efremov1-2/+0
"offsetof(struct pt_regs, msr) == offsetof(struct user_pt_regs, msr)" checked in pt_regs_check() twice in a row. Remove the second check. Signed-off-by: Denis Efremov <efremov@linux.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210305112807.26299-1-efremov@linux.com
2021-03-24powerpc/pseries: Move hvc_vio_init_early() prototype to shared locationLee Jones3-3/+4
Fixes the following W=1 kernel build warning(s):

    drivers/tty/hvc/hvc_vio.c:385:13: warning: no previous prototype for ‘hvc_vio_init_early’
      385 | void __init hvc_vio_init_early(void)
          |             ^~~~~~~~~~~~~~~~~~

Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210303124603.3150175-1-lee.jones@linaro.org
2021-03-24powerpc: Fix misspellings in tlbflush.hZhang Yunkai1-1/+1
The comment marking the end of the include guard is wrong, fix it up. Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn> [mpe: Rewrite commit message] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210304031318.188447-1-zhang.yunkai@zte.com.cn
2021-03-24powerpc: Remove duplicate includesZhang Yunkai5-6/+0
asm/tm.h included in traps.c is duplicated. It is also included on the 62nd line.
asm/udbg.h included in setup-common.c is duplicated. It is also included on the 61st line.
asm/bug.h included in arch/powerpc/include/asm/book3s/64/mmu-hash.h is duplicated. It is also included on the 12th line.
asm/tlbflush.h included in arch/powerpc/include/asm/pgtable.h is duplicated. It is also included on the 11th line.
asm/page.h included in arch/powerpc/include/asm/thread_info.h is duplicated. It is also included on the 13th line.

Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn> [mpe: Squash together from multiple commits] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2021-03-24powerpc/prom: Mark identical_pvr_fixup as __initNathan Chancellor1-1/+1
If identical_pvr_fixup() is not inlined, there are two modpost warnings:

    WARNING: modpost: vmlinux.o(.text+0x54e8): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:of_get_flat_dt_prop()
    The function identical_pvr_fixup() references the function __init of_get_flat_dt_prop(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of of_get_flat_dt_prop is wrong.

    WARNING: modpost: vmlinux.o(.text+0x551c): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:identify_cpu()
    The function identical_pvr_fixup() references the function __init identify_cpu(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of identify_cpu is wrong.

identical_pvr_fixup() calls two functions marked as __init and is only called by a function marked as __init, so it should be marked as __init as well. At the same time, remove the inline keyword as it is not necessary to inline this function. The compiler is still free to do so if it feels it is worthwhile, since commit 889b3c1245de ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely").

Fixes: 14b3d926a22b ("[POWERPC] 4xx: update 440EP(x)/440GR(x) identical PVR issue workaround") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1316 Link: https://lore.kernel.org/r/20210302200829.2680663-1-nathan@kernel.org
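[Editor's note: the shape of the fix, as a sketch; the body is elided and the signature is taken from the warnings above. The same pattern applies to the fadump fix in the next commit.]

    /* Annotated __init to match its __init callees (of_get_flat_dt_prop(),
     * identify_cpu()) and its __init caller; 'inline' dropped, leaving the
     * inlining decision to the compiler. */
    static void __init identical_pvr_fixup(unsigned long node)
    {
    	/* ... body unchanged ... */
    }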
2021-03-24powerpc/fadump: Mark fadump_calculate_reserve_size as __initNathan Chancellor1-1/+1
If fadump_calculate_reserve_size() is not inlined, there is a modpost warning:

    WARNING: modpost: vmlinux.o(.text+0x5196c): Section mismatch in reference from the function fadump_calculate_reserve_size() to the function .init.text:parse_crashkernel()
    The function fadump_calculate_reserve_size() references the function __init parse_crashkernel(). This is often because fadump_calculate_reserve_size lacks a __init annotation or the annotation of parse_crashkernel is wrong.

fadump_calculate_reserve_size() calls parse_crashkernel(), which is marked as __init, and fadump_calculate_reserve_size() is called from within fadump_reserve_mem(), which is also marked as __init. Mark fadump_calculate_reserve_size() as __init to fix the section mismatch. Additionally, remove the inline keyword as it is not necessary to inline this function; the compiler is still free to do so if it feels it is worthwhile, since commit 889b3c1245de ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely").

Fixes: 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter for fadump memory reservation") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1300 Link: https://lore.kernel.org/r/20210302195013.2626335-1-nathan@kernel.org
2021-03-24selftests/powerpc: Fix L1D flushing tests for Power10Russell Currey3-2/+6
The rfi_flush and entry_flush selftests work by using the PM_LD_MISS_L1 perf event to count L1D misses. The value of this event has changed over time:

- Power7 uses 0x400f0
- Power8 and Power9 use both 0x400f0 and 0x3e054
- Power10 uses only 0x3e054

Rather than relying on raw values, configure perf to count L1D read misses in the most explicit way available. This fixes the selftests to work on systems without 0x400f0 as PM_LD_MISS_L1, and should change no behaviour for systems that the tests already worked on. The only potential downside is that referring to a specific perf event requires PMU support implemented in the kernel for that platform.

Signed-off-by: Russell Currey <ruscur@russell.cc> Acked-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210223070227.2916871-1-ruscur@russell.cc
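[Editor's note: "the most explicit way available" presumably means the generic hardware-cache event encoding rather than a raw PMU code; a sketch of the selftest-side attribute setup, with helper name assumed.]

    #include <linux/perf_event.h>
    #include <string.h>

    /* Count L1D read misses portably across Power7..Power10 by using
     * the generic cache-event encoding: id | (op << 8) | (result << 16). */
    static void init_l1d_miss_attr(struct perf_event_attr *attr)
    {
    	memset(attr, 0, sizeof(*attr));
    	attr->type = PERF_TYPE_HW_CACHE;
    	attr->size = sizeof(*attr);
    	attr->config = PERF_COUNT_HW_CACHE_L1D |
    		       (PERF_COUNT_HW_CACHE_OP_READ << 8) |
    		       (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    	attr->disabled = 1;
    	attr->exclude_kernel = 1;
    }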
2021-03-24powerpc: Fix spelling of "droping" to "dropping" in traps.cBhaskar Chowdhury1-1/+1
s/droping/dropping/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210224075547.763063-1-unixbhaskar@gmail.com
2021-03-24powerpc: remove unneeded semicolonJiapeng Chong1-1/+1
Fix the following coccicheck warnings: ./arch/powerpc/kernel/prom_init.c:2986:2-3: Unneeded semicolon. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1614151761-53721-1-git-send-email-jiapeng.chong@linux.alibaba.com
2021-03-24powerpc/chrp: Make hydra_init() staticGeert Uytterhoeven2-4/+1
Commit 407d418f2fd4c20a ("powerpc/chrp: Move PHB discovery") moved the sole call to hydra_init() to the source file where it is defined, so it can be made static. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210223095345.2139416-1-geert@linux-m68k.org
2021-03-24powerpc/mm: Move the linear_mapping_mutex to the ifdef where it is usedSebastian Andrzej Siewior1-1/+1
The mutex linear_mapping_mutex is defined at the top of the file, while its only two users are within the CONFIG_MEMORY_HOTPLUG block. A compile without CONFIG_MEMORY_HOTPLUG set fails on PREEMPT_RT because its mutex implementation is smart enough to realize that it is unused. Move the definition of linear_mapping_mutex into the ifdef block where it is used. Fixes: 1f73ad3e8d755 ("powerpc/mm: print warning in arch_remove_linear_mapping()") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210219165648.2505482-1-bigeasy@linutronix.de
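[Editor's note: a minimal sketch of the move, assuming the file is the one touched by the commit this fixes.]

    #ifdef CONFIG_MEMORY_HOTPLUG
    /* Both users live inside this block, so scope the mutex here and
     * !CONFIG_MEMORY_HOTPLUG builds (e.g. PREEMPT_RT) stay warning-free. */
    static DEFINE_MUTEX(linear_mapping_mutex);
    #endif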
2021-03-14Linux 5.12-rc3Linus Torvalds1-1/+1
2021-03-14prctl: fix PR_SET_MM_AUXV kernel stack leakAlexey Dobriyan1-1/+1
Doing a prctl(PR_SET_MM, PR_SET_MM_AUXV, addr, 1); will copy 1 byte from userspace to a (quite big) on-stack array and then stash everything into mm->saved_auxv. The AT_NULL terminator will be inserted at the very end. The /proc/*/auxv handler will find that AT_NULL terminator and copy the original stack contents to userspace. This devious scheme requires CAP_SYS_RESOURCE. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
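[Editor's note: a hedged sketch of the fix this description implies, assuming the remedy is zero-initialising the on-stack array; the function shape approximates prctl_set_auxv() in kernel/sys.c and is not the literal diff.]

    static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
    			  unsigned long len)
    {
    	/* Zero-init: a copy shorter than the array must not leave stale
    	 * kernel stack bytes to be stashed in mm->saved_auxv and later
    	 * read back via /proc/pid/auxv. */
    	unsigned long user_auxv[AT_VECTOR_SIZE] = {};

    	if (len > sizeof(user_auxv))
    		return -EINVAL;

    	if (copy_from_user(user_auxv, (const void __user *)addr, len))
    		return -EFAULT;

    	/* Make sure the vector is always AT_NULL terminated. */
    	user_auxv[AT_VECTOR_SIZE - 2] = AT_NULL;
    	user_auxv[AT_VECTOR_SIZE - 1] = AT_NULL;

    	task_lock(current);
    	memcpy(mm->saved_auxv, user_auxv, sizeof(user_auxv));
    	task_unlock(current);

    	return 0;
    }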
2021-03-14Merge tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds7-18/+8
Pull irq fixes from Thomas Gleixner: "A set of irqchip updates:

 - Make the GENERIC_IRQ_MULTI_HANDLER configuration correct
 - Add a missing DT compatible string for the Ingenic driver
 - Remove the pointless debugfs_file pointer from struct irqdomain"

* tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/ingenic: Add support for the JZ4760
  dt-bindings/irq: Add compatible string for the JZ4760B
  irqchip: Do not blindly select CONFIG_GENERIC_IRQ_MULTI_HANDLER
  ARM: ep93xx: Select GENERIC_IRQ_MULTI_HANDLER directly
  irqdomain: Remove debugfs_file from struct irq_domain
2021-03-14Merge tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds1-21/+39
Pull timer fix from Thomas Gleixner: "A single fix for hrtimers to prevent an interrupt storm caused by the lack of reevaluation of the timers which expire in softirq context under certain circumstances, e.g. when the clock was set"

* tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event()
2021-03-14Merge tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds2-67/+63
Pull scheduler fixes from Thomas Gleixner: "A set of scheduler updates:

 - Prevent a NULL pointer dereference in the migration_stop_cpu() mechanism
 - Prevent self concurrency of affine_move_task()
 - Small fixes and cleanups related to task migration/affinity setting
 - Ensure that sync_runqueues_membarrier_state() is invoked on the current CPU when it is in the cpu mask"

* tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/membarrier: fix missing local execution of ipi_sync_rq_state()
  sched: Simplify set_affinity_pending refcounts
  sched: Fix affine_move_task() self-concurrency
  sched: Optimize migration_cpu_stop()
  sched: Collate affine_move_task() stoppers
  sched: Simplify migration_cpu_stop()
  sched: Fix migration_cpu_stop() requeueing
2021-03-14Merge tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds2-6/+7
Pull objtool fix from Thomas Gleixner: "A single objtool fix to handle the PUSHF/POPF validation correctly for the paravirt changes which modified arch_local_irq_restore not to use popf"

* tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool,x86: Fix uaccess PUSHF/POPF validation
2021-03-14Merge tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds3-10/+9
Pull locking fixes from Thomas Gleixner: "A couple of locking fixes:

 - A fix for the static_call mechanism so it handles unaligned addresses correctly.
 - Make u64_stats_init() a macro so every instance gets a separate lockdep key.
 - Make seqcount_latch_init() a macro as well to preserve the static variable which is used for the lockdep key"

* tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  seqlock,lockdep: Fix seqcount_latch_init()
  u64_stats,lockdep: Fix u64_stats_init() vs lockdep
  static_call: Fix the module key fixup
2021-03-14Merge tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds5-15/+51
Pull perf fixes from Borislav Petkov:

 - Make sure PMU internal buffers are flushed for per-CPU events too and properly handle PID/TID for large PEBS.
 - Handle the case properly when there's no PMU and therefore return an empty list of perf MSRs for VMX to switch instead of reading random garbage from the stack.

* tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/perf: Use RET0 as default for guest_get_msrs to handle "no PMU" case
  perf/x86/intel: Set PERF_ATTACH_SCHED_CB for large PEBS and LBR
  perf/core: Flush PMU internal buffers for per-CPU events