2020-07-26  powerpc/watchpoint: Remove 512 byte boundary  (Ravi Bangoria, 1 file, -2/+3)
Power10 has removed the 512-byte boundary from the match criteria, i.e. the watch range can now cross a 512-byte boundary. Note: the ISA 3.1 Book III 9.4 match criteria still includes the 512-byte limit, but that is a documentation mistake and hopefully will be fixed in the next version of the ISA. The ISA 3.1 change log does mention the removal of the 512B boundary: Multiple DEAW: Added a second Data Address Watchpoint. [H]DAR is set to the first byte of overlap. 512B boundary is removed. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-11-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Return available watchpoints dynamically  (Ravi Bangoria, 2 files, -3/+6)
So far Book3S powerpc supported only one watchpoint. Power10 introduces a second DAWR. Enable 2nd DAWR support for Power10. Availability of the 2nd DAWR will depend on CPU_FTR_DAWR1. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-10-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Guest support for 2nd DAWR hcall  (Ravi Bangoria, 5 files, -4/+13)
The 2nd DAWR can be set/unset using the H_SET_MODE hcall with resource value 5. Enable PowerVM guest support with that. This has no effect on KVM guests because KVM will return an error if the guest makes an hcall with resource value 5. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-9-ravi.bangoria@linux.ibm.com
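For illustration, a guest-side wrapper for this hcall could look like the sketch below; the helper name and the H_SET_MODE_RESOURCE_SET_DAWR1 macro are assumptions made for this example, only plpar_set_mode() and resource value 5 come from the description above.

  /* hypothetical wrapper, mirroring the existing DAWR0 H_SET_MODE helper */
  static inline long plpar_set_watchpoint1(unsigned long dawr1, unsigned long dawrx1)
  {
          /* resource value 5 selects the second DAWR/DAWRX pair */
          return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR1, dawr1, dawrx1);
  }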
2020-07-26  powerpc/watchpoint: Rename current H_SET_MODE DAWR macro  (Ravi Bangoria, 3 files, -3/+3)
The current H_SET_MODE hcall macro name for setting/resetting DAWR0 is H_SET_MODE_RESOURCE_SET_DAWR. Add the suffix 0 to the macro name as well. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-8-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Set CPU_FTR_DAWR1 based on pa-features bit  (Ravi Bangoria, 1 file, -0/+2)
As per PAPR, bit 0 of byte 64 in the pa-features property indicates availability of the 2nd DAWR registers, i.e. if this bit is set, the 2nd DAWR is present, otherwise not. The host generally uses "cpu-features", which masks "pa-features", but "cpu-features" is still not used for guests and thus this change is mostly applicable to guests only. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-7-ravi.bangoria@linux.ibm.com
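The corresponding entry in the kernel's pa-features table plausibly looks like the following sketch; the field names are assumed to match the existing ibm_pa_feature entries in prom.c, so verify against the tree before relying on them.

  /* assumed entry: byte 64, bit 0 of "ibm,pa-features" advertises DAWR1 */
  { .pabyte = 64, .pabit = 0, .cpu_features = CPU_FTR_DAWR1 },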
2020-07-26  powerpc/dt_cpu_ftrs: Add feature for 2nd DAWR  (Ravi Bangoria, 2 files, -1/+3)
Add a new device-tree feature for the 2nd DAWR. If this feature is present, the 2nd DAWR is supported, otherwise not. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-6-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Enable watchpoint functionality on power10 guest  (Ravi Bangoria, 1 file, -1/+2)
CPU_FTR_DAWR is by default enabled for the host via CPU_FTRS_DT_CPU_BASE (controlled by CONFIG_PPC_DT_CPU_FTRS). But the cpu-features device-tree node is not PAPR compatible and thus not yet used by KVM or pHyp guests. Enable watchpoint functionality on power10 guests (both KVM and PowerVM) by adding CPU_FTR_DAWR to CPU_FTRS_POWER10. Note that this change does not enable 2nd DAWR support. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-5-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Fix DAWR exception for CACHEOP  (Ravi Bangoria, 1 file, -1/+20)
The 'ea' returned by analyse_instr() needs to be aligned down to the cache block size for CACHEOP instructions. analyse_instr() does not set the size for CACHEOP, so the size also needs to be calculated manually. Fixes: 27985b2a640e ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly") Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-4-ravi.bangoria@linux.ibm.com
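The idea in the exception handler is roughly as sketched here; the cache_op_size() helper name is an assumption for this example, and the real code may derive the block size differently.

  if (type == CACHEOP) {
          /* analyse_instr() leaves size unset for cache ops */
          size = cache_op_size();          /* assumed: L1 dcache block size */
          ea = ALIGN_DOWN(ea, size);       /* cache ops act on a whole block */
  }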
2020-07-26  powerpc/watchpoint: Fix DAWR exception constraint  (Ravi Bangoria, 1 file, -31/+41)
Pedro Miraglia Franco de Carvalho noticed that on p8/p9, the DAR value is inconsistent for different types of load/store. For byte, word, etc. load/stores, DAR is set to the address of the first byte of overlap between the watch range and the real access. But for quadword load/stores it is sometimes set to the address of the first byte of the real access and sometimes to the address of the first byte of overlap. This issue has been fixed in p10: in p10 (ISA 3.1), DAR is always set to the address of the first byte of overlap. Commit 27985b2a640e ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly") wrongly assumes that DAR is set to the address of the first byte of overlap for all load/stores on p8/p9 as well. Fix that. With the fix, we now rely on the 'ea' provided by analyse_instr(). If analyse_instr() fails, generate the event unconditionally on p8/p9, and on p10 generate the event only if DAR is within a DAWR range. Note: 8xx is not affected. Fixes: 27985b2a640e ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly") Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Reported-by: Pedro Miraglia Franco de Carvalho <pedromfc@br.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-3-ravi.bangoria@linux.ibm.com
2020-07-26  powerpc/watchpoint: Fix 512 byte boundary limit  (Ravi Bangoria, 1 file, -1/+1)
Milton Miller reported that we are aligning the start and end address to the wrong size, SZ_512M. It should be SZ_512. Fix that. While doing this change I also found a case where an ALIGN() comparison fails. Within a given aligned range, ALIGN() of two addresses does not match when the start address points to the first byte and the end address points to any other byte except the first one. But that's not true for ALIGN_DOWN(): ALIGN_DOWN() of any two addresses within that range will always point to the first byte. So use ALIGN_DOWN() instead of ALIGN(). Fixes: e68ef121c1f4 ("powerpc/watchpoint: Use builtin ALIGN*() macros") Reported-by: Milton Miller <miltonm@us.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200723090813.303838-2-ravi.bangoria@linux.ibm.com
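A small worked example of the ALIGN() vs ALIGN_DOWN() behaviour described above, with addresses chosen purely for illustration:

  /* Addresses within the same 512-byte block, e.g. 0x1000..0x11ff:
   *   ALIGN(0x1000, SZ_512)      = 0x1000   (start is already aligned)
   *   ALIGN(0x10ff, SZ_512)      = 0x1200   (rounds up, so the comparison fails)
   *   ALIGN_DOWN(0x1000, SZ_512) = 0x1000
   *   ALIGN_DOWN(0x10ff, SZ_512) = 0x1000   (both map to the first byte, so it matches)
   */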
2020-07-26  powerpc/book3s64/pkey: Disable pkey on POWER6 and before  (Aneesh Kumar K.V, 1 file, -0/+6)
POWER6 only supports AMR update via privileged mode (MSR[PR] = 0, SPRN_AMR = 29). The PR=1 (userspace) alias for that SPR (SPRN_AMR = 13) was only supported from POWER7 onwards. Since we don't allow userspace to modify the AMR value, we should disable pkey support on P6 and before. The hypervisor will still report pkey support via "ibm,processor-storage-keys", hence also check for the P7 CPU_FTR bit to decide on pkey support. Fixes: f491fe3fb41e ("powerpc/book3s64/pkeys: Simplify the key initialization") Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200726132517.399076-1-aneesh.kumar@linux.ibm.com
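The guard this adds in the pkey initialisation path is presumably along these lines; the specific feature bit (ISA 2.06, i.e. POWER7) and the early-return convention are assumptions for this sketch.

  /* assumed check: no usable userspace AMR before POWER7, keep pkeys disabled */
  if (!early_cpu_has_feature(CPU_FTR_ARCH_206))
          return 0;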
2020-07-24  powerpc/sstep: Fix incorrect CONFIG symbol in scv handling  (Michael Ellerman, 1 file, -1/+1)
When I "fixed" the ppc64e build in Nick's recent patch, I typoed the CONFIG symbol, resulting in one that doesn't exist. Fix it to use the correct symbol. Reported-by: Christophe Leroy <christophe.leroy@csgroup.eu> Fixes: 7fa95f9adaee ("powerpc/64s: system call support for scv/rfscv instructions") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131609.1640533-1-mpe@ellerman.id.au
2020-07-24  powerpc/test_emulate_sstep: Fix build error  (Michael Ellerman, 1 file, -0/+1)
ppc64_book3e_allmodconfig fails with:

  arch/powerpc/lib/test_emulate_step.c: In function 'test_pld':
  arch/powerpc/lib/test_emulate_step.c:113:7: error: implicit declaration of function 'cpu_has_feature'
    113 |  if (!cpu_has_feature(CPU_FTR_ARCH_31)) {
        |       ^~~~~~~~~~~~~~~

Add an include of cpu_has_feature.h to fix it. Fixes: b6b54b42722a ("powerpc/sstep: Add tests for prefixed integer load/stores") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724004109.1461709-1-mpe@ellerman.id.au
2020-07-23  Merge branch 'scv' support into next  (Michael Ellerman, 20 files, -47/+445)
From Nick's cover letter:

Linux powerpc new system call instruction and ABI

System Call Vectored (scv) ABI
==============================

The scv instruction is introduced with POWER9 / ISA3, it comes with an rfscv counter-part. The benefit of these instructions is performance (trading slower SRR0/1 with faster LR/CTR registers, and entering the kernel with MSR[EE] and MSR[RI] left enabled), which can reduce MSR updates. The scv instruction has 128 levels (not enough to cover the Linux system call space).

Assignment and advertisement
----------------------------

The proposal is to assign scv levels conservatively, and advertise them with HWCAP feature bits as we add support for more.

Linux has not enabled FSCR[SCV] yet, so executing the scv instruction will cause the kernel to log a "SCV facility unavailable" message, and deliver a SIGILL with ILL_ILLOPC to the process. Linux has defined a HWCAP2 bit PPC_FEATURE2_SCV for SCV support, but does not set it.

This change allocates the zero level ('scv 0'), advertised with PPC_FEATURE2_SCV, which will be used to provide normal Linux system calls (equivalent to 'sc'). Attempting to execute scv with other levels will cause a SIGILL to be delivered the same as before, but will not log a "SCV facility unavailable" message (because the processor facility is enabled).

Calling convention
------------------

The proposal is for scv 0 to provide the standard Linux system call ABI with the following differences from the sc convention[1]:

- LR is to be volatile across scv calls. This is necessary because the scv instruction clobbers LR. From previous discussion, this should be possible to deal with in GCC clobbers and CFI.

- cr1 and cr5-cr7 are volatile. This matches the C ABI and would allow the kernel system call exit to avoid restoring the volatile cr registers (although we probably still would anyway to avoid information leaks).

- Error handling: The consensus among kernel, glibc, and musl is to move to using negative return values in r3 rather than CR0[SO]=1 to indicate error, which matches most other architectures, and is closer to a function call.

Notes
-----

- r0,r4-r8 are documented as volatile in the ABI, but the kernel patch as submitted currently preserves them. This is to leave room for deciding which way to go with these. Some small benefit was found by preserving them[1] but I'm not convinced it's worth deviating from the C function call ABI just for this. Release code should follow the ABI.

Previous discussions:
https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-April/208691.html
https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-April/209268.html

[1] https://github.com/torvalds/linux/blob/master/Documentation/powerpc/syscall64-abi.rst
[2] https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-April/209263.html
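For illustration, a raw user-space 'scv 0' call under the proposed convention could be written as the sketch below. This is an assumption-laden example rather than the glibc/musl implementation: it needs an assembler that understands the scv mnemonic (hence the .machine directives) and a kernel with scv support, and it conservatively clobbers every register named volatile above.

  /* hypothetical one-argument syscall via scv 0: errors come back as a
   * negative value in r3, and LR/CTR plus cr0,cr1,cr5-cr7 are clobbered */
  static long scv0_syscall1(long nr, long arg1)
  {
          register long r0 asm("r0") = nr;
          register long r3 asm("r3") = arg1;

          asm volatile(".machine push; .machine power9; scv 0; .machine pop"
                       : "+r" (r3)
                       : "r" (r0)
                       : "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12",
                         "lr", "ctr", "cr0", "cr1", "cr5", "cr6", "cr7", "memory");

          return r3;   /* < 0 means -errno, unlike sc's CR0[SO] convention */
  }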
2020-07-23  selftests/powerpc: Add test of memcmp at end of page  (Michael Ellerman, 1 file, -18/+22)
Update our memcmp selftest to test the case where we're comparing up to the end of a page and the subsequent page is not mapped. We have to make sure we don't read off the end of the page and cause a fault. We had a bug there in the past, fixed in commit d9470757398a ("powerpc/64: Fix memcmp reading past the end of src/dest"). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200722055315.962391-1-mpe@ellerman.id.au
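The general way to construct such a case in user space is sketched below; the helper name, sizes and (omitted) error handling are illustrative and are not the selftest's actual code.

  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>

  /* compare 'len' bytes that end exactly at a page boundary, with the
   * following page inaccessible so that any overread faults */
  static int cmp_at_page_end(const char *other, size_t len)
  {
          size_t pgsz = getpagesize();
          char *buf = mmap(NULL, 2 * pgsz, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          mprotect(buf + pgsz, pgsz, PROT_NONE);
          memcpy(buf + pgsz - len, other, len);
          return memcmp(buf + pgsz - len, other, len);   /* must not touch buf + pgsz */
  }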
2020-07-23  powerpc/powernv/idle: Exclude mfspr on HID1, 4, 5 on P9 and above  (Pratik Rajesh Sampat, 1 file, -3/+3)
From POWER9 onwards, support for the registers HID1, HID4 and HID5 has been removed. Although mfspr on these registers worked on Power9, on the Power10 simulator it is unrecognized. Move their assignment under the check for machines older than Power9. Signed-off-by: Pratik Rajesh Sampat <psampat@linux.ibm.com> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200721153708.89057-4-psampat@linux.ibm.com
2020-07-23  powerpc/powernv/idle: Rename pnv_first_spr_loss_level variable  (Pratik Rajesh Sampat, 1 file, -9/+9)
Rename the variable "pnv_first_spr_loss_level" to "deep_spr_loss_state". pnv_first_spr_loss_level is supposed to be the earliest state that has OPAL_PM_LOSE_FULL_CONTEXT set; elsewhere the kernel uses "deep" states as the terminology. Hence rename the variable to be consistent with its semantics. Signed-off-by: Pratik Rajesh Sampat <psampat@linux.ibm.com> Acked-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200721153708.89057-3-psampat@linux.ibm.com
2020-07-23  powerpc/powernv/idle: Replace CPU feature check with PVR check  (Pratik Rajesh Sampat, 1 file, -1/+1)
The POWER9 idle driver contains implementation-specific details that mean it is not suitable to run on any processor that implements ISA v3.0 (e.g., POWER10), so only init the driver when running on a POWER9. Signed-off-by: Pratik Rajesh Sampat <psampat@linux.ibm.com> [mpe: Use updated change log from Nick] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200721153708.89057-2-psampat@linux.ibm.com
2020-07-23  powerpc/mm/hash64: Remove comment that is no longer valid  (Santosh Sivaraj, 1 file, -4/+0)
hash_low_64.S was removed in commit a43c0eb8364c ("powerpc/mm: Convert 4k insert from asm to C") and flush_hash_page() is no longer called from any assembly routine. Signed-off-by: Santosh Sivaraj <santosh@fossix.org> [mpe: Tweak comment wording] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200721091915.205006-1-santosh@fossix.org
2020-07-23  KVM: PPC: Fix typo on H_DISABLE_AND_GET hcall  (Leonardo Bras, 3 files, -3/+3)
On PAPR+, the hcall 0x1B0 is called H_DISABLE_AND_GET, but it got defined as H_DISABLE_AND_GETC instead. This define was introduced with a typo in commit <b13a96cfb055> ("[PATCH] powerpc: Extends HCALL interface for InfiniBand usage"), and was later used without the typo being noticed. Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200707004812.190765-1-leobras.c@gmail.com
2020-07-23  powerpc/spufs: Fix the type of ret in spufs_arch_write_note  (Christoph Hellwig, 1 file, -1/+1)
Both the ->dump method and snprintf return an int. So switch to an int and properly handle errors from ->dump. Fixes: 5456ffdee666 ("powerpc/spufs: simplify spufs core dumping") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200610085554.5647-1-hch@lst.de
2020-07-23  powerpc/powernv: Machine check handler for POWER10  (Nicholas Piggin, 4 files, -0/+96)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200702233343.1128026-1-npiggin@gmail.com
2020-07-23  powerpc/pseries: PCIE PHB reset  (Wen Xiong, 1 file, -63/+169)
Several device drivers hit EEH (Extended Error Handling) when triggering kdump on pSeries PowerVM. This patch implements a reset of the PHBs in generic PCI code when triggering kdump. The PHB reset stops all PCI transactions from the normal kernel. We have tested the patch in several environments:

- direct slot adapters
- adapters under a switch
- a VF adapter in PowerVM
- a VF adapter/adapter in a KVM guest

Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com> [mpe: Fix broken whitespace, subject & SOB formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594651173-32166-1-git-send-email-wenxiong@linux.vnet.ibm.com
2020-07-23  powerpc: Select ARCH_HAS_MEMBARRIER_SYNC_CORE  (Nicholas Piggin, 5 files, -3/+22)
powerpc return from interrupt and return from system call sequences are context synchronising. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200716013522.338318-1-npiggin@gmail.com
2020-07-23  powerpc/64: Fix an out of date comment about MMIO ordering  (Palmer Dabbelt, 1 file, -1/+1)
This primitive has been renamed, but because it was spelled incorrectly in the first place it must have escaped the fixup patch. As far as I can tell this logic is still correct: smp_mb__after_spinlock() uses the default smp_mb() implementation, which is "sync" rather than "hwsync" but those are the same (though I'm not that familiar with PowerPC). Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200716193820.1141936-1-palmer@dabbelt.com
2020-07-23  powerpc/test_emulate_step: Move extern declaration to sstep.h  (Balamuruhan S, 2 files, -2/+2)
Fix checkpatch.pl warnings by moving the extern declaration from the source file to the header file. Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200626095158.1031507-5-bala24@linux.ibm.com
2020-07-23  powerpc/sstep: Introduce macros to retrieve Prefix instruction operands  (Balamuruhan S, 2 files, -6/+10)
Retrieve the prefixed instruction operand RA and the PC-relative bit R using macros, and adopt them in sstep.c and test_emulate_step.c. Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200626095158.1031507-4-bala24@linux.ibm.com
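A sketch of what such macros could look like, extracting the fields from the two instruction words; the bit positions are my reading of the prefixed MLS/8LS instruction format, so treat them as assumptions and check sstep.h for the real definitions.

  /* assumed layout: R is bit 20 of the prefix word; RA sits in bits 16-20
   * of the suffix word (the usual D-form RA position) */
  #define GET_PREFIX_R(prefix)    ((prefix) & (1UL << 20))
  #define GET_PREFIX_RA(suffix)   (((suffix) >> 16) & 0x1f)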
2020-07-23  powerpc/test_emulate_step: Add negative tests for prefixed addi  (Balamuruhan S, 1 file, -0/+10)
Add test cases for the `paddi` instruction to cover the negative case: if R is equal to 1 and RA is not equal to 0, the instruction form is invalid. Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200626095158.1031507-3-bala24@linux.ibm.com
2020-07-23  powerpc/test_emulate_step: Enhancement to test negative scenarios  (Balamuruhan S, 1 file, -9/+21)
Add provision to declare a test as a negative scenario, verify that emulation fails, and avoid executing it. Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200626095158.1031507-2-bala24@linux.ibm.com
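The harness change plausibly has the shape sketched below; the 'negative' flag name and the surrounding structure are assumptions about the test code, not taken from the patch itself.

  stepped = emulate_step(&regs, instr);
  if (negative) {
          /* the invalid form must be rejected by the emulator, and is
           * never executed on hardware */
          show_result(mnemonic, stepped == 1 ? "FAIL" : "PASS");
          return;
  }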
2020-07-23  powerpc/xmon: Improve dumping prefixed instructions  (Jordan Niethe, 1 file, -5/+6)
Currently prefixed instructions are dumped as two separate word instructions. Use mread_instr() so that prefixed instructions are read as such, and update the incrementer in the loop to take this into account. 'dump_func' is print_insn_powerpc() which comes from ppc-dis.c, which is taken from binutils. When this is updated, prefixed instructions will be disassembled. Currently dumping prefixed instructions looks like this:

  0:mon> di c000000000094168
  c000000000094168  0x06000000   .long 0x6000000
  c00000000009416c  0x392a0003   addi    r9,r10,3
  c000000000094170  0x913f0028   stw     r9,40(r31)
  c000000000094174  0xe93f002a   lwa     r9,40(r31)
  c000000000094178  0x7d234b78   mr      r3,r9
  c00000000009417c  0x383f0040   addi    r1,r31,64
  c000000000094180  0xebe1fff8   ld      r31,-8(r1)
  c000000000094184  0x4e800020   blr
  c000000000094188  0x60000000   nop
  ...
  c000000000094190  0x3c4c0121   addis   r2,r12,289
  c000000000094194  0x38429670   addi    r2,r2,-27024
  c000000000094198  0x7c0802a6   mflr    r0
  c00000000009419c  0x60000000   nop
  c0000000000941a0  0xe9240100   ld      r9,256(r4)
  c0000000000941a4  0x39400001   li      r10,1

After this it looks like:

  0:mon> di c000000000094168
  c000000000094168  0x06000000 0x392a0003   .long 0x392a000306000000
  c000000000094170  0x913f0028   stw     r9,40(r31)
  c000000000094174  0xe93f002a   lwa     r9,40(r31)
  c000000000094178  0x7d234b78   mr      r3,r9
  c00000000009417c  0x383f0040   addi    r1,r31,64
  c000000000094180  0xebe1fff8   ld      r31,-8(r1)
  c000000000094184  0x4e800020   blr
  c000000000094188  0x60000000   nop
  ...
  c000000000094190  0x3c4c0121   addis   r2,r12,289
  c000000000094194  0x38429570   addi    r2,r2,-27280
  c000000000094198  0x7c0802a6   mflr    r0
  c00000000009419c  0x60000000   nop
  c0000000000941a0  0xe9240100   ld      r9,256(r4)
  c0000000000941a4  0x39400001   li      r10,1
  c0000000000941a8  0x3d02000b   addis   r8,r2,11

Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200602052728.18227-2-jniethe5@gmail.com
2020-07-23  powerpc: Add a ppc_inst_as_str() helper  (Jordan Niethe, 5 files, -17/+36)
There are quite a few places where instructions are printed, and this is done using a '%x' format specifier. With the introduction of prefixed instructions, this does not work well. Currently in these places, ppc_inst_val() is used for the value for %x, so only the first word of prefixed instructions is printed. When the instructions are word instructions, only a single word should be printed. For prefixed instructions both the prefix and suffix should be printed. To accommodate both of these situations, instead of a '%x' specifier use '%s' and introduce a helper, __ppc_inst_as_str(), which returns a char *. The char * that __ppc_inst_as_str() returns is a buffer that is passed to it by the caller. It is cumbersome to require every caller of __ppc_inst_as_str() to now declare a buffer. To make it more convenient to use __ppc_inst_as_str(), wrap it in a macro that uses a compound statement to allocate a buffer on the caller's stack before calling it. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Acked-by: Segher Boessenkool <segher@kernel.crashing.org> [mpe: Drop 0x prefix to match most existings uses, especially xmon] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200602052728.18227-1-jniethe5@gmail.com
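A sketch of how the helper-plus-macro pair described above could be structured; the buffer size, exact formatting and the GNU statement-expression wrapper are assumptions, so see the headers in the tree for the real code.

  #define PPC_INST_STR_LEN sizeof("00000000 00000000")

  static inline char *__ppc_inst_as_str(char str[PPC_INST_STR_LEN], struct ppc_inst x)
  {
          if (ppc_inst_prefixed(x))
                  sprintf(str, "%08x %08x", ppc_inst_val(x), ppc_inst_suffix(x));
          else
                  sprintf(str, "%08x", ppc_inst_val(x));
          return str;
  }

  /* the compound statement puts the buffer on the caller's stack, so the
   * result can be consumed directly inside a printk()/printf() argument list */
  #define ppc_inst_as_str(x)              \
  ({                                      \
          char __str[PPC_INST_STR_LEN];   \
          __ppc_inst_as_str(__str, x);    \
          __str;                          \
  })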
2020-07-23  powerpc/sstep: Add tests for Prefixed Add Immediate  (Jordan Niethe, 2 files, -0/+127)
Use the existing support for testing compute type instructions to test Prefixed Add Immediate (paddi). The R bit of the paddi instruction controls whether the current instruction address is used. Add test cases for when R=1 and for R=0. paddi has a 34-bit immediate field formed by concatenating si0 and si1. Add tests for the extreme values of this field. Skip the paddi tests if ISA v3.1 is unsupported. Some of these test cases were added by Balamuruhan S. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix conflicts with ppc-opcode.h changes, squash in .balign] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-5-jniethe5@gmail.com
2020-07-23  powerpc/sstep: Let compute tests specify a required cpu feature  (Jordan Niethe, 1 file, -0/+6)
An array of struct compute_test is used to declare tests for compute instructions. Add a cpu_feature field to struct compute_test as an optional way to specify a CPU feature that must be present. If it is not present then skip the test. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-4-jniethe5@gmail.com
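The shape of this change is roughly as sketched here; the other members of struct compute_test are abbreviated and the exact names are assumptions.

  struct compute_test {
          char *mnemonic;
          unsigned long cpu_feature;      /* 0 means "no feature required" */
          /* ... subtests elided ... */
  };

  /* in the test loop, before executing/emulating the subtests: */
  if (test->cpu_feature && !cpu_has_feature(test->cpu_feature)) {
          show_result(test->mnemonic, "SKIP (cpu feature)");
          continue;
  }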
2020-07-23  powerpc/sstep: Set NIP in instruction emulation tests  (Jordan Niethe, 1 file, -0/+3)
The tests for emulation of compute instructions execute and emulate an instruction and then compare the results to verify the emulation. In ISA v3.1 there are instructions that operate relative to the NIP. Therefore set the NIP in the regs used for the emulated instruction to the location of the executed instruction so they will give the same result. This is a rework of a patch by Balamuruhan S. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-3-jniethe5@gmail.com
2020-07-23  powerpc/sstep: Add tests for prefixed floating-point load/stores  (Jordan Niethe, 2 files, -0/+128)
Add tests for the prefixed versions of the floating-point load/stores that are currently tested. This includes the following instructions:

  * Prefixed Load Floating-Point Single (plfs)
  * Prefixed Load Floating-Point Double (plfd)
  * Prefixed Store Floating-Point Single (pstfs)
  * Prefixed Store Floating-Point Double (pstfd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-2-jniethe5@gmail.com
2020-07-23  powerpc/sstep: Add tests for prefixed integer load/stores  (Jordan Niethe, 2 files, -0/+94)
Add tests for the prefixed versions of the integer load/stores that are currently tested. This includes the following instructions:

  * Prefixed Load Doubleword (pld)
  * Prefixed Load Word and Zero (plwz)
  * Prefixed Store Doubleword (pstd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200525025923.19843-1-jniethe5@gmail.com
2020-07-22  powerpc/64s: system call support for scv/rfscv instructions  (Nicholas Piggin, 20 files, -44/+421)
Add support for the scv instruction on POWER9 and later CPUs. For now this implements the zeroth scv vector 'scv 0', as identical to 'sc' system calls, with the exception that LR is not preserved, nor are volatile CR registers, and error is not indicated with CR0[SO], but by returning a negative errno. rfscv is implemented to return from scv type system calls. It cannot be used to return from sc system calls because those are defined to preserve LR. getpid syscall throughput on POWER9 is improved by 26% (428 to 318 cycles), largely due to reducing mtmsr and mtspr. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Fix ppc64e build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611081203.995112-3-npiggin@gmail.com
2020-07-22  powerpc/64s/exception: treat NIA below __end_interrupts as soft-masked  (Nicholas Piggin, 1 file, -3/+24)
The scv instruction causes an interrupt which can enter the kernel with MSR[EE]=1, thus allowing interrupts to hit at any time. These must not be taken as normal interrupts, because they come from MSR[PR]=0 context, and yet the kernel stack is not yet set up and r13 is not set to the PACA. Treat this as a soft-masked interrupt regardless of the soft masked state. This does not affect behaviour yet, because currently all interrupts are taken with MSR[EE]=0. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611081203.995112-2-npiggin@gmail.com
2020-07-22  powerpc/perf: BHRB control to disable BHRB logic when not used  (Athira Rajeev, 3 files, -6/+48)
PowerISA v3.1 has a few updates for the Branch History Rolling Buffer (BHRB). BHRB disable is controlled via a Monitor Mode Control Register A (MMCRA) bit, namely "BHRB Recording Disable (BHRBRD)". This field controls whether BHRB entries are written when BHRB recording is enabled by other bits. This patch implements support for this BHRB disable bit. Setting this bit to 0b1 will disable the BHRB and setting it to 0b0 will have the BHRB enabled. This addresses backward compatibility (for older OS), since this bit will be cleared and hardware will be writing to the BHRB by default. This patch sets MMCRA (BHRBRD) at boot for power10 (thereby the core will run faster) and enables this feature only at runtime, i.e. on explicit need from the user. Also save/restore MMCRA in the restore path of the state-loss idle state to make sure we keep BHRB disabled if it was not enabled on request at runtime. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-12-git-send-email-atrajeev@linux.vnet.ibm.com
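Schematically, the runtime control could look like the sketch below; MMCRA_BHRB_DISABLE is assumed to be the mask for the BHRBRD bit, and the real driver wires this into its MMCRA computation rather than a standalone helper.

  /* keep BHRB recording off unless some event actually asked for a
   * branch stack; clearing BHRBRD re-enables hardware BHRB writes */
  static unsigned long adjust_bhrbrd(unsigned long mmcra, int bhrb_users)
  {
          if (!cpu_has_feature(CPU_FTR_ARCH_31))
                  return mmcra;           /* the bit only exists from ISA v3.1 */

          if (bhrb_users)
                  mmcra &= ~MMCRA_BHRB_DISABLE;
          else
                  mmcra |= MMCRA_BHRB_DISABLE;
          return mmcra;
  }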
2020-07-22  powerpc/perf: Add Power10 BHRB filter support for PERF_SAMPLE_BRANCH_IND_CALL/COND  (Athira Rajeev, 1 file, -2/+11)
PowerISA v3.1 introduces filtering support for PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support for "ind_call" and "cond" in power10_bhrb_filter_map(). Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-11-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/perf: Ignore the BHRB kernel address filtering for P10  (Athira Rajeev, 1 file, -1/+4)
Commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to userspace via BHRB buffer") added a check in bhrb_read() to filter kernel addresses from the BHRB buffer. This patch modifies it to avoid that check for PowerISA v3.1 based processors, since PowerISA v3.1 allows only MSR[PR]=1 addresses to be written to the BHRB buffer. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-10-git-send-email-atrajeev@linux.vnet.ibm.com
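The shape of the change in bhrb_read() is roughly as sketched here; the PPMU_ARCH_31 flag comes from the "Add support for ISA3.1 PMU SPRs" patch later in this series, and the exact condition is an assumption for this example.

  /* pre-v3.1 hardware can log MSR[PR]=0 addresses, so filter them unless the
   * user may see kernel addresses; v3.1 hardware never logs them */
  if (!(ppmu->flags & PPMU_ARCH_31) &&
      is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
          continue;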
2020-07-22  powerpc/perf: power10 Performance Monitoring support  (Athira Rajeev, 7 files, -11/+566)
Base enablement patch to register performance monitoring hardware support for power10. The patch introduces the raw event encoding format, defines the supported list of events and the config fields for the event attributes, and their corresponding bit values which are exported via sysfs. The patch also enhances the support function in isa207_common.c to include power10 PMU hardware. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-9-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/perf: Add Power10 PMU feature to DT CPU features  (Madhavan Srinivasan, 3 files, -0/+28)
Add a Power10 feature function to DT CPU features, along with a Power10-specific init() to initialize the PMU SPRs and set the oprofile_cpu_type and cpu_features. This will enable the performance monitoring unit (PMU) for Power10 in CPU features with "performance-monitor-power10". For Power ISA v3.1, BHRB disable is controlled via the Monitor Mode Control Register A (MMCRA) bit, namely "BHRB Recording Disable (BHRBRD)". This patch initializes MMCRA BHRBRD to disable the BHRB feature at boot for Power10. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> [mpe: Move MMCRA_BHRB_DISABLE as noted by jpn, drop CPU setup changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-8-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/xmon: Add PowerISA v3.1 PMU SPRs  (Madhavan Srinivasan, 1 file, -0/+13)
PowerISA v3.1 added three new performance monitoring unit (PMU) special purpose registers (SPRs). They are Monitor Mode Control Register 3 (MMCR3), Sampled Instruction Event Register 2 (SIER2), and Sampled Instruction Event Register 3 (SIER3). The patch adds a new dump function, dump_310_sprs(), to print these SPR values. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-7-git-send-email-atrajeev@linux.vnet.ibm.com
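A sketch of what dump_310_sprs() plausibly looks like, assuming SPRN_MMCR3/SPRN_SIER2/SPRN_SIER3 definitions in reg.h and xmon's usual printf/mfspr helpers; the exact formatting is an assumption.

  static void dump_310_sprs(void)
  {
  #ifdef CONFIG_PPC64
          if (!cpu_has_feature(CPU_FTR_ARCH_31))
                  return;

          printf("mmcr3  = %.16lx, sier2  = %.16lx, sier3  = %.16lx\n",
                 mfspr(SPRN_MMCR3), mfspr(SPRN_SIER2), mfspr(SPRN_SIER3));
  #endif
  }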
2020-07-22  KVM: PPC: Book3S HV: Save/restore new PMU registers  (Athira Rajeev, 9 files, -5/+71)
Power ISA v3.1 has added new performance monitoring unit (PMU) special purpose registers (SPRs). They are:

  Monitor Mode Control Register 3 (MMCR3)
  Sampled Instruction Event Register A (SIER2)
  Sampled Instruction Event Register B (SIER3)

Add support to save/restore these new SPRs while entering/exiting the guest. Also include changes to support KVM_REG_PPC_MMCR3/SIER2/SIER3. Add the new SPRs to the KVM API documentation. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-6-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/perf: Add support for ISA3.1 PMU SPRs  (Madhavan Srinivasan, 5 files, -0/+49)
PowerISA v3.1 includes new performance monitoring unit (PMU) special purpose registers (SPRs). They are:

  Monitor Mode Control Register 3 (MMCR3)
  Sampled Instruction Event Register 2 (SIER2)
  Sampled Instruction Event Register 3 (SIER3)

MMCR3 is added for further sampling-related configuration control. SIER2/SIER3 are added to provide additional information about the sampled instruction. The patch adds a new PPMU flag called "PPMU_ARCH_31" to support handling of these new SPRs, updates struct thread_struct to include these new SPRs, and includes MMCR3 in struct mmcr_regs. This is needed to support programming of the MMCR3 SPR during event_enable/disable. The patch also adds sysfs support for the MMCR3 SPR along with SPRN_ macros for these new PMU SPRs. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> [mpe: Rename to PPMU_ARCH_31 as noted by jpn] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-5-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/perf: Update Power PMU cache_events to u64 type  (Athira Rajeev, 11 files, -11/+11)
Events of type PERF_TYPE_HW_CACHE were described for the Power PMU as: int (*cache_events)[type][op][result]; where the type, op and result values unpacked from the event attribute config value are used to generate the raw event code at runtime. So far the event code values used to create these cache-related events were within 32 bits and the `int` type worked. In power10, some of the event codes are 64-bit values, hence update the Power PMU cache_events to the `u64` type in the `power_pmu` struct. Also propagate this change to all existing PMU driver code paths which use ppmu->cache_events. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-4-git-send-email-atrajeev@linux.vnet.ibm.com
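Illustratively, the widened member in the PMU description struct could look as follows; treat this as a fragment with the other members elided, not the full definition from perf_event_server.h.

  struct power_pmu {
          /* ... callbacks and flags elided ... */
          /* was int: power10 cache event codes no longer fit in 32 bits */
          u64 (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
                             [PERF_COUNT_HW_CACHE_OP_MAX]
                             [PERF_COUNT_HW_CACHE_RESULT_MAX];
  };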
2020-07-22  KVM: PPC: Book3S HV: Cleanup updates for kvm vcpu MMCR  (Athira Rajeev, 4 files, -9/+31)
Currently `kvm_vcpu_arch` stores all Monitor Mode Control registers in a flat array, in order: mmcr0, mmcr1, mmcra, mmcr2, mmcrs. Split this to give mmcra and mmcrs their own entries in the vcpu and use a flat array for mmcr0 to mmcr2. This patch implements this cleanup to make the code easier to read. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [mpe: Fix MMCRA/MMCR2 uapi breakage as noted by paulus] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-3-git-send-email-atrajeev@linux.vnet.ibm.com
2020-07-22  powerpc/perf: Update cpu_hw_event to use `struct` for storing MMCR registers  (Athira Rajeev, 10 files, -94/+105)
core-book3s currently uses an array to store the MMCR registers as part of the per-cpu `cpu_hw_events`. This patch does a clean up to use a `struct` to store the mmcr regs instead of an array. This will make the code easier to read and reduces the chance of subtle bugs in the future, say when new registers are added. The patch updates all relevant code that was using the MMCR array (cpuhw->mmcr[x]) to use the newly introduced `struct`. This includes the PMU driver code for the supported platforms (power5 to power9) and the ISA macros for the counter support functions. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1594996707-3727-2-git-send-email-atrajeev@linux.vnet.ibm.com
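The container introduced here is essentially a named-field version of the old array; a sketch is below, with the field list assumed from the surrounding series (which also adds mmcr3 for ISA v3.1).

  struct mmcr_regs {
          unsigned long mmcr0;
          unsigned long mmcr1;
          unsigned long mmcr2;
          unsigned long mmcra;
          unsigned long mmcr3;    /* new in ISA v3.1 parts */
  };

  /* e.g. cpuhw->mmcr[0] becomes cpuhw->mmcr.mmcr0 */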
2020-07-22  powerpc/perf: Fix missing is_sier_aviable() during build  (Madhavan Srinivasan, 1 file, -0/+2)
Compilation error:

  arch/powerpc/perf/perf_regs.c:80: undefined reference to `.is_sier_available'

Currently is_sier_available() is part of core-book3s.c, which is added to the build based on CONFIG_PPC_PERF_CTRS. A config with CONFIG_PERF_EVENTS and without CONFIG_PPC_PERF_CTRS will have a build break because of the missing is_sier_available(). In practice it only breaks when CONFIG_FSL_EMB_PERF_EVENT=n because that also guards the usage of is_sier_available(). That only happens with CONFIG_PPC_BOOK3E_64=y and CONFIG_FSL_SOC_BOOKE=n. The patch adds is_sier_available() in asm/perf_event.h to fix the build break for configs missing CONFIG_PPC_PERF_CTRS. Fixes: 333804dc3b7a ("powerpc/perf: Update perf_regs structure to include SIER") Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> [mpe: Add detail about CONFIG_FSL_SOC_BOOKE] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200614083604.302611-1-maddy@linux.ibm.com
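The fix plausibly takes the shape of a static fallback in asm/perf_event.h, roughly like this sketch; the exact #ifdef structure is an assumption.

  #ifdef CONFIG_PPC_PERF_CTRS
  extern bool is_sier_available(void);
  #else
  static inline bool is_sier_available(void) { return false; }
  #endif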