2020-10-19  powerpc/smp: Remove unnecessary variable  (Srikar Dronamraju; 1 file, -9/+4)

Commit 3ab33d6dc3e9 ("powerpc/smp: Optimize update_mask_by_l2") introduced
submask_fn in update_mask_by_l2 to track the right submask. However commit
f6606cfdfbcd ("powerpc/smp: Dont assume l2-cache to be superset of sibling")
introduced sibling_mask in update_mask_by_l2 to track the same submask.
Remove sibling_mask in favour of submask_fn.

Signed-off-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-16  powerpc/mce: Avoid nmi_enter/exit in real mode on pseries hash  (Ganesh Goudar; 1 file, -4/+3)

Use of nmi_enter/exit in the real mode handler causes the kernel to panic
and reboot when injecting an SLB multihit on a pseries machine running in
hash MMU mode, because these calls try to access memory outside the RMO
region from the real mode handler, where translation is disabled. Add a
check to avoid using these calls on pseries machines running in hash MMU
mode.

Fixes: 116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting")
Cc: [email protected] # v5.8+
Signed-off-by: Ganesh Goudar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

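The guard can be pictured like this (a sketch only, not the exact hunk
from the patch; the surrounding handler code is omitted):

	/*
	 * nmi_enter()/nmi_exit() touch per-CPU data that may live outside
	 * the RMO region, which is fatal in real mode on pseries hash.
	 * Only call them when translation is on or when running radix.
	 */
	if (radix_enabled() || (mfmsr() & MSR_DR))
		nmi_enter();
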
2020-10-16  powerpc/opal_elog: Handle multiple writes to ack attribute  (Aneesh Kumar K.V; 1 file, -3/+8)

Even though we use a self-removing sysfs helper, we still need to make
sure we do the final kobject delete conditionally. sysfs_remove_file_self()
handles parallel calls to remove the sysfs attribute file and returns true
only in the caller that removed the attribute file; the other parallel
callers get false. Do the final kobject delete based on the return value
of sysfs_remove_file_self().

Signed-off-by: Aneesh Kumar K.V <[email protected]>
Reviewed-by: Mahesh Salgaonkar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

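The shape of that pattern is roughly the following (a sketch; the
function name and surrounding locking are simplified):

	static ssize_t elog_ack_store(struct kobject *kobj,
				      struct kobj_attribute *attr,
				      const char *buf, size_t count)
	{
		/*
		 * sysfs_remove_file_self() returns true for exactly one
		 * of any parallel callers; only that caller may drop the
		 * final kobject reference.
		 */
		if (sysfs_remove_file_self(kobj, &attr->attr))
			kobject_put(kobj);
		return count;
	}
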
2020-10-15  Revert "powerpc/pci: unmap legacy INTx interrupts when a PHB is removed"  (Qian Cai; 2 files, -120/+0)

This reverts commit 3a3181e16fbde752007759f8759d25e0ff1fc425 which causes
memory corruptions on POWER9 powernv, eg:

pci_bus 0035:08: busn_res: [bus 08-0c] is released
=============================================================================
BUG kmalloc-16 (Tainted: G W O ): Object already free
-----------------------------------------------------------------------------
Disabling lock debugging due to kernel taint
INFO: Allocated in pcibios_scan_phb+0x104/0x3e0 age=1960714 cpu=4 pid=1
 __slab_alloc+0xa4/0xf0
 __kmalloc+0x294/0x330
 pcibios_scan_phb+0x104/0x3e0
 pcibios_init+0x84/0x124
 do_one_initcall+0xac/0x528
 kernel_init_freeable+0x35c/0x3fc
 kernel_init+0x24/0x148
 ret_from_kernel_thread+0x5c/0x80
INFO: Freed in pcibios_remove_bus+0x70/0x90 age=0 cpu=16 pid=1717146
 kfree+0x49c/0x510
 pcibios_remove_bus+0x70/0x90
 pci_remove_bus+0xe4/0x110
 pci_remove_bus_device+0x74/0x170
 pci_remove_bus_device+0x4c/0x170
 pci_stop_and_remove_bus_device_locked+0x34/0x50
 remove_store+0xc0/0xe0
 dev_attr_store+0x30/0x50
 sysfs_kf_write+0x68/0xb0
 kernfs_fop_write+0x114/0x260
 vfs_write+0xe4/0x260
 ksys_write+0x74/0x130
 system_call_exception+0xf8/0x1d0
 system_call_common+0xe8/0x218
INFO: Slab 0x0000000099caaf22 objects=178 used=174 fp=0x00000000006a64b0 flags=0x7fff8000000201
INFO: Object 0x00000000f360132d @offset=30192 fp=0x0000000000000000

Signed-off-by: Qian Cai <[email protected]>
Acked-by: Oliver O'Halloran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-14  selftests/powerpc: Fix eeh-basic.sh exit codes  (Oliver O'Halloran; 1 file, -3/+6)

The kselftests test running infrastructure expects tests to finish with an
exit code of 4 if the test decided it should be skipped. Currently
eeh-basic.sh exits with the number of devices that failed to recover, so
if four devices didn't recover we'll report a skip instead of a fail. Fix
this by checking if the return code is non-zero and reporting success and
failure by returning 0 or 1 respectively. For the cases where we should
actually skip, return 4.

Fixes: 85d86c8aa52e ("selftests/powerpc: Add basic EEH selftest")
Signed-off-by: Oliver O'Halloran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier  (Srikar Dronamraju; 1 file, -3/+6)

The patch avoids allocating cpufreq_policy on the stack, hence fixing the
frame size overflow in powernv_cpufreq_reboot_notifier:

drivers/cpufreq/powernv-cpufreq.c: In function powernv_cpufreq_reboot_notifier:
drivers/cpufreq/powernv-cpufreq.c:906:1: error: the frame size of 2064 bytes is larger than 2048 bytes

Fixes: cf30af76 ("cpufreq: powernv: Set the cpus to nominal frequency during reboot/kexec")
Signed-off-by: Srikar Dronamraju <[email protected]>
Reviewed-by: Daniel Axtens <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

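One way to avoid the on-stack structure (a sketch, assuming the driver's
existing helpers powernv_cpufreq_target_index() and get_nominal_index();
error handling trimmed):

	static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
						   unsigned long action, void *unused)
	{
		struct cpufreq_policy *cpu_policy;
		int cpu;

		for_each_online_cpu(cpu) {
			/* Refcounted pointer instead of a ~2 kB struct on the stack. */
			cpu_policy = cpufreq_cpu_get(cpu);
			if (!cpu_policy)
				continue;
			powernv_cpufreq_target_index(cpu_policy, get_nominal_index());
			cpufreq_cpu_put(cpu_policy);
		}

		return NOTIFY_DONE;
	}
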
2020-10-08  powerpc/time: Make get_tb() common to PPC32 and PPC64  (Christophe Leroy; 1 file, -7/+3)

mftbu() is always defined now, so the #ifdef can be removed and replaced
by an IS_ENABLED(CONFIG_PPC64) inside the PPC32 version of get_tb().

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/47e49717d2643169ffcbe5d507f184cf49f0fe95.1601556145.git.christophe.leroy@csgroup.eu

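The merged helper can be pictured like this (a sketch of the approach;
the actual header may differ in detail):

	static inline u64 get_tb(void)
	{
		unsigned int tbl, tbu, tbu2;

		/* On PPC64 a single mftb reads the whole timebase. */
		if (IS_ENABLED(CONFIG_PPC64))
			return mftb();

		/* On PPC32, re-read TBU until it is stable across the TBL read. */
		do {
			tbu  = mftbu();
			tbl  = mftb();
			tbu2 = mftbu();
		} while (tbu != tbu2);

		return ((u64)tbu << 32) | tbl;
	}
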
2020-10-08  powerpc/time: Make get_tbl() common to PPC32 and PPC64  (Christophe Leroy; 1 file, -7/+0)

On PPC64, get_tbl() is defined as an alias of get_tb(), which returns the
result of mftb(). That is exactly the same as what the PPC32 version does.
We don't need two versions. Remove the PPC64 definition of get_tbl() and
use the PPC32 version for both.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/a8eaabb87d69534e533ebac805163e08146e05bd.1601556145.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/time: Remove get_tbu()  (Christophe Leroy; 1 file, -5/+0)

get_tbu() is redundant with mftbu() and is not used anymore. Remove it.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/1746e2d11ea90c3f45877e1fcc6c79ce96cf6b98.1601556145.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/time: Avoid using get_tbl() and get_tbu() internally  (Christophe Leroy; 3 files, -7/+7)

get_tbl() is confusing as it returns the content of the TBL register on
PPC32 but the concatenation of TBL and TBU on PPC64. Use mftb() instead.
Do the same with get_tbu() for consistency, although its name is less
confusing.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/41573406a4eab98838decaa91649086fef1e6119.1601556145.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/time: Make mftb() common to PPC32 and PPC64  (Christophe Leroy; 1 file, -10/+4)

No need to have two versions that are identical. CONFIG_PPC_CELL is only
selected by PPC64 targets. CONFIG_E500 is the only PPC64 target selecting
CONFIG_FSL_BOOK3E.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/6bf23ec744aab4ba63506a011f6a145ea35d620d.1601556145.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/time: Rename mftbl() to mftb()  (Christophe Leroy; 2 files, -4/+3)

On PPC64, we have mftb(). On PPC32, we have mftbl() and a #define
mftb() mftbl(). mftb() and mftbl() are equivalent: their purpose is to
read the content of SPRN_TRBL, as returned by the 'mftb' simplified
instruction. binutils seems to define the 'mftbl' instruction as an
equivalent of 'mftb'. However, in both the 32-bit and 64-bit
documentation, only 'mftb' is defined, and when performing a disassembly
with objdump, the displayed instruction is 'mftb'.

No need to have two ways to do the same thing with different names;
rename mftbl() so that we have only mftb().

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/94dc68d3d9ef9eb549796d4b938b6ba0305a049b.1601556145.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/32s: Remove #ifdef CONFIG_PPC_BOOK3S_32 in head_book3s_32.S  (Christophe Leroy; 1 file, -15/+0)

head_book3s_32.S is only built when CONFIG_PPC_BOOK3S_32 is selected.
Remove all conditions based on CONFIG_PPC_BOOK3S_32 in the file.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/1b68632425d8866d147aea9005004e4594672211.1601975100.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/32s: Rename head_32.S to head_book3s_32.S  (Christophe Leroy; 3 files, -3/+5)

Unlike PPC64, which has a single head_64.S, PPC32 has multiple head files.
head_32.S is selected by default based on the value of BITS and overridden
based on some CONFIG_ values. This suggests that it may be selected by
different types of PPC32 platforms, but in fact it ends up being selected
by book3s/32 only.

Make that explicit by:
- Not doing any default selection based on BITS.
- Renaming head_32.S to head_book3s_32.S.
- Getting head_book3s_32.S selected only by CONFIG_PPC_BOOK3S_32.

Signed-off-by: Christophe Leroy <[email protected]>
[mpe: Fix head_$(BITS).o reference in arch/powerpc/Makefile]
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/319d379f696412681c66a987cc75e6abf8f958d2.1601975100.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/32s: Setup the early hash table at all time.  (Christophe Leroy; 4 files, -38/+17)

At the time being, an early hash table is set up when CONFIG_KASAN is
selected. There is nothing wrong with setting up such an early hash table
all the time, even if it is not used. This is a statically allocated
256 kB table which lies in the init data section.

This makes the code simpler and may in the future allow setting up early
IO mappings with fixmap instead of hard coding BATs.

Put create_hpte() and flush_hash_pages() in the .ref.text section in order
to avoid a warning for the reference to early_hash[]. This reference is
removed by MMU_init_hw_patch() before init memory is freed.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/b8f8101c368b8a6451844a58d7bd7d83c14cf2aa.1601566529.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/time: Remove ifdef in get_dec() and set_dec()  (Christophe Leroy; 3 files, -14/+12)

Move the SPRN_PIT definition into reg.h. This allows removing the ifdefs
in get_dec() and set_dec() and makes them more readable.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/3c9a6eb0fc040868ac59be66f338d08fd017668d.1601549945.git.christophe.leroy@csgroup.eu

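With SPRN_PIT visible from reg.h, the helpers can be written without
ifdefs, roughly like this (a simplified sketch; the real helpers carry
additional platform quirks and may use different types):

	static inline unsigned int get_dec(void)
	{
		/* 40x uses the PIT register as its decrementer. */
		if (IS_ENABLED(CONFIG_40x))
			return mfspr(SPRN_PIT);

		return mfspr(SPRN_DEC);
	}

	static inline void set_dec(unsigned int val)
	{
		if (IS_ENABLED(CONFIG_40x))
			mtspr(SPRN_PIT, val);
		else
			mtspr(SPRN_DEC, val);
	}
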
2020-10-08  powerpc: Remove get_tb_or_rtc()  (Christophe Leroy; 3 files, -9/+4)

The 601 is gone, so get_tb_or_rtc() is equivalent to get_tb(). Replace the
former by the latter.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/3e8a13ee83418630c753c30cb722ae682d5b2d39.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Remove __USE_RTC()  (Christophe Leroy; 2 files, -71/+9)

Now that the PowerPC 601 is gone, __USE_RTC() is never true. Remove it.
That also leads to removing get_rtc() and get_rtcl().

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/4757e1ed21fe1968c761ae081d1f3d790a9673f8.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Tidy up a bit after removal of PowerPC 601.  (Christophe Leroy; 2 files, -32/+24)

The removal of the 601 left some standalone blocks from former if/else.
Drop the { } and re-indent.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/31c4cd093963f22831bf388449056ee045533d3b.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Remove support for PowerPC 601  (Christophe Leroy; 17 files, -206/+17)

PowerPC 601 has been retired. Remove all associated specific code.

CPU_FTRS_PPC601 has CPU_FTR_COHERENT_ICACHE and CPU_FTR_COMMON.
CPU_FTR_COMMON is already present via other CPU_FTRS. None of the
remaining CPUs selects CPU_FTR_COHERENT_ICACHE. So CPU_FTRS_PPC601 can be
removed from the possible features, hence it can be removed completely.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/60b725d55e21beec3335175c20b77903ff98284f.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Remove PowerPC 601  (Christophe Leroy; 2 files, -24/+2)

The PowerPC 601 is 25 years old. It is not selected by any defconfig. It
requires a lot of special handling as it deviates from the standard 6xx.
Retire it.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/00a6948d659e017f8ca63437d1384222c3aede57.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Drop SYNC_601() ISYNC_601() and SYNC()  (Christophe Leroy; 7 files, -45/+2)

Those macros are now empty at all times. Drop them.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/7990bb63fc53e460bfa94f8040184881d9e6fbc3.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Remove CONFIG_PPC601_SYNC_FIX  (Christophe Leroy; 2 files, -21/+0)

This config option isn't in any defconfig. The very first versions of the
PowerPC 601 have a bug which requires additional sync before and/or after
some instructions. This was more than 25 years ago and the time has come
to retire those buggy versions of the 601 from the kernel.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/55b46bff16705b1ae7bf0a60ccd522b1010ebf75.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc: Remove SYNC on non 6xx  (Christophe Leroy; 3 files, -3/+0)

SYNC is useful for the PowerPC 601 only. On everything else, SYNC is
empty. Remove it from code that is not made to run on 6xx.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/27951fa6c9a8f80724d1bc81a6117ac32343a55d.1601362098.git.christophe.leroy@csgroup.eu

2020-10-08  powerpc/papr_scm: Add PAPR command family to pass-through command-set  (Vaibhav Jain; 1 file, -0/+3)

Add NVDIMM_FAMILY_PAPR to the list of valid 'dimm_family_mask' values
acceptable by papr_scm. This is needed because, since commit 92fe2aa859f5
("libnvdimm: Validate command family indices"), libnvdimm performs a
validation of 'nd_cmd_pkg.nd_family' received as part of ND_CMD_CALL
processing, to ensure only known command families can use the general
ND_CMD_CALL pass-through functionality. Without this change, ND_CMD_CALL
pass-throughs targeting NVDIMM_FAMILY_PAPR error out with -EINVAL.

Fixes: 92fe2aa859f5 ("libnvdimm: Validate command family indices")
Signed-off-by: Vaibhav Jain <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  powerpc/lmb-size: Use addr #size-cells value when fetching lmb-size  (Aneesh Kumar K.V; 2 files, -7/+13)

Make it consistent with other usages.

Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  powerpc/book3s64/radix: Make radix_mem_block_size 64bit  (Aneesh Kumar K.V; 2 files, -2/+2)

Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"), make sure the different variables tracking lmb_size are updated
to be 64 bit.

Fixes: af9d00e93a4f ("powerpc/mm/radix: Create separate mappings for hot-plugged memory")
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  powerpc/memhotplug: Make lmb size 64bit  (Aneesh Kumar K.V; 1 file, -14/+29)

Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"), make sure the different variables tracking lmb_size are updated
to be 64 bit. This was found by code audit.

Cc: [email protected]
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  powerpc/drmem: Make lmb_size 64 bit  (Aneesh Kumar K.V; 1 file, -2/+2)

Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"), make sure the different variables tracking lmb_size are updated
to be 64 bit. This was found by code audit.

Cc: [email protected]
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Acked-by: Nathan Lynch <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-08  powerpc/security: Fix link stack flush instruction  (Nicholas Piggin; 3 files, -13/+33)

The inline execution path for the hardware assisted branch flush
instruction failed to set CTR to the correct value before bcctr, causing
a crash when the feature is enabled.

Fixes: 4d24e21cc694 ("powerpc/security: Allow for processors that flush the link stack using the special bcctr")
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-07  powerpc/hv-gpci: Add sysfs files inside hv-gpci device to show cpumask  (Kajol Jain; 2 files, -0/+25)

This patch adds a cpumask attribute to the hv_gpci PMU, along with ABI
documentation. The primary use of exposing the cpumask is for the perf
tool, which can parse the driver sysfs folder and understand the cpumask
file. Having a cpumask file reduces the number of perf command line
parameters (it avoids the "-C" option in the perf tool command line). It
can also tell the user which CPU is currently used to retrieve the
counter data.

command:# cat /sys/devices/hv_gpci/cpumask
0

Signed-off-by: Kajol Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

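A typical shape for such an attribute (a sketch; the mask variable name
hv_gpci_cpumask is assumed, not taken from the patch):

	static cpumask_t hv_gpci_cpumask;

	static ssize_t cpumask_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
	{
		/* Print the designated-CPU mask in list format, e.g. "0". */
		return cpumap_print_to_pagebuf(true, buf, &hv_gpci_cpumask);
	}
	static DEVICE_ATTR_RO(cpumask);
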
2020-10-07  powerpc/perf/hv-gpci: Add cpu hotplug support  (Kajol Jain; 2 files, -0/+47)

This patch adds CPU hotplug functions to the hv_gpci PMU. A new
cpuhp_state enum value, CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE, is added.

The online callback function updates the cpumask only if it is empty, as
the primary intention of adding hotplug support is to designate a CPU to
make the HCALL to collect the counter data. The offline function tests
and clears the corresponding CPU in the cpumask and updates the cpumask
to any other active CPU.

Signed-off-by: Kajol Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

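A sketch of the two callbacks and their registration (the mask name
hv_gpci_cpumask and the callback identifiers are illustrative):

	static int ppc_hv_gpci_cpu_online(unsigned int cpu)
	{
		/* The first CPU to come online claims the HCALL duty. */
		if (cpumask_empty(&hv_gpci_cpumask))
			cpumask_set_cpu(cpu, &hv_gpci_cpumask);
		return 0;
	}

	static int ppc_hv_gpci_cpu_offline(unsigned int cpu)
	{
		int target;

		/* Nothing to do unless the outgoing CPU held the duty. */
		if (!cpumask_test_and_clear_cpu(cpu, &hv_gpci_cpumask))
			return 0;

		/* Hand the duty to any other online CPU. */
		target = cpumask_any_but(cpu_online_mask, cpu);
		if (target < nr_cpu_ids)
			cpumask_set_cpu(target, &hv_gpci_cpumask);
		return 0;
	}

	/* At init time: */
	cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE,
			  "perf/powerpc/hv_gpci:online",
			  ppc_hv_gpci_cpu_online, ppc_hv_gpci_cpu_offline);
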
2020-10-07  Documentation/ABI: Add ABI documentation for hv-gpci format  (Kajol Jain; 1 file, -0/+31)

This patch adds ABI documentation for the hv-gpci event format.

Signed-off-by: Kajol Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-07  Documentation/ABI: Add ABI documentation for hv-24x7 format  (Kajol Jain; 1 file, -0/+25)

This patch adds ABI documentation for the hv-24x7 format.

Signed-off-by: Kajol Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-07  powerpc/perf/hv-gpci: Fix starting index value  (Kajol Jain; 1 file, -3/+3)

Commit 9e9f60108423f ("powerpc/perf/{hv-gpci, hv-common}: generate
requests with counters annotated") adds a framework for defining gpci
counters. In that patch, the starting_index value is set to
'0xffffffffffffffff', which is wrong as starting_index is only 32 bits
wide. Because of this, if we try to run an hv-gpci event we get an error.

On a power9 machine:

command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000
event syntax error: '..bie_count_and_time_tlbie_instructions_issued/'
                     \___ value too big for format, maximum is 4294967295

This patch fixes the issue by changing the starting_index value to
'0xffffffff'.

After this patch:

command#: perf stat -e hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/ -C 0 -I 1000
     1.000085786          1,024      hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/
     2.000287818          1,024      hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/
     2.439113909         17,408      hv_gpci/system_tlbie_count_and_time_tlbie_instructions_issued/

Fixes: 9e9f60108423 ("powerpc/perf/{hv-gpci, hv-common}: generate requests with counters annotated")
Signed-off-by: Kajol Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-07  powerpc/pseries/eeh: Fix use of uninitialised variable  (Oliver O'Halloran; 1 file, -5/+4)

If the RTAS call to query the PE address for a device fails, we jump to
the err: label where an error message is printed along with the return
code. However, the printed return code is from the "ret" variable, which
isn't set at that point since we assigned the result to "addr" instead.
Fix this by consistently using the "ret" variable for the result of the
RTAS call helpers and dropping the "addr" local variable.

Fixes: 98ba956f6a38 ("powerpc/pseries/eeh: Rework device EEH PE determination")
Signed-off-by: Oliver O'Halloran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

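The bug pattern, reduced to its essentials (a sketch; the helper name and
message text are illustrative, not the driver's code):

	int ret = 0, addr;

	addr = rtas_get_pe_addr(pdn);	/* hypothetical helper */
	if (addr < 0)
		goto err;
	return addr;
err:
	/*
	 * Bug: prints "ret", which was never assigned; the fix is to use
	 * "ret" for the helper's result throughout.
	 */
	pr_err("failed to get PE address: %d\n", ret);
	return ret;
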
2020-10-07  powerpc/eeh: Delete eeh_pe->config_addr  (Oliver O'Halloran; 3 files, -4/+3)

The eeh_pe->config_addr field was supposed to be removed in commit
35d64734b643 ("powerpc/eeh: Clean up PE addressing") which made it largely
unused. Finish the job.

Signed-off-by: Oliver O'Halloran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  pseries/hotplug-memory: hot-add: skip redundant LMB lookup  (Scott Cheloha; 3 files, -3/+8)

During memory hot-add, dlpar_add_lmb() calls memory_add_physaddr_to_nid()
to determine which node id (nid) to use when later calling __add_memory().
This is wasteful. On pseries, memory_add_physaddr_to_nid() finds an
appropriate nid for a given address by looking up the LMB containing the
address and then passing that LMB to of_drconf_to_nid_single() to get the
nid. In dlpar_add_lmb() we get this address from the LMB itself.

In short, we have a pointer to an LMB and then we are searching for that
LMB *again* in order to find its nid. If we call
of_drconf_to_nid_single() directly from dlpar_add_lmb() we can skip the
redundant lookup. The only error handling we need to duplicate from
memory_add_physaddr_to_nid() is the fallback to the default nid when
of_drconf_to_nid_single() returns -1 (NUMA_NO_NODE) or an invalid nid.

Skipping the extra lookup makes hot-add operations faster, especially on
machines with many LMBs. Consider an LPAR with 126976 LMBs. In one test,
hot-adding 126000 LMBs on an unpatched kernel took ~3.5 hours while a
patched kernel completed the same operation in ~2 hours:

Unpatched (12450 seconds):
Sep  9 04:06:31 ltc-brazos1 drmgr[810169]: drmgr: -c mem -a -q 126000
Sep  9 04:06:31 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s)
[...]
Sep  9 07:34:01 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added

Patched (7065 seconds):
Sep  8 21:49:57 ltc-brazos1 drmgr[877703]: drmgr: -c mem -a -q 126000
Sep  8 21:49:57 ltc-brazos1 kernel: pseries-hotplug-mem: Attempting to hot-add 126000 LMB(s)
[...]
Sep  8 23:27:42 ltc-brazos1 kernel: pseries-hotplug-mem: Memory at 20000000 (drc index 80000002) was hot-added

It should be noted that the speedup grows more substantial when
hot-adding LMBs at the end of the drconf range. This is because we are
skipping a linear LMB search. To see the distinction, consider a smaller
hot-add test on the same LPAR. A perf-stat run with 10 iterations showed
that hot-adding 4096 LMBs completed less than 1 second faster on a
patched kernel:

Unpatched:

 Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs):

        104,753.42 msec task-clock                #    0.992 CPUs utilized            ( +-  0.55% )
             4,708      context-switches          #    0.045 K/sec                    ( +-  0.69% )
             2,444      cpu-migrations            #    0.023 K/sec                    ( +-  1.25% )
               394      page-faults               #    0.004 K/sec                    ( +-  0.22% )
   445,902,503,057      cycles                    #    4.257 GHz                      ( +-  0.55% )  (66.67%)
     8,558,376,740      stalled-cycles-frontend   #    1.92% frontend cycles idle     ( +-  0.88% )  (49.99%)
   300,346,181,651      stalled-cycles-backend    #   67.36% backend cycles idle      ( +-  0.76% )  (50.01%)
   258,091,488,691      instructions              #    0.58  insn per cycle
                                                  #    1.16  stalled cycles per insn  ( +-  0.22% )  (66.67%)
    70,568,169,256      branches                  #  673.660 M/sec                    ( +-  0.17% )  (50.01%)
     3,100,725,426      branch-misses             #    4.39% of all branches          ( +-  0.20% )  (49.99%)

           105.583 +- 0.589 seconds time elapsed  ( +-  0.56% )

Patched:

 Performance counter stats for 'drmgr -c mem -a -q 4096' (10 runs):

        104,055.69 msec task-clock                #    0.993 CPUs utilized            ( +-  0.32% )
             4,606      context-switches          #    0.044 K/sec                    ( +-  0.20% )
             2,463      cpu-migrations            #    0.024 K/sec                    ( +-  0.93% )
               394      page-faults               #    0.004 K/sec                    ( +-  0.25% )
   442,951,129,921      cycles                    #    4.257 GHz                      ( +-  0.32% )  (66.66%)
     8,710,413,329      stalled-cycles-frontend   #    1.97% frontend cycles idle     ( +-  0.47% )  (50.06%)
   299,656,905,836      stalled-cycles-backend    #   67.65% backend cycles idle      ( +-  0.39% )  (50.02%)
   252,731,168,193      instructions              #    0.57  insn per cycle
                                                  #    1.19  stalled cycles per insn  ( +-  0.20% )  (66.66%)
    68,902,851,121      branches                  #  662.173 M/sec                    ( +-  0.13% )  (49.94%)
     3,100,242,882      branch-misses             #    4.50% of all branches          ( +-  0.15% )  (49.98%)

           104.829 +- 0.325 seconds time elapsed  ( +-  0.31% )

This is consistent. An add-by-count hot-add operation adds LMBs greedily,
so LMBs near the start of the drconf range are considered first. On an
otherwise idle LPAR with so many LMBs we would expect to find the LMBs we
need near the start of the drconf range, hence the smaller speedup.

Signed-off-by: Scott Cheloha <[email protected]>
Reviewed-by: Laurent Dufour <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

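The shape of the change (a sketch under the assumption that
of_drconf_to_nid_single() takes the LMB directly and that the fallback
mirrors memory_add_physaddr_to_nid(); the rest of the hot-add path is
omitted):

	static int dlpar_add_lmb(struct drmem_lmb *lmb)
	{
		int nid, rc;

		/*
		 * We already hold the LMB, so resolve its nid directly
		 * instead of re-searching the drconf array by address.
		 */
		nid = of_drconf_to_nid_single(lmb);
		if (nid < 0 || !node_possible(nid))
			nid = first_online_node;	/* default-nid fallback */

		rc = __add_memory(nid, lmb->base_addr, drmem_lmb_size());
		/* (remainder of the hot-add path unchanged) */
		return rc;
	}
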
2020-10-06  selftests/powerpc: Add a rtas_filter selftest  (Andrew Donnellan; 2 files, -1/+286)

Add a selftest to test the basic functionality of CONFIG_RTAS_FILTER.

Signed-off-by: Andrew Donnellan <[email protected]>
[mpe: Change rmo_start/end to 32-bit to avoid build errors on ppc64]
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/rtas: Restrict RTAS requests from userspace  (Andrew Donnellan; 2 files, -0/+166)

A number of userspace utilities depend on making calls to RTAS to
retrieve information and update various things. The existing API through
which we expose RTAS to userspace exposes more RTAS functionality than we
actually need, through the sys_rtas syscall, which allows root (or anyone
with CAP_SYS_ADMIN) to make any RTAS call they want with arbitrary
arguments.

Many RTAS calls take the address of a buffer as an argument, and it's up
to the caller to specify the physical address of the buffer. We allocate
a buffer (the "RMO buffer") in the Real Memory Area that RTAS can access,
and then expose the physical address and size of this buffer in
/proc/powerpc/rtas/rmo_buffer. Userspace is expected to read this
address, poke at the buffer using /dev/mem, and pass an address in the
RMO buffer to the RTAS call.

However, there's nothing stopping the caller from specifying whatever
address they want in the RTAS call, and it's easy to construct a series
of RTAS calls that can overwrite arbitrary bytes (even without /dev/mem
access). Additionally, there are some RTAS calls that do potentially
dangerous things and for which there are no legitimate userspace use
cases.

In the past, this would not have been a particularly big deal as it was
assumed that root could modify all system state freely, but with Secure
Boot and lockdown we need to care about this.

We can't fundamentally change the ABI at this point, however we can
address this by implementing a filter that checks RTAS calls against a
list of permitted calls and forces the caller to use addresses within the
RMO buffer. The list is based off the list of calls that are used by the
librtas userspace library, and has been tested with a number of existing
userspace RTAS utilities. For compatibility with any applications we are
not aware of that require other calls, the filter can be turned off at
build time.

Cc: [email protected]
Reported-by: Daniel Axtens <[email protected]>
Signed-off-by: Andrew Donnellan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

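The filter can be pictured as an allow-list keyed by call name, with
metadata saying which arguments carry buffer addresses that must land in
the RMO buffer (a sketch; the field and function names are illustrative
and the real table is larger):

	struct rtas_filter {
		const char *name;
		int token;
		/* Indices of args carrying a buffer address / size, -1 if none. */
		int buf_idx1, size_idx1;
		int buf_idx2, size_idx2;
	};

	static struct rtas_filter rtas_filters[] = {
		{ "ibm,get-system-parameter", -1,  1,  2, -1, -1 },
		{ "ibm,set-system-parameter", -1,  1, -1, -1, -1 },
		/* ... one entry per permitted call ... */
	};

	static bool block_rtas_call(int token, int nargs,
				    struct rtas_args *args)
	{
		/*
		 * Reject tokens not in the table; for known calls, require
		 * every buffer argument to fall within the RMO buffer.
		 */
		...
	}
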
2020-10-06  powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints  (Athira Rajeev; 1 file, -0/+10)

The PMU counter support functions enforce event constraints for a group
of events to check if all events in the group can be monitored. In case
of event codes using PMC5 and PMC6 (500fa and 600f4 respectively), not
all constraints are applicable, say the threshold or sample bits. But the
current code includes PMC5 and PMC6 in some group constraints (like the
IC_DC qualifier bits) which are actually not applicable, and hence
results in those events not getting counted when scheduled along with a
group of other events. This patch fixes that by excluding PMC5/6 from
constraints which are not relevant for them.

Fixes: 7ffd948 ("powerpc/perf: factor out power8 pmu functions")
Signed-off-by: Athira Rajeev <[email protected]>
Reviewed-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Optimize update_coregroup_mask  (Srikar Dronamraju; 1 file, -8/+22)

All threads of an SMT4/SMT8 core can either be part of a CPU's coregroup
mask or outside the coregroup. Use this relation to reduce the number of
iterations needed to find all the CPUs that share the same coregroup.

Use a temporary mask to iterate through the CPUs that may share the
coregroup mask. Also, instead of setting one CPU at a time into
cpu_coregroup_mask, copy the SMT4/SMT8/submask in one shot.

Signed-off-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Move coregroup mask updation to a new function  (Srikar Dronamraju; 1 file, -13/+19)

Move the logic for updating the coregroup mask of a CPU to its own
function. This will help in reworking the update of the coregroup mask in
a subsequent patch.

Signed-off-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Optimize update_mask_by_l2  (Srikar Dronamraju; 1 file, -6/+45)

All threads of an SMT4 core can either be part of this CPU's l2-cache
mask or not related to this CPU's l2-cache mask. Use this relation to
reduce the number of iterations needed to find all the CPUs that share
the same l2-cache.

Use a temporary mask to iterate through the CPUs that may share the
l2_cache mask. Also, instead of setting one CPU at a time into
cpu_l2_cache_mask, copy the SMT4/sub mask in one shot.

Signed-off-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

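The iteration pattern can be sketched like this (illustrative; the
predicate same_l2() and the submask_fn stand-in are not the code's actual
names):

	cpumask_var_t mask;
	int i;

	alloc_cpumask_var_node(&mask, GFP_KERNEL, cpu_to_node(cpu));
	/* Only CPUs in this CPU's node can share its L2. */
	cpumask_and(mask, cpu_online_mask, cpu_cpu_mask(cpu));

	for_each_cpu(i, mask) {
		if (same_l2(cpu, i)) {
			/* Copy the whole SMT submask in one shot... */
			cpumask_or(cpu_l2_cache_mask(cpu),
				   cpu_l2_cache_mask(cpu), submask_fn(i));
			cpumask_andnot(mask, mask, submask_fn(i));
		} else {
			/* ...and skip all siblings of a non-matching CPU. */
			cpumask_andnot(mask, mask, cpu_sibling_mask(i));
		}
	}
	free_cpumask_var(mask);
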
2020-10-06  powerpc/smp: Check for duplicate topologies and consolidate  (Srikar Dronamraju; 1 file, -0/+26)

CACHE and COREGROUP domains are now part of the default topology. However,
on systems that don't support CACHE or COREGROUP, these domains will
eventually be degenerated. The degeneration happens per CPU. Note that the
current fixup_topology() logic ensures that the mask of a domain that is
not supported on the current platform is set to the previous domain.

Instead of waiting for the scheduler to degenerate the domains, try to
consolidate them based on their masks and sd_flags. This is done just
before setting the scheduler topology.

Signed-off-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Depend on cpu_l1_cache_map when adding CPUs  (Srikar Dronamraju; 1 file, -4/+3)

Currently on hotplug/hotunplug, a CPU iterates through all the CPUs in
its core to find threads in its thread group. However, this info is
already captured in cpu_l1_cache_map. Hence reduce the iterations and
clean up the add_cpu_to_smallcore_masks function.

Signed-off-by: Srikar Dronamraju <[email protected]>
Tested-by: Satheesh Rajendran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Stop passing mask to update_mask_by_l2  (Srikar Dronamraju; 1 file, -4/+4)

update_mask_by_l2 is called only once, but it takes cpu_l2_cache_mask as
a parameter. Instead of passing cpu_l2_cache_mask, use it directly in
update_mask_by_l2.

Signed-off-by: Srikar Dronamraju <[email protected]>
Tested-by: Satheesh Rajendran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Limit CPUs traversed to within a node.  (Srikar Dronamraju; 1 file, -1/+1)

All the arch specific topology cpumasks are within a node/DIE. However,
when setting these per-CPU cpumasks, the system traverses through all the
online CPUs. This is redundant. Reduce the traversal to only the CPUs
that are online in the node to which the CPU belongs.

Signed-off-by: Srikar Dronamraju <[email protected]>
Tested-by: Satheesh Rajendran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Optimize remove_cpu_from_masks  (Srikar Dronamraju; 1 file, -2/+9)

While offlining a CPU, the system currently iterates through all the CPUs
in the DIE to clear the sibling, l2_cache and smallcore maps. However, if
there are more cores in a DIE, the system can end up spending more time
iterating through CPUs which are completely unrelated.

Optimize this by only iterating through a smaller but relevant cpumask.
If shared_cache is set, cpu_l2_cache_map is the relevant one; otherwise
cpu_sibling_map is.

Signed-off-by: Srikar Dronamraju <[email protected]>
Tested-by: Satheesh Rajendran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-10-06  powerpc/smp: Remove get_physical_package_id  (Srikar Dronamraju; 2 files, -25/+0)

Now that cpu_core_mask has been removed and topology_core_cpumask has
been updated to use cpu_cpu_mask, we no longer need
get_physical_package_id.

Signed-off-by: Srikar Dronamraju <[email protected]>
Tested-by: Satheesh Rajendran <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]