path: root/arch/powerpc
2019-04-21  powerpc/mm: Move slb_addr_linit to early_init_mmu  (Aneesh Kumar K.V, 3 files, -11/+8)
Avoid #ifdef in generic code. Also enables us to do this specific to the MMU translation mode on book3s64. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/mm: Add helpers for accessing hash translation related variables  (Aneesh Kumar K.V, 8 files, -44/+154)
We want to switch to allocating them at runtime, only when hash translation is enabled. Add helpers so that both book3s and nohash can be adapted to the upcoming change easily. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/mm: Remove PPC_MM_SLICES #ifdef for book3s64  (Aneesh Kumar K.V, 2 files, -17/+0)
Book3s64 always has PPC_MM_SLICES enabled, so remove the unnecessary #ifdef. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/mm: Fix build error with FLATMEM book3s64 config  (Aneesh Kumar K.V, 3 files, -15/+17)
The current value of MAX_PHYSMEM_BITS cannot work with 32-bit configs. We used to have MAX_PHYSMEM_BITS not defined without SPARSEMEM, and 32-bit configs never expected a value to be set for MAX_PHYSMEM_BITS. Dependent code such as zsmalloc derived the right values based on other fields. Instead of finding a value that works with different configs, use new values only for book3s_64. For 64-bit booke, use the definition of MAX_PHYSMEM_BITS as per commit a7df61a0e2b6 ("[PATCH] ppc64: Increase sparsemem defaults"). That change was done in 2005 and hopefully will work with book3e 64. Fixes: 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/32s: Implement Kernel Userspace Access Protection  (Christophe Leroy, 6 files, -0/+131)
This patch implements Kernel Userspace Access Protection for book3s/32. Due to limitations of the processor's page protection capabilities, the protection is only against writing; read protection cannot be achieved using page protection.

The previous patch modified the page protection so that RW user pages are RW for Key 0 and RO for Key 1, and set Key 0 for both user and kernel.

This patch sets the userspace segment registers to Ku 0 and Ks 1. When the kernel needs to write to RW pages, the associated segment register is then changed to Ks 0 in order to allow write access from the kernel.

In order to avoid having to read all segment registers when locking/unlocking the access, some data is kept in the thread_struct and saved on the stack on exceptions. The field identifies both the first unlocked segment and the first segment following the last unlocked one. When no segment is unlocked, it contains value 0.

As the hash_page() function is not able to easily determine whether a protfault is due to a bad kernel access to userspace, protfaults need to be handled by handle_page_fault() when KUAP is set.

Signed-off-by: Christophe Leroy <[email protected]> [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h, and adapt allow_user_access() to do nothing when to == NULL] Signed-off-by: Michael Ellerman <[email protected]>
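As a rough sketch of the mechanism (helper names such as mtsrin()/isync() assumed from the kernel's reg/synch headers; not the exact patch), unlocking boils down to rewriting the segment registers covering the user range:

  /* Sketch: walk the 256M segments covering [addr, end) and rewrite their
   * segment registers, e.g. to flip Ks and unlock kernel write access. */
  static void kuap_update_sr(u32 sr, u32 addr, u32 end)
  {
          addr &= 0xf0000000;             /* align to the start of a segment */
          while (addr < end) {
                  mtsrin(sr, addr);       /* write the SR covering addr */
                  sr += 0x111;            /* next VSID */
                  addr += 0x10000000;     /* next 256M segment */
          }
          isync();                        /* context synchronisation after mtsrin */
  }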
2019-04-21  powerpc/32s: Prepare Kernel Userspace Access Protection  (Christophe Leroy, 3 files, -14/+16)
This patch prepares Kernel Userspace Access Protection for book3s/32. Due to limitations of the processor's page protection capabilities, the protection is only against writing; read protection cannot be achieved using page protection.

book3s/32 provides the following values for PP bits:

  PP00 provides RW for Key 0 and NA for Key 1
  PP01 provides RW for Key 0 and RO for Key 1
  PP10 provides RW for all
  PP11 provides RO for all

Today PP10 is used for RW pages and PP11 for RO pages, and the user segment registers' Kp and Ks are set to 1. This patch modifies the page protection to use PP01 for RW pages and sets the user segment registers to Kp 0 and Ks 0. This will allow setting up userspace write access protection by setting Ks to 1 in the following patch. Kernel space segment registers remain unchanged.

Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/32s: Implement Kernel Userspace Execution Prevention  (Christophe Leroy, 7 files, -1/+85)
To implement Kernel Userspace Execution Prevention, this patch sets NX bit on all user segments on kernel entry and clears NX bit on all user segments on kernel exit. Note that powerpc 601 doesn't have the NX bit, so KUEP will not work on it. A warning is displayed at startup. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/8xx: Add Kernel Userspace Access Protection  (Christophe Leroy, 5 files, -0/+81)
This patch adds Kernel Userspace Access Protection on the 8xx. When a page is RO or RW, it is set RO or RW for Key 0 and NA for Key 1. Up to now, the User group is defined with Key 0 for both User and Supervisor. By changing the group to Key 0 for User and Key 1 for Supervisor, this patch prevents the Kernel from being able to access user data. At exception entry, the kernel saves SPRN_MD_AP in the regs struct, and reapplies the protection. At exception exit it restores SPRN_MD_AP with the value saved on exception entry. Signed-off-by: Christophe Leroy <[email protected]> [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h] Signed-off-by: Michael Ellerman <[email protected]>
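Roughly, the lock/unlock then amounts to rewriting SPRN_MD_AP (a sketch with assumed constant names, not the exact patch):

  static inline void allow_user_access(void __user *to, const void __user *from,
                                       unsigned long size)
  {
          mtspr(SPRN_MD_AP, MD_APG_INIT); /* user pages accessible to the kernel */
  }

  static inline void prevent_user_access(void __user *to, const void __user *from,
                                         unsigned long size)
  {
          mtspr(SPRN_MD_AP, MD_APG_KUAP); /* user pages blocked for the kernel */
  }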
2019-04-21  powerpc/8xx: Add Kernel Userspace Execution Prevention  (Christophe Leroy, 3 files, -0/+20)
This patch adds Kernel Userspace Execution Prevention on the 8xx. When a page is Executable, it is set Executable for Key 0 and NX for Key 1. Up to now, the User group is defined with Key 0 for both User and Supervisor. By changing the group to Key 0 for User and Key 1 for Supervisor, this patch prevents the Kernel from being able to execute user code. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/8xx: Only define APG0 and APG1  (Christophe Leroy, 1 file, -6/+6)
Since the 8xx implements hardware page table walk assistance, the PGD entries always point to a 4k aligned page, so the 2 upper bits of the APG are not clobbered anymore and remain 0. Therefore only APG0 and APG1 are used and need a definition. We set the other APG to the lowest permission level. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/32: Prepare for Kernel Userspace Access Protection  (Christophe Leroy, 3 files, -6/+27)
This patch adds ASM macros for saving, restoring and checking the KUAP state, and modifies setup_32 to call them on exceptions from kernel. The macros are defined as empty by default for when CONFIG_PPC_KUAP is not selected and/or for platforms which don't handle (yet) KUAP. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/32: Remove MSR_PR test when returning from syscall  (Christophe Leroy, 1 file, -5/+0)
syscalls are from user only, so we can account time without checking whether returning to kernel or user as it will only be user. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/mm: Detect bad KUAP faults  (Michael Ellerman, 3 files, -3/+29)
When KUAP is enabled we have logic to detect page faults that occur outside of a valid user access region and are blocked by the AMR.

What we don't have at the moment is logic to detect a fault *within* a valid user access region, that has been incorrectly blocked by AMR. This is not meant to ever happen, but it can if we incorrectly save/restore the AMR, or if the AMR was overwritten for some other reason.

Currently if that happens we assume it's just a regular fault that will be corrected by handling the fault normally, so we just return. But there is nothing the fault handling code can do to fix it, so the fault just happens again and we spin forever, leading to soft lockups.

So add some logic to detect that case and WARN() if we ever see it. Arguably it should be a BUG(), but it's more polite to fail the access and let the kernel continue, rather than taking down the box. There should be no data integrity issue with failing the fault rather than BUG'ing, as we're just going to disallow an access that should have been allowed.

To make the code a little easier to follow, unroll the condition at the end of bad_kernel_fault() and comment each case, before adding the call to bad_kuap_fault().

Signed-off-by: Michael Ellerman <[email protected]>
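The check presumably looks something like this sketch (feature and bit names assumed): warn when the recorded AMR value says an access that should have been open was blocked.

  static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write)
  {
          return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
                      (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE
                                              : AMR_KUAP_BLOCK_READ)),
                      "Bug: %s fault blocked by AMR!",
                      is_write ? "Write" : "Read");
  }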
2019-04-21  powerpc/64s: Implement KUAP for Radix MMU  (Michael Ellerman, 10 files, -3/+176)
Kernel Userspace Access Prevention utilises a feature of the Radix MMU which disallows read and write access to userspace addresses. By utilising this, the kernel is prevented from accessing user data from outside of trusted paths that perform proper safety checks, such as copy_{to/from}_user() and friends.

Userspace access is disabled from early boot and is only enabled when performing an operation like copy_{to/from}_user(). The register that controls this (AMR) does not prevent userspace from accessing itself, so there is no need to save and restore when entering and exiting userspace.

When entering the kernel from the kernel we save AMR and if it is not blocking user access (because eg. we faulted doing a user access) we reblock user access for the duration of the exception (ie. the page fault) and then restore the AMR when returning back to the kernel.

This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y) and performing the following:

  # (echo ACCESS_USERSPACE) > [debugfs]/provoke-crash/DIRECT

If enabled, this should send SIGSEGV to the thread.

We also add paranoid checking of AMR in switch and syscall return under CONFIG_PPC_KUAP_DEBUG.

Co-authored-by: Michael Ellerman <[email protected]> Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
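A sketch of the AMR gate (names approximate, not the verbatim kernel code): user access is opened or closed by writing the AMR, bracketed by context-synchronising isync.

  static __always_inline void set_kuap(unsigned long value)
  {
          if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
                  return;
          isync();                /* order against surrounding accesses */
          mtspr(SPRN_AMR, value);
          isync();
  }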
2019-04-21  powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()  (Russell Currey, 1 file, -2/+2)
__patch_instruction() is called in early boot, and uses __put_user_size(), which includes the allow/prevent calls to enforce KUAP, which could either be called too early, or in the Radix case, forced to use "early_" versions of functions just to safely handle this one case.

__put_user_asm() does not do this, and thus is safe to use both in early boot, and later on since in this case it should only ever be touching kernel memory.

__patch_instruction() was previously refactored to use __put_user_size() in order to be able to return -EFAULT, which would allow the kernel to patch instructions in userspace, which should never happen. This has the functional change of causing faults on userspace addresses if KUAP is turned on, which should never happen in practice.

A future enhancement could be to double check the patch address is definitely allowed to be tampered with by the kernel.

Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/mm/radix: Use KUEP API for Radix MMU  (Russell Currey, 2 files, -3/+10)
Execution protection already exists on radix; this just refactors the radix init to provide the KUEP setup function instead. Thus, the only functional change is that it can now be disabled. Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/64: Setup KUP on secondary CPUs  (Russell Currey, 2 files, -1/+4)
Some platforms (i.e. Radix MMU) need per-CPU initialisation for KUP. Any platforms that only want to do KUP initialisation once globally can just check to see if they're running on the boot CPU, or check if whatever setup they need has already been performed. Note that this is only for 64-bit. Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc: Add a framework for Kernel Userspace Access Protection  (Christophe Leroy, 9 files, -14/+120)
This patch implements a framework for Kernel Userspace Access Protection. Subarches will then have the possibility to provide their own implementation by providing setup_kuap() and allow/prevent_user_access().

Some platforms will need to know the area accessed and whether it is accessed from read, write or both. Therefore source, destination and size are handed over to the two functions.

mpe: Rename to allow/prevent rather than unlock/lock, and add read/write wrappers. Drop the 32-bit code for now until we have an implementation for it. Add kuap to pt_regs for 64-bit as well as 32-bit. Don't split strings, use pr_crit_ratelimited().

Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
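The read/write wrappers mentioned above presumably reduce to something like this sketch, layered on the subarch-provided allow_user_access(to, from, size) hook:

  static inline void allow_read_from_user(const void __user *from, unsigned long size)
  {
          allow_user_access(NULL, from, size);    /* read-only: no destination */
  }

  static inline void allow_write_to_user(void __user *to, unsigned long size)
  {
          allow_user_access(to, NULL, size);      /* write-only: no source */
  }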
2019-04-21  powerpc: Add skeleton for Kernel Userspace Execution Prevention  (Christophe Leroy, 4 files, -5/+33)
This patch adds a skeleton for Kernel Userspace Execution Prevention. Subarches implementing it then have to define CONFIG_PPC_HAVE_KUEP and provide a setup_kuep() function. Signed-off-by: Christophe Leroy <[email protected]> [mpe: Don't split strings, use pr_crit_ratelimited()] Signed-off-by: Michael Ellerman <[email protected]>
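The skeleton presumably boils down to a hook with a no-op fallback, roughly like this sketch:

  #ifdef CONFIG_PPC_KUEP
  void setup_kuep(bool disabled);                 /* provided by the subarch */
  #else
  static inline void setup_kuep(bool disabled) { }
  #endif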
2019-04-21  powerpc: Add framework for Kernel Userspace Protection  (Christophe Leroy, 4 files, -0/+26)
This patch adds a skeleton for Kernel Userspace Protection functionalities like Kernel Userspace Access Protection and Kernel Userspace Execution Prevention.

The subsequent implementation of KUAP for radix makes use of an MMU feature in order to patch out assembly when KUAP is disabled or unsupported. This won't work unless there's an entry point for KUP support before the feature magic happens, so for PPC64 setup_kup() is called early in setup. On PPC32, feature_fixup() is done too early to allow the same.

Suggested-by: Russell Currey <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
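A sketch of that early entry point (flag names assumed), dispatching to the execution- and access-protection hooks:

  void __init setup_kup(void)
  {
          setup_kuep(disable_kuep);       /* Kernel Userspace Execution Prevention */
          setup_kuap(disable_kuap);       /* Kernel Userspace Access Protection */
  }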
2019-04-21  powerpc/powernv/idle: Restore AMR/UAMOR/AMOR after idle  (Michael Ellerman, 1 file, -4/+23)
In order to implement KUAP (Kernel Userspace Access Protection) on Power9 we will be using the AMR, and therefore indirectly the UAMOR/AMOR. So save/restore these regs in the idle code. Signed-off-by: Michael Ellerman <[email protected]>
2019-04-21  powerpc/powernv/idle: Restore IAMR after idle  (Russell Currey, 1 file, -0/+20)
Without restoring the IAMR after idle, execution prevention on POWER9 with Radix MMU is overwritten and the kernel can freely execute userspace without faulting. This is necessary when returning from any stop state that modifies user state, as well as hypervisor state.

To test how this fails without this patch, load the lkdtm driver and do the following:

  $ echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT

which won't fault, then boot the kernel with powersave=off, where it will fault. Applying this patch will fix this.

Fixes: 3b10d0095a1e ("powerpc/mm/radix: Prevent kernel execution of user space") Cc: [email protected] # v4.10+ Signed-off-by: Russell Currey <[email protected]> Reviewed-by: Akshay Adiga <[email protected]> Reviewed-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc: Add force enable of DAWR on P9 option  (Michael Neuling, 6 files, -17/+91)
This adds a flag so that the DAWR can be enabled on P9 via:

  echo Y > /sys/kernel/debug/powerpc/dawr_enable_dangerous

The DAWR was previously force disabled on POWER9 in commit 9654153158 ("powerpc: Disable DAWR in the base POWER9 CPU features"). Also see Documentation/powerpc/DAWR-POWER9.txt

This is a dangerous setting, USE AT YOUR OWN RISK. Some users may not care about a bad user crashing their box (ie. single user/desktop systems) and really want the DAWR. This allows them to force enable DAWR.

This flag can also be used to disable DAWR access. Once this is cleared, all DAWR access should be cleared immediately and your machine is once again safe from crashing.

Userspace may get confused by toggling this. If DAWR is force enabled/disabled between getting the number of breakpoints (via PTRACE_GETHWDBGINFO) and setting the breakpoint, userspace will get an inconsistent view of what's available. Similarly for guests.

For the DAWR to be enabled in a KVM guest, the DAWR needs to be force enabled in the host AND the guest. For this reason, this won't work on POWERVM as it doesn't allow the HCALL to work. Writes of 'Y' to the dawr_enable_dangerous file will fail if the hypervisor doesn't support writing the DAWR.

To double check the DAWR is working, run this kernel selftest: tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c. Any errors/failures/skips mean something is wrong.

Signed-off-by: Michael Neuling <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/numa: document topology_updates_enabled, disable by default  (Nathan Lynch, 1 file, -4/+10)
Changing the NUMA associations for CPUs and memory at runtime is basically unsupported by the core mm, scheduler etc. We see all manner of crashes, warnings and instability when the pseries code tries to do this. Disable this behavior by default, and document the switch a bit. Signed-off-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/numa: improve control of topology updates  (Nathan Lynch, 1 file, -6/+12)
When booted with "topology_updates=no", or when "off" is written to /proc/powerpc/topology_updates, NUMA reassignments are inhibited for PRRN and VPHN events. However, migration and suspend unconditionally re-enable reassignments via start_topology_update(). This is incoherent. Check the topology_updates_enabled flag in start/stop_topology_update() so that callers of those APIs need not be aware of whether reassignments are enabled. This allows the administrative decision on reassignments to remain in force across migrations and suspensions. Signed-off-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
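The change presumably amounts to honouring the flag inside the API itself, along these lines (a sketch, not the exact function):

  int start_topology_update(void)
  {
          if (!topology_updates_enabled)
                  return 0;               /* administrative switch stays in force */

          /* ... existing PRRN/VPHN setup ... */
          return 0;
  }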
2019-04-20  powerpc/powernv: Squash sparse warnings in opal-call.c  (Andrew Donnellan, 1 file, -0/+2)
sparse complains a lot about opal-call.c:

  arch/powerpc/platforms/powernv/opal-call.c:128:1: warning: symbol 'opal_invalid_call' was not declared. Should it be static?
  arch/powerpc/platforms/powernv/opal-call.c:129:1: warning: symbol 'opal_console_write' was not declared. Should it be static?
  arch/powerpc/platforms/powernv/opal-call.c:130:1: warning: symbol 'opal_console_read' was not declared. Should it be static?

Those symbols are forward declared in opal.h, but we can't include that because the function signatures in opal.h are different. So instead, just add an extra forward declaration to the OPAL_CALL macro to shut sparse up.

Signed-off-by: Andrew Donnellan <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
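The approach reads roughly like the following sketch (prototype and helper name simplified): the macro emits a forward declaration for the symbol it is about to define, so sparse sees a prior declaration.

  #define OPAL_CALL(name, opcode)                                        \
  int64_t name(int64_t a0, int64_t a1, int64_t a2, int64_t a3,           \
               int64_t a4, int64_t a5, int64_t a6, int64_t a7);          \
  int64_t name(int64_t a0, int64_t a1, int64_t a2, int64_t a3,           \
               int64_t a4, int64_t a5, int64_t a6, int64_t a7)           \
  {                                                                      \
          return opal_call(a0, a1, a2, a3, a4, a5, a6, a7, opcode);      \
  }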
2019-04-20  powerpc/crypto: Use cheaper random numbers for crc-vpmsum self-test  (George Spelvin, 1 file, -7/+3)
This code was filling a 64K buffer from /dev/urandom in order to compute a CRC over (on average half of) it by two different methods, comparing the CRCs, and repeating. This is not a remotely security-critical application, so use the far faster and cheaper prandom_u32() generator. And, while we're at it, only fill as much of the buffer as we plan to use. Signed-off-by: George Spelvin <[email protected]> Acked-by: Daniel Axtens <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
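The change presumably looks something like this sketch (buffer and length names are illustrative): fill only the bytes that will actually be CRC'd, using the cheap generator.

  size_t i;

  for (i = 0; i < len; i += sizeof(u32))
          *(u32 *)(data + i) = prandom_u32();     /* cheap pseudo-random fill */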
2019-04-20  powerpc: Remove duplicate headers  (Jagadeesh Pagadala, 3 files, -3/+0)
Remove duplicate header inclusions. Signed-off-by: Jagadeesh Pagadala <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/8xx: Fix possible device node reference leak  (Wen Yang, 1 file, -2/+1)
The call to of_find_compatible_node() returns a node pointer with its refcount incremented, thus it must be explicitly decremented after the last usage. irq_domain_add_linear() also calls of_node_get() to increase the refcount, so the irq_domain will not be affected when it is released. Detected by coccinelle. Fixes: a8db8cf0d894 ("irq_domain: Replace irq_alloc_host() with revmap-specific initializers") Signed-off-by: Wen Yang <[email protected]> Suggested-by: Christophe Leroy <[email protected]> Suggested-by: Michael Ellerman <[email protected]> Reviewed-by: Peng Hao <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
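The fix follows the usual pattern, roughly like this sketch (the compatible string, domain size and ops name are assumed for illustration):

  np = of_find_compatible_node(NULL, NULL, "fsl,pq1-pic");  /* takes a reference */
  /* irq_domain_add_linear() calls of_node_get() itself, so the domain keeps
   * the node alive independently of our reference. */
  host = irq_domain_add_linear(np, 64, &mpc8xx_pic_host_ops, NULL);
  of_node_put(np);                       /* balance of_find_compatible_node() */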
2019-04-20  powerpc/pseries: hwpoison the pages upon hitting UE  (Ganesh Goudar, 3 files, -1/+85)
Add support to hwpoison the pages upon hitting a machine check exception. This patch queues the address where a UE is hit to a percpu array and schedules work to plumb it into the memory poison infrastructure. Reviewed-by: Mahesh Salgaonkar <[email protected]> Signed-off-by: Ganesh Goudar <[email protected]> [mpe: Combine #ifdefs, drop PPC_BIT8(), and empty inline stub] Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/83xx: Add missing of_node_put() after of_device_is_available()  (Julia Lawall, 1 file, -1/+3)
Add an of_node_put() when a tested device node is not available. Fixes: c026c98739c7e ("powerpc/83xx: Do not configure or probe disabled FSL DR USB controllers") Signed-off-by: Julia Lawall <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/pseries/pmem: Fix a set but not used value  (Qian Cai, 1 file, -2/+1)
The commit 4c5d87db4978 ("powerpc/pseries: PAPR persistent memory support") set a local variable "count" in dlpar_hp_pmem() but never used it.

  arch/powerpc/platforms/pseries/pmem.c: In function 'dlpar_hp_pmem':
  arch/powerpc/platforms/pseries/pmem.c:109:6: warning: variable 'count' set but not used

Signed-off-by: Qian Cai <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/pseries/iommu: Fix set but not used values  (Qian Cai, 1 file, -8/+5)
The commit b7d6bf4fdd47 ("powerpc/pseries/pci: Remove obsolete SW invalidate") left 2 variables unused.

  arch/powerpc/platforms/pseries/iommu.c:108:17: warning: variable 'tces' set but not used
    __be64 *tcep, *tces;
                   ^~~~
  arch/powerpc/platforms/pseries/iommu.c:132:17: warning: variable 'tces' set but not used
    __be64 *tcep, *tces;
                   ^~~~

Also, the commit 68c0449ea16d ("powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation") set "ranges" in ddw_memory_hotplug_max() but never used it.

  arch/powerpc/platforms/pseries/iommu.c: In function 'ddw_memory_hotplug_max':
  arch/powerpc/platforms/pseries/iommu.c:948:7: warning: variable 'ranges' set but not used
    int ranges, n_mem_addr_cells, n_mem_size_cells, len;
        ^~~~~~

Signed-off-by: Qian Cai <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/mm: Silence unused-but-set-variable warnings  (Qian Cai, 2 files, -2/+4)
pte_unmap() compiles away on some powerpc platforms, so silence the warnings below by making it a static inline function.

  mm/memory.c: In function 'copy_pte_range':
  mm/memory.c:820:24: warning: variable 'orig_dst_pte' set but not used
  mm/memory.c:820:9: warning: variable 'orig_src_pte' set but not used
  mm/madvise.c: In function 'madvise_free_pte_range':
  mm/madvise.c:318:9: warning: variable 'orig_pte' set but not used
  mm/swap_state.c: In function 'swap_ra_info':
  mm/swap_state.c:634:15: warning: variable 'orig_pte' set but not used

Suggested-by: Christophe Leroy <[email protected]> Signed-off-by: Qian Cai <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
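The fix is presumably the classic macro-to-inline conversion, roughly:

  /* Before: the macro discards its argument, so callers' locals look
   * "set but not used" to the compiler. */
  #define pte_unmap(pte)  do { } while (0)

  /* After: a static inline consumes the argument, silencing the warning
   * while still compiling away to nothing. */
  static inline void pte_unmap(pte_t *pte) { }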
2019-04-20  powerpc/mm: move warning from resize_hpt_for_hotplug()  (Laurent Vivier, 4 files, -16/+13)
resize_hpt_for_hotplug() reports a warning when it cannot resize the hash page table ("Unable to resize hash page table to target order"), but in some cases it's not a problem and can make the user think something has not worked properly. This patch moves the warning to arch_remove_memory() to only report the problem when it is needed. Reviewed-by: David Gibson <[email protected]> Signed-off-by: Laurent Vivier <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/mm/radix: Don't do SLB preload when using the radix MMU  (Aneesh Kumar K.V, 1 file, -1/+2)
Add radix_enabled() check to avoid SLB preload with radix translation. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/configs: Enable CONFIG_USB_XHCI_HCD by default  (Thomas Huth, 1 file, -0/+1)
Recent versions of QEMU provide an XHCI device by default these days instead of an old-fashioned OHCI device: https://git.qemu.org/?p=qemu.git;a=commitdiff;h=57040d451315320b7d27 So to get the keyboard working in the graphical console there again, we should now include XHCI support in the kernel by default, too. Signed-off-by: Thomas Huth <[email protected]> Reviewed-by: David Gibson <[email protected]> Acked-by: Joel Stanley <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/pseries/mce: Improve array initialization  (Mahesh Salgaonkar, 1 file, -26/+26)
This is a follow-up to the patch that fixed the misleading print for TLB multihit due to a wrongly populated mc_err_types[] array. Convert all the static array initialization to '[x] = val' style for better readability of array indexing and to avoid any further confusion. Suggested-by: Michael Ellerman <[email protected]> Signed-off-by: Mahesh Salgaonkar <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/64: Fix booting large kernels with STRICT_KERNEL_RWX  (Russell Currey, 1 file, -1/+3)
With STRICT_KERNEL_RWX enabled anything marked __init is placed at a 16M boundary. This is necessary so that it can be repurposed later with different permissions. However, in kernels with text larger than 16M, this pushes early_setup past 32M, incapable of being reached by the branch instruction. Fix this by setting the CTR and branching there instead. Fixes: 1e0fc9d1eb2b ("powerpc/Kconfig: Enable STRICT_KERNEL_RWX for some configs") Signed-off-by: Russell Currey <[email protected]> [mpe: Fix it to work on BE by using DOTSYM()] Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/embedded6xx: Remove unused functions holly_power_off and holly_halt  (Mathieu Malaterre, 1 file, -12/+0)
Silence the following warnings triggered using W=1:

  arch/powerpc/platforms/embedded6xx/holly.c:236:6: error: no previous prototype for 'holly_power_off'
  arch/powerpc/platforms/embedded6xx/holly.c:243:6: error: no previous prototype for 'holly_halt'

Signed-off-by: Mathieu Malaterre <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/embedded6xx: Make some functions static  (Mathieu Malaterre, 1 file, -3/+4)
In commit cb9e4d10c448 ("[POWERPC] Add support for 750CL Holly board") new functions were added. Since most of these functions can be made static, make it so.

Both holly_power_off and holly_halt functions were not changed since they are unused; making them static would have triggered the following warning (treated as error):

  arch/powerpc/platforms/embedded6xx/holly.c:244:13: error: 'holly_halt' defined but not used

Silence the following warnings triggered using W=1:

  arch/powerpc/platforms/embedded6xx/holly.c:47:5: error: no previous prototype for 'holly_exclude_device'
  arch/powerpc/platforms/embedded6xx/holly.c:190:6: error: no previous prototype for 'holly_show_cpuinfo'
  arch/powerpc/platforms/embedded6xx/holly.c:196:17: error: no previous prototype for 'holly_restart'

Signed-off-by: Mathieu Malaterre <[email protected]> Reviewed-by: Mukesh Ojha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc: vdso: Make vdso32 installation conditional in vdso_install  (Ben Hutchings, 1 file, -0/+2)
The 32-bit vDSO is not needed and not normally built for 64-bit little-endian configurations. However, the vdso_install target still builds and installs it. Add the same config condition as is normally used for the build. Fixes: e0d005916994 ("powerpc/vdso: Disable building the 32-bit VDSO ...") Signed-off-by: Ben Hutchings <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/mm/64: Document the sizes of/sizes mapped by Pxx_INDEX_SIZE  (Michael Ellerman, 4 files, -16/+18)
Add comments describing the size in bytes of the various levels of the page table tree, and the size of the virtual address space mapped by each level, to make it clear what the sizes are without having to also look up other definitions. The code that calculates the sizes actually uses sizeof(pgd_t) etc., so in theory these comments could skew vs the code, but the size of pgd_t etc. is unlikely to change very often. Signed-off-by: Michael Ellerman <[email protected]> Reviewed-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/highmem: Change BUG_ON() to WARN_ON()  (Christophe Leroy, 1 file, -10/+4)
In arch/powerpc/mm/highmem.c, BUG_ON() is called only when CONFIG_DEBUG_HIGHMEM is selected. This means the BUG_ON() is not vital and can be replaced by a WARN_ON(). At the same time, use IS_ENABLED() instead of #ifdef to clean things up a bit. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
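The shape of the change is presumably the following (the condition is taken from the kmap_atomic path and shown for illustration only):

  /* Before: only compiled in with CONFIG_DEBUG_HIGHMEM, and fatal */
  #ifdef CONFIG_DEBUG_HIGHMEM
          BUG_ON(!pte_none(*(kmap_pte - idx)));
  #endif

  /* After: no #ifdef, and a warning instead of taking the box down */
          WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx)));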
2019-04-20  powerpc: Fix defconfig choice logic when cross compiling  (Michael Ellerman, 1 file, -5/+4)
Our logic for choosing defconfig doesn't work well in some situations. For example, if you're on a ppc64le machine but you specify a non-empty CROSS_COMPILE, in order to use a non-default toolchain, then defconfig will give you ppc64_defconfig (big endian):

  $ make CROSS_COMPILE=~/toolchains/gcc-8/bin/powerpc-linux- defconfig
  *** Default configuration is based on 'ppc64_defconfig'

This is because we assume that CROSS_COMPILE being set means we can't be on a ppc machine and, rather than checking, we just default to ppc64_defconfig. We should just ignore CROSS_COMPILE and instead check the machine with uname; if it's one of ppc, ppc64 or ppc64le then use that defconfig. If it's none of those then we fall back to ppc64_defconfig.

Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  powerpc/32: Add ppc_defconfig  (Michael Ellerman, 1 file, -0/+4)
Add a generic 32-bit defconfig called ppc_defconfig. This means we'll have a defconfig matching "uname -m" for all cases. This config is mostly intended for build testing but if someone wants to tweak it to get it booting on something that would be fine too. Signed-off-by: Michael Ellerman <[email protected]> Tested-by: Mathieu Malaterre <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2019-04-20  Merge branch 'fixes' into next  (Michael Ellerman, 4 files, -10/+14)
Merge our fixes branch. In particular the radix segment exception handling fix is necessary to avoid odd crashes.
2019-04-19  Make anon_inodes unconditional  (David Howells, 1 file, -1/+0)
Make the anon_inodes facility unconditional so that it can be used by core VFS code and pidfd code. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]> [[email protected]: adapt commit message to mention pidfds] Signed-off-by: Christian Brauner <[email protected]>
2019-04-18  Merge tag 'kvm-ppc-fixes-5.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD  (Paolo Bonzini, 2 files, -4/+6)
KVM/PPC fixes for 5.1: - Fix host hang in the HTM assist code for POWER9 - Take srcu read lock around memslot lookup
2019-04-18  crypto: powerpc - convert to use crypto_simd_usable()  (Eric Biggers, 3 files, -2/+7)
Replace all calls to in_interrupt() in the PowerPC crypto code with !crypto_simd_usable(). This causes the crypto self-tests to test the no-SIMD code paths when CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. The p8_ghash algorithm is currently failing and needs to be fixed, as it produces the wrong digest when no-SIMD updates are mixed with SIMD ones. Signed-off-by: Eric Biggers <[email protected]> Signed-off-by: Herbert Xu <[email protected]>