Commit history for arch/powerpc/kernel/exceptions-64e.S
2020-12-09  powerpc/fault: Perform exception fixup in do_page_fault()  (Christophe Leroy, 1 file, -1/+1)
Exception fixup doesn't require the heavy full-regs saving, so do it from do_page_fault() directly. For that, split bad_page_fault() in two parts. As bad_page_fault() can also be called from places other than handle_page_fault(), it still performs exception fixup and falls back on __bad_page_fault(). handle_page_fault() calls __bad_page_fault() directly, as the exception fixup is now done by do_page_fault().
Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/bd07d6fef9237614cd6d318d8f19faeeadaa816b.1607491748.git.christophe.leroy@csgroup.eu
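A minimal C sketch of the resulting split (illustrative, not the exact kernel code): bad_page_fault() keeps the exception-table fixup and falls back to __bad_page_fault(), which only reports the failure. search_exception_tables() and extable_fixup() are the real kernel helpers.

        /* Reporting-only half: print the oops and die (body elided). */
        void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);

        /* Fixup half, kept for callers other than handle_page_fault(). */
        void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
        {
                const struct exception_table_entry *entry;

                /* Are we prepared to handle this fault? */
                entry = search_exception_tables(regs->nip);
                if (entry) {
                        regs->nip = extable_fixup(entry);
                        return;
                }
                __bad_page_fault(regs, address, sig);
        }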
2020-10-06  powerpc/64e: remove 64s specific interrupt soft-mask code  (Nicholas Piggin, 1 file, -10/+0)
Since the assembly soft-masking code was moved to be 64e specific, some 64s specific interrupt types are still there. Remove them.
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-10-06  powerpc/64e: remove PACA_IRQ_EE_EDGE  (Nicholas Piggin, 1 file, -1/+0)
This is not used anywhere.
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-04-01  powerpc/64s: Implement interrupt exit logic in C  (Nicholas Piggin, 1 file, -7/+248)
Implement the bulk of the interrupt return logic in C. The asm return code must handle a few cases: restoring full GPRs, and emulating stack store.

The stack store emulation is significantly simplified: rather than creating a new return frame and switching to that before performing the store, it uses the PACA to keep a scratch register around to perform the store.

The asm return code is moved into 64e for now. The new logic has made allowance for 64e, but I don't have a full environment that works well to test it, and even booting in emulated qemu is not great for stress testing. 64e shouldn't be too far off working with this, given a bit more testing and auditing of the logic.

This is slightly faster on a POWER9 (page fault speed increases about 1.1%), probably due to reduced mtmsrd.

mpe: Includes fixes from Nick for _TIF_EMULATE_STACK_STORE handling (including the fast_interrupt_return path), to remove trace_hardirqs_on(), and to fix the interrupt-return part of the MSR_VSX restore bug caught by the tm-unavailable selftest.

mpe: Incorporate fix from Nick: The return-to-kernel path has to replay any soft-pending interrupts if it is returning to a context that had interrupts soft-enabled. It has to do this carefully and avoid plainly enabling interrupts if this is an irq context, which can cause multiple nesting of interrupts on the stack and other unexpected issues. The code which avoided this case got the soft-mask state wrong, and marked interrupts as enabled before going around again to retry. This seems to be mostly harmless, except when PREEMPT=y: then it calls preempt_schedule_irq with irqs apparently enabled and runs into a BUG in kernel/sched/core.c.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michal Suchanek <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
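A hedged sketch of the return-to-kernel rule from that fix note. regs_irqs_soft_enabled() is a hypothetical predicate; replay_soft_interrupts() is the helper described in the next entry. The point is that the soft mask stays marked disabled while retrying, and is only marked enabled once nothing is pending.

        /* Illustrative fragment, not the exact kernel code. */
        if (regs_irqs_soft_enabled(regs)) {     /* hypothetical predicate */
                while (local_paca->irq_happened & ~PACA_IRQ_HARD_DIS) {
                        __hard_irq_disable();
                        replay_soft_interrupts();       /* see next entry */
                }
                irq_soft_mask_set(IRQS_ENABLED);        /* only once done */
        }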
2020-04-01  powerpc/64: Implement soft interrupt replay in C  (Nicholas Piggin, 1 file, -32/+0)
When local_irq_enable() finds a pending soft-masked interrupt, it "replays" it by setting up registers like the initial interrupt entry, then calls into the low level handler to set up an interrupt stack frame and process the interrupt. This is not necessary, and uses more stack than needed. The high level interrupt handler can be called directly from C, with just pt_regs set up on stack. This should be faster and use less stack.
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
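A minimal sketch of the approach (hedged: the dispatch shape is illustrative, though ppc_save_regs(), do_IRQ() and timer_interrupt() are real kernel functions): seed a pt_regs from the current context and call the high-level handler directly, instead of re-entering through the asm vector.

        static void replay_interrupt(unsigned int vector)
        {
                struct pt_regs regs;

                ppc_save_regs(&regs);   /* seed a pt_regs from here */
                regs.trap = vector;

                switch (vector) {
                case 0x500:             /* external interrupt */
                        do_IRQ(&regs);
                        break;
                case 0x900:             /* decrementer */
                        timer_interrupt(&regs);
                        break;
                }
        }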
2019-11-13  powerpc: unify definition of M_IF_NEEDED  (Jason Yan, 1 file, -11/+1)
M_IF_NEEDED is defined too many times. Move it to a common place and rename it to MAS2_M_IF_NEEDED, which is more readable.
Signed-off-by: Jason Yan <[email protected]>
Reviewed-by: Christophe Leroy <[email protected]>
Reviewed-by: Diana Craciun <[email protected]>
Tested-by: Diana Craciun <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2019-08-27  powerpc/64: optimise LOAD_REG_IMMEDIATE_SYM()  (Christophe Leroy, 1 file, -9/+13)
Optimise LOAD_REG_IMMEDIATE_SYM() using a temporary register to parallelise operations. It reduces the path from 5 to 3 instructions.
Suggested-by: Segher Boessenkool <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/bad41ed02531bb0382420cbab50a0d7153b71767.1566311636.git.christophe.leroy@c-s.fr
2019-08-27  powerpc: rewrite LOAD_REG_IMMEDIATE() as an intelligent macro  (Christophe Leroy, 1 file, -5/+5)
Today LOAD_REG_IMMEDIATE() is a basic #define which loads all parts of a value into a register, including the parts that are zero. This means always 2 instructions on PPC32 and always 5 instructions on PPC64, and those instructions cannot run in parallel, as they all update the same register.

For example, LOAD_REG_IMMEDIATE(r1,THREAD_SIZE) in head_64.S results in:

        3c 20 00 00     lis     r1,0
        60 21 00 00     ori     r1,r1,0
        78 21 07 c6     rldicr  r1,r1,32,31
        64 21 00 00     oris    r1,r1,0
        60 21 40 00     ori     r1,r1,16384

Rewrite LOAD_REG_IMMEDIATE() as a GAS macro in order to skip the parts that are zero. Rename the existing LOAD_REG_IMMEDIATE() to LOAD_REG_IMMEDIATE_SYM(), and use that one for loading the value of symbols which are not known at compile time.

Now LOAD_REG_IMMEDIATE(r1,THREAD_SIZE) in head_64.S results in:

        38 20 40 00     li      r1,16384

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/d60ce8dd3a383c7adbfc322bf1d53d81724a6000.1566311636.git.christophe.leroy@c-s.fr
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner, 1 file, -5/+1)
Based on 1 normalized pattern(s):

        this program is free software you can redistribute it and or modify
        it under the terms of the gnu general public license as published by
        the free software foundation either version 2 of the license or at
        your option any later version

extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Allison Randal <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-02-27  powerpc/fsl: Fix the flush of branch predictor.  (Christophe Leroy, 1 file, -0/+1)
The commit identified below adds the MC_BTB_FLUSH macro only when CONFIG_PPC_FSL_BOOK3E is defined. This results in the following error on some configs (seen several times with kisskb randconfig_defconfig):

        arch/powerpc/kernel/exceptions-64e.S:576: Error: Unrecognized opcode: `mc_btb_flush'
        make[3]: *** [scripts/Makefile.build:367: arch/powerpc/kernel/exceptions-64e.o] Error 1
        make[2]: *** [scripts/Makefile.build:492: arch/powerpc/kernel] Error 2
        make[1]: *** [Makefile:1043: arch/powerpc] Error 2
        make: *** [Makefile:152: sub-make] Error 2

This patch adds a blank definition of MC_BTB_FLUSH for the other cases.
Fixes: 10c5e83afd4a ("powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)")
Cc: Diana Craciun <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Daniel Axtens <[email protected]>
Reviewed-by: Diana Craciun <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
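The shape of the fix, as a sketch (the FSL flush sequence itself is elided): give the other configs an empty definition so uses of MC_BTB_FLUSH still assemble everywhere.

        #ifdef CONFIG_PPC_FSL_BOOK3E
        /* real definition: the branch-predictor flush sequence */
        #else
        #define MC_BTB_FLUSH    /* blank: expands to nothing elsewhere */
        #endif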
2019-02-23  powerpc/64: Replace CURRENT_THREAD_INFO with PACA_THREAD_INFO  (Christophe Leroy, 1 file, -1/+1)
Now that current_thread_info is located at the beginning of the 'current' task struct, the CURRENT_THREAD_INFO macro is not really needed any more. This patch replaces it with loads of the value at PACA_THREAD_INFO(r13).
Signed-off-by: Christophe Leroy <[email protected]>
[mpe: Add PACA_THREAD_INFO rather than using PACACURRENT]
Signed-off-by: Michael Ellerman <[email protected]>
2019-02-23  powerpc: Activate CONFIG_THREAD_INFO_IN_TASK  (Christophe Leroy, 1 file, -11/+0)
This patch activates CONFIG_THREAD_INFO_IN_TASK, which moves the thread_info into task_struct.

Moving thread_info into task_struct has the following advantages:
- It protects thread_info from corruption in the case of stack overflows.
- Its address is harder to determine if stack addresses are leaked, making a number of attacks more difficult.

This has the following consequences:
- thread_info is now located at the beginning of task_struct.
- The 'cpu' field is now in task_struct, and only exists when CONFIG_SMP is active.
- thread_info no longer has the 'task' field.

This patch:
- Removes all recopying of the thread_info struct when the stack changes.
- Changes the CURRENT_THREAD_INFO() macro to point to current.
- Selects CONFIG_THREAD_INFO_IN_TASK.
- Modifies raw_smp_processor_id() to get ->cpu from current without including linux/sched.h, to avoid circular inclusion, and without including asm/asm-offsets.h, to avoid symbol name duplication between ASM constants and C constants (see the sketch below).
- Modifies klp_init_thread_info() to take a task_struct pointer argument.

Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
[mpe: Add task_stack.h to livepatch.h to fix build fails]
Signed-off-by: Michael Ellerman <[email protected]>
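A hedged sketch of the raw_smp_processor_id() change; TASK_CPU_OFFSET is a hypothetical stand-in for however the byte offset of the 'cpu' field is actually obtained without asm-offsets.h.

        /* Hypothetical sketch: read current->cpu by byte offset so that
         * neither linux/sched.h nor asm/asm-offsets.h is needed here. */
        #define raw_smp_processor_id() \
                (*(unsigned int *)((void *)current + TASK_CPU_OFFSET))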
2018-12-20  powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)  (Diana Craciun, 1 file, -1/+25)
In order to protect against speculation attacks on indirect branches, the branch predictor is flushed at kernel entry to protect against the following situations:
- a userspace process attacking another userspace process
- a userspace process attacking the kernel

Basically, when the privilege level changes (i.e. the kernel is entered), the branch predictor state is flushed.
Signed-off-by: Diana Craciun <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2018-07-30  powerpc: clean inclusions of asm/feature-fixups.h  (Christophe Leroy, 1 file, -0/+1)
Files not using feature fixups don't need asm/feature-fixups.h; files using feature fixups do.
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2018-07-24  powerpc/64s: make PACA_IRQ_HARD_DIS track MSR[EE] closely  (Nicholas Piggin, 1 file, -0/+4)
When the masked interrupt handler clears MSR[EE] for an interrupt in the PACA_IRQ_MUST_HARD_MASK set, it does not set PACA_IRQ_HARD_DIS, letting the two get out of sync. With that taken into account, it's only low level irq manipulation (and interrupt entry before reconcile) where they can be out of sync. This makes the code less surprising.

It also allows the IRQ replay code to rely on the IRQ_HARD_DIS value and not have to mtmsrd again in this case (e.g., for an external interrupt that has been masked). The bigger benefit might just be that there is not such an element of surprise in these two bits of state.
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
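In C-level terms, the invariant is roughly the following (a sketch; the helper name is hypothetical, while __hard_irq_disable(), get_paca() and PACA_IRQ_HARD_DIS are the real kernel names):

        /* Sketch: pair every MSR[EE] clear with the matching state bit,
         * so PACA_IRQ_HARD_DIS tracks MSR[EE]. */
        static inline void hard_irq_disable_and_account(void)
        {
                __hard_irq_disable();                           /* MSR[EE] = 0 */
                get_paca()->irq_happened |= PACA_IRQ_HARD_DIS;  /* note it */
        }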
2018-02-08  powerpc/64s: Fix may_hard_irq_enable() for PMI soft masking  (Nicholas Piggin, 1 file, -0/+2)
The soft IRQ masking code has to hard-disable interrupts in cases where the exception is not cleared by the masked handler. External interrupts used this approach for soft masking, and recently PMU interrupts started doing the same thing.

The soft IRQ masking code additionally allows interrupt handlers to hard-enable interrupts after soft-disabling them. The idea is to allow PMU interrupts through to profile interrupt handlers. So when interrupts are being replayed and there is a pending interrupt that requires hard-disabling, there is a test to prevent those handlers from hard-enabling interrupts if there is a pending external interrupt. may_hard_irq_enable() handles this.

After f442d00480 ("powerpc/64s: Add support to mask perf interrupts and replay them"), may_hard_irq_enable() could prematurely enable MSR[EE] when a PMU exception exists, which would result in the interrupt firing again while masked, and MSR[EE] being disabled again. I haven't seen that this could cause a serious problem, but it's more consistent to handle these soft-masked interrupts in the same way. So introduce a define for all types of interrupts that require MSR[EE] masking in their soft-disable handlers, and use that in may_hard_irq_enable().
Fixes: f442d004806e ("powerpc/64s: Add support to mask perf interrupts and replay them")
Signed-off-by: Nicholas Piggin <[email protected]>
Reviewed-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
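A sketch of the define and its use, assuming the mask covers external and PMU interrupts (the exact contents vary by platform, and the predicate body is illustrative):

        /* All interrupt types whose soft-masked handlers also clear MSR[EE]. */
        #define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE | PACA_IRQ_PMI)

        /* Replay code may hard-enable only if none of those are pending. */
        static inline bool may_hard_irq_enable(void)
        {
                return !(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK);
        }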
2018-01-19  powerpc/64: Rename soft_enabled to irq_soft_mask  (Madhavan Srinivasan, 1 file, -5/+5)
Rename paca->soft_enabled to paca->irq_soft_mask, as it is no longer used as a flag for interrupt state but as a mask.
Signed-off-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2018-01-19  powerpc/64: Change soft_enabled from flag to bitmask  (Madhavan Srinivasan, 1 file, -5/+5)
"paca->soft_enabled" is used as a flag to mask some of the interrupts. The currently supported flag values and their details:

        soft_enabled    MSR[EE]
        0               0       Disabled (PMI and HMI not masked)
        1               1       Enabled

"paca->soft_enabled" is initialized to 1 to mark interrupts as enabled. arch_local_irq_disable() toggles the value when interrupts need to be disabled. At this point, the interrupts are not actually disabled; instead, the interrupt vectors have code to check for the flag and mask an interrupt when it occurs. "Masking" here means updating paca->irq_happened and returning. arch_local_irq_restore() is called to re-enable interrupts, and it checks and replays interrupts if any occurred.

Now, as mentioned, the current logic does not mask "performance monitoring interrupts", and PMIs are implemented as NMIs. But this patchset depends on local_irq_* for a successful local_* update, meaning: mask all possible interrupts during a local_* update and replay them after the update. So the idea here is to reverse the "paca->soft_enabled" logic. The new values and details:

        soft_enabled    MSR[EE]
        1               0       Disabled (PMI and HMI not masked)
        0               1       Enabled

The reason for this change is to create the foundation for a third mask value "0x2" for "soft_enabled", to add support for masking PMIs. When ->soft_enabled is set to a value of "3", PMI interrupts are masked; when set to a value of "1", PMIs are not masked.

This patch also extends soft_enabled to be an interrupt-disable mask. The current flags are renamed from IRQ_{EN,DIS}ABLED to IRQS_ENABLED and IRQS_DISABLED.

The patch also fixes the ptrace call to force the user to always see the softe value as 1. Even though userspace has no business knowing about softe, it is part of pt_regs; likewise in the signal context.
Signed-off-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2018-01-19  powerpc/64: Add #defines for paca->soft_enabled flags  (Madhavan Srinivasan, 1 file, -3/+3)
Two #defines, IRQS_ENABLED and IRQS_DISABLED, are added to be used when updating paca->soft_enabled. Replace the hardcoded values used when updating paca->soft_enabled with the IRQS_(EN|DIS)ABLED #defines. No logic change.
Reviewed-by: Nicholas Piggin <[email protected]>
Signed-off-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2017-04-30  powerpc/64e: Fix hang when debugging programs with relocated kernel  (Liu Hailong, 1 file, -0/+12)
Debug interrupts can be taken during interrupt entry, since interrupt entry does not automatically turn them off. The kernel will check whether the faulting instruction is between [interrupt_base_book3e, __end_interrupts], and if so clear MSR[DE] and return.

However, when the kernel is built with CONFIG_RELOCATABLE, it can't use LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e) and LOAD_REG_IMMEDIATE(r15,__end_interrupts), as these ignore relocation. Thus, if the kernel is actually running at a different address than it was built at, the address comparison will fail, and the exception entry code will hang at kernel_dbg_exc. r2 (the TOC) is also not usable here, as r2 still holds data from the interrupted context, so LOAD_REG_ADDR() doesn't work either. So we use name@got entries to load the addresses of the two labels directly.

The test program test.c is as follows:

        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
                if (access("/proc/sys/kernel/perf_event_paranoid", F_OK) == -1)
                        printf("Kernel doesn't have perf_event support\n");
        }

Steps to reproduce the bug, for example:
        1) ./gdb ./test
        2) (gdb) b access
        3) (gdb) r
        4) (gdb) s

Signed-off-by: Liu Hailong <[email protected]>
Signed-off-by: Jiang Xuexin <[email protected]>
Reviewed-by: Jiang Biao <[email protected]>
Reviewed-by: Liu Song <[email protected]>
Reviewed-by: Huang Jian <[email protected]>
[scottwood: cleaned up commit message, and specified bad behavior as a hang rather than an oops to correspond to mainline kernel behavior]
Fixes: 1cb6e0649248 ("powerpc/book3e: support CONFIG_RELOCATABLE")
Cc: <[email protected]> # 4.4.x-
Signed-off-by: Scott Wood <[email protected]>
2016-11-28  powerpc/64e: Don't branch to dot symbols  (Nicholas Piggin, 1 file, -3/+3)
This converts one that was missed by b1576fec7f4d ("powerpc: No need to use dot symbols when branching to a function").
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2016-07-09  powerpc32: provide VIRT_CPU_ACCOUNTING  (Christophe Leroy, 1 file, -2/+2)
This patch provides VIRT_CPU_ACCOUNTING to the PPC32 architecture. PPC32 doesn't have the PACA structure, so we use the thread_info structure to store the accounting data. In order to reuse the PPC64 functions on PPC32, all u64 data has been replaced by 'unsigned long', so that it is u32 on PPC32 and u64 on PPC64.
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
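A sketch of the word-sized accounting data the commit describes (the field set is illustrative, not the exact kernel struct):

        /* Sized to the native word: u32 on PPC32, u64 on PPC64. */
        struct cpu_accounting_data {
                unsigned long user_time;        /* accumulated user TB ticks   */
                unsigned long system_time;      /* accumulated system TB ticks */
                unsigned long starttime;        /* TB value at last entry      */
                unsigned long startspurr;       /* SPURR value at last entry   */
        };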
2016-06-14  powerpc: Various typo fixes  (Michael Ellerman, 1 file, -1/+1)
Signed-off-by: Andrea Gelmini <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
2015-10-27  powerpc/book3e: support CONFIG_RELOCATABLE  (Tiejun Chen, 1 file, -2/+7)
book3e is different from book3s: book3s includes the exception vectors code in head_64.S, as it relies on absolute addressing, which is only possible within this compilation unit. So we have to get that label address via the GOT. And when booting a relocated kernel, we should reset IVPR properly again after .relocate.
Signed-off-by: Tiejun Chen <[email protected]>
[scottwood: cleanup and ifdef removal]
Signed-off-by: Scott Wood <[email protected]>
2015-10-27  powerpc/book3e-64: rename interrupt_end_book3e with __end_interrupts  (Tiejun Chen, 1 file, -4/+4)
Rename 'interrupt_end_book3e' to '__end_interrupts' so that the symbol can be used by both book3s and book3e.
Signed-off-by: Tiejun Chen <[email protected]>
[scottwood: edit changelog]
Signed-off-by: Scott Wood <[email protected]>
2015-08-07  powerpc/fsl: Force coherent memory on e500mc derivatives  (Scott Wood, 1 file, -5/+8)
In CoreNet systems it is not allowed to mix M and non-M mappings to the same memory, and coherent DMA accesses are considered to be M mappings for this purpose. Ignoring this has been observed to cause hard lockups in non-SMP kernels on e6500.

Furthermore, e6500 implements the LRAT (logical to real address table), which allows KVM guests to control the WIMGE bits. This means that KVM cannot force the M bit on the way it usually does, so the guest had better set it itself.
Signed-off-by: Scott Wood <[email protected]>
2014-09-22  powerpc/booke: Revert SPE/AltiVec common defines for interrupt numbers  (Mihai Caraman, 1 file, -2/+2)
The Book3E specification defines shared interrupt numbers for the SPE and AltiVec units. Still, SPE is present in e200/e500v2 cores while AltiVec is present in the e6500 core, so we can currently decide at compile time which unit to support exclusively. As Alexander Graf suggested, this will improve code readability, especially in KVM. Use distinct defines to identify SPE/AltiVec interrupt numbers, reverting commits c58ce397 and 6b310fc5, which added the common defines.
Signed-off-by: Mihai Caraman <[email protected]>
Acked-by: Scott Wood <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2014-06-11  powerpc: Remove platforms/wsp and associated pieces  (Michael Ellerman, 1 file, -16/+0)
__attribute__ ((unused))

WSP is the last user of CONFIG_PPC_A2, so we remove that as well. Although CONFIG_PPC_ICSWX still exists, it's no longer selectable for any Book3E platform, so we can remove the code in mmu-book3e.h that depended on it.
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2014-04-23  powerpc: Remove dot symbol usage in exception macros  (Anton Blanchard, 1 file, -4/+4)
STD_EXCEPTION_COMMON, STD_EXCEPTION_COMMON_ASYNC and MASKABLE_EXCEPTION branch to the handler, so we can remove the explicit dot symbol and binutils will do the right thing.
Signed-off-by: Anton Blanchard <[email protected]>
2014-04-23  powerpc: Remove some unnecessary uses of _GLOBAL() and _STATIC()  (Anton Blanchard, 1 file, -2/+2)
There is no need to create a function descriptor for functions called locally out of assembly.
Signed-off-by: Anton Blanchard <[email protected]>
2014-04-23  powerpc: No need to use dot symbols when branching to a function  (Anton Blanchard, 1 file, -64/+64)
binutils is smart enough to know that a branch to a function descriptor is actually a branch to the function's text address. Alan tells me that binutils has been doing this for 9 years.
Signed-off-by: Anton Blanchard <[email protected]>
2014-03-19  powerpc/booke64: Critical and machine check exception support  (Scott Wood, 1 file, -59/+284)
Add special state saving for critical and machine check exceptions. Most of this code could be used to handle debug exceptions taken from kernel space, but actually doing so is outside the scope of this patch. The various critical and machine check exceptions now point to their real handlers, rather than hanging the kernel.
Signed-off-by: Scott Wood <[email protected]>
2014-03-19  powerpc/booke64: Add crit/mc/debug support to EXCEPTION_COMMON  (Scott Wood, 1 file, -30/+33)
Use the proper scratch SPRG and PACA region. Introduce level-specific macros to simplify usage and avoid needing to do a bunch of token pasting throughout EXCEPTION_COMMON(). Now that EXCEPTION_COMMON_DBG() properly uses the debug scratch register, there's no more need for the caller to move the value to the GEN scratch first.
Signed-off-by: Scott Wood <[email protected]>
2014-03-19  powerpc/booke64: Remove ints from EXCEPTION_COMMON  (Scott Wood, 1 file, -36/+46)
The ints parameter was used to optionally insert RECONCILE_IRQ_STATE into EXCEPTION_COMMON. However, since it came at the end of EXCEPTION_COMMON, there was no real benefit for it to be there as opposed to being called separately by the caller of EXCEPTION_COMMON.

The ints parameter was causing some hassle when trying to add an extra macro layer. Besides avoiding that, moving "ints" to the caller makes the code simpler by:
- avoiding the asymmetry where INTS_RESTORE_HARD is called separately by the individual exception, but INTS_DISABLE was not
- removing the no-op INTS_KEEP
- not having an unnecessary macro parameter

It also turned out to be necessary to delay the INTS_DISABLE in the case of special level exceptions until after we saved the old value of PACAIRQHAPPENED.
Signed-off-by: Scott Wood <[email protected]>
2014-03-19  powerpc/booke64: Use SPRG7 for VDSO  (Scott Wood, 1 file, -17/+2)
Previously SPRG3 was marked for use by both VDSO and critical interrupts (though critical interrupts were not fully implemented).

In commit 8b64a9dfb091f1eca8b7e58da82f1e7d1d5fe0ad ("powerpc/booke64: Use SPRG0/3 scratch for bolted TLB miss & crit int"), Mihai Caraman made an attempt to resolve this conflict by restoring the VDSO value early in the critical interrupt, but this has some issues:
- It's incompatible with EXCEPTION_COMMON, which restores r13 from the by-then-overwritten scratch (this cost me some debugging time).
- It forces critical exceptions to be a special case handled differently from even machine check and debug level exceptions.
- It didn't occur to me that it was possible to make this work at all (by doing a final "ld r13, PACA_EXCRIT+EX_R13(r13)") until after I made (most of) this patch. :-)

It might be worth investigating using a load rather than SPRG on return from all exceptions (except TLB misses, where the scratch never leaves the SPRG) -- it could save a few cycles. Until then, let's stick with SPRG for all exceptions.

Since we cannot use SPRG4-7 for scratch without corrupting the state of a KVM guest, move VDSO to SPRG7 on book3e. Since neither SPRG4-7 nor critical interrupts exist on book3s, SPRG3 is still used for VDSO there.
Signed-off-by: Scott Wood <[email protected]>
Cc: Mihai Caraman <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
2014-03-19  powerpc/booke64: Fix exception numbers  (Scott Wood, 1 file, -6/+6)
altivec_unavailable was commented as 0xf20 but the code uses 0x200. Note that 0xf20 is also used by ap_unavailable.
altivec_assist was commented as 0x1700 but the code uses 0x220.
critical_input was commented as 0x580 but the code uses 0x100.
machine_check was commented and implemented as 0x200, which conflicts with altivec_assist (it only builds because MC_EXCEPTION_PROLOG is commented out). Changed to the fixed IVOR value of 0x000.
Signed-off-by: Scott Wood <[email protected]>
2014-03-19  powerpc/book3e: store crit/mc/dbg exception thread info  (Tiejun Chen, 1 file, -3/+19)
We need to set up the thread_info for these exception stacks, like we already do for PPC32.
Signed-off-by: Tiejun Chen <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2014-01-10  powerpc: Replaced tlbilx with tlbwe in the initialization code  (Diana Craciun, 1 file, -8/+2)
On Freescale e6500 cores, EPCR[DGTMI] controls whether guest supervisor state can execute TLB management instructions. If EPCR[DGTMI]=0, tlbwe and tlbilx are allowed to execute normally in the guest state.

A hypervisor may choose to virtualize TLB1, and for this purpose it may use IPROT to protect the entries from being invalidated by the guest. However, because tlbwe and tlbilx execution in the guest state share the same bit, it is not possible to have a scenario where tlbwe is allowed to be executed in guest state while tlbilx traps. When guest TLB management instructions are allowed to be executed in guest state, the guest cannot use tlbilx to invalidate TLB1 guest entries.

Linux uses tlbilx in the boot code to invalidate the temporary entries it creates when initializing the MMU. This patch replaces the use of tlbilx in the initialization code with tlbwe with the VALID bit cleared. Linux also uses tlbilx in other contexts (like huge pages or indirect entries), but removing tlbilx from the initialization code offers the possibility to have scenarios under a hypervisor which don't use huge pages or indirect entries.
Signed-off-by: Diana Craciun <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2014-01-07  powerpc/booke64: Add LRAT error exception handler  (Mihai Caraman, 1 file, -0/+17)
The LRAT (Logical to Real Address Translation), present in MMU v2, provides hardware translation from a logical page number (LPN) to a real page number (RPN) when tlbwe is executed by a guest or when a page table translation occurs from a guest virtual address.

Add an LRAT error exception handler to the Book3E 64-bit kernel, and the basic KVM handler to avoid build breakage. This is a prerequisite for the KVM LRAT support that will follow.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2013-10-16  powerpc/booke64: Use common defines for AltiVec interrupt numbers  (Mihai Caraman, 1 file, -2/+3)
On Book3E some SPE/FP/AltiVec interrupts share the same number. Use common defines to identify these numbers.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2013-10-11  powerpc/booke64: Check napping in performance monitor interrupt  (Kevin Hao, 1 file, -0/+1)
The performance monitor interrupt is asynchronous, so we should check whether the current processor is napping in the handler of this interrupt.
Signed-off-by: Kevin Hao <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-08-14  powerpc/ppc64: Rename SOFT_DISABLE_INTS with RECONCILE_IRQ_STATE  (Tiejun Chen, 1 file, -2/+2)
SOFT_DISABLE_INTS seems an odd name for something that updates the software state to be consistent with interrupts being hard disabled, so rename SOFT_DISABLE_INTS to RECONCILE_IRQ_STATE to avoid this confusion.
Signed-off-by: Tiejun Chen <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-05-14  powerpc/booke64: Fix kernel hangs at kernel_dbg_exc  (Scott Wood, 1 file, -4/+4)
MSR_DE is not cleared on entry to the kernel, and we don't clear it explicitly outside of debug code. If we have MSR_DE set in prime_debug_regs(), and the new thread has events enabled in DBCR0 (e.g. ICMP is set in thread->dbcr0, even though it was cleared in the real DBCR0 when the thread got scheduled out), we'll end up taking a debug exception in the kernel when DBCR0 is loaded. DSRR0 will not point to an exception vector, and the kernel ends up hanging at kernel_dbg_exc. Fix this by always clearing MSR_DE when we load new debug state.

Another observed source of kernel_dbg_exc hangs is the branch-taken event. If this event is active, but we take a non-debug trap (e.g. a TLB miss or an asynchronous interrupt) before the next branch, we end up taking a branch-taken debug exception on the initial branch instruction of the exception vector. But because the debug exception is DBSR_BT rather than DBSR_IC, we branch to kernel_dbg_exc before even checking the DSRR0 address. Fix this by checking for DBSR_BT as well as DBSR_IC, which is what 32-bit does and what the comments suggest was intended in the 64-bit code as well.
Signed-off-by: Scott Wood <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-03-12  powerpc/85xx: Add AltiVec support for e6500  (Kumar Gala, 1 file, -0/+47)
The e6500 core adds support for AltiVec on a Book-E class processor. Connect up all the various exception handling code and build config mechanisms to allow userspace apps to utilize AltiVec.
Signed-off-by: Kumar Gala <[email protected]>
2013-01-10  powerpc: Move branch instruction from ACCOUNT_CPU_USER_ENTRY to caller  (Haren Myneni, 1 file, -1/+2)
The first instruction in ACCOUNT_CPU_USER_ENTRY is 'beq', which checks for exceptions coming from kernel mode. The PPR value is saved immediately after ACCOUNT_CPU_USER_ENTRY, and only for user-level exceptions. So move this branch instruction into the caller code.
Signed-off-by: Haren Myneni <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2012-09-12  powerpc/booke: Separate out restore_e5500/setup_e5500 routines.  (Varun Sethi, 1 file, -16/+2)
For the 64-bit case, separate out the e5500 cpu_setup and cpu_restore functions. The cpu_setup function (for the primary core) is passed the cpu_spec pointer, which is not there in the case of the cpu_restore function. Also, in our case we will have to manipulate the CPU_FTR_EMB_HV flag on the primary core.
Signed-off-by: Varun Sethi <[email protected]>
Signed-off-by: Kumar Gala <[email protected]>
2012-09-07  powerpc: Restore VDSO information on critical exception on BookE  (Mihai Caraman, 1 file, -1/+3)
Critical exceptions on 64-bit BookE use the user-visible SPRG3 as scratch. Restore the VDSO information in SPRG3 in the exception prolog. Use a common sprg3 field in the PACA for all powerpc64 architectures.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2012-09-05  powerpc/booke64: Use SPRG0/3 scratch for bolted TLB miss & crit int  (Mihai Caraman, 1 file, -2/+15)
The Embedded.Hypervisor category defines GSPRG0..3 physical registers for guests. Avoid SPRG4-7 usage as scratch in host exception handlers; otherwise guest SPRG4-7 registers will be clobbered.

For the bolted TLB miss exception handlers, which is the version currently supported by KVM, use SPRN_SPRG_GEN_SCRATCH (aka SPRG0) instead of SPRN_SPRG_TLB_SCRATCH (aka SPRG6), and keep using the TLB PACA slots to fit in one 64-byte cache line. For critical exception handlers, use SPRG3 instead of SPRG7.

Provide a routine to store and restore user-visible SPRGs. This will subsequently be used to restore VDSO information in SPRG3. Add EX_R13 to the PACA slots to free up SPRG3, and change the critical exception epilog to use it.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2012-09-05  powerpc/booke64: Remove mfspr srr1 duplicate in exception prolog  (Mihai Caraman, 1 file, -35/+32)
Refactor the exception prolog to get rid of a duplicate mfspr srr1. The duplicate was introduced by the KVM integration, with the DO_KVM macro logic expecting srr1's value earlier, in r11. Reserve r11 to hold srr1's value, which is also required at the end of the prolog, and free up r10 to serve as a spare register for the additional macros.

For the syscall case this change does not add any performance penalty. For the irq soft-disabled case, the change adds a store/load of the condition register value to/from a PACA slot. PACA slots fit in one 64-byte cache line, so these additional operations have little impact on performance.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2012-09-05  powerpc/booke64: Add DO_KVM kernel hooks  (Mihai Caraman, 1 file, -34/+59)
Hook the DO_KVM macro into 64-bit booke for KVM integration. Extend the interrupt handlers' parameter list with interrupt vector numbers to accommodate the macro. Only the bolted version of the TLB miss handlers is addressed now.
Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>