path: root/arch/powerpc
Age | Commit message | Author | Files | Lines
2013-10-17 | KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu | Paul Mackerras | 9 files | -239/+162
Currently PR-style KVM keeps the volatile guest register values (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two places, a kmalloc'd struct and in the PACA, and it gets copied back and forth in kvmppc_core_vcpu_load/put(), because the real-mode code can't rely on being able to access the kmalloc'd struct.

This changes the code to copy the volatile values into the shadow_vcpu as one of the last things done before entering the guest. Similarly the values are copied back out of the shadow_vcpu to the kvm_vcpu immediately after exiting the guest. We arrange for interrupts to be still disabled at this point so that we can't get preempted on 64-bit and end up copying values from the wrong PACA.

This means that the accessor functions in kvm_book3s.h for these registers are greatly simplified, and are the same between PR and HV KVM. In places where accesses to shadow_vcpu fields are now replaced by accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs. Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.

With this, the time to read the PVR one million times in a loop went from 567.7ms to 575.5ms (averages of 6 values), an increase of about 1.4% for this worst-case test for guest entries and exits. The standard deviation of the measurements is about 11ms, so the difference is only marginally significant statistically.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
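In outline, the copy-in step described above might look like this (a minimal sketch; the function and field names follow the description but are assumptions, not a verbatim excerpt):

    /* Sketch: copy the volatile guest registers into the shadow vcpu
     * just before guest entry; a matching copy-out runs right after
     * exit, while interrupts are still disabled. */
    void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
                              struct kvm_vcpu *vcpu)
    {
            int i;

            for (i = 0; i <= 13; i++)               /* R0 - R13 */
                    svcpu->gpr[i] = vcpu->arch.gpr[i];
            svcpu->cr  = vcpu->arch.cr;
            svcpu->xer = vcpu->arch.xer;
            svcpu->ctr = vcpu->arch.ctr;
            svcpu->lr  = vcpu->arch.lr;
            svcpu->pc  = vcpu->arch.pc;
    }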
2013-10-17 | KVM: PPC: Book3S PR: Fix compilation without CONFIG_ALTIVEC | Paul Mackerras | 1 file | -0/+2
Commit 9d1ffdd8f3 ("KVM: PPC: Book3S PR: Don't corrupt guest state when kernel uses VMX") added a call to kvmppc_load_up_altivec() that isn't guarded by CONFIG_ALTIVEC, causing a link failure when building a kernel without CONFIG_ALTIVEC set. This adds an #ifdef to fix this.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
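The shape of the fix is the usual conditional-compilation guard, roughly (an illustrative sketch; the exact call site is assumed):

    #ifdef CONFIG_ALTIVEC
            if (msr & MSR_VEC)
                    kvmppc_load_up_altivec();
    #endif /* CONFIG_ALTIVEC */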
2013-10-17 | KVM: PPC: Book3S HV: Don't crash host on unknown guest interrupt | Paul Mackerras | 1 file | -1/+1
If we come out of a guest with an interrupt that we don't know about, instead of crashing the host with a BUG(), we now return to userspace with the exit reason set to KVM_EXIT_UNKNOWN and the trap vector in the hw.hardware_exit_reason field of the kvm_run structure, as is done on x86. Note that run->exit_reason is already set to KVM_EXIT_UNKNOWN at the beginning of kvmppc_handle_exit().

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
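A minimal sketch of the resulting default case in the exit handler (variable names are assumptions):

    default:
            /* Pass the unknown trap vector to userspace instead of BUG();
             * run->exit_reason is already KVM_EXIT_UNKNOWN here. */
            run->hw.hardware_exit_reason = vcpu->arch.trap;
            r = RESUME_HOST;
            break;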
2013-10-17 | KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 | Paul Mackerras | 6 files | -2/+66
This enables us to use the Processor Compatibility Register (PCR) on POWER7 to put the processor into architecture 2.05 compatibility mode when running a guest. In this mode the new instructions and registers that were introduced on POWER7 are disabled in user mode. This includes all the VSX facilities plus several other instructions such as ldbrx, stdbrx, popcntw, popcntd, etc.

To select this mode, we have a new register accessible through the set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting this to zero gives the full set of capabilities of the processor. Setting it to one of the "logical" PVR values defined in PAPR puts the vcpu into the compatibility mode for the corresponding architecture level. The supported values are:

    0x0f000002    Architecture 2.05 (POWER6)
    0x0f000003    Architecture 2.06 (POWER7)
    0x0f100003    Architecture 2.06+ (POWER7+)

Since the PCR is per-core, the architecture compatibility level and the corresponding PCR value are stored in the struct kvmppc_vcore, and are therefore shared between all vcpus in a virtual core.

Signed-off-by: Paul Mackerras <[email protected]>
[agraf: squash in fix to add missing break statements and documentation]
Signed-off-by: Alexander Graf <[email protected]>
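From userspace, selecting a compatibility mode would look roughly like this via the one_reg interface (a sketch; vcpu_fd and the lack of error handling are assumptions):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdio.h>

    /* Put the vcpu into architecture 2.05 (POWER6) compatibility mode */
    __u32 compat_pvr = 0x0f000002;
    struct kvm_one_reg reg = {
            .id   = KVM_REG_PPC_ARCH_COMPAT,
            .addr = (__u64)(unsigned long)&compat_pvr,
    };

    if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
            perror("KVM_SET_ONE_REG");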
2013-10-17 | KVM: PPC: Book3S HV: Add support for guest Program Priority Register | Paul Mackerras | 7 files | -1/+30
POWER7 and later IBM server processors have a register called the Program Priority Register (PPR), which controls the priority of each hardware CPU SMT thread, and affects how fast it runs compared to other SMT threads. This priority can be controlled by writing to the PPR or by use of a set of instructions of the form or rN,rN,rN which are otherwise no-ops but have been defined to set the priority to particular levels.

This adds code to context switch the PPR when entering and exiting guests and to make the PPR value accessible through the SET/GET_ONE_REG interface. When entering the guest, we set the PPR as late as possible, because if we are setting a low thread priority it will make the code run slowly from that point on. Similarly, the first-level interrupt handlers save the PPR value in the PACA very early on, and set the thread priority to the medium level, so that the interrupt handling code runs at a reasonable speed.

Acked-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
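The priority-setting no-ops referred to are the "or rN,rN,rN" encodings; for example (these encodings are architecturally defined, shown here for illustration):

    /* Priority-setting forms of "or rN,rN,rN": architecturally no-ops,
     * but they change the SMT thread priority as a side effect. */
    asm volatile("or 31,31,31");    /* very low priority */
    asm volatile("or 1,1,1");       /* low priority */
    asm volatile("or 2,2,2");       /* medium (normal) priority */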
2013-10-17 | KVM: PPC: Book3S HV: Store LPCR value for each virtual core | Paul Mackerras | 8 files | -17/+74
This adds the ability to have a separate LPCR (Logical Partitioning Control Register) value relating to a guest for each virtual core, rather than only having a single value for the whole VM. This corresponds to what real POWER hardware does, where there is a LPCR per CPU thread but most of the fields are required to have the same value on all active threads in a core.

The per-virtual-core LPCR can be read and written using the GET/SET_ONE_REG interface. Userspace can only modify the following fields of the LPCR value:

    DPFD    Default prefetch depth
    ILE     Interrupt little-endian
    TC      Translation control (secondary HPT hash group search disable)

We still maintain a per-VM default LPCR value in kvm->arch.lpcr, which contains bits relating to memory management, i.e. the Virtualized Partition Memory (VPM) bits and the bits relating to guest real mode. When this default value is updated, the update needs to be propagated to the per-vcore values, so we add a kvmppc_update_lpcr() helper to do that.

Signed-off-by: Paul Mackerras <[email protected]>
[agraf: fix whitespace]
Signed-off-by: Alexander Graf <[email protected]>
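A sketch of what such a helper could look like (field names assumed from the description; the real code would also take the appropriate locks):

    /* Sketch: push updated default-LPCR bits out to every vcore.
     * 'mask' selects which bits of the per-vcore values to replace. */
    static void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
                                   unsigned long mask)
    {
            int i;
            struct kvm_vcpu *vcpu;

            kvm->arch.lpcr = (kvm->arch.lpcr & ~mask) | lpcr;
            kvm_for_each_vcpu(i, vcpu, kvm) {
                    struct kvmppc_vcore *vc = vcpu->arch.vcore;

                    if (vc)
                            vc->lpcr = (vc->lpcr & ~mask) | lpcr;
            }
    }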
2013-10-17 | KVM: PPC: BookE: Add GET/SET_ONE_REG interface for VRSAVE | Paul Mackerras | 1 file | -0/+6
This makes the VRSAVE register value for a vcpu accessible through the GET/SET_ONE_REG interface on Book E systems (in addition to the existing GET/SET_SREGS interface), for consistency with Book 3S.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Avoid unbalanced increments of VPA yield count | Paul Mackerras | 1 file | -10/+10
The yield count in the VPA is supposed to be incremented every time we enter the guest, and every time we exit the guest, so that its value is even when the vcpu is running in the guest and odd when it isn't. However, it's currently possible that we increment the yield count on the way into the guest but then find that other CPU threads are already exiting the guest, so we go back to nap mode via the secondary_too_late label. In this situation we don't increment the yield count again, breaking the relationship between the LSB of the count and whether the vcpu is in the guest.

To fix this, we move the increment of the yield count to a point after we have checked whether other CPU threads are exiting.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Pull out interrupt-reading code into a subroutine | Paul Mackerras | 1 file | -49/+68
This moves the code in book3s_hv_rmhandlers.S that reads any pending interrupt from the XICS interrupt controller, and works out whether it is an IPI for the guest, an IPI for the host, or a device interrupt, into a new function called kvmppc_read_intr. Later patches will need this.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Restructure kvmppc_hv_entry to be a subroutine | Paul Mackerras | 1 file | -166/+178
We have two paths into and out of the low-level guest entry and exit code: from a vcpu task via kvmppc_hv_entry_trampoline, and from the system reset vector for an offline secondary thread on POWER7 via kvm_start_guest. Currently both just branch to kvmppc_hv_entry to enter the guest, and on guest exit, we test the vcpu physical thread ID to detect which way we came in and thus whether we should return to the vcpu task or go back to nap mode.

In order to make the code flow clearer, and to keep the code relating to each flow together, this turns kvmppc_hv_entry into a subroutine that follows the normal conventions for call and return. This means that kvmppc_hv_entry_trampoline() and kvmppc_hv_entry() now establish normal stack frames, and we use the normal stack slots for saving return addresses rather than local_paca->kvm_hstate.vmhandler. Apart from that this is mostly moving code around unchanged.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Implement H_CONFER | Paul Mackerras | 1 file | -0/+9
The H_CONFER hypercall is used when a guest vcpu is spinning on a lock held by another vcpu which has been preempted, and the spinning vcpu wishes to give its timeslice to the lock holder. We implement this in the straightforward way using kvm_vcpu_yield_to().

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
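In outline, the hypercall handler can be this simple (a sketch; kvmppc_find_vcpu() and the surrounding switch statement are assumed context):

    case H_CONFER:
            /* Guest r4 names the vcpu that holds the lock */
            target = kvmppc_get_gpr(vcpu, 4);
            tvcpu = kvmppc_find_vcpu(vcpu->kvm, target);
            if (!tvcpu) {
                    ret = H_PARAMETER;
                    break;
            }
            kvm_vcpu_yield_to(tvcpu);       /* donate our timeslice */
            break;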
2013-10-17 | KVM: PPC: Book3S: Add GET/SET_ONE_REG interface for VRSAVE | Paul Mackerras | 2 files | -0/+12
The VRSAVE register value for a vcpu is accessible through the GET/SET_SREGS interface for Book E processors, but not for Book 3S processors. In order to make this accessible for Book 3S processors, this adds a new register identifier for GET/SET_ONE_REG, and adds the code to implement it.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Implement timebase offset for guests | Paul Mackerras | 6 files | -10/+56
This allows guests to have a different timebase origin from the host. This is needed for migration, where a guest can migrate from one host to another and the two hosts might have a different timebase origin. However, the timebase seen by the guest must not go backwards, and should go forwards only by a small amount corresponding to the time taken for the migration.

Therefore this provides a new per-vcpu value accessed via the one_reg interface using the new KVM_REG_PPC_TB_OFFSET identifier. This value defaults to 0 and is not modified by KVM. On entering the guest, this value is added onto the timebase, and on exiting the guest, it is subtracted from the timebase.

This is only supported for recent POWER hardware which has the TBU40 (timebase upper 40 bits) register. Writing to the TBU40 register only alters the upper 40 bits of the timebase, leaving the lower 24 bits unchanged. This provides a way to modify the timebase for guest migration without disturbing the synchronization of the timebase registers across CPU cores. The kernel rounds up the value given to a multiple of 2^24.

Timebase values stored in KVM structures (struct kvm_vcpu, struct kvmppc_vcore, etc.) are stored as host timebase values. The timebase values in the dispatch trace log need to be guest timebase values, however, since that is read directly by the guest. This moves the setting of vcpu->arch.dec_expires on guest exit to a point after we have restored the host timebase so that vcpu->arch.dec_expires is a host timebase value.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
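Since TBU40 can only change the upper 40 bits of the 64-bit timebase, the rounding amounts to this (a sketch; new_offset is a placeholder name):

    /* Round the requested offset up to a multiple of 2^24, because
     * TBU40 writes cannot touch the low 24 bits of the timebase. */
    vcpu->arch.vcore->tb_offset = ALIGN(new_offset, 1UL << 24);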
2013-10-17 | KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers | Paul Mackerras | 4 files | -0/+24
Currently we are not saving and restoring the SIAR and SDAR registers in the PMU (performance monitor unit) on guest entry and exit. The result is that performance monitoring tools in the guest could get false information about where a program was executing and what data it was accessing at the time of a performance monitor interrupt. This fixes it by saving and restoring these registers along with the other PMU registers on guest entry/exit. This also provides a way for userspace to access these values for a vcpu via the one_reg interface.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
2013-10-17 | KVM: PPC: Book3S HV: Reserve POWER8 space in get/set_one_reg | Michael Neuling | 1 file | -0/+54
This reserves space in the get/set_one_reg ioctl for the extra guest state needed for POWER8. It doesn't implement these at all, it just reserves them so that the ABI is defined now.

A few things to note here:

- This adds *a lot* of state for transactional memory. In TM suspend mode this is unavoidable: you can't simply roll back all transactions and store only the checkpointed state. I've added this all to get/set_one_reg (including GPRs) rather than creating a new ioctl which returns a struct kvm_regs like KVM_GET_REGS does. This means that if we need to extract the TM state, we are going to need a bucket load of IOCTLs. Hopefully most of the time this will not be needed as we can look at the MSR to see if TM is active and only grab them when needed. If this becomes a bottleneck in future we can add another ioctl to grab all this state in one go.

- The TM state is offset by 0x80000000.

- For TM, I've done away with VMX and FP and created a single 64x128 bit VSX register space.

- I've left a space of 1 (at 0x9c) since Paulus needs to add a value which applies to POWER7 as well.

Signed-off-by: Michael Neuling <[email protected]>
Signed-off-by: Alexander Graf <[email protected]>
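As an illustration of the numbering scheme (hypothetical macro names; only the 0x80000000 TM offset comes from the text above):

    /* Hypothetical: TM-checkpointed registers live at id | 0x80000000,
     * composed with the standard one_reg arch and size fields. */
    #define KVM_REG_PPC_TM          0x80000000ULL
    #define KVM_REG_PPC_TM_GPR(n)   (KVM_REG_PPC | KVM_REG_SIZE_U64 | \
                                     KVM_REG_PPC_TM | (n))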
2013-10-16 | powerpc: Emulate sync instruction variants | James Yang | 2 files | -0/+9
Reserved fields of the sync instruction have been used for other instructions (e.g. lwsync). On processors that do not support variants of the sync instruction, emulate it by executing a sync to subsume the effect of the intended instruction.

Signed-off-by: James Yang <[email protected]>
[[email protected]: whitespace and subject line fix]
Signed-off-by: Scott Wood <[email protected]>
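Conceptually, the emulation is just the following (a sketch; the opcode-test macro and the surrounding decode plumbing are assumptions):

    /* Sketch: on an illegal-instruction trap for a sync variant
     * (e.g. lwsync on a core that lacks it), execute a plain sync,
     * which subsumes its effect, then step past the instruction. */
    if ((instword & INST_SYNC_MASK) == INST_SYNC) {
            asm volatile("sync" : : : "memory");
            regs->nip += 4;
            return 0;       /* emulated successfully */
    }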
2013-10-16 | powerpc/fsl-booke: Use common defines for SPE/FP interrupt numbers | Mihai Caraman | 1 file | -5/+5
On Book3E some SPE/FP/AltiVec interrupts share the same number. Use common defines to identify these numbers.

Signed-off-by: Mihai Caraman <[email protected]>
[[email protected]: fixed space-before-tab]
Signed-off-by: Scott Wood <[email protected]>
2013-10-16 | powerpc/booke64: Use common defines for AltiVec interrupt numbers | Mihai Caraman | 1 file | -2/+3
On Book3E some SPE/FP/AltiVec interrupts share the same number. Use common defines to identify these numbers.

Signed-off-by: Mihai Caraman <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2013-10-16 | powerpc: remove dependency on MV64360 | Paul Bolle | 1 file | -1/+1
The Kconfig entry that allows one to "Distribute interrupts on all CPUs by default" has a (negative) dependency on MV64360. But that Kconfig symbol was removed in v2.6.27, which means that this dependency has evaluated to true ever since. It can be removed too.

Signed-off-by: Paul Bolle <[email protected]>
Signed-off-by: Scott Wood <[email protected]>
2013-10-14 | doc: typo on word accounting in kprobes.c in multiple architectures | Anoop Thomas Mathew | 1 file | -1/+1
Signed-off-by: Anoop Thomas Mathew <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
2013-10-14 | treewide: fix "distingush" typo | Michael Opdenacker | 1 file | -1/+1
Signed-off-by: Michael Opdenacker <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
2013-10-14 | KVM: PPC: Get rid of KVM_HPAGE defines | Christoffer Dall | 1 file | -5/+0
Now that the main kvm code relying on these defines has been moved to the x86-specific part of the world, we can get rid of these.

Signed-off-by: Christoffer Dall <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
2013-10-11 | Merge branch 'for-kvm' into next | Benjamin Herrenschmidt | 24 files | -383/+410
Topic branch for commits that the KVM tree might want to pull in separately. Hand-merged a few files due to conflicts with the LE stuff.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: Provide for giveup_fpu/altivec to save state in alternate location | Paul Mackerras | 6 files | -3/+71
This provides a facility which is intended for use by KVM, where the contents of the FP/VSX and VMX (Altivec) registers can be saved away to somewhere other than the thread_struct when kernel code wants to use floating point or VMX instructions. This is done by providing a pointer in the thread_struct to indicate where the state should be saved to. The giveup_fpu() and giveup_altivec() functions test these pointers and save state to the indicated location if they are non-NULL. Note that the MSR_FP/VEC bits in task->thread.regs->msr are still used to indicate whether the CPU register state is live, even when an alternate save location is being used.

This also provides load_fp_state() and load_vr_state() functions, which load up FP/VSX and VMX state from memory into the CPU registers, and corresponding store_fp_state() and store_vr_state() functions, which store FP/VSX and VMX state into memory from the CPU registers.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
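In outline (a sketch; the pointer names follow the description above but are assumptions here):

    /* Sketch: thread_struct gains pointers naming an alternate save area.
     * When NULL, giveup_fpu()/giveup_altivec() save to the thread_struct
     * as before; when non-NULL, they save to the indicated location.
     * MSR_FP/MSR_VEC in task->thread.regs->msr still track whether the
     * CPU registers hold live state. */
    struct thread_struct {
            /* ... */
            struct thread_fp_state *fp_save_area;
            struct thread_vr_state *vr_save_area;
            /* ... */
    };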
2013-10-11 | powerpc: Put FP/VSX and VR state into structures | Paul Mackerras | 17 files | -358/+200
This creates new 'thread_fp_state' and 'thread_vr_state' structures to store FP/VSX state (including FPSCR) and Altivec/VSX state (including VSCR), and uses them in the thread_struct. In the thread_fp_state, the FPRs and VSRs are represented as u64 rather than double, since we rarely perform floating-point computations on the values, and this will enable the structures to be used in KVM code as well. Similarly FPSCR is now a u64 rather than a structure of two 32-bit values.

This takes the offsets out of the macros such as SAVE_32FPRS, REST_32FPRS, etc. This enables the same macros to be used for normal and transactional state, enabling us to delete the transactional versions of the macros.

This also removes the unused do_load_up_fpu and do_load_up_altivec, which were in fact buggy since they didn't create large enough stack frames to account for the fact that load_up_fpu and load_up_altivec are not designed to be called from C and assume that their caller's stack frame is an interrupt frame.

Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
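The new structures would look roughly like this (a sketch based on the description; the alignment attributes are assumptions):

    struct thread_fp_state {
            u64     fpr[32][TS_FPRWIDTH] __attribute__((aligned(16)));
            u64     fpscr;  /* now a plain u64, not two 32-bit halves */
    };

    struct thread_vr_state {
            vector128       vr[32] __attribute__((aligned(16)));
            vector128       vscr __attribute__((aligned(16)));
    };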
2013-10-11 | powerpc: add real mode support for dma operations on powernv | Alexey Kardashevskiy | 4 files | -19/+87
The existing TCE machine calls (tce_build and tce_free) only support virtual mode, as they call __raw_writeq for TCE invalidation, which fails in real mode. This introduces tce_build_rm and tce_free_rm real mode versions, which do mostly the same but use the "Store Doubleword Caching Inhibited Indexed" instruction for TCE invalidation. This new feature is going to be utilized by real mode support of VFIO.

Signed-off-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: Prepare to support kernel handling of IOMMU map/unmap | Alexey Kardashevskiy | 2 files | -1/+52
The current VFIO-on-POWER implementation supports only user mode driven mapping, i.e. QEMU is sending requests to map/unmap pages. However this approach is really slow, so we want to move that to KVM. Since H_PUT_TCE can be extremely performance sensitive (especially with network adapters where each packet needs to be mapped/unmapped) we chose to implement that as a "fast" hypercall directly in "real mode" (processor still in the guest context but MMU off).

To be able to do that, we need to provide some facilities to access the struct page count within that real mode environment, as things like the sparsemem vmemmap mappings aren't accessible. This adds an API function realmode_pfn_to_page() to get the page struct when the MMU is off.

This adds to MM a new function put_page_unless_one() which drops a page if the counter is bigger than 1. It is going to be used when the MMU is off (for example, real mode on PPC64) and we want to make sure that page release will not happen in real mode, as it may crash the kernel in a horrible way. CONFIG_SPARSEMEM_VMEMMAP and CONFIG_FLATMEM are supported.

Cc: [email protected]
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Andrew Morton <[email protected]>
Reviewed-by: Paul Mackerras <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
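The helper's behaviour can be captured in a few lines (a sketch mirroring get_page_unless_zero(); whether this matches the final implementation exactly is not shown in the text above):

    /* Sketch: drop a reference only if it is NOT the last one, so the
     * final free can never happen while we are in real mode.
     * Returns nonzero if the reference was dropped. */
    static inline int put_page_unless_one(struct page *page)
    {
            return atomic_add_unless(&page->_count, -1, 1);
    }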
2013-10-11 | powerpc/eeh: Reorder output messages | Gavin Shan | 1 file | -3/+3
We already had some output messages from the EEH core. Occasionally, we can see the output messages from the EEH core before the stack dump, which is not what we expect. The patch fixes that and shows the stack dump prior to the output messages from the EEH core.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/eeh: Output PHB3 diag-data | Gavin Shan | 2 files | -0/+135
The patch adds function ioda_eeh_phb3_phb_diag() to dump PHB3 PHB diag-data. That's called while detecting informative errors or frozen PE on the specific PHB.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/powernv: Double size of log blob | Gavin Shan | 1 file | -1/+1
Each PHB instance (struct pnv_phb) has a corresponding log blob, which is used to hold the error log retrieved from firmware. The current size of that (4096) isn't enough for the PHB3 case, so the patch doubles it to 8192.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/eeh: Output error number | Gavin Shan | 1 file | -2/+2
The patch prints the error number when we fail to retrieve the error log from firmware. It's helpful for debugging.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/powernv: Support inbound error injection | Gavin Shan | 1 file | -9/+50
For now, we only support outbound error injection, although the hardware supports injecting inbound errors as well. The patch adds support for injecting inbound errors.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/powernv: Enable EEH for PHB3 | Gavin Shan | 2 files | -19/+12
EEH isn't enabled for PHB3 yet; the patch enables it.

Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/scom: Use "devspec" rather than "path" in debugfs entries | Benjamin Herrenschmidt | 1 file | -1/+1
"devspec" is the traditional name for a device-tree path, used in sysfs; do the same for the XSCOM debugfs files.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/scom: CONFIG_SCOM_DEBUGFS should depend on CONFIG_DEBUG_FS | Benjamin Herrenschmidt | 1 file | -1/+1
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/powernv: Add scom support under OPALv3 | Benjamin Herrenschmidt | 3 files | -0/+107
OPAL v3 provides interfaces to access the chips' XSCOM; expose this via the existing scom infrastructure.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/scom: Create debugfs files using ibm,chip-id if available | Benjamin Herrenschmidt | 1 file | -2/+7
When creating the debugfs scom files, use "ibm,chip-id" as the scom%d index rather than a simple made-up number when possible.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/scom: Add support for "reg" property | Benjamin Herrenschmidt | 1 file | -5/+17
When devices are direct children of a scom controller node, they should be able to use the normal "reg" property instead of "scom-reg". In that case, they also use #address-cells rather than #scom-cells to indicate the size of an entry.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/scom: Change scom_read() and scom_write() to return errors | Benjamin Herrenschmidt | 5 files | -23/+46
scom_read() now returns the read value via a pointer argument, and both functions return an int error code.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
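The resulting interface is shaped like this (a sketch of the prototypes implied by the text; parameter names are assumptions):

    /* Sketch: error-returning SCOM accessors. Zero on success,
     * negative on error; the value read comes back via *value. */
    int scom_read(scom_map_t map, u64 reg, u64 *value);
    int scom_write(scom_map_t map, u64 reg, u64 value);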
2013-10-11 | powerpc: Enable /dev/port when isa_io_special is set | Benjamin Herrenschmidt | 1 file | -1/+1
isa_io_special is set when the platform provides a "special" implementation of inX/outX via some FW interface, for example. Such a platform doesn't need an ISA bridge on PCI, and so /dev/port should be made available even if one isn't present. This makes the LPC bus IOs accessible via /dev/port on PowerNV Power8.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: Make ftrace endian-safe. | Eugene Surovegin | 1 file | -0/+4
Signed-off-by: Eugene Surovegin <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: Make kernel module helper endian-safe. | Eugene Surovegin | 1 file | -0/+16
Signed-off-by: Eugene Surovegin <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: prom_init exception when updating core value | Laurent Dufour | 1 file | -6/+22
Since the CPU generates an exception when accessing an unaligned word, and as this exception is not yet handled when running prom_init, data should be copied from the architecture vector byte by byte.

Signed-off-by: Laurent Dufour <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/booke64: Check napping in performance monitor interrupt | Kevin Hao | 1 file | -0/+1
The performance monitor interrupt is asynchronous, so we should check whether the current processor is napping in the handler for this interrupt.

Signed-off-by: Kevin Hao <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/kernel: Fix endian issue in rtas_pci | Cedric Le Goater | 1 file | -3/+3
Signed-off-by: Cédric Le Goater <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/pseries: Implement arch_get_random_long() based on H_RANDOM | Michael Ellerman | 2 files | -1/+45
Add support for the arch_get_random_long() hook based on the H_RANDOM hypervisor call. We trust the hypervisor to provide us with random data, i.e. we don't whiten it in any way.

Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
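The pseries hook boils down to a single hypercall (a sketch; plpar_hcall() and H_RANDOM are the standard PAPR interfaces, the function name is an assumption):

    /* Sketch: fetch one long of entropy from the hypervisor.
     * Returns 1 on success, 0 on failure, per the
     * arch_get_random_long() convention. */
    static int pseries_get_random_long(unsigned long *v)
    {
            unsigned long retbuf[PLPAR_HCALL_BUFSIZE];

            if (plpar_hcall(H_RANDOM, retbuf) == H_SUCCESS) {
                    *v = retbuf[0];
                    return 1;
            }
            return 0;
    }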
2013-10-11 | powerpc: Implement arch_get_random_long/int() for powernv | Michael Ellerman | 6 files | -1/+163
Add the plumbing to implement arch_get_random_long/int(). It didn't seem worth adding an extra ppc_md hook for int, so we reuse the one for long. Add an implementation for powernv based on the hwrng found in POWER7+ systems. We whiten the output of the hwrng, and the result passes all the dieharder tests.

Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc: Added __cmpdi2 for signed 64bit comparison | Bharat Bhushan | 2 files | -0/+16
This was missing on powerpc and I am getting a compilation error:

    drivers/vfio/pci/vfio_pci_rdwr.c:193: undefined reference to `__cmpdi2'
    drivers/vfio/pci/vfio_pci_rdwr.c:193: undefined reference to `__cmpdi2'

Signed-off-by: Bharat Bhushan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
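__cmpdi2 is the libgcc helper for signed 64-bit comparison on 32-bit targets; its contract is to return 0, 1, or 2 for less-than, equal, and greater-than. A freestanding sketch of those semantics:

    /* Sketch: libgcc-style signed 64-bit comparison.
     * Returns 0 if a < b, 1 if a == b, 2 if a > b. */
    int __cmpdi2(long long a, long long b)
    {
            if (a < b)
                    return 0;
            if (a > b)
                    return 2;
            return 1;
    }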
2013-10-11 | powerpc: Fix section mismatch warning in free_lppacas | Vladimir Murzin | 1 file | -3/+3
While cross-building for PPC64 I've got a bunch of warnings:

    WARNING: arch/powerpc/kernel/built-in.o(.text.unlikely+0x2d2): Section mismatch in reference from the function .free_lppacas() to the variable .init.data:lppaca_size
    The function .free_lppacas() references the variable __initdata lppaca_size. This is often because .free_lppacas lacks a __initdata annotation or the annotation of lppaca_size is wrong.

Fix it by using the proper annotation for free_lppacas. Additionally, annotate {allocate,new}_lppacas properly.

Signed-off-by: Vladimir Murzin <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2013-10-11 | powerpc/ppc64: Remove the unneeded load of ti_flags in resume_kernel | Kevin Hao | 1 file | -3/+1
We already load the values of current_thread_info and ti_flags into r9 and r4 respectively before jumping to resume_kernel, so there is no reason to reload them again.

Signed-off-by: Kevin Hao <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>