path: root/arch/powerpc/include
2020-07-23  powerpc/powernv: Machine check handler for POWER10  (Nicholas Piggin; 1 file, -0/+1)
Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-23  powerpc: Select ARCH_HAS_MEMBARRIER_SYNC_CORE  (Nicholas Piggin; 2 files, -1/+13)
The powerpc return-from-interrupt and return-from-system-call sequences are context synchronising. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-23  powerpc/test_emulate_step: Move extern declaration to sstep.h  (Balamuruhan S; 1 file, -0/+2)
Fix checkpatch.pl warnings by moving the extern declaration from the source file to the header file. Signed-off-by: Balamuruhan S <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-23  powerpc/sstep: Introduce macros to retrieve Prefix instruction operands  (Balamuruhan S; 1 file, -0/+4)
Retrieve the prefix instruction operands RA and the PC-relative bit R using macros, and adopt them in sstep.c and test_emulate_step.c. Signed-off-by: Balamuruhan S <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
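A minimal sketch of what such operand accessors can look like; the exact names and bit positions here are illustrative, taken from the ISA v3.1 prefix word layout:

    /* Illustrative sketch: pull RA and the PC-relative R bit out of
     * the 32-bit prefix word of a prefixed instruction. */
    #define GET_PREFIX_RA(i)    (((i) >> 16) & 0x1f)
    #define GET_PREFIX_R(i)     ((i) & (1ul << 20))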
2020-07-23  powerpc: Add a ppc_inst_as_str() helper  (Jordan Niethe; 1 file, -0/+19)
There are quite a few places where instructions are printed, and this is done using a '%x' format specifier. With the introduction of prefixed instructions, this does not work well: these places currently use ppc_inst_val() for the '%x' value, so only the first word of a prefixed instruction is printed.

When the instruction is a word instruction, only a single word should be printed. For prefixed instructions both the prefix and suffix should be printed. To accommodate both situations, use a '%s' specifier instead of '%x' and introduce a helper, __ppc_inst_as_str(), which returns a char *. The char * that __ppc_inst_as_str() returns is a buffer passed to it by the caller.

It is cumbersome to require every caller of __ppc_inst_as_str() to declare a buffer. To make __ppc_inst_as_str() more convenient to use, wrap it in a macro that uses a compound statement to allocate a buffer on the caller's stack before calling it.

Signed-off-by: Jordan Niethe <[email protected]> Reviewed-by: Joel Stanley <[email protected]> Acked-by: Segher Boessenkool <[email protected]> [mpe: Drop 0x prefix to match most existing uses, especially xmon] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
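A sketch of the pattern described above, assuming helper and buffer-size names along these lines:

    #define PPC_INST_STR_LEN sizeof("00000000 00000000")

    static inline char *__ppc_inst_as_str(char str[PPC_INST_STR_LEN],
                                          struct ppc_inst x)
    {
            if (ppc_inst_prefixed(x))
                    sprintf(str, "%08x %08x", ppc_inst_val(x), ppc_inst_suffix(x));
            else
                    sprintf(str, "%08x", ppc_inst_val(x));
            return str;
    }

    /* The compound statement allocates the buffer on the caller's stack. */
    #define ppc_inst_as_str(x)              \
    ({                                      \
            char __str[PPC_INST_STR_LEN];   \
            __ppc_inst_as_str(__str, x);    \
            __str;                          \
    })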
2020-07-23  powerpc/sstep: Add tests for prefixed floating-point load/stores  (Jordan Niethe; 1 file, -0/+4)
Add tests for the prefixed versions of the floating-point load/stores that are currently tested. This includes the following instructions:

  * Prefixed Load Floating-Point Single (plfs)
  * Prefixed Load Floating-Point Double (plfd)
  * Prefixed Store Floating-Point Single (pstfs)
  * Prefixed Store Floating-Point Double (pstfd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <[email protected]> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-23  powerpc/sstep: Add tests for prefixed integer load/stores  (Jordan Niethe; 1 file, -0/+9)
Add tests for the prefixed versions of the integer load/stores that are currently tested. This includes the following instructions:

  * Prefixed Load Doubleword (pld)
  * Prefixed Load Word and Zero (plwz)
  * Prefixed Store Doubleword (pstd)

Skip the new tests if ISA v3.1 is unsupported. Signed-off-by: Jordan Niethe <[email protected]> [mpe: Fix conflicts with ppc-opcode.h changes] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-23  KVM: PPC: Clean up redundant kvm_run parameters in assembly  (Tianjia Zhang; 1 file, -1/+1)
In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu' structure. For historical reasons, many kvm-related function parameters retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This patch does a unified cleanup of these remaining redundant parameters. [[email protected] - Fixed places that were missed in book3s_interrupts.S] Signed-off-by: Tianjia Zhang <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
2020-07-22  powerpc/64s: system call support for scv/rfscv instructions  (Nicholas Piggin; 7 files, -5/+19)
Add support for the scv instruction on POWER9 and later CPUs. For now this implements the zeroth scv vector, 'scv 0', as identical to 'sc' system calls, with the exception that LR is not preserved, nor are volatile CR registers, and errors are not indicated with CR0[SO] but by returning a negative errno.

rfscv is implemented to return from scv-type system calls. It cannot be used to return from sc system calls, because those are defined to preserve LR.

getpid syscall throughput on POWER9 is improved by 26% (428 to 318 cycles), largely due to reducing mtmsr and mtspr. Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Fix ppc64e build] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
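A hedged userspace sketch of the calling-convention difference described above; the raw opcode is used in case the assembler lacks scv support, and the clobber list follows from the text (LR and volatile CRs are not preserved):

    /* Sketch: getpid via 'scv 0'. Unlike 'sc', failure comes back as a
     * negative errno in r3 (no CR0[SO] test), and LR plus the volatile
     * CR fields are clobbered across the call. */
    static long scv_getpid(void)
    {
            register long r0 asm("r0") = 20;        /* __NR_getpid on powerpc */
            register long r3 asm("r3");

            asm volatile(".long 0x44000001"         /* scv 0 */
                         : "=r" (r3)
                         : "r" (r0)
                         : "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12",
                           "lr", "ctr", "cr0", "cr1", "cr5", "cr6", "cr7", "memory");

            return r3;      /* a negative value here is -errno */
    }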
2020-07-22  powerpc/perf: Add Power10 PMU feature to DT CPU features  (Madhavan Srinivasan; 1 file, -0/+1)
Add a Power10 feature function to DT CPU features, along with a Power10-specific init() that initializes the PMU SPRs and sets oprofile_cpu_type and cpu_features. This enables the performance monitoring unit (PMU) for Power10 in CPU features with "performance-monitor-power10". For Power ISA v3.1, BHRB disable is controlled via a Monitor Mode Control Register A (MMCRA) bit, namely "BHRB Recording Disable (BHRBRD)". This patch initializes MMCRA BHRBRD to disable the BHRB feature at boot on Power10. Signed-off-by: Madhavan Srinivasan <[email protected]> [mpe: Move MMCRA_BHRB_DISABLE as noted by jpn, drop CPU setup changes] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  KVM: PPC: Book3S HV: Save/restore new PMU registers  (Athira Rajeev; 3 files, -3/+8)
Power ISA v3.1 has added new performance monitoring unit (PMU) special purpose registers (SPRs). They are:

  Monitor Mode Control Register 3 (MMCR3)
  Sampled Instruction Event Register 2 (SIER2)
  Sampled Instruction Event Register 3 (SIER3)

Add support to save/restore these new SPRs while entering/exiting the guest. Also include changes to support KVM_REG_PPC_MMCR3/SIER2/SIER3, and add the new SPRs to the KVM API documentation. Signed-off-by: Athira Rajeev <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc/perf: Add support for ISA3.1 PMU SPRs  (Madhavan Srinivasan; 3 files, -0/+12)
PowerISA v3.1 includes new performance monitoring unit (PMU) special purpose registers (SPRs). They are:

  Monitor Mode Control Register 3 (MMCR3)
  Sampled Instruction Event Register 2 (SIER2)
  Sampled Instruction Event Register 3 (SIER3)

MMCR3 is added for further sampling-related configuration control. SIER2/SIER3 are added to provide additional information about the sampled instruction.

This patch adds a new PPMU flag called "PPMU_ARCH_31" to support handling of these new SPRs, updates struct thread_struct to include them, and includes MMCR3 in struct mmcr_regs. This is needed to support programming of the MMCR3 SPR during event_enable/disable. The patch also adds sysfs support for the MMCR3 SPR along with SPRN_ macros for these new PMU SPRs. Signed-off-by: Madhavan Srinivasan <[email protected]> [mpe: Rename to PPMU_ARCH_31 as noted by jpn] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc/perf: Update Power PMU cache_events to u64 type  (Athira Rajeev; 1 file, -1/+1)
Events of type PERF_TYPE_HW_CACHE were described for the Power PMU as: int (*cache_events)[type][op][result]; where the type, op and result values unpacked from the event attribute config value are used to generate the raw event code at runtime. So far the event code values used to create these cache-related events fit within 32 bits, so the `int` type worked. In power10, some of the event codes are 64-bit values; hence update the Power PMU cache_events to the `u64` type in the `power_pmu` struct. Also propagate this change to all existing PMU driver code paths which use ppmu->cache_events. Signed-off-by: Athira Rajeev <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
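A sketch of the widened member in struct power_pmu, with surrounding fields omitted; the array dimensions are the standard perf cache-event enums:

    /* Was: int (*cache_events)[...]; event codes no longer fit in
     * 32 bits on power10, so the table entries become u64. */
    u64 (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
                       [PERF_COUNT_HW_CACHE_OP_MAX]
                       [PERF_COUNT_HW_CACHE_RESULT_MAX];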
2020-07-22  KVM: PPC: Book3S HV: Cleanup updates for kvm vcpu MMCR  (Athira Rajeev; 1 file, -1/+3)
Currently `kvm_vcpu_arch` stores all Monitor Mode Control registers in a flat array, in order: mmcr0, mmcr1, mmcra, mmcr2, mmcrs. Split this to give mmcra and mmcrs their own entries in the vcpu, and use a flat array for mmcr0 to mmcr2. This cleanup makes the code easier to read. Signed-off-by: Athira Rajeev <[email protected]> [mpe: Fix MMCRA/MMCR2 uapi breakage as noted by paulus] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc/perf: Update cpu_hw_event to use `struct` for storing MMCR registers  (Athira Rajeev; 1 file, -2/+8)
core-book3s currently uses an array to store the MMCR registers as part of the per-cpu `cpu_hw_events`. This patch cleans that up to use a `struct` to store the mmcr regs instead. This makes the code easier to read and reduces the chance of subtle bugs in the future, say when new registers are added. The patch updates all relevant code that was using the MMCR array (cpuhw->mmcr[x]) to use the newly introduced `struct`. This includes the PMU driver code for the supported platforms (power5 to power9) and the ISA macros for counter support functions. Signed-off-by: Athira Rajeev <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
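A sketch of such a struct; the field set is assumed from this and the two entries above (MMCR3 being the ISA v3.1 addition):

    struct mmcr_regs {
            unsigned long mmcr0;
            unsigned long mmcr1;
            unsigned long mmcr2;
            unsigned long mmcra;
            unsigned long mmcr3;    /* ISA v3.1 only */
    };

    /* Accesses like cpuhw->mmcr[0] then become cpuhw->mmcr.mmcr0. */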
2020-07-22  powerpc/perf: Fix missing is_sier_available() during build  (Madhavan Srinivasan; 1 file, -0/+2)
Compilation error:

  arch/powerpc/perf/perf_regs.c:80: undefined reference to `.is_sier_available'

Currently is_sier_available() is part of core-book3s.c, which is added to the build based on CONFIG_PPC_PERF_CTRS. A config with CONFIG_PERF_EVENTS and without CONFIG_PPC_PERF_CTRS will hit a build break because of the missing is_sier_available(). In practice it only breaks when CONFIG_FSL_EMB_PERF_EVENT=n, because that also guards the usage of is_sier_available(). That only happens with CONFIG_PPC_BOOK3E_64=y and CONFIG_FSL_SOC_BOOKE=n. The patch adds is_sier_available() in asm/perf_event.h to fix the build break for configs missing CONFIG_PPC_PERF_CTRS. Fixes: 333804dc3b7a ("powerpc/perf: Update perf_regs structure to include SIER") Reported-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Madhavan Srinivasan <[email protected]> [mpe: Add detail about CONFIG_FSL_SOC_BOOKE] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc/64s: Remove PROT_SAO support  (Nicholas Piggin; 5 files, -33/+15)
ISA v3.1 does not support the SAO storage control attribute required to implement PROT_SAO. PROT_SAO was used by specialised system software (Lx86) that has been discontinued for about 7 years, and is not thought to be used elsewhere, so removal should not cause problems. We remove it rather than keep support for older processors, because live migrating guest partitions to newer processors may not be possible if SAO is in use (or, worse, allowed with silent races).

  - PROT_SAO stays in the uapi header so code using it would still build.
  - arch_validate_prot() is removed; the generic version rejects PROT_SAO, so applications would get a failure at mmap() time.

Signed-off-by: Nicholas Piggin <[email protected]> [mpe: Drop KVM change for the time being] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc: Remove stale calc_vm_prot_bits() comment  (Nicholas Piggin; 1 file, -4/+0)
This comment is wrong, we wouldn't use calc_vm_prot_bits() here because we are being called by calc_vm_prot_bits() to modify its behaviour. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc: Remove unneeded inline functions  (YueHaibing; 1 file, -2/+0)
Both of those functions are only called from 64-bit only code, so the stubs should not be needed at all. Suggested-by: Michael Ellerman <[email protected]> Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-22  powerpc: Replace HTTP links with HTTPS ones  (Alexander A. Klimov; 1 file, -1/+1)
Rationale: Reduces attack surface on kernel devs opening the links for MITM as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:

  For each file:
    If not .svg:
      For each line:
        If doesn't contain `\bxmlns\b`:
          For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
            If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
              If both the HTTP and HTTPS versions return 200 OK and serve the same content:
                Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-21  KVM: PPC: Book3S HV: Increase KVMPPC_NR_LPIDS on POWER8 and POWER9  (Cédric Le Goater; 1 file, -1/+2)
POWER8 and POWER9 have 12-bit LPIDs. Change LPID_RSVD to support up to (4096 - 2) guests on these processors. POWER7 is kept the same with a limitation of (1024 - 2), but it might be time to drop KVM support for POWER7. Tested with 2048 guests * 4 vCPUs on a witherspoon system with 512G RAM and a bit of swap. Signed-off-by: Cédric Le Goater <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
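A sketch of the constants implied by the text; the values follow from the 10-bit vs 12-bit LPID widths, and the "- 2" accounts for LPID 0 (the host) plus the reserved LPID itself:

    #define LPID_RSVD_POWER7  0x3ff   /* 10-bit LPID: (1024 - 2) usable guests */
    #define LPID_RSVD         0xfff   /* 12-bit LPID: (4096 - 2) usable guests */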
2020-07-21  KVM: PPC: Book3S HV: Enable support for ISA v3.1 guests  (Alistair Popple; 1 file, -0/+1)
Adds support for emulating ISA v3.1 guests by adding the appropriate PCR and FSCR bits. Signed-off-by: Alistair Popple <[email protected]> Reported-by: kbuild test robot <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
2020-07-20  powerpc/book3s64/kuap: Move UAMOR setup to key init function  (Aneesh Kumar K.V; 2 files, -1/+2)
UAMOR values are not application-specific. The kernel initializes its value based on the different reserved keys. Remove the thread-specific UAMOR value and don't switch UAMOR on context switch. Move UAMOR initialization to the key initialization code and remove thread_struct.uamor because it is no longer used. Before commit 4a4a5e5d2aad ("powerpc/pkeys: key allocation/deallocation must not change pkey registers") we used to update UAMOR based on key allocation and free. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/keys/kuap: Reset AMR/IAMR values on kexec  (Aneesh Kumar K.V; 2 files, -0/+35)
As we kexec across kernels that use AMR/IAMR for different purposes, we need to ensure that new kernels get kexec'd with a reset value of AMR/IAMR. For example, the new kernel can use key 0 for the kernel mapping while the old AMR value prevents access to key 0. This patch also removes the resets of IAMR and AMOR in kexec_sequence. The AMOR reset is not needed, and the IAMR reset is partial (it doesn't do the reset on secondary cpus) and is redundant with this patch. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
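A hedged sketch of such a reset, with the feature guards assumed (the AMR-family SPRs are taken to exist from ISA v2.06, IAMR from v2.07):

    /* Sketch: clear the key registers before branching to the new kernel,
     * so the kexec'd kernel starts with a known AMR/IAMR state. */
    static inline void reset_sprs(void)
    {
            if (cpu_has_feature(CPU_FTR_ARCH_206)) {
                    mtspr(SPRN_AMR, 0);
                    mtspr(SPRN_UAMOR, 0);
            }
            if (cpu_has_feature(CPU_FTR_ARCH_207S))
                    mtspr(SPRN_IAMR, 0);
    }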
2020-07-20  powerpc/book3s64/pkeys: Use MMU_FTR_PKEY instead of pkey_disabled static key  (Aneesh Kumar K.V; 2 files, -9/+7)
Instead of the pkey_disabled static key, use the MMU feature MMU_FTR_PKEY. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
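A sketch of the resulting check, assuming the usual MMU feature-test helper:

    static inline bool arch_pkeys_enabled(void)
    {
            /* Replaces checks on the pkey_disabled static key. */
            return mmu_has_feature(MMU_FTR_PKEY);
    }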
2020-07-20  powerpc/book3s64/pkeys: Use pkey_execute_disable_supported  (Aneesh Kumar K.V; 1 file, -9/+1)
Use pkey_execute_disable_supported to check for execute key support instead of pkey_disabled. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/kuep: Add MMU_FTR_KUEP  (Aneesh Kumar K.V; 1 file, -0/+9)
This will be used to enable/disable Kernel Userspace Execution Prevention (KUEP). Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Add MMU_FTR_PKEY  (Aneesh Kumar K.V; 2 files, -0/+16)
Parse the storage-keys-related device tree entry in early_init_devtree and enable the MMU feature MMU_FTR_PKEY if pkeys are supported. An MMU feature is used instead of a CPU feature because this enables us to group MMU_FTR_KUAP and MMU_FTR_PKEY in the asm feature fixup code. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Make initial_allocation_mask static  (Aneesh Kumar K.V; 1 file, -1/+0)
initial_allocation_mask is not used outside this file. Also mark reserved_allocation_mask and initial_allocation_mask __ro_after_init. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Convert pkey_total to num_pkey  (Aneesh Kumar K.V; 1 file, -2/+5)
num_pkey now represents the maximum number of keys supported, such that the key values reported to userspace range from 0 to num_pkey - 1. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Simplify pkey disable branch  (Aneesh Kumar K.V; 1 file, -1/+1)
Make the default value FALSE (pkeys enabled) and set it to TRUE when we find the total number of keys supported to be zero. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: kill cpu feature key CPU_FTR_PKEY  (Aneesh Kumar K.V; 1 file, -7/+6)
We don't use CPU_FTR_PKEY anymore. Remove the feature bit and mark it free. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Move pkey related bits in the linux page table  (Aneesh Kumar K.V; 3 files, -23/+22)
To keep things simple, all the pkey-related bits are kept together in the linux page table for the 64K config with hash translation. With hash-4k, the kernel requires 4 bits to store slot details, which is done by overloading some of the RPN bits. Due to this, PKEY_BIT0 on the 4K config is used for storing hash slot details.

64K, before:

  |....| RSV1 | RSV2 | RSV3 | RSV4 | RPN44 | RPN43   |.... | RSV5 |
  |....| P4   | P3   | P2   | P1   | Busy  | HASHPTE |.... | P0   |

64K, after:

  |....| RSV1 | RSV2 | RSV3 | RSV4 | RPN44 | RPN43   |.... | RSV5 |
  |....| P4   | P3   | P2   | P1   | P0    | HASHPTE |.... | Busy |

4K, before:

  |....| RSV1 | RSV2    | RSV3 | RSV4 | RPN44 | RPN43.... | RSV5 |
  |....| Busy | HASHPTE | P2   | P1   | F_SEC | F_GIX.... | P0   |

4K, after:

  |....| RSV1    | RSV2 | RSV3 | RSV4 | Free  | RPN43.... | RSV5 |
  |....| HASHPTE | P2   | P1   | P0   | F_SEC | F_GIX.... | Busy |

Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: pkeys are supported only on hash on book3s.  (Aneesh Kumar K.V; 3 files, -30/+64)
Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/book3s64/pkeys: Fixup bit numbering  (Aneesh Kumar K.V; 4 files, -24/+25)
This numbers the pkey bits such that they are easy to follow, with PKEY_BIT0 as the lowest-order bit. This makes further changes easy to follow. No functional change in this patch other than the linux page table for hash translation now mapping pkeys differently. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/mce: Add MCE notification chain  (Santosh Sivaraj; 1 file, -0/+2)
Introduce a notification chain which lets us know about uncorrected memory errors (UE). This would help prospective users in the pmem or nvdimm subsystems to track bad blocks for better handling of persistent memory allocations. Signed-off-by: Santosh Sivaraj <[email protected]> Signed-off-by: Ganesh Goudar <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
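A hedged sketch of how a consumer might hook into such a chain; the registration helper name and the notifier payload type are assumptions:

    static int pmem_ue_event(struct notifier_block *nb,
                             unsigned long msg, void *data)
    {
            struct machine_check_event *evt = data;  /* payload type assumed */

            if (evt->error_type == MCE_ERROR_TYPE_UE)
                    /* record a bad block covering the affected address */;

            return NOTIFY_OK;
    }

    static struct notifier_block pmem_ue_nb = {
            .notifier_call = pmem_ue_event,
    };

    static int __init pmem_ue_init(void)
    {
            return mce_register_notifier(&pmem_ue_nb);  /* helper name assumed */
    }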
2020-07-20  powerpc/mm/radix: Create separate mappings for hot-plugged memory  (Aneesh Kumar K.V; 1 file, -0/+5)
To enable memory unplug without splitting the kernel page table mapping, we force the max mapping size to the LMB size. The LMB size is the unit in which the hypervisor will do memory add/remove operations. Pseries systems support a max LMB size of 256MB, hence on pseries we now end up mapping memory with 2M page size instead of 1G.

To improve on that, we want the hypervisor to hint the kernel about the hotplug memory range. Such a hint was added as part of commit b6eca183e23e ("powerpc/kernel: Enables memory hot-remove after reboot on pseries guests"), but PowerVM doesn't provide it yet. Once we get PowerVM updated, we can force the 2M mapping only for hot-pluggable memory regions using memblock_is_hotpluggable(). Until then, let's depend on the LMB size for finding the mapping page size for the linear range.

With this change a KVM guest will also do linear mapping with 2M page size. The actual TLB benefit of mapping guest page table entries with hugepage size can only be materialized if the partition-scoped entries also use the same or a higher page size. A guest using 1G hugetlbfs backing for guest memory can see a performance impact from the above change.

Signed-off-by: Bharata B Rao <[email protected]> Signed-off-by: Aneesh Kumar K.V <[email protected]> [mpe: Fold in fix from Aneesh spotted by [email protected]] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-20  powerpc/mm/radix: Fix PTE/PMD fragment count for early page table mappings  (Aneesh Kumar K.V; 1 file, -1/+15)
We can hit the following BUG_ON during memory unplug:

  kernel BUG at arch/powerpc/mm/book3s64/pgtable.c:342!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
  NIP [c000000000093308] pmd_fragment_free+0x48/0xc0
  LR [c00000000147bfec] remove_pagetable+0x578/0x60c
  Call Trace:
    0xc000008050000000 (unreliable)
    remove_pagetable+0x384/0x60c
    radix__remove_section_mapping+0x18/0x2c
    remove_section_mapping+0x1c/0x3c
    arch_remove_memory+0x11c/0x180
    try_remove_memory+0x120/0x1b0
    __remove_memory+0x20/0x40
    dlpar_remove_lmb+0xc0/0x114
    dlpar_memory+0x8b0/0xb20
    handle_dlpar_errorlog+0xc0/0x190
    pseries_hp_work_fn+0x2c/0x60
    process_one_work+0x30c/0x810
    worker_thread+0x98/0x540
    kthread+0x1c4/0x1d0
    ret_from_kernel_thread+0x5c/0x74

This occurs when unplug is attempted for memory which was mapped using memblock pages as part of early kernel page table setup, because we wouldn't have initialized the PMD or PTE fragment count for those PMD or PTE pages. This can be fixed by allocating memory in PAGE_SIZE granularity during early page table allocation. That makes sure a specific page is not shared with another memblock allocation, so we can free such pages correctly when removing page-table pages.

Since we now do PAGE_SIZE allocations for both PUD and PMD tables (PTE table allocation is already PAGE_SIZE), we end up allocating more memory for the same amount of system RAM. Here is a comparison of how much more we need for a 64T and a 2G system after this patch:

1. 64T system
-------------
64T RAM would need 64G for vmemmap with struct page size being 64B.

  128 PUD tables for 64T memory (1G mappings)
  1 PUD table and 64 PMD tables for 64G vmemmap (2M mappings)

  With default PUD[PMD]_TABLE_SIZE (4K):  (128 + 1 + 64) * 4K  = 772K
  With PAGE_SIZE (64K) table allocations: (128 + 1 + 64) * 64K = 12352K

2. 2G system
------------
2G RAM would need 2M for vmemmap with struct page size being 64B.

  1 PUD table for 2G memory (1G mapping)
  1 PUD table and 1 PMD table for 2M vmemmap (2M mappings)

  With default PUD[PMD]_TABLE_SIZE (4K):      (1 + 1 + 1) * 4K  = 12K
  With new PAGE_SIZE (64K) table allocations: (1 + 1 + 1) * 64K = 192K

Signed-off-by: Bharata B Rao <[email protected]> Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-19  powerpc: use the generic dma_ops_bypass mode  (Christoph Hellwig; 1 file, -5/+0)
Use the DMA API bypass mechanism for direct window mappings. This uses common code and speeds up the direct mapping case by avoiding indirect calls when not using dma ops at all. It also fixes a problem where the sync_* methods were using the bypass check for DMA allocations, even though those methods are part of the streaming ops. Note that this patch loses the DMA_ATTR_WEAK_ORDERING override, which has never been well defined and is only used by a few drivers, which IIRC never showed up in the typical Cell blade setups that are affected by the ordering workaround. Fixes: efd176a04bef ("powerpc/pseries/dma: Allow SWIOTLB") Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: Alexey Kardashevskiy <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]>
2020-07-18  Merge branch 'fixes' into next  (Michael Ellerman; 1 file, -0/+2)
Merge our fixes branch, primarily to bring in the ebb selftests build fix and the pkey fix, which is a dependency for some future work.
2020-07-16  powerpc/perf: Add kernel support for new MSR[HV PR] bits in trace-imc  (Anju T Sudhakar; 1 file, -0/+5)
IMC trace-mode records have the MSR[HV PR] bits added in the third DW. These bits can be used to set the cpumode for the instruction pointer captured in each sample. Add support in the kernel to use these bits to set the cpumode for each sample. Signed-off-by: Anju T Sudhakar <[email protected]> Signed-off-by: Madhavan Srinivasan <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
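A plausible mapping of the two bits to perf cpumodes, where the HV=0 cases are taken to be guest contexts; the driver's exact decoding may differ:

    static unsigned int hvpr_to_cpumode(unsigned int hv, unsigned int pr)
    {
            switch ((hv << 1) | pr) {
            case 0:  return PERF_RECORD_MISC_GUEST_KERNEL;  /* HV=0 PR=0 */
            case 1:  return PERF_RECORD_MISC_GUEST_USER;    /* HV=0 PR=1 */
            case 2:  return PERF_RECORD_MISC_HYPERVISOR;    /* HV=1 PR=0 */
            default: return PERF_RECORD_MISC_USER;          /* HV=1 PR=1 */
            }
    }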
2020-07-16  powerpc/xive: Remove unused inline function xive_kexec_teardown_cpu()  (YueHaibing; 1 file, -1/+0)
Commit e27e0a94651e ("powerpc/xive: Remove xive_kexec_teardown_cpu()") left this behind; remove it. Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Greg Kurz <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc: Add cputime_to_nsecs()  (Anton Blanchard; 1 file, -0/+2)
Generic code has a wrapper to implement cputime_to_nsecs() on top of cputime_to_usecs() but we can easily return the full nanosecond resolution directly. Signed-off-by: Anton Blanchard <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
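A sketch of the direct conversion, assuming powerpc cputime is measured in timebase ticks and a tb_to_ns() helper is available:

    /* Full-resolution conversion; avoids the generic
     * cputime_to_usecs() * NSEC_PER_USEC round trip. */
    #define cputime_to_nsecs(cputime) tb_to_ns((__force u64)(cputime))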
2020-07-16  powerpc/ppc-opcode: Fold PPC_INST_* macros into PPC_RAW_* macros  (Balamuruhan S; 1 file, -265/+129)
Lots of PPC_INST_* macros are only ever used in PPC_* macros; fold those PPC_INST_* macros into PPC_RAW_* to avoid using PPC_INST_* accidentally. Signed-off-by: Balamuruhan S <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Acked-by: Naveen N. Rao <[email protected]> [mpe: Deal with PHWSYNC, PLWSYNC] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc/ppc-opcode: Reuse raw instruction macros to stringify  (Balamuruhan S; 1 file, -158/+75)
Wrap existing stringify macros to reuse raw instruction encoding macros that are newly added. Signed-off-by: Balamuruhan S <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Acked-by: Naveen N. Rao <[email protected]> [mpe: Add DCBFPS, DCBSTPS, PHWSYNC, PLWSYNC] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc/ppc-opcode: Consolidate powerpc instructions from bpf_jit.h  (Balamuruhan S; 1 file, -0/+85)
Move macro definitions of powerpc instructions from bpf_jit.h to ppc-opcode.h and adopt the users of the macros accordingly. `PPC_MR()` is defined twice in bpf_jit.h, remove the duplicate one. Signed-off-by: Balamuruhan S <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Acked-by: Naveen N. Rao <[email protected]> Acked-by: Sandipan Das <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc/ppc-opcode: Move ppc instruction encoding from test_emulate_step  (Balamuruhan S; 1 file, -0/+18)
A few ppc instructions are encoded directly in test_emulate_step.c; consolidate them and use them from ppc-opcode.h. Signed-off-by: Balamuruhan S <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Acked-by: Naveen N. Rao <[email protected]> Acked-by: Sandipan Das <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc/ppc-opcode: Introduce PPC_RAW_* macros for base instruction encoding  (Balamuruhan S; 1 file, -7/+86)
Introduce PPC_RAW_* macros to have all the bare encodings of ppc instructions. Move the `VSX_XX*()` and `TMRN()` macros up to reuse them. Signed-off-by: Balamuruhan S <[email protected]> Tested-by: Naveen N. Rao <[email protected]> Acked-by: Naveen N. Rao <[email protected]> [mpe: Add DCBFPS, DCBSTPS, PHWSYNC, PLWSYNC] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
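A sketch of the resulting pattern: one raw-value macro per instruction, reused by the stringified assembler form. The 'add' opcode shown is the standard encoding; the ___PPC_R* field helpers and stringify_in_c() are the existing ppc-opcode.h macros:

    /* Raw 32-bit encoding of 'add rt, ra, rb' (opcode 31, XO 266). */
    #define PPC_RAW_ADD(t, a, b)    (0x7c000214 | ___PPC_RT(t) | \
                                     ___PPC_RA(a) | ___PPC_RB(b))
    /* Assembler-visible form built on top of the raw encoding. */
    #define PPC_ADD(t, a, b)        stringify_in_c(.long PPC_RAW_ADD(t, a, b))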
2020-07-16  powerpc/pseries: remove dlpar_cpu_readd()  (Nathan Lynch; 1 file, -1/+0)
dlpar_cpu_readd() is unused now. Signed-off-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-16  powerpc/pseries: remove memory "re-add" implementation  (Nathan Lynch; 1 file, -1/+0)
dlpar_memory() no longer has any callers which pass PSERIES_HP_ELOG_ACTION_READD. Remove this case and the corresponding unreachable code. Signed-off-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]