path: root/arch/powerpc
Age  Commit message  (Author, Files, Lines)
2020-09-15  powerpc/mm/book3s: Split radix and hash MAX_PHYSMEM limit  (Aneesh Kumar K.V, 8 files, -19/+50)
The MAX_PHYSMEM #define is used along with sparsemem to determine the SECTION_SHIFT value. Powerpc also uses the same value to limit the maximum memory enabled on the system. With 4K PAGE_SIZE and hash translation mode, we want to limit the maximum memory enabled to 64TB due to page table size restrictions. However, with radix translation, we don't have these restrictions. Hence split the radix and hash MAX_PHYSMEM limits and use a different limit for each of them. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
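The split amounts to giving each translation mode its own physical-memory ceiling and enforcing the stricter one only when hash is active. A minimal sketch of that idea in kernel C; the constant names, the helper, and the warning text are illustrative assumptions, not the literal patch:

    /* Illustrative limits: hash with 4K pages is bounded by page table size,
     * radix only by the architected real-address width. */
    #define H_MAX_PHYSMEM_BITS	46	/* 64TB */
    #define R_MAX_PHYSMEM_BITS	51	/* 2PB  */

    /* Hypothetical setup-time check enforcing the hash limit (sketch only). */
    static void __init hash_limit_memory_sketch(void)
    {
            u64 hash_max = 1UL << H_MAX_PHYSMEM_BITS;

            if (memblock_end_of_DRAM() > hash_max) {
                    pr_warn("Hash translation: limiting memory to 64TB\n");
                    memblock_enforce_memory_limit(hash_max);
            }
    }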
2020-09-15  powerpc/book3s64/hash/4k: Support large linear mapping range with 4K  (Aneesh Kumar K.V, 1 file, -7/+6)
With commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range"), we now split the 64TB address range into 4 contexts, each of 16TB. That implies we can do only a 16TB linear mapping. On some systems, eg. Power9, memory attached to nodes > 0 will appear above 16TB in the linear mapping. This resulted in a kernel crash when booting such systems in hash translation mode with 4K PAGE_SIZE. This patch updates the kernel mapping such that we now support up to 61TB of memory with 4K. With 4K PAGE_SIZE and hash translation, the kernel mapping now looks like: vmalloc start = 0xc0003d0000000000, IO start = 0xc0003e0000000000, vmemmap start = 0xc0003f0000000000. Our MAX_PHYSMEM_BITS for 4K is still 64TB even though we can only map 61TB. We prevent bolted mapping of anything outside the 61TB range by checking against H_VMALLOC_START. Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") Reported-by: Cameron Berkenpas <[email protected]> Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/64/mm: implement page mapping percpu first chunk allocator  (Aneesh Kumar K.V, 2 files, -4/+63)
Implement the page mapping percpu first chunk allocator as a fallback to the embedding allocator. With 4K hash translation we limit our page table range to 64TB, and commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") moved all kernel mappings to that 64TB range. In order to support a sparse memory layout we need to increase our linear mapping space and reduce the other mappings. With such a layout, the percpu embedded first chunk allocator will fail because of the small vmalloc range. Add a fallback to the page mapping percpu first chunk allocator for such failures. The below dmesg output can be observed in such a case. percpu: max_distance=0x1ffffef00000 too large for vmalloc space 0x10000000000 PERCPU: auto allocator failed (-22), falling back to page size percpu: 40 4K pages/cpu s148816 r0 d15024 Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
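The fallback flow in the arch's setup_per_cpu_areas() would look roughly like the sketch below, built on the generic pcpu_embed_first_chunk()/pcpu_page_first_chunk() APIs; the alloc/free/populate helper names and the chosen sizes are assumptions for illustration:

    /* Sketch: try the embedding allocator, fall back to page mapping. */
    void __init setup_per_cpu_areas(void)
    {
            int rc = -EINVAL;

            if (pcpu_chosen_fc != PCPU_FC_PAGE)
                    rc = pcpu_embed_first_chunk(0, PERCPU_DYNAMIC_RESERVE,
                                                PAGE_SIZE, pcpu_cpu_distance,
                                                pcpu_alloc_bootmem,
                                                pcpu_free_bootmem);
            if (rc < 0)
                    /* Embedding failed (vmalloc range too small): map the
                     * first chunk page by page instead. */
                    rc = pcpu_page_first_chunk(0, pcpu_alloc_bootmem,
                                               pcpu_free_bootmem,
                                               pcpu_populate_pte);
            if (rc < 0)
                    panic("cannot initialize percpu area (err=%d)", rc);
    }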
2020-09-15  powerpc/percpu: Update percpu bootmem allocator  (Aneesh Kumar K.V, 1 file, -8/+37)
This updates the ppc64 version to be closer to x86/sparc. Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/watchpoint/ptrace: Introduce PPC_DEBUG_FEATURE_DATA_BP_ARCH_31  (Ravi Bangoria, 2 files, -0/+3)
PPC_DEBUG_FEATURE_DATA_BP_ARCH_31 can be used to determine whether we are running on an ISA 3.1 compliant machine, which is needed to determine DAR behaviour, the 512 byte boundary limit, etc. This was requested by Pedro Miraglia Franco de Carvalho for extending watchpoint features in gdb. Note that availability of the 2nd DAWR is independent of this flag and should be checked using ppc_debug_info->num_data_bps. Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
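From a debugger's side, the flag is discovered through the PPC_PTRACE_GETHWDBGINFO request. A hedged userspace sketch (error handling trimmed; the constants are assumed to come from the uapi ptrace header):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <asm/ptrace.h>   /* struct ppc_debug_info, PPC_PTRACE_GETHWDBGINFO */

    static void report_dbg_features(pid_t child)
    {
            struct ppc_debug_info dbginfo;

            /* Ask the kernel for the tracee's hardware debug capabilities. */
            if (ptrace(PPC_PTRACE_GETHWDBGINFO, child, NULL, &dbginfo) != 0)
                    return;

            if (dbginfo.features & PPC_DEBUG_FEATURE_DATA_BP_ARCH_31)
                    printf("ISA 3.1 DAWR semantics (512-byte limit, new DAR rules)\n");

            /* The 2nd DAWR is reported separately, via num_data_bps. */
            printf("data breakpoints available: %u\n", dbginfo.num_data_bps);
    }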
2020-09-15  powerpc/watchpoint: Add hw_len wherever missing  (Ravi Bangoria, 2 files, -0/+2)
There are a couple of places where we set len but not hw_len. For ptrace/perf watchpoints, when CONFIG_HAVE_HW_BREAKPOINT=Y, hw_len will be calculated and set internally while parsing the watchpoint. But when CONFIG_HAVE_HW_BREAKPOINT=N, we need to set 'hw_len' manually. Similarly for xmon, hw_len needs to be set directly. Fixes: b57aeab811db ("powerpc/watchpoint: Fix length calculation for unaligned target") Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/watchpoint: Fix exception handling for CONFIG_HAVE_HW_BREAKPOINT=N  (Ravi Bangoria, 3 files, -1/+54)
On powerpc, ptrace watchpoints work in one-shot mode, i.e. the kernel disables the event every time it fires and the user has to re-enable it. Also, in the case of a ptrace watchpoint, the kernel notifies the ptrace user before executing the instruction. With CONFIG_HAVE_HW_BREAKPOINT=N, the kernel fails to disable the ptrace event, causing an infinite loop of exceptions. This is especially harmful when the user watches data that is also read/written by the kernel, e.g. syscall parameters: the exceptions then occur in kernel mode, which causes a soft lockup. Fixes: 9422de3e953d ("powerpc: Hardware breakpoints rewrite to handle non DABR breakpoint registers") Reported-by: Pedro Miraglia Franco de Carvalho <[email protected]> Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/watchpoint: Move DAWR detection logic outside of hw_breakpoint.c  (Ravi Bangoria, 4 files, -158/+174)
Power10 hardware has multiple DAWRs, but the hardware doesn't indicate which DAWR caused the exception, so we have software logic to detect that in hw_breakpoint.c. But hw_breakpoint.c gets compiled only with CONFIG_HAVE_HW_BREAKPOINT=Y. Move the DAWR detection logic outside of hw_breakpoint.c so that it can be reused when CONFIG_HAVE_HW_BREAKPOINT is not set. Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/watchpoint/ptrace: Fix SETHWDEBUG when CONFIG_HAVE_HW_BREAKPOINT=N  (Ravi Bangoria, 1 file, -1/+1)
When the kernel is compiled with CONFIG_HAVE_HW_BREAKPOINT=N, a user can still create a watchpoint using PPC_PTRACE_SETHWDEBUG, with limited functionality. But such watchpoints never fire because of missing privilege settings. Fix that. It's safe to set HW_BRK_TYPE_PRIV_ALL because we don't really leak any kernel address in the signal info. Setting HW_BRK_TYPE_PRIV_ALL will also help to find scenarios where the kernel accesses user memory. Reported-by: Pedro Miraglia Franco de Carvalho <[email protected]> Suggested-by: Pedro Miraglia Franco de Carvalho <[email protected]> Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-15  powerpc/watchpoint: Fix handling of vector instructions  (Ravi Bangoria, 1 file, -0/+2)
Vector load/store instructions are special because they are always aligned. Thus an unaligned EA needs to be aligned down before comparing it with the watch ranges; otherwise we might consider a valid event invalid. Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
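The core of the fix is just rounding the effective address down to the vector access size before the overlap test. A hedged sketch of that check; the helper name and parameters are illustrative, not the handler's actual code:

    /* Does a memory access hit the watched range? (sketch) */
    static bool ea_hits_watchpoint(unsigned long ea, int size, bool is_vector,
                                   unsigned long wp_start, unsigned long wp_len)
    {
            /* Vector loads/stores are always aligned by the hardware, so align
             * the computed EA down before comparing it with the watch range. */
            if (is_vector)
                    ea = ALIGN_DOWN(ea, 16);

            return (ea < wp_start + wp_len) && (ea + size > wp_start);
    }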
2020-09-15  powerpc/watchpoint: Fix quadword instruction handling on p10 predecessors  (Ravi Bangoria, 2 files, -2/+11)
On p10 predecessors, a watchpoint with quadword access is compared at quadword length. If the watch range is a doubleword or less and sits in the first half of a quadword-aligned 16 bytes, and there is an unaligned quadword access that touches only the second half, the handler should consider it extraneous and emulate/single-step it before continuing. Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Reported-by: Pedro Miraglia Franco de Carvalho <[email protected]> Signed-off-by: Ravi Bangoria <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-14  vgacon: remove software scrollback support  (Linus Torvalds, 2 files, -2/+0)
Yunhai Zhang recently fixed a VGA software scrollback bug in commit ebfdfeeae8c0 ("vgacon: Fix for missing check in scrollback handling"), but that then made people look more closely at some of this code, and there were more problems on the vgacon side, but also in the fbcon software scrollback. We don't really have anybody who maintains this code - probably because nobody actually _uses_ it any more. Sure, people still use both VGA and the framebuffer consoles, but they are no longer the main user interfaces to the kernel, and haven't been for decades, so these kinds of extra features end up bitrotting and not really being used. So rather than try to maintain a likely unused set of code, I'll just aggressively remove it, and see if anybody even notices. Maybe there are people who haven't jumped on the whole GUI bandwagon yet, and think it's just a fad. And maybe those people use the scrollback code. If that turns out to be the case, we can resurrect this again, once we've found the sucker^Wmaintainer for it who actually uses it. Reported-by: NopNop Nop <[email protected]> Tested-by: Willy Tarreau <[email protected]> Cc: 张云海 <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Acked-by: Willy Tarreau <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-09-14  powerpc/pseries/svm: Allocate SWIOTLB buffer anywhere in memory  (Thiago Jung Bauermann, 3 files, -1/+35)
POWER secure guests (i.e., guests which use the Protected Execution Facility) need to use SWIOTLB to be able to do I/O with the hypervisor, but they don't need the SWIOTLB memory to be in low addresses since the hypervisor doesn't have any addressing limitation. This solves a SWIOTLB initialization problem we are seeing in secure guests with 128 GB of RAM: they are configured with 4 GB of crashkernel reserved memory, which leaves no space for SWIOTLB in low addresses. To do this, we use mostly the same code as swiotlb_init(), but allocate the buffer using memblock_alloc() instead of memblock_alloc_low(). Fixes: 2efbc58f157a ("powerpc/pseries/svm: Force SWIOTLB for secure guests") Signed-off-by: Thiago Jung Bauermann <[email protected]> Reviewed-by: Konrad Rzeszutek Wilk <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
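Roughly what such an init routine looks like when modeled on swiotlb_init() but without the low-memory constraint; this is a sketch under that assumption, not a quote of the patch:

    /* Sketch: secure-guest SWIOTLB setup that can live anywhere in RAM. */
    static void __init svm_swiotlb_init_sketch(void)
    {
            char *vstart;
            unsigned long bytes, io_tlb_nslabs;

            io_tlb_nslabs = swiotlb_size_or_default() >> IO_TLB_SHIFT;
            io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
            bytes = io_tlb_nslabs << IO_TLB_SHIFT;

            /* memblock_alloc() instead of memblock_alloc_low(): the hypervisor
             * has no addressing limitation, so any RAM will do. */
            vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
            if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
                    return;

            memblock_free_early(__pa(vstart), PAGE_ALIGN(bytes));
            panic("SVM: Cannot allocate SWIOTLB buffer");
    }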
2020-09-14  powerpc/64: Make VDSO32 track COMPAT on 64-bit  (Michael Ellerman, 1 file, -4/+3)
When we added the VDSO32 kconfig symbol, which controls building of the 32-bit VDSO, we made it depend on CPU_BIG_ENDIAN (for 64-bit). That was because back then COMPAT was always enabled for 64-bit, so depending on it would have left the 32-bit VDSO always enabled, which we didn't want. But since then we have made COMPAT selectable, and off by default for ppc64le, so VDSO32 should really depend on that. For most people this makes no difference, none of the defconfigs change, it's only if someone is building ppc64le with COMPAT=y, they will now also get VDSO32. If they've enabled COMPAT in order to run 32-bit binaries they presumably also want the 32-bit VDSO. Signed-off-by: Michael Ellerman <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-14  Merge branch 'fixes' into next  (Michael Ellerman, 23 files, -58/+104)
Bring in our fixes branch for this cycle which avoids some small conflicts with upcoming commits.
2020-09-11  dma-direct: rename and cleanup __phys_to_dma  (Christoph Hellwig, 1 file, -1/+1)
The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try to improve the situation by renaming __phys_to_dma to phys_to_dma_unencrypted, and not forcing architectures that want to override phys_to_dma to actually provide __phys_to_dma. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Robin Murphy <[email protected]>
2020-09-11  dma-direct: remove __dma_to_phys  (Christoph Hellwig, 1 file, -1/+1)
There is no harm in just always clearing the SME encryption bit, while significantly simplifying the interface. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Robin Murphy <[email protected]>
2020-09-09  powerpc/papr_scm: Limit the readability of 'perf_stats' sysfs attribute  (Vaibhav Jain, 1 file, -1/+1)
The newly introduced 'perf_stats' attribute uses the default access mode of 0444, allowing non-root users to access performance stats of an nvdimm and potentially force the kernel into issuing a large number of expensive hypercalls. Since the information exposed by this attribute cannot be cached, it is better to ward off access to this attribute from users who don't need access to these performance statistics. Hence update the access mode of the 'perf_stats' attribute to be readable only by root users. Fixes: 2d02bf835e57 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP") Reported-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Vaibhav Jain <[email protected]> Reviewed-by: Ira Weiny <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
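In sysfs terms the change is just tightening the attribute mode from world-readable to root-only. A hedged illustration (the show() callback name is assumed; current kernels also offer a DEVICE_ATTR_ADMIN_RO() macro that expands to the same 0400 mode):

    /* Before the fix the attribute used the default read-only mode (0444),
     * letting any user trigger the costly hypercalls behind show():
     *
     *     static DEVICE_ATTR_RO(perf_stats);
     *
     * Restricting it to root means declaring it with mode 0400 instead: */
    static struct device_attribute dev_attr_perf_stats =
            __ATTR(perf_stats, 0400, perf_stats_show, NULL);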
2020-09-08  powerpc: remove address space overrides using set_fs()  (Christoph Hellwig, 6 files, -64/+9)
Stop providing the possibility to override the address space using set_fs() now that there is no need for that any more. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-08  powerpc: use non-set_fs based maccess routines  (Christoph Hellwig, 1 file, -0/+16)
Provide __get_kernel_nofault and __put_kernel_nofault routines to implement the maccess routines without messing with set_fs and without opening up access to user space. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
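For context, a simplified sketch of how the generic maccess code consumes these hooks; the arch only supplies __get_kernel_nofault()/__put_kernel_nofault() and the generic copy_from_kernel_nofault() drives them (the real helper copies in larger chunks first, this is condensed):

    /* Condensed sketch of the generic byte-wise copy built on the arch hook. */
    long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
    {
            if (!copy_from_kernel_nofault_allowed(src, size))
                    return -ERANGE;

            pagefault_disable();
            while (size--) {
                    /* Jumps to Efault if the kernel access faults. */
                    __get_kernel_nofault(dst, src, u8, Efault);
                    dst++;
                    src++;
            }
            pagefault_enable();
            return 0;
    Efault:
            pagefault_enable();
            return -EFAULT;
    }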
2020-09-08  uaccess: add infrastructure for kernel builds with set_fs()  (Christoph Hellwig, 1 file, -0/+1)
Add a CONFIG_SET_FS option that is selected by architectures that implement set_fs, which is all of them initially. If the option is not set, stubs for routines related to overriding the address space are provided so that architectures can start to opt out of providing set_fs. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-08  powerpc/64s: handle ISA v3.1 local copy-paste context switches  (Nicholas Piggin, 3 files, -7/+24)
In ISA v3.1 the copy-paste facility has new memory move functionality which allows the copy buffer to be pasted to domestic memory (RAM), as opposed to foreign memory (an accelerator). This means the POWER9 trick of avoiding the cp_abort on context switch if the process had not mapped foreign memory does not work on POWER10. Do the cp_abort unconditionally there. KVM must also cp_abort on guest exit to prevent copy buffer state leaking between contexts. Signed-off-by: Nicholas Piggin <[email protected]> Acked-by: Paul Mackerras <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
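On the context-switch side the change boils down to a CPU feature check rather than a per-task condition. A hedged sketch; the helper name and the task_used_foreign_memory() predicate are stand-ins for illustration, not the patch's actual code:

    /* Sketch: clear the copy buffer when switching away from a task. */
    static inline void switch_cp_abort(struct task_struct *prev)
    {
            /* ISA v3.1 (POWER10): paste can target normal memory, so the copy
             * buffer must always be cleared on context switch. */
            if (cpu_has_feature(CPU_FTR_ARCH_31)) {
                    asm volatile(PPC_CP_ABORT);
                    return;
            }

            /* POWER9: the buffer can only reach foreign (accelerator) memory,
             * so clearing is only needed if the task actually mapped it.
             * task_used_foreign_memory() is a hypothetical stand-in. */
            if (task_used_foreign_memory(prev))
                    asm volatile(PPC_CP_ABORT);
    }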
2020-09-08  powerpc: Warn about use of smt_snooze_delay  (Joel Stanley, 1 file, -25/+17)
It's not done anything for a long time. Save the percpu variable, and emit a warning to remind users to not expect it to do anything. This uses pr_warn_once instead of pr_warn_ratelimited, as testing 'ppc64_cpu --smt=off' on a 24 core / 4 SMT system showed the warning to be noisy, as the online/offline loop is slow. Fixes: 3fa8cad82b94 ("powerpc/pseries/cpuidle: smt-snooze-delay cleanup.") Cc: [email protected] # v3.14 Signed-off-by: Joel Stanley <[email protected]> Acked-by: Gautham R. Shenoy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
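A hedged sketch of what the sysfs store handler reduces to once the value is ignored; the handler name matches the long-standing attribute, but treat the exact message as illustrative:

    static ssize_t store_smt_snooze_delay(struct device *dev,
                                          struct device_attribute *attr,
                                          const char *buf, size_t count)
    {
            /* The value has had no effect for a long time; warn once rather
             * than per write, since ppc64_cpu writes it for every CPU. */
            pr_warn_once("%s (%d) stored to unsupported smt_snooze_delay\n",
                         current->comm, current->pid);
            return count;
    }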
2020-09-08  powerpc/powernv: Print helpful message when cores guarded  (Joel Stanley, 1 file, -0/+24)
Often the firmware will guard out cores after a crash. This is often undesirable and is not immediately noticeable. This adds an informative message when CPU device tree nodes are marked bad in the device tree. Signed-off-by: Joel Stanley <[email protected]> [mpe: Use an eye-catcher that's less likely to get us in trouble] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
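The device-tree walk this implies is small; a sketch, assuming guarded cores show up as cpu nodes with status = "bad", and with an illustrative helper name and message:

    static void __init pnv_check_guarded_cores(void)
    {
            struct device_node *dn;
            int bad_count = 0;

            for_each_node_by_type(dn, "cpu") {
                    /* Firmware marks guarded (deconfigured) cores as "bad". */
                    if (of_property_match_string(dn, "status", "bad") >= 0)
                            bad_count++;
            }

            if (bad_count)
                    pr_warn("%d cores are guarded out, check the firmware gard list\n",
                            bad_count);
    }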
2020-09-08  powerpc/pseries/iommu: Allow bigger 64bit window by removing default DMA window  (Leonardo Bras, 1 file, -7/+66)
In the LoPAR "DMA Window Manipulation Calls" section, it's recommended to remove the default DMA window for the device before attempting to configure a DDW, in order to make the maximum resources available for the next DDW to be created. This is a requirement for using DDW on devices for which the hypervisor allows only one DMA window. If setting up a new DDW fails anywhere after the removal of this default DMA window, the default window needs to be restored. For this, an implementation of the ibm,reset-pe-dma-windows RTAS call is needed: platforms supporting the DDW option starting with LoPAR level 2.7 implement ibm,ddw-extensions. The first extension available (index 2) carries the token for the ibm,reset-pe-dma-windows RTAS call, which is used to restore the default DMA window for a device if it has been deleted. It does so by resetting the TCE table allocation for the PE to its boot time value, available in the "ibm,dma-window" device tree node. Signed-off-by: Leonardo Bras <[email protected]> Tested-by: David Dai <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/pseries/iommu: Move window-removing part of remove_ddw into remove_dma_window  (Leonardo Bras, 1 file, -18/+27)
Move the window-removing part of remove_ddw into a new function (remove_dma_window), so it can be used to remove other DMA windows. It's useful for removing DMA windows that don't create DIRECT64_PROPNAME property, like the default DMA window from the device, which uses "ibm,dma-window". Signed-off-by: Leonardo Bras <[email protected]> Tested-by: David Dai <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/pseries/iommu: Update call to ibm,query-pe-dma-windows  (Leonardo Bras, 1 file, -10/+81)
From LoPAR level 2.8, "ibm,ddw-extensions" index 3 can make the number of outputs from "ibm,query-pe-dma-windows" go from 5 to 6. This change of output size is meant to expand the address size of the largest_available_block PE TCE from 32-bit to 64-bit, which ends up shifting page_size and migration_capable. This ends up requiring the update of ddw_query_response->largest_available_block from u32 to u64, and manually assigning the values from the buffer into this struct according to the output size. Also, a routine was created to help read the ddw extensions, as suggested by LoPAR: first read the size of the extension array from index 0, check whether the property exists, and then return its value. Signed-off-by: Leonardo Bras <[email protected]> Tested-by: David Dai <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
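A hedged sketch of such an extensions-reading helper, built on of_property_read_u32_index(); the DDW_EXT_* index names follow the defines described in the next entry, and index 0 holds the extension count:

    static inline int ddw_read_ext(const struct device_node *np, int extnum,
                                   u32 *value)
    {
            static const char propname[] = "ibm,ddw-extensions";
            u32 count;
            int ret;

            /* Index 0 of the property is the number of extensions present. */
            ret = of_property_read_u32_index(np, propname, DDW_EXT_SIZE, &count);
            if (ret)
                    return ret;

            if (count < extnum)
                    return -EOVERFLOW;

            /* Callers that only care whether the extension exists pass NULL. */
            if (!value)
                    value = &count;

            return of_property_read_u32_index(np, propname, extnum, value);
    }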
2020-09-08  powerpc/pseries/iommu: Create defines for operations in ibm,ddw-applicable  (Leonardo Bras, 1 file, -17/+26)
Create defines to help handling ibm,ddw-applicable values, avoiding confusion about the index of given operations. Signed-off-by: Leonardo Bras <[email protected]> Tested-by: David Dai <[email protected]> Reviewed-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
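The defines in question look roughly like the sketch below; the values track the order in which the properties list the RTAS tokens, but take the exact names as illustrative rather than quoted from the patch:

    /* Indices into "ibm,ddw-applicable": each entry is the RTAS token for the
     * corresponding dynamic-DMA-window operation. */
    enum {
            DDW_QUERY_PE_DMA_WIN  = 0,
            DDW_CREATE_PE_DMA_WIN = 1,
            DDW_REMOVE_PE_DMA_WIN = 2,

            DDW_APPLICABLE_SIZE
    };

    /* Indices into "ibm,ddw-extensions": index 0 is the extension count. */
    enum {
            DDW_EXT_SIZE           = 0,
            DDW_EXT_RESET_DMA_WIN  = 1,
            DDW_EXT_QUERY_OUT_SIZE = 2
    };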
2020-09-08  powerpc/tools: Remove 90 line limit in checkpatch script  (Russell Currey, 1 file, -1/+0)
As of commit bdc48fa11e46, scripts/checkpatch.pl now has a default line length warning of 100 characters. The powerpc wrapper script was using a length of 90 instead of 80 in order to make checkpatch less restrictive, but now it's making it more restrictive instead. I think it makes sense to just use the default value now. Signed-off-by: Russell Currey <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/boot: Update Makefile comment for 64bit wrapper  (Jordan Niethe, 1 file, -1/+1)
As of commit 147c05168fc8 ("powerpc/boot: Add support for 64bit little endian wrapper") the comment in the Makefile is misleading. The wrapper packaging a 64-bit kernel may be built as a 32-bit or 64-bit ELF. Update the comment to reflect this. Signed-off-by: Jordan Niethe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/64: Remove unused generic_secondary_thread_init()  (Michael Ellerman, 2 files, -6/+2)
The last caller was removed in 2014 in commit fb5a515704d7 ("powerpc: Remove platforms/wsp and associated pieces"). As Jordan noticed even though there are no callers, the code above in fsl_secondary_thread_init() falls through into generic_secondary_thread_init(). So we can remove the _GLOBAL but not the body of the function. However because fsl_secondary_thread_init() is inside #ifdef CONFIG_PPC_BOOK3E, we can never reach the body of generic_secondary_thread_init() unless CONFIG_PPC_BOOK3E is enabled, so we can wrap the whole thing in a single #ifdef. Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/pseries/eeh: Fix dumb linebreaks  (Oliver O'Halloran, 1 file, -8/+4)
These annoy me every time I see them. Why are they here? They're not even needed for 80cols compliance. Signed-off-by: Oliver O'Halloran <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-08  powerpc/process: Remove unnecessary #ifdef CONFIG_FUNCTION_GRAPH_TRACER  (Christophe Leroy, 1 file, -4/+0)
ftrace_graph_ret_addr() is always defined and returns 'ip' when CONFIG_FUNCTION_GRAPH_TRACER is not set, so the #ifdef is not needed; remove it. Signed-off-by: Christophe Leroy <[email protected]> Acked-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/9d11143d4e27ba8274369a926968756917584868.1597643153.git.christophe.leroy@csgroup.eu
2020-09-08  powerpc/uaccess: Add pre-update addressing to __get_user_asm() and __put_user_asm()  (Christophe Leroy, 1 file, -4/+4)
Enable pre-update addressing mode in __get_user_asm() and __put_user_asm(). Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Segher Boessenkool <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/13041c7df39e89ddf574ea0cdc6dedfdd9734140.1597235091.git.christophe.leroy@csgroup.eu
2020-09-08  powerpc: kprobes: Use generic kretprobe trampoline handler  (Masami Hiramatsu, 1 file, -50/+3)
Use the generic kretprobe trampoline handler. Don't use framepointer verification. Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/159870610825.1229682.2090635992093223399.stgit@devnote2
2020-09-08  powerpc/dma: Fix dma_map_ops::get_required_mask  (Alexey Kardashevskiy, 1 file, -1/+2)
There are 2 problems with it: 1. "<" vs expected "<<"; 2. the shift number is an IOMMU page number mask, not an address mask, as the IOMMU page shift is missing. This did not hit us before f1565c24b596 ("powerpc: use the generic dma_ops_bypass mode") because we had additional code to handle the bypass mask, so this chunk (almost?) never executed. However, there were reports that aacraid does not work with "iommu=nobypass". After f1565c24b596, aacraid (and probably others which call dma_get_required_mask() before setting the mask) was unable to enable 64bit DMA and fell back to using the IOMMU, which was known not to work; one of the problems is a double free of an IOMMU page. This fixes DMA for aacraid, both with and without "iommu=nobypass" in the kernel command line. Verified with "stress-ng -d 4". Fixes: 6a5c7be5e484 ("powerpc: Override dma_get_required_mask by platform hook and ops") Cc: [email protected] # v3.2+ Signed-off-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
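Put concretely, the fix is one expression in the IOMMU get_required_mask path: use a real shift and account for the IOMMU page size so the result is an address mask. A hedged before/after sketch (field names are those of struct iommu_table; treat the exact expression as illustrative):

    /* Buggy: "<" instead of "<<", and the value is only a page-number mask. */
    mask = 1ULL < (fls_long(tbl->it_offset + tbl->it_size) - 1);

    /* Fixed: shift, and add the IOMMU page shift to get an address mask. */
    mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
                    tbl->it_page_shift - 1);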
2020-09-07  arch: vdso: add vdso linker script to 'targets' instead of extra-y  (Masahiro Yamada, 2 files, -2/+2)
The vdso linker script is preprocessed on demand. Adding it to 'targets' is enough to include the .cmd file. Signed-off-by: Masahiro Yamada <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Greentime Hu <[email protected]>
2020-09-04  crypto: powerpc/crc-vpmsum_test - Fix sparse endianness warning  (Herbert Xu, 1 file, -2/+3)
This patch fixes a sparse endianness warning by changing crc32 to __le32 instead of u32: CHECK ../arch/powerpc/crypto/crc-vpmsum_test.c ../arch/powerpc/crypto/crc-vpmsum_test.c:102:39: warning: cast from restricted __le32 Signed-off-by: Herbert Xu <[email protected]>
2020-09-03  dma-mapping: introduce dma_get_seg_boundary_nr_pages()  (Nicolin Chen, 1 file, -9/+2)
We found that callers of dma_get_seg_boundary mostly do an ALIGN with page mask and then do a page shift to get number of pages: ALIGN(boundary + 1, 1 << shift) >> shift However, the boundary might be as large as ULONG_MAX, which means that a device has no specific boundary limit. So either "+ 1" or passing it to ALIGN() would potentially overflow. According to kernel defines: #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask)) #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1) We can simplify the logic here into a helper function doing: ALIGN(boundary + 1, 1 << shift) >> shift = ALIGN_MASK(b + 1, (1 << s) - 1) >> s = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s = [b + 1 + (1 << s) - 1] >> s = [b + (1 << s)] >> s = (b >> s) + 1 This patch introduces and applies dma_get_seg_boundary_nr_pages() as an overflow-free helper for the dma_get_seg_boundary() callers to get numbers of pages. It also takes care of the NULL dev case for non-DMA API callers. Suggested-by: Christoph Hellwig <[email protected]> Signed-off-by: Nicolin Chen <[email protected]> Acked-by: Niklas Schnelle <[email protected]> Acked-by: Michael Ellerman <[email protected]> (powerpc) Signed-off-by: Christoph Hellwig <[email protected]>
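The helper that falls out of that algebra is small. A sketch consistent with the description, assuming a NULL dev is treated as a 32-bit boundary for non-DMA-API callers:

    /* Number of pages that fit within the device's segment boundary. */
    static inline unsigned long dma_get_seg_boundary_nr_pages(struct device *dev,
                                                              unsigned int page_shift)
    {
            if (!dev)
                    return (U32_MAX >> page_shift) + 1;
            return (dma_get_seg_boundary(dev) >> page_shift) + 1;
    }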
2020-09-03  Revert "powerpc/build: vdso linker warning for orphan sections"  (Michael Ellerman, 4 files, -5/+3)
This reverts commit f2af201002a8bc22500c04cc474ea480bf361351. It added a usage of cc-ldoption, but cc-ldoption was removed in commit 055efab3120b ("kbuild: drop support for cc-ldoption"). Reported-by: Nick Desaulniers <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2020-09-03  KVM: PPC: Book3S HV: XICS: Replace the 'destroy' method by a 'release' method  (Greg Kurz, 3 files, -19/+72)
Similarly to what was done with XICS-on-XIVE and XIVE native KVM devices with commit 5422e95103cf ("KVM: PPC: Book3S HV: XIVE: Replace the 'destroy' method by a 'release' method"), convert the historical XICS KVM device to implement the 'release' method. This is needed to run nested guests with an in-kernel IRQ chip. A typical POWER9 guest can select XICS or XIVE during boot, which requires to be able to destroy and to re-create the KVM device. Only the historical XICS KVM device is available under pseries at the current time and it still uses the legacy 'destroy' method. Switching to 'release' means that vCPUs might still be running when the device is destroyed. In order to avoid potential use-after-free, the kvmppc_xics structure is allocated on first usage and kept around until the VM exits. The same pointer is used each time a KVM XICS device is being created, but this is okay since we only have one per VM. Clear the ICP of each vCPU with vcpu->mutex held. This ensures that the next time the vCPU resumes execution, it won't be going into the XICS code anymore. Signed-off-by: Greg Kurz <[email protected]> Reviewed-by: Cédric Le Goater <[email protected]> Tested-by: Cédric Le Goater <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
2020-09-02  powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc  (Aneesh Kumar K.V, 1 file, -1/+0)
The test is broken w.r.t page table update rules and results in kernel crash as below. Disable the support until we get the tests updated. [ 21.083519] kernel BUG at arch/powerpc/mm/pgtable.c:304! cpu 0x0: Vector: 700 (Program Check) at [c000000c6d1e76c0] pc: c00000000009a5ec: assert_pte_locked+0x14c/0x380 lr: c0000000005eeeec: pte_update+0x11c/0x190 sp: c000000c6d1e7950 msr: 8000000002029033 current = 0xc000000c6d172c80 paca = 0xc000000003ba0000 irqmask: 0x03 irq_happened: 0x01 pid = 1, comm = swapper/0 kernel BUG at arch/powerpc/mm/pgtable.c:304! [link register ] c0000000005eeeec pte_update+0x11c/0x190 [c000000c6d1e7950] 0000000000000001 (unreliable) [c000000c6d1e79b0] c0000000005eee14 pte_update+0x44/0x190 [c000000c6d1e7a10] c000000001a2ca9c pte_advanced_tests+0x160/0x3d8 [c000000c6d1e7ab0] c000000001a2d4fc debug_vm_pgtable+0x7e8/0x1338 [c000000c6d1e7ba0] c0000000000116ec do_one_initcall+0xac/0x5f0 [c000000c6d1e7c80] c0000000019e4fac kernel_init_freeable+0x4dc/0x5a4 [c000000c6d1e7db0] c000000000012474 kernel_init+0x24/0x160 [c000000c6d1e7e20] c00000000000cbd0 ret_from_kernel_thread+0x5c/0x6c With DEBUG_VM disabled [ 20.530152] BUG: Kernel NULL pointer dereference on read at 0x00000000 [ 20.530183] Faulting instruction address: 0xc0000000000df330 cpu 0x33: Vector: 380 (Data SLB Access) at [c000000c6d19f700] pc: c0000000000df330: memset+0x68/0x104 lr: c00000000009f6d8: hash__pmdp_huge_get_and_clear+0xe8/0x1b0 sp: c000000c6d19f990 msr: 8000000002009033 dar: 0 current = 0xc000000c6d177480 paca = 0xc00000001ec4f400 irqmask: 0x03 irq_happened: 0x01 pid = 1, comm = swapper/0 [link register ] c00000000009f6d8 hash__pmdp_huge_get_and_clear+0xe8/0x1b0 [c000000c6d19f990] c00000000009f748 hash__pmdp_huge_get_and_clear+0x158/0x1b0 (unreliable) [c000000c6d19fa10] c0000000019ebf30 pmd_advanced_tests+0x1f0/0x378 [c000000c6d19fab0] c0000000019ed088 debug_vm_pgtable+0x79c/0x1244 [c000000c6d19fba0] c0000000000116ec do_one_initcall+0xac/0x5f0 [c000000c6d19fc80] c0000000019a4fac kernel_init_freeable+0x4dc/0x5a4 [c000000c6d19fdb0] c000000000012474 kernel_init+0x24/0x160 [c000000c6d19fe20] c00000000000cbd0 ret_from_kernel_thread+0x5c/0x6c 33:mon> Signed-off-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc/uaccess: Use flexible addressing with __put_user()/__get_user()  (Christophe Leroy, 1 file, -14/+14)
At the time being, __put_user()/__get_user() and friends only use D-form addressing, with 0 offset. Ex: lwz reg1, 0(reg2) Give the compiler the opportunity to use other adressing modes whenever possible, to get more optimised code. Hereunder is a small exemple: struct test { u32 item1; u16 item2; u8 item3; u64 item4; }; int set_test_user(struct test __user *from, struct test __user *to) { int err; u32 item1; u16 item2; u8 item3; u64 item4; err = __get_user(item1, &from->item1); err |= __get_user(item2, &from->item2); err |= __get_user(item3, &from->item3); err |= __get_user(item4, &from->item4); err |= __put_user(item1, &to->item1); err |= __put_user(item2, &to->item2); err |= __put_user(item3, &to->item3); err |= __put_user(item4, &to->item4); return err; } Before the patch: 00000df0 <set_test_user>: df0: 94 21 ff f0 stwu r1,-16(r1) df4: 39 40 00 00 li r10,0 df8: 93 c1 00 08 stw r30,8(r1) dfc: 93 e1 00 0c stw r31,12(r1) e00: 7d 49 53 78 mr r9,r10 e04: 80 a3 00 00 lwz r5,0(r3) e08: 38 e3 00 04 addi r7,r3,4 e0c: 7d 46 53 78 mr r6,r10 e10: a0 e7 00 00 lhz r7,0(r7) e14: 7d 29 33 78 or r9,r9,r6 e18: 39 03 00 06 addi r8,r3,6 e1c: 7d 46 53 78 mr r6,r10 e20: 89 08 00 00 lbz r8,0(r8) e24: 7d 29 33 78 or r9,r9,r6 e28: 38 63 00 08 addi r3,r3,8 e2c: 7d 46 53 78 mr r6,r10 e30: 83 c3 00 00 lwz r30,0(r3) e34: 83 e3 00 04 lwz r31,4(r3) e38: 7d 29 33 78 or r9,r9,r6 e3c: 7d 43 53 78 mr r3,r10 e40: 90 a4 00 00 stw r5,0(r4) e44: 7d 29 1b 78 or r9,r9,r3 e48: 38 c4 00 04 addi r6,r4,4 e4c: 7d 43 53 78 mr r3,r10 e50: b0 e6 00 00 sth r7,0(r6) e54: 7d 29 1b 78 or r9,r9,r3 e58: 38 e4 00 06 addi r7,r4,6 e5c: 7d 43 53 78 mr r3,r10 e60: 99 07 00 00 stb r8,0(r7) e64: 7d 23 1b 78 or r3,r9,r3 e68: 38 84 00 08 addi r4,r4,8 e6c: 93 c4 00 00 stw r30,0(r4) e70: 93 e4 00 04 stw r31,4(r4) e74: 7c 63 53 78 or r3,r3,r10 e78: 83 c1 00 08 lwz r30,8(r1) e7c: 83 e1 00 0c lwz r31,12(r1) e80: 38 21 00 10 addi r1,r1,16 e84: 4e 80 00 20 blr After the patch: 00000dbc <set_test_user>: dbc: 39 40 00 00 li r10,0 dc0: 7d 49 53 78 mr r9,r10 dc4: 80 03 00 00 lwz r0,0(r3) dc8: 7d 48 53 78 mr r8,r10 dcc: a1 63 00 04 lhz r11,4(r3) dd0: 7d 29 43 78 or r9,r9,r8 dd4: 7d 48 53 78 mr r8,r10 dd8: 88 a3 00 06 lbz r5,6(r3) ddc: 7d 29 43 78 or r9,r9,r8 de0: 7d 48 53 78 mr r8,r10 de4: 80 c3 00 08 lwz r6,8(r3) de8: 80 e3 00 0c lwz r7,12(r3) dec: 7d 29 43 78 or r9,r9,r8 df0: 7d 43 53 78 mr r3,r10 df4: 90 04 00 00 stw r0,0(r4) df8: 7d 29 1b 78 or r9,r9,r3 dfc: 7d 43 53 78 mr r3,r10 e00: b1 64 00 04 sth r11,4(r4) e04: 7d 29 1b 78 or r9,r9,r3 e08: 7d 43 53 78 mr r3,r10 e0c: 98 a4 00 06 stb r5,6(r4) e10: 7d 23 1b 78 or r3,r9,r3 e14: 90 c4 00 08 stw r6,8(r4) e18: 90 e4 00 0c stw r7,12(r4) e1c: 7c 63 53 78 or r3,r3,r10 e20: 4e 80 00 20 blr Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Segher Boessenkool <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/c27bc4e598daf3bbb225de7a1f5c52121cf1e279.1597235091.git.christophe.leroy@csgroup.eu
2020-09-02  powerpc: Remove flush_instruction_cache() on 8xx  (Christophe Leroy, 1 file, -7/+0)
flush_instruction_cache() is never used on 8xx, remove it. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/245cabd8f291facac8c8c5fd370e361a69e02860.1597384145.git.christophe.leroy@csgroup.eu
2020-09-02  powerpc: unrel_branch_check.sh: enable the use of llvm-objdump v9, 10 or 11  (Stephen Rothwell, 1 file, -5/+29)
Currently, using llvm-objdump, this script just silently succeeds without actually doing the intended checking, so this updates it to work properly. Firstly, llvm-objdump does not add target symbol names to the end of branches in its asm output, so we have to drop the branch to __start_initialization_multiplatform using its address. Secondly, v9 and v10 specify branch targets as .+<offset>, so we convert those to actual addresses. Thirdly, v10 and v11 error out on a vmlinux if given the -R option, complaining that it is "not a dynamic object". The -R does not make any difference to the asm output, so remove it. Lastly, v11 produces asm that is very similar to GNU objdump's (at least as far as branches are concerned), so no further changes are necessary to make it work. Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc: unrel_branch_check.sh: use nm to find symbol value  (Stephen Rothwell, 2 files, -9/+6)
This is considerably faster than parsing the objdump asm output. It will also make enabling llvm-objdump a little easier. Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc: unrel_branch_check.sh: exit silently for early errors  (Stephen Rothwell, 1 file, -1/+4)
If we can't find the address of __end_interrupts, then we still exit successfully as that is the current behaviour. Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc: unrel_branch_check.sh: fix up the file header  (Stephen Rothwell, 1 file, -11/+4)
Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc: unrel_branch_check.sh: simplify and tidy up the final loop  (Stephen Rothwell, 1 file, -16/+10)
Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-02  powerpc: unrel_branch_check.sh: convert grep | sed | awk to just sed  (Stephen Rothwell, 1 file, -10/+20)
Also start using sed -E and make all the separate expressions into a single one with comments. Pull the stripping of condition registers back into the sed command. Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]