path: root/arch/s390/mm
Age  Commit message  [Author, files changed, lines -deleted/+added]
2021-09-08  mm/memory_hotplug: remove nid parameter from arch_remove_memory()  [David Hildenbrand, 1 file, -2/+1]
The parameter is unused, let's remove it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Heiko Carstens <[email protected]> [s390] Reviewed-by: Pankaj Gupta <[email protected]> Reviewed-by: Oscar Salvador <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Pavel Tatashin <[email protected]> Cc: Baoquan He <[email protected]> Cc: Laurent Dufour <[email protected]> Cc: Sergei Trofimovich <[email protected]> Cc: Kefeng Wang <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: "Aneesh Kumar K.V" <[email protected]> Cc: Thiago Jung Bauermann <[email protected]> Cc: Joe Perches <[email protected]> Cc: Pierre Morel <[email protected]> Cc: Jia He <[email protected]> Cc: Anton Blanchard <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Jiang <[email protected]> Cc: Jason Wang <[email protected]> Cc: Len Brown <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Nathan Lynch <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: Scott Cheloha <[email protected]> Cc: Vishal Verma <[email protected]> Cc: Vitaly Kuznetsov <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Wei Yang <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
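Illustration (not part of the commit message): the change boils down to dropping the unused node id from the arch hook's prototype, roughly:

    /* before: nid was passed in but never used by any architecture */
    void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap);

    /* after */
    void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap);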
2021-09-07  Revert "mm/gup: remove try_get_page(), call try_get_compound_head() directly"  [Linus Torvalds, 1 file, -1/+1]
This reverts commit 9857a17f206ff374aea78bccfb687f145368be2e. That commit was completely broken, and I should have caught on to it earlier. But happily, the kernel test robot noticed the breakage fairly quickly. The breakage is because "try_get_page()" is about avoiding the page reference count overflow case, but is otherwise the exact same as a plain "get_page()". In contrast, "try_get_compound_head()" is an entirely different beast, and uses __page_cache_add_speculative() because it's not just about the page reference count, but also about possibly racing with the underlying page going away. So all the commentary about how "try_get_page() has fallen a little behind in terms of maintenance, try_get_compound_head() handles speculative page references more thoroughly" was just completely wrong: yes, try_get_compound_head() handles speculative page references, but the point is that try_get_page() does not, and must not. So there's no lack of maintenance - there are fundamentally different semantics. A speculative page reference would be entirely wrong in "get_page()", and it's entirely wrong in "try_get_page()". It's not about speculation, it's purely about "uhhuh, you can't get this page because you've tried to increment the reference count too much already". The reason the kernel test robot noticed this bug was that it hit the VM_BUG_ON() in __page_cache_add_speculative(), which is all about verifying that the context of any speculative page access is correct. But since that isn't what try_get_page() is all about, the VM_BUG_ON() tests things that are not correct to test for try_get_page(). Reported-by: kernel test robot <[email protected]> Cc: John Hubbard <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
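As a hedged aside (not quoted from the commit): try_get_page() is a plain get_page() plus a guard against a reference count that has been driven into an invalid state, with no speculative-access handling at all. A sketch close to the mainline helper:

    static inline bool try_get_page(struct page *page)
    {
            page = compound_head(page);
            /* refuse the reference only if the refcount is already invalid */
            if (WARN_ON_ONCE(page_ref_count(page) <= 0))
                    return false;
            page_ref_inc(page);
            return true;
    }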
2021-09-07  s390/mm: fix kernel doc comments  [Heiko Carstens, 2 files, -8/+7]
Signed-off-by: Heiko Carstens <[email protected]>
2021-09-03  Merge branch 'stable/for-linus-5.15' of …  [Linus Torvalds, 1 file, -1/+1]
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb Pull swiotlb updates from Konrad Rzeszutek Wilk: "A new feature called restricted DMA pools. It allows SWIOTLB to utilize per-device (or per-platform) allocated memory pools instead of using the global one. The first big user of this is ARM Confidential Computing where the memory for DMA operations can be set per platform" * 'stable/for-linus-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb: (23 commits) swiotlb: use depends on for DMA_RESTRICTED_POOL of: restricted dma: Don't fail device probe on rmem init failure of: Move of_dma_set_restricted_buffer() into device.c powerpc/svm: Don't issue ultracalls if !mem_encrypt_active() s390/pv: fix the forcing of the swiotlb swiotlb: Free tbl memory in swiotlb_exit() swiotlb: Emit diagnostic in swiotlb_exit() swiotlb: Convert io_default_tlb_mem to static allocation of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS swiotlb: add overflow checks to swiotlb_bounce swiotlb: fix implicit debugfs declarations of: Add plumbing for restricted DMA pool dt-bindings: of: Add restricted DMA pool swiotlb: Add restricted DMA pool initialization swiotlb: Add restricted DMA alloc/free support swiotlb: Refactor swiotlb_tbl_unmap_single swiotlb: Move alloc_size to swiotlb_find_slots swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing swiotlb: Update is_swiotlb_active to add a struct device argument swiotlb: Update is_swiotlb_buffer to add a struct device argument ...
2021-09-03  Merge branch 'akpm' (patches from Andrew)  [Linus Torvalds, 1 file, -1/+1]
Merge misc updates from Andrew Morton: "173 patches. Subsystems affected by this series: ia64, ocfs2, block, and mm (debug, pagecache, gup, swap, shmem, memcg, selftests, pagemap, mremap, bootmem, sparsemem, vmalloc, kasan, pagealloc, memory-failure, hugetlb, userfaultfd, vmscan, compaction, mempolicy, memblock, oom-kill, migration, ksm, percpu, vmstat, and madvise)" * emailed patches from Andrew Morton <[email protected]>: (173 commits) mm/madvise: add MADV_WILLNEED to process_madvise() mm/vmstat: remove unneeded return value mm/vmstat: simplify the array size calculation mm/vmstat: correct some wrong comments mm/percpu,c: remove obsolete comments of pcpu_chunk_populated() selftests: vm: add COW time test for KSM pages selftests: vm: add KSM merging time test mm: KSM: fix data type selftests: vm: add KSM merging across nodes test selftests: vm: add KSM zero page merging test selftests: vm: add KSM unmerge test selftests: vm: add KSM merge test mm/migrate: correct kernel-doc notation mm: wire up syscall process_mrelease mm: introduce process_mrelease system call memblock: make memblock_find_in_range method private mm/mempolicy.c: use in_task() in mempolicy_slab_node() mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies mm/mempolicy: advertise new MPOL_PREFERRED_MANY mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY ...
2021-09-03  mm/gup: remove try_get_page(), call try_get_compound_head() directly  [John Hubbard, 1 file, -1/+1]
try_get_page() is very similar to try_get_compound_head(), and in fact try_get_page() has fallen a little behind in terms of maintenance: try_get_compound_head() handles speculative page references more thoroughly. There are only two try_get_page() callsites, so just call try_get_compound_head() directly from those, and remove try_get_page() entirely. Also, seeing as how this changes try_get_compound_head() into a non-static function, provide some kerneldoc documentation for it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: John Hubbard <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Vasily Gorbik <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-08-25  s390/mm,pageattr: fix walk_pte_level() early exit  [Heiko Carstens, 1 file, -1/+1]
In case of splitting to a 4k mapping the early exit in walk_pte_level() must only be taken if flags is equal to SET_MEMORY_4K. Currently the early exit is taken if the flag is set, even if other flags are set as well. This may lead to the situation that a mapping is split but other requested changes are not applied, like e.g. setting pages to R/W. There is currently no such caller, but there might be in the future. Fixes: b3e1a00c8fa4 ("s390/mm: implement set_memory_4k()") Signed-off-by: Heiko Carstens <[email protected]>
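A hedged sketch of the condition being fixed (identifiers as named in the message, surrounding code simplified):

    /* wrong: bails out whenever SET_MEMORY_4K is among the requested flags,
     * even if e.g. SET_MEMORY_RW was requested as well */
    if (flags & SET_MEMORY_4K)
            return 0;

    /* intended: bail out only when splitting to 4k is the sole request */
    if (flags == SET_MEMORY_4K)
            return 0;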
2021-08-25  s390/kasan: fix large PMD pages address alignment check  [Alexander Gordeev, 1 file, -21/+20]
It is currently possible to initialize a large PMD page when the address is not aligned on a page boundary. Signed-off-by: Alexander Gordeev <[email protected]> Reviewed-by: Vasily Gorbik <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2021-08-18  s390/mm: remove unused cmma functions  [David Hildenbrand, 1 file, -43/+0]
The last user of arch_set_page_states(), arch_set_page_nodat() and arch_test_page_nodat() was removed in commit 394216275c7d ("s390: remove broken hibernate / power management support"), let's remove these functions. Signed-off-by: David Hildenbrand <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-08-05  s390: rename dma section to amode31  [Heiko Carstens, 2 files, -3/+3]
The dma section name is confusing, since the code which resides within that section has nothing to do with direct memory access. Instead the limitation is that the code has to run in 31 bit addressing mode, and therefore has to reside below 2GB. So the name was chosen since ZONE_DMA is the same region. To reduce confusion rename the section to amode31, which hopefully describes better what this is about. Note: this will also change vmcoreinfo strings - SDMA=... gets renamed to SAMODE31=... - EDMA=... gets renamed to EAMODE31=... Acked-by: Vasily Gorbik <[email protected]> Reviewed-by: Alexander Egorenkov <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2021-08-05  s390/mm: use page_to_virt() in __kernel_map_pages()  [Heiko Carstens, 1 file, -1/+1]
Fix virtual vs physical address confusion (which currently are the same). Reviewed-by: Alexander Gordeev <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2021-08-05  s390: replace deprecated CPU-hotplug functions  [Sebastian Andrzej Siewior, 1 file, -2/+2]
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock(). Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged. Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Heiko Carstens <[email protected]>
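The replacement is mechanical; an illustrative call site (the work in between is hypothetical):

    /* deprecated names */
    get_online_cpus();
    update_page_states();          /* any work that must not race with CPU hotplug */
    put_online_cpus();

    /* official names, same behavior */
    cpus_read_lock();
    update_page_states();
    cpus_read_unlock();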
2021-07-30  s390: add kfence region to pagetable dumper  [Sven Schnelle, 1 file, -0/+16]
Signed-off-by: Sven Schnelle <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Heiko Carstens <[email protected]>
2021-07-30  s390: add support for KFENCE  [Sven Schnelle, 3 files, -4/+11]
Signed-off-by: Sven Schnelle <[email protected]> [[email protected]: simplify/rework code] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Heiko Carstens <[email protected]>
2021-07-30  s390/mm: implement set_memory_4k()  [Heiko Carstens, 1 file, -2/+12]
Implement set_memory_4k() which will split any present large or huge mapping in the given range to a 4k mapping. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Heiko Carstens <[email protected]>
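A minimal usage sketch (buffer and length are hypothetical, assuming the usual set_memory_*() convention of a page-aligned virtual address plus a page count):

    unsigned long addr = (unsigned long)buf;        /* hypothetical kernel address */
    int rc;

    /* split any large/huge mapping covering the range down to 4k pages */
    rc = set_memory_4k(addr, PAGE_ALIGN(len) >> PAGE_SHIFT);
    if (rc)
            pr_warn("set_memory_4k failed: %d\n", rc);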
2021-07-27  s390/boot: introduce boot data 'initrd_data'  [Alexander Egorenkov, 1 file, -1/+1]
The new boot data struct shall replace global variables INITRD_START and INITRD_SIZE. It is initialized in the decompressor and passed to the decompressed kernel. In comparison to the old solution, this one doesn't access data at fixed physical addresses which will become important when the decompressor becomes relocatable. Signed-off-by: Alexander Egorenkov <[email protected]> Acked-by: Heiko Carstens <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2021-07-23  s390/pv: fix the forcing of the swiotlb  [Halil Pasic, 1 file, -1/+1]
Since commit 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing") if code sets swiotlb_force it needs to do so before the swiotlb is initialised. Otherwise io_tlb_default_mem->force_bounce will not get set to true, and devices that use (the default) swiotlb will not bounce despite swiotlb_force having the value of SWIOTLB_FORCE. Let us restore swiotlb functionality for PV by fulfilling this new requirement. This change addresses what turned out to be a fragility in commit 64e1f0c531d1 ("s390/mm: force swiotlb for protected virtualization"), which ain't exactly broken in its original context, but could give us some more headache if people backport the broken change and forget this fix. Signed-off-by: Halil Pasic <[email protected]> Tested-by: Christian Borntraeger <[email protected]> Reviewed-by: Christian Borntraeger <[email protected]> Fixes: 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing") Fixes: 64e1f0c531d1 ("s390/mm: force swiotlb for protected virtualization") Cc: [email protected] #5.3+ Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
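The ordering requirement described above, as a hedged sketch (is_prot_virt_guest() is the existing s390 helper; the surrounding setup code is simplified):

    /* must run before swiotlb_init(), otherwise
     * io_tlb_default_mem->force_bounce stays false */
    if (is_prot_virt_guest())
            swiotlb_force = SWIOTLB_FORCE;

    swiotlb_init(1);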
2021-07-08  s390/mm: use call_on_stack() macro  [Heiko Carstens, 1 file, -4/+9]
Reviewed-by: Sven Schnelle <[email protected]> Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-07-08  s390/signal: remove sigreturn on stack  [Sven Schnelle, 1 file, -23/+0]
{rt_}sigreturn is now called from the vdso, so we no longer need the svc on the stack, and therefore no hack to support that mechanism on machines with non-executable stack. Signed-off-by: Sven Schnelle <[email protected]> Reviewed-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-07-05  s390: mm: Fix secure storage access exception handling  [Janosch Frank, 1 file, -0/+26]
Turns out that the bit 61 in the TEID is not always 1 and if that's the case the address space ID and the address are unpredictable. Without an address and its address space ID we can't export memory and hence we can only send a SIGSEGV to the process or panic the kernel depending on who caused the exception. Unfortunately bit 61 is only reliable if we have the "misc" UV feature bit. Signed-off-by: Janosch Frank <[email protected]> Reviewed-by: Christian Borntraeger <[email protected]> Fixes: 084ea4d611a3d ("s390/mm: add (non)secure page access exceptions handlers") Cc: [email protected] Signed-off-by: Vasily Gorbik <[email protected]>
2021-07-04  Merge tag 's390-5.14-1' of …  [Linus Torvalds, 3 files, -45/+21]
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux Pull s390 updates from Vasily Gorbik: - Rework inline asm to get rid of error prone "register asm" constructs, which are problematic especially when code instrumentation is enabled. In particular introduce and use register pair union to allocate even/odd register pairs. Unfortunately this breaks compatibility with older clang compilers and minimum clang version for s390 has been raised to 13. https://lore.kernel.org/linux-next/CAK7LNARuSmPCEy-ak0erPrPTgZdGVypBROFhtw+=3spoGoYsyw@mail.gmail.com/ - Fix gcc 11 warnings, which triggered various minor reworks all over the code. - Add zstd kernel image compression support. - Rework boot CPU lowcore handling. - De-duplicate and move kernel memory layout setup logic earlier. - Few fixes in preparation for FORTIFY_SOURCE performing compile-time and run-time field bounds checking for mem functions. - Remove broken and unused power management support leftovers in s390 drivers. - Disable stack-protector for decompressor and purgatory to fix buildroot build. - Fix vt220 sclp console name to match the char device name. - Enable HAVE_IOREMAP_PROT and add zpci_set_irq()/zpci_clear_irq() in zPCI code. - Remove some implausible WARN_ON_ONCEs and remove arch specific counter transaction call backs in favour of default transaction handling in perf code. - Extend/add new uevents for online/config/mode state changes of AP card / queue device in zcrypt. - Minor entry and ccwgroup code improvements. - Other small various fixes and improvements all over the code. * tag 's390-5.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (91 commits) s390/dasd: use register pair instead of register asm s390/qdio: get rid of register asm s390/ioasm: use symbolic names for asm operands s390/ioasm: get rid of register asm s390/cmf: get rid of register asm s390/lib,string: get rid of register asm s390/lib,uaccess: get rid of register asm s390/string: get rid of register asm s390/cmpxchg: use register pair instead of register asm s390/mm,pages-states: get rid of register asm s390/lib,xor: get rid of register asm s390/timex: get rid of register asm s390/hypfs: use register pair instead of register asm s390/zcrypt: Switch to flexible array member s390/speculation: Use statically initialized const for instructions virtio/s390: get rid of open-coded kvm hypercall s390/pci: add zpci_set_irq()/zpci_clear_irq() scripts/min-tool-version.sh: Raise minimum clang version to 13.0.0 for s390 s390/ipl: use register pair instead of register asm s390/mem_detect: fix tprot() program check new psw handling ...
2021-07-02  Merge branch 'akpm' (patches from Andrew)  [Linus Torvalds, 1 file, -1/+1]
Merge more updates from Andrew Morton: "190 patches. Subsystems affected by this patch series: mm (hugetlb, userfaultfd, vmscan, kconfig, proc, z3fold, zbud, ras, mempolicy, memblock, migration, thp, nommu, kconfig, madvise, memory-hotplug, zswap, zsmalloc, zram, cleanups, kfence, and hmm), procfs, sysctl, misc, core-kernel, lib, lz4, checkpatch, init, kprobes, nilfs2, hfs, signals, exec, kcov, selftests, compress/decompress, and ipc" * emailed patches from Andrew Morton <[email protected]>: (190 commits) ipc/util.c: use binary search for max_idx ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock ipc: use kmalloc for msg_queue and shmid_kernel ipc sem: use kvmalloc for sem_undo allocation lib/decompressors: remove set but not used variabled 'level' selftests/vm/pkeys: exercise x86 XSAVE init state selftests/vm/pkeys: refill shadow register after implicit kernel write selftests/vm/pkeys: handle negative sys_pkey_alloc() return code selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random kcov: add __no_sanitize_coverage to fix noinstr for all architectures exec: remove checks in __register_bimfmt() x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned hfsplus: report create_date to kstat.btime hfsplus: remove unnecessary oom message nilfs2: remove redundant continue statement in a while-loop kprobes: remove duplicated strong free_insn_page in x86 and s390 init: print out unknown kernel parameters checkpatch: do not complain about positive return values starting with EPOLL checkpatch: improve the indented label test checkpatch: scripts/spdxcheck.py now requires python3 ...
2021-07-01  mm: remove special swap entry functions  [Alistair Popple, 1 file, -1/+1]
Patch series "Add support for SVM atomics in Nouveau", v11.

Introduction
============
Some devices have features such as atomic PTE bits that can be used to implement atomic access to system memory. To support atomic operations to a shared virtual memory page such a device needs access to that page which is exclusive of the CPU. This series introduces a mechanism to temporarily unmap pages granting exclusive access to a device. These changes are required to support OpenCL atomic operations in Nouveau to shared virtual memory (SVM) regions allocated with the CL_MEM_SVM_ATOMICS clSVMAlloc flag. A more complete description of the OpenCL SVM feature is available at https://www.khronos.org/registry/OpenCL/specs/3.0-unified/html/OpenCL_API.html#_shared_virtual_memory .

Implementation
==============
Exclusive device access is implemented by adding a new swap entry type (SWAP_DEVICE_EXCLUSIVE) which is similar to a migration entry. The main difference is that on fault the original entry is immediately restored by the fault handler instead of waiting. Restoring the entry triggers calls to MMU notifiers which allows a device driver to revoke the atomic access permission from the GPU prior to the CPU finalising the entry.

Patches
=======
Patches 1 & 2 refactor existing migration and device private entry functions. Patches 3 & 4 rework try_to_unmap_one() by splitting out unrelated functionality into separate functions - try_to_migrate_one() and try_to_munlock_one(). Patch 5 renames some existing code but does not introduce functionality. Patch 6 is a small clean-up to swap entry handling in copy_pte_range(). Patch 7 contains the bulk of the implementation for device exclusive memory. Patch 8 contains some additions to the HMM selftests to ensure everything works as expected. Patch 9 is a cleanup for the Nouveau SVM implementation. Patch 10 contains the implementation of atomic access for the Nouveau driver.

Testing
=======
This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program which checks that GPU atomic accesses to system memory are atomic. Without this series the test fails as there is no way of write-protecting the page mapping which results in the device clobbering CPU writes. For reference the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/ Further testing has been performed by adding support for testing exclusive access to the hmm-tests kselftests.

This patch (of 10):
Remove multiple similar inline functions for dealing with different types of special swap entries. Both migration and device private swap entries use the swap offset to store a pfn. Instead of multiple inline functions to obtain a struct page for each swap entry type use a common function pfn_swap_entry_to_page(). Also open-code the various entry_to_pfn() functions as this results in shorter code that is easier to understand.

Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Alistair Popple <[email protected]> Reviewed-by: Ralph Campbell <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: "Matthew Wilcox (Oracle)" <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Peter Xu <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Ben Skeggs <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: John Hubbard <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
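A sketch of the consolidated helper described in the last paragraph (close to, but not necessarily identical with, the final kernel code):

    static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
    {
            struct page *p = pfn_to_page(swp_offset(entry));

            /* migration entries may only be used while the page is locked */
            BUG_ON(is_migration_entry(entry) && !PageLocked(p));

            return p;
    }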
2021-06-28  s390/mm,pages-states: get rid of register asm  [Heiko Carstens, 1 file, -6/+6]
There is no reason to use fixed registers for the essa instruction. Therefore remove the register asm construct. Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-06-18  s390/maccess: use register pair instead of register asm  [Heiko Carstens, 1 file, -10/+9]
Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-06-18  s390: setup kernel memory layout early  [Vasily Gorbik, 1 file, -29/+6]
Currently there are two separate places where the kernel memory layout has to be known and adjusted:
1. early kasan setup
2. paging setup later
Those two places had to be kept in sync and adjusted to reflect peculiar technical details of one another. With additional factors which influence the kernel memory layout, like the ultravisor secure storage limit, the complexity of keeping the two in sync grew even more. Besides that, if we look forward towards creating the identity mapping and enabling DAT before jumping into the uncompressed kernel, that would also require full knowledge of and control over the kernel memory layout. So, de-duplicate and move the kernel memory layout setup logic into the decompressor. Reviewed-by: Alexander Gordeev <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-06-18  sched: Introduce task_is_running()  [Peter Zijlstra, 1 file, -1/+1]
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper: task_is_running(p). Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
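The helper is essentially a readability wrapper around the state check; a hedged sketch matching the description (do_something() is illustrative only):

    #define task_is_running(task)   (READ_ONCE((task)->state) == TASK_RUNNING)

    /* open-coded checks like this ... */
    if (p->state == TASK_RUNNING)
            do_something(p);

    /* ... become */
    if (task_is_running(p))
            do_something(p);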
2021-05-05  hugetlb: pass vma into huge_pte_alloc() and huge_pmd_share()  [Peter Xu, 1 file, -1/+1]
Patch series "hugetlb: Disable huge pmd unshare for uffd-wp", v4. This series tries to disable huge pmd unshare of hugetlbfs backed memory for uffd-wp. Although uffd-wp of hugetlbfs is still during rfc stage, the idea of this series may be needed for multiple tasks (Axel's uffd minor fault series, and Mike's soft dirty series), so I picked it out from the larger series. This patch (of 4): It is a preparation work to be able to behave differently in the per architecture huge_pte_alloc() according to different VMA attributes. Pass it deeper into huge_pmd_share() so that we can avoid the find_vma() call. [[email protected]: build fix] Link: https://lkml.kernel.org/r/20210304164653.GB397383@xz-x1Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peter Xu <[email protected]> Suggested-by: Mike Kravetz <[email protected]> Cc: Adam Ruprecht <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Axel Rasmussen <[email protected]> Cc: Cannon Matthews <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chinwen Chang <[email protected]> Cc: David Rientjes <[email protected]> Cc: "Dr . David Alan Gilbert" <[email protected]> Cc: Huang Ying <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jann Horn <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Lokesh Gidra <[email protected]> Cc: "Matthew Wilcox (Oracle)" <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: "Michal Koutn" <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Mina Almasry <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Oliver Upton <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Shawn Anastasio <[email protected]> Cc: Steven Price <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
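For illustration only (derived from the description above, not quoted from the patch), the prototype change amounts to:

    /* before */
    pte_t *huge_pte_alloc(struct mm_struct *mm,
                          unsigned long addr, unsigned long sz);

    /* after: the VMA is passed down so huge_pmd_share() can avoid find_vma() */
    pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
                          unsigned long addr, unsigned long sz);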
2021-04-30  mm: move mem_init_print_info() into mm_init()  [Kefeng Wang, 1 file, -2/+0]
mem_init_print_info() is called in mem_init() on each architecture, always with a NULL argument, so give it a void argument and move the call into mm_init(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Acked-by: Dave Hansen <[email protected]> [x86] Reviewed-by: Christophe Leroy <[email protected]> [powerpc] Acked-by: David Hildenbrand <[email protected]> Tested-by: Anatoly Pugachev <[email protected]> [sparc64] Acked-by: Russell King <[email protected]> [arm] Acked-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Guo Ren <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: "Peter Zijlstra" <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
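Sketch of the resulting call flow (simplified, not the literal diff):

    /* before: every architecture ended its own mem_init() with */
    mem_init_print_info(NULL);

    /* after: the parameter is dropped and mm_init() makes the single call */
    mem_init();
    mem_init_print_info();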
2021-04-12  s390/traps,mm: add conditional trap handlers  [Heiko Carstens, 1 file, -16/+2]
Add conditional trap handlers similar to conditional system calls (COND_SYSCALL), to reduce the number of ifdefs. Trap handlers which may or may not exist depending on config options are supposed to have a COND_TRAP entry, which redirects to default_trap_handler() for non-existent trap handlers at link time. This allows us to get rid of the secure execution trap handlers for the !PGSTE case. Reviewed-by: Vasily Gorbik <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
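A hedged sketch of the idea, modeled on COND_SYSCALL (the macro in the actual patch may well differ): a weak definition is emitted for a handler that is compiled out, so references resolve to the default at link time; the handler name below is illustrative only:

    #define COND_TRAP(name)                                 \
            void __weak name(struct pt_regs *regs)          \
            {                                               \
                    default_trap_handler(regs);             \
            }

    /* e.g. for handlers that only exist with CONFIG_PGSTE */
    COND_TRAP(secure_storage_access_exception);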
2021-04-05  s390/mm: fix phys vs virt confusion in mark_kernel_pXd() functions family  [Alexander Gordeev, 1 file, -4/+4]
Due to historical reasons the mark_kernel_pXd() family of functions misuses the distinction between physical and virtual addresses. Signed-off-by: Alexander Gordeev <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2021-02-26  Merge tag 's390-5.12-2' of …  [Linus Torvalds, 2 files, -26/+26]
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux Pull more s390 updates from Vasily Gorbik: - Fix physical vs virtual confusion in some basic mm macros and routines. Caused by __pa == __va on s390 currently. - Get rid of on-stack cpu masks. - Add support for complete CPU counter set extraction. - Add arch_irq_work_raise implementation. - virtio-ccw revision and opcode fixes. * tag 's390-5.12-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: s390/cpumf: Add support for complete counter set extraction virtio/s390: implement virtio-ccw revision 2 correctly s390/smp: implement arch_irq_work_raise() s390/topology: move cpumasks away from stack s390/smp: smp_emergency_stop() - move cpumask away from stack s390/smp: __smp_rescan_cpus() - move cpumask away from stack s390/smp: consolidate locking for smp_rescan() s390/mm: fix phys vs virt confusion in vmem_*() functions family s390/mm: fix phys vs virt confusion in pgtable allocation routines s390/mm: fix invalid __pa() usage in pfn_pXd() macros s390/mm: make pXd_deref() macros return a pointer s390/opcodes: rename selhhhr to selfhr
2021-02-26  s390/mm: define arch_get_mappable_range()  [Anshuman Khandual, 2 files, -1/+14]
This overrides arch_get_mappable_range() on the s390 platform, which will be used with the recently added generic framework. It modifies the existing range check in vmem_add_mapping() using arch_get_mappable_range(). It also adds a VM_BUG_ON() check to ensure that mhp_range_allowed() has already been called on the hotplug path. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Acked-by: Heiko Carstens <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Jason Wang <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Mark Rutland <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: teawater <[email protected]> Cc: Wei Yang <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
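A sketch of the kind of override described here, assuming the generic struct range interface and an s390 VMEM_MAX_PHYS-style upper limit (details may differ from the actual patch):

    struct range arch_get_mappable_range(void)
    {
            struct range mhp_range;

            mhp_range.start = 0;
            mhp_range.end = VMEM_MAX_PHYS - 1;  /* highest address the vmem code can map */
            return mhp_range;
    }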
2021-02-24  s390/mm: fix phys vs virt confusion in vmem_*() functions family  [Alexander Gordeev, 1 file, -15/+15]
Due to historical reasons the vmem_*() family of functions misuses or ignores the distinction between physical and virtual addresses. Signed-off-by: Alexander Gordeev <[email protected]> Reviewed-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-02-24  s390/mm: fix phys vs virt confusion in pgtable allocation routines  [Alexander Gordeev, 1 file, -11/+11]
The physical address of page tables is passed around and used as a virtual address in various locations. Signed-off-by: Alexander Gordeev <[email protected]> Reviewed-by: Heiko Carstens <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2021-01-19  s390: convert to generic entry  [Sven Schnelle, 1 file, -1/+1]
This patch converts s390 to use the generic entry infrastructure from kernel/entry/*. There are a few special things on s390:
- PIF_PER_TRAP is moved to TIF_PER_TRAP as the generic code doesn't know about our PIF flags in exit_to_user_mode_loop().
- The old code had several ways to restart syscalls:
  a) PIF_SYSCALL_RESTART, which was only set during execve to force a restart after upgrading a process (usually qemu-kvm) to pgste page table extensions.
  b) PIF_SYSCALL, which is set by do_signal() to indicate that the current syscall should be restarted. This is changed so that do_signal() now also uses PIF_SYSCALL_RESTART. Continuing to use PIF_SYSCALL doesn't work with the generic code, and changing it to PIF_SYSCALL_RESTART makes PIF_SYSCALL and PIF_SYSCALL_RESTART more unique.
- On s390 calling sys_sigreturn or sys_rt_sigreturn is implemented by executing a svc instruction on the process stack which causes a fault. While handling that fault the fault code sets PIF_SYSCALL to hand over processing to the syscall code on exit to usermode.
The patch introduces PIF_SYSCALL_RET_SET, which is set if ptrace sets a return value for a syscall. The s390x ptrace ABI uses r2 both for the syscall number and return value, so ptrace cannot set the syscall number + return value at the same time. The flag makes handling that a bit easier. do_syscall() will just skip executing the syscall if PIF_SYSCALL_RET_SET is set. CONFIG_DEBUG_ASCE was removed in favour of the generic CONFIG_DEBUG_ENTRY. CR1/7/13 will be checked both on kernel entry and exit to contain the correct asces. Signed-off-by: Sven Schnelle <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]>
2020-12-20  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  [Linus Torvalds, 1 file, -15/+15]
Pull KVM updates from Paolo Bonzini: "Much x86 work was pushed out to 5.12, but ARM more than made up for it. ARM: - PSCI relay at EL2 when "protected KVM" is enabled - New exception injection code - Simplification of AArch32 system register handling - Fix PMU accesses when no PMU is enabled - Expose CSV3 on non-Meltdown hosts - Cache hierarchy discovery fixes - PV steal-time cleanups - Allow function pointers at EL2 - Various host EL2 entry cleanups - Simplification of the EL2 vector allocation s390: - memcg accouting for s390 specific parts of kvm and gmap - selftest for diag318 - new kvm_stat for when async_pf falls back to sync x86: - Tracepoints for the new pagetable code from 5.10 - Catch VFIO and KVM irqfd events before userspace - Reporting dirty pages to userspace with a ring buffer - SEV-ES host support - Nested VMX support for wait-for-SIPI activity state - New feature flag (AVX512 FP16) - New system ioctl to report Hyper-V-compatible paravirtualization features Generic: - Selftest improvements" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (171 commits) KVM: SVM: fix 32-bit compilation KVM: SVM: Add AP_JUMP_TABLE support in prep for AP booting KVM: SVM: Provide support to launch and run an SEV-ES guest KVM: SVM: Provide an updated VMRUN invocation for SEV-ES guests KVM: SVM: Provide support for SEV-ES vCPU loading KVM: SVM: Provide support for SEV-ES vCPU creation/loading KVM: SVM: Update ASID allocation to support SEV-ES guests KVM: SVM: Set the encryption mask for the SVM host save area KVM: SVM: Add NMI support for an SEV-ES guest KVM: SVM: Guest FPU state save/restore not needed for SEV-ES guest KVM: SVM: Do not report support for SMM for an SEV-ES guest KVM: x86: Update __get_sregs() / __set_sregs() to support SEV-ES KVM: SVM: Add support for CR8 write traps for an SEV-ES guest KVM: SVM: Add support for CR4 write traps for an SEV-ES guest KVM: SVM: Add support for CR0 write traps for an SEV-ES guest KVM: SVM: Add support for EFER write traps for an SEV-ES guest KVM: SVM: Support string IO operations for an SEV-ES guest KVM: SVM: Support MMIO for an SEV-ES guest KVM: SVM: Create trace events for VMGEXIT MSR protocol processing KVM: SVM: Create trace events for VMGEXIT processing ...
2020-12-14  Merge tag 's390-5.11-1' of …  [Linus Torvalds, 6 files, -116/+71]
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux Pull s390 updates from Heiko Carstens: - Add support for the hugetlb_cma command line option to allocate gigantic hugepages using CMA - Add arch_get_random_long() support. - Add ap bus userspace notifications. - Increase default size of vmalloc area to 512GB and otherwise let it increase dynamically by the size of physical memory. This should fix all occurrences where the vmalloc area was not large enough. - Completely get rid of set_fs() (aka select SET_FS) and rework address space handling while doing that; making address space handling much more simple. - Reimplement getcpu vdso syscall in C. - Add support for extended SCLP responses (> 4k). This allows e.g. to handle also potential large system configurations. - Simplify KASAN by removing 3-level page table support and only supporting 4-levels from now on. - Improve debug-ability of the kernel decompressor code, which now prints also stack traces and symbols in case of problems to the console. - Remove more power management leftovers. - Other various fixes and improvements all over the place. * tag 's390-5.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (62 commits) s390/mm: add support to allocate gigantic hugepages using CMA s390/crypto: add arch_get_random_long() support s390/smp: perform initial CPU reset also for SMT siblings s390/mm: use invalid asce for user space when switching to init_mm s390/idle: fix accounting with machine checks s390/idle: add missing mt_cycles calculation s390/boot: add build-id to decompressor s390/kexec_file: fix diag308 subcode when loading crash kernel s390/cio: fix use-after-free in ccw_device_destroy_console s390/cio: remove pm support from ccw bus driver s390/cio: remove pm support from css-bus driver s390/cio: remove pm support from IO subchannel drivers s390/cio: remove pm support from chsc subchannel driver s390/vmur: remove unused pm related functions s390/tape: remove unsupported PM functions s390/cio: remove pm support from eadm-sch drivers s390: remove pm support from console drivers s390/dasd: remove unused pm related functions s390/zfcp: remove pm support from zfcp driver s390/ap: let bus_register() add the AP bus sysfs attributes ...
2020-12-10  s390/gmap: make gmap memcg aware  [Christian Borntraeger, 1 file, -15/+15]
gmap allocations can be attributed to a process. Signed-off-by: Christian Borntraeger <[email protected]> Acked-by: Heiko Carstens <[email protected]> Acked-by: Janosch Frank <[email protected]> Acked-by: Cornelia Huck <[email protected]>
2020-11-23  s390/mm: use invalid asce instead of kernel asce  [Heiko Carstens, 1 file, -2/+8]
Create a region 3 page table which contains only invalid entries, and use that via "s390_invalid_asce" instead of the kernel ASCE whenever there is either
- no user address space available, e.g. during early startup, or
- a need for an intermediate ASCE when address spaces are switched.
This makes sure that user space accesses in such situations are guaranteed to fail. Reviewed-by: Sven Schnelle <[email protected]> Reviewed-by: Alexander Gordeev <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-23  s390/mm: remove set_fs / rework address space handling  [Heiko Carstens, 2 files, -33/+9]
Remove set_fs support from s390. While doing this, rework address space handling and simplify it. As a result address spaces are now set up like this:

CPU running in              | %cr1 ASCE | %cr7 ASCE | %cr13 ASCE
----------------------------|-----------|-----------|-----------
user space                  | user      | user      | kernel
kernel, normal execution    | kernel    | user      | kernel
kernel, kvm guest execution | gmap      | user      | kernel

To achieve this the getcpu vdso syscall is removed in order to avoid secondary address mode and a separate vdso address space for user space. The getcpu vdso syscall will be implemented differently with a subsequent patch. The kernel always accesses user space via the secondary address space. This happens in different ways:
- with mvcos in home space mode, reading/writing the secondary address space directly
- with mvcs/mvcp in primary space mode, copying from primary space to secondary space or vice versa
- with e.g. cs in secondary space mode, accessing the secondary space
Switching translation modes happens with sacf before and after instructions which access user space, like before. Lazy handling of control register reloading is removed in the hope to make everything simpler, but at the cost of making kernel entry and exit a bit slower. That is: on kernel entry the primary asce is always changed to contain the kernel asce, and on kernel exit the primary asce is changed again so it contains the user asce. In kernel mode there is only one exception to the primary asce: when kvm guests are executed the primary asce contains the gmap asce (which describes the guest address space). The primary asce is reset to the kernel asce whenever kvm guest execution is interrupted, so that this doesn't have to be taken into account for any user space accesses. Reviewed-by: Sven Schnelle <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-20  s390/vmem: make variable and function names consistent  [Alexander Gordeev, 1 file, -18/+18]
Rename some variable and functions to better clarify what they are and what they do. Signed-off-by: Alexander Gordeev <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-20  s390/vmem: remove redundant check  [Alexander Gordeev, 1 file, -1/+1]
Reviewed-by: David Hildenbrand <[email protected]> Signed-off-by: Alexander Gordeev <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-20  s390: unify identity mapping limits handling  [Vasily Gorbik, 2 files, -8/+15]
Currently we have to consider too many different values which in the end only affect the identity mapping size. These are:
1. max_physmem_end - end of physical memory online or standby. Always <= end of the last online memory block (get_mem_detect_end()).
2. CONFIG_MAX_PHYSMEM_BITS - the maximum size of physical memory the kernel is able to support.
3. "mem=" kernel command line option which limits physical memory usage.
4. OLDMEM_BASE which is a kdump memory limit when the kernel is executed as crash kernel.
5. "hsa" size which is a memory limit when the kernel is executed during zfcp/nvme dump.
Throughout kernel startup and runtime we juggle all those values at once, but that does not bring any amusement, only confusion and complexity. Unify all those values into the single one we really have to care about: our identity mapping size. Signed-off-by: Vasily Gorbik <[email protected]> Reviewed-by: Alexander Gordeev <[email protected]> Acked-by: Heiko Carstens <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-11  KVM: s390: pv: Mark mm as protected after the set secure parameters and improve cleanup  [Janosch Frank, 1 file, -0/+2]
We can only have protected guest pages after a successful set secure parameters call as only then the UV allows imports and unpacks. By moving the test we can now also check for it in s390_reset_acc() and do an early return if it is 0. Signed-off-by: Janosch Frank <[email protected]> Fixes: 29b40f105ec8 ("KVM: s390: protvirt: Add initial vm and cpu lifecycle handling") Reviewed-by: Cornelia Huck <[email protected]> Signed-off-by: Christian Borntraeger <[email protected]>
2020-11-09  s390/kasan: remove obvious parameter with the only possible value  [Vasily Gorbik, 2 files, -3/+3]
Kasan early code is only working on init_mm, remove unneeded pgd parameter from kasan_copy_shadow and rename it to kasan_copy_shadow_mapping. Reviewed-by: Alexander Egorenkov <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-09  s390/kasan: avoid confusing naming  [Vasily Gorbik, 1 file, -9/+7]
Kasan has nothing to do with vmemmap, strip vmemmap from function names to avoid confusing people. Reviewed-by: Alexander Egorenkov <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-11-09  s390/kasan: remove 3-level paging support  [Vasily Gorbik, 1 file, -42/+10]
Compiling the kernel with Kasan disables automatic 3-level vs 4-level kernel space paging selection, because the shadow memory offset has to be known at compile time and there is no such offset which would be acceptable for both 3 and 4-level paging. Instead the S390_4_LEVEL_PAGING option was introduced, which allowed picking how many paging levels to use under Kasan. With the introduction of protected virtualization, the kernel memory layout may be affected due to the ultravisor secure storage limit. This adds additional complexity to what the memory layout would look like in combination with Kasan's predefined shadow memory offsets. To simplify this, make Kasan 4-level paging the default and remove Kasan 3-level paging support. Suggested-by: Heiko Carstens <[email protected]> Reviewed-by: Alexander Gordeev <[email protected]> Signed-off-by: Vasily Gorbik <[email protected]> Signed-off-by: Heiko Carstens <[email protected]>
2020-10-25  treewide: Convert macro and uses of __section(foo) to __section("foo")  [Joe Perches, 1 file, -1/+1]
Use a more generic form for __section that requires quotes to avoid complications with clang and gcc differences. Remove the quote operator # from compiler_attributes.h __section macro. Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo") even if the __attribute__ has multiple list entry forms. Conversion done using the script at: https://lore.kernel.org/lkml/[email protected]/2-convert_section.pl Signed-off-by: Joe Perches <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Miguel Ojeda <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
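The conversion is mechanical; an illustrative example (the section name here is hypothetical):

    /* before: unquoted, relied on # stringification in compiler_attributes.h */
    static int early_flag __section(.data.early);

    /* after: quoted form, same meaning, friendlier to both gcc and clang */
    static int early_flag __section(".data.early");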
2020-10-23  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  [Linus Torvalds, 1 file, -0/+11]
Pull virtio updates from Michael Tsirkin: "vhost, vdpa, and virtio cleanups and fixes A very quiet cycle, no new features" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: MAINTAINERS: add URL for virtio-mem vhost_vdpa: remove unnecessary spin_lock in vhost_vring_call vringh: fix __vringh_iov() when riov and wiov are different vdpa/mlx5: Setup driver only if VIRTIO_CONFIG_S_DRIVER_OK s390: virtio: PV needs VIRTIO I/O device protection virtio: let arch advertise guest's memory access restrictions vhost_vdpa: Fix duplicate included kernel.h vhost: reduce stack usage in log_used virtio-mem: Constify mem_id_table virtio_input: Constify id_table virtio-balloon: Constify id_table vdpa/mlx5: Fix failure to bring link up vdpa/mlx5: Make use of a specific 16 bit endianness API