Age | Commit message | Author | Files | Lines (-/+)
2019-11-26io_uring: cleanup io_import_fixed()Pavel Begunkov1-7/+5
Clean io_import_fixed() call site and make it return proper type. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26io_uring: inline struct sqe_submitPavel Begunkov1-91/+78
There is no point left in keeping struct sqe_submit. Inline it into struct io_kiocb, so any req->submit.field is now just req->field.
 - moves initialisation of ring_file into io_get_req()
 - removes duplicated req->sequence
Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26io_uring: store timeout's sqe->off in proper placePavel Begunkov1-4/+5
Timeouts' sequence offset (i.e. sqe->off) is stored in req->submit.sequence under a false name. Keep it in timeout.data instead. The unused space for sequence will be reclaimed in the following patches. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26net: disallow ancillary data for __sys_{send,recv}msg_file()Jens Axboe1-6/+37
Only io_uring uses (and added) these, and we want to disallow the use of sendmsg/recvmsg for anything but regular data transfers. Use the newly added prep helper to split the msghdr copy out from the core function, to check for msg_control and msg_controllen settings. If either is set, we return -EINVAL. Acked-by: David S. Miller <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
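For illustration, a minimal sketch of the kind of check described above; the helper name and placement are assumptions, not the patch itself:

  static int example_reject_cmsg(const struct msghdr *msg)
  {
          /* the io_uring sendmsg/recvmsg path only moves regular data, so any
           * ancillary data (SCM_RIGHTS etc.) is refused up front */
          if (msg->msg_control || msg->msg_controllen)
                  return -EINVAL;
          return 0;
  }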
2019-11-26net: separate out the msghdr copy from ___sys_{send,recv}msg()Jens Axboe1-46/+95
This is in preparation for enabling the io_uring helpers for sendmsg and recvmsg to first copy the header for validation before continuing with the operation. There should be no functional changes in this patch. Acked-by: David S. Miller <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26net: port < inet_prot_sock(net) --> inet_port_requires_bind_service(net, port)Maciej Żenczykowski6-11/+11
Note that the sysctl write accessor functions guarantee that:
  net->ipv4.sysctl_ip_prot_sock <= net->ipv4.ip_local_ports.range[0]
invariant is maintained, and as such the max() in selinux hooks is actually spurious.

ie. even though
  if (snum < max(inet_prot_sock(sock_net(sk)), low) || snum > high) {
per logic is the same as
  if ((snum < inet_prot_sock(sock_net(sk)) && snum < low) || snum > high) {
it is actually functionally equivalent to:
  if (snum < low || snum > high) {
which is equivalent to:
  if (snum < inet_prot_sock(sock_net(sk)) || snum < low || snum > high) {
even though the first clause is spurious.

But we want to hold on to it in case we ever want to change what inet_port_requires_bind_service() means (for example by changing it from a, by default, [0..1024) range to some sort of set).

Test: builds, git 'grep inet_prot_sock' finds no other references
Cc: Eric Dumazet <[email protected]>
Signed-off-by: Maciej Żenczykowski <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2019-11-26Merge branch 'ibmvnic-Harden-device-commands-and-queries'David S. Miller2-27/+167
Thomas Falcon says:
====================
ibmvnic: Harden device commands and queries

This patch series fixes some shortcomings with the current VNIC device command implementation. The first patch fixes the initialization of driver completion structures used for device commands. Additionally, all waits for device commands are bounded with a timeout in the event that the device does not respond or becomes inoperable. Finally, serialize queries to retain the integrity of device return codes.

Changes in v2:
 - included header comment for ibmvnic_wait_for_completion
 - removed open-coded loop in patch 3/4, suggested by Jakub
 - ibmvnic_wait_for_completion accepts timeout value in milliseconds instead of jiffies
 - timeout calculations cleaned up and completed before wait loop
 - included missing mutex_destroy calls, suggested by Jakub
 - included comment before mutex declaration
====================
Signed-off-by: David S. Miller <[email protected]>
2019-11-26ibmvnic: Serialize device queriesThomas Falcon2-5/+51
Provide some serialization for device CRQ commands and queries to ensure that the shared variable used for storing return codes is properly synchronized. Signed-off-by: Thomas Falcon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-11-26ibmvnic: Bound waits for device queriesThomas Falcon1-15/+97
Create a wrapper for wait_for_completion calls with additional driver checks to ensure that the driver does not wait on a disabled device. In those cases or if the device does not respond in an extended amount of time, this will allow the driver an opportunity to recover. Signed-off-by: Thomas Falcon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
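A rough sketch of such a bounded-wait helper; the name and the liveness flag are hypothetical simplifications of the driver's own checks:

  static int example_bounded_wait(struct completion *comp, bool device_alive,
                                  unsigned long timeout_ms)
  {
          unsigned long timeout = msecs_to_jiffies(timeout_ms);

          /* don't sleep on a device that is disabled or has been removed */
          if (!device_alive)
                  return -ENODEV;

          if (!wait_for_completion_timeout(comp, timeout))
                  return -ETIMEDOUT;

          return 0;
  }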
2019-11-26ibmvnic: Terminate waiting device threads after loss of serviceThomas Falcon1-0/+9
If we receive a notification that the device has been deactivated or removed, force a completion of all waiting threads. Signed-off-by: Thomas Falcon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-11-26ibmvnic: Fix completion structure initializationThomas Falcon1-8/+11
Fix multiple calls to init_completion for device completion structures. Instead, initialize them during device probe and reinitialize them later as needed. Signed-off-by: Thomas Falcon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
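The pattern being described, sketched with the standard completion API; the struct and call sites are illustrative, not the actual driver code:

  struct example_dev {
          struct completion init_done;    /* one completion per device command */
  };

  static void example_probe(struct example_dev *dev)
  {
          init_completion(&dev->init_done);       /* once, at probe time */
  }

  static void example_issue_cmd(struct example_dev *dev)
  {
          reinit_completion(&dev->init_done);     /* before each new command */
          /* ... send the command; the interrupt handler calls complete() ... */
          wait_for_completion(&dev->init_done);
  }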
2019-11-26net-sctp: replace some sock_net(sk) with just 'net'Maciej Żenczykowski1-6/+6
It already existed in part of the function, but move it to a higher level and use it consistently throughout. Safe since sk is never written to. Signed-off-by: Maciej Żenczykowski <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-11-26net: Fix a documentation bug wrt. ip_unprivileged_port_startMaciej Żenczykowski1-4/+5
It cannot overlap with the local port range - ie. with autobind selectable ports - and not with reserved ports. Indeed 'ip_local_reserved_ports' isn't even a range, it's a (by default empty) set. Fixes: 4548b683b781 ("Introduce a sysctl that modifies the value of PROT_SOCK.") Signed-off-by: Maciej Żenczykowski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-11-26x86/ptrace: Document FSBASE and GSBASE ABI odditiesAndy Lutomirski1-0/+17
Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/ptrace: Remove set_segment_reg() implementations for currentAndy Lutomirski1-12/+7
set_segment_reg() should be unreachable with task == current. Rather than confusingly trying to make it work, just explicitly disable this case. (regset->get is used for current in the coredump code, but the ->set interface is only used for ptrace, and you can't ptrace yourself.) Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/traps: die() instead of panicking on a double faultAndy Lutomirski1-1/+1
A double fault has a decent chance of being recoverable by killing the offending thread. Use die() so that we at least try to recover. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/doublefault/32: Rewrite the x86_32 #DF handler and unify with 64-bitAndy Lutomirski4-33/+138
The x86_32 doublefault_fn() was old and crufty, and it did not even try to recover. do_double_fault() is much nicer. Rewrite the 32-bit double fault code to sanitize CPU state and call do_double_fault(). This is mostly an exercise in i386 archaeology. With this patch applied, 32-bit double faults get a real stack trace, just like 64-bit double faults. [ mingo: merged the patch to a later kernel base. ] Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/doublefault/32: Move #DF stack and TSS to cpu_entry_areaAndy Lutomirski8-34/+113
There are three problems with the current layout of the doublefault stack and TSS. First, the TSS is only cacheline-aligned, which is not enough -- if the hardware portion of the TSS (struct x86_hw_tss) crosses a page boundary, horrible things happen [0]. Second, the stack and TSS are global, so simultaneous double faults on different CPUs will cause massive corruption. Third, the whole mechanism won't work if user CR3 is loaded, resulting in a triple fault [1]. Let the doublefault stack and TSS share a page (which prevents the TSS from spanning a page boundary), make it percpu, and move it into cpu_entry_area. Teach the stack dump code about the doublefault stack. [0] Real hardware will read past the end of the page onto the next *physical* page if a task switch happens. Virtual machines may have any number of bugs, and I would consider it reasonable for a VM to summarily kill the guest if it tries to task-switch to a page-spanning TSS. [1] Real hardware triple faults. At least some VMs seem to hang. I'm not sure what's going on. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/doublefault/32: Rename doublefault.c to doublefault_32.cAndy Lutomirski2-5/+3
doublefault.c now only contains 32-bit code. Rename it to doublefault_32.c. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/traps: Disentangle the 32-bit and 64-bit doublefault codeAndy Lutomirski4-22/+4
The 64-bit doublefault handler is much nicer than the 32-bit one. As a first step toward unifying them, make the 64-bit handler self-contained. This should have no functional effect except in the odd case of x86_64 with CONFIG_DOUBLEFAULT=n, in which case it will change the logging a bit. This also gets rid of CONFIG_DOUBLEFAULT configurability on 64-bit kernels. It didn't do anything useful -- CONFIG_DOUBLEFAULT=n didn't actually disable doublefault handling on x86_64. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26lkdtm: Add a DOUBLE_FAULT crash type on x86Andy Lutomirski3-0/+45
The DOUBLE_FAULT crash does INT $8, which is a decent approximation of a double fault. This is useful for testing the double fault handling. Use it like: Signed-off-by: Andy Lutomirski <[email protected]> Cc: Kees Cook <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
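The usage line is truncated above; lkdtm crash types are normally written to its debugfs trigger file. A minimal sketch of the idea behind the crash callback itself, not necessarily the exact code that was merged:

  /* trigger, roughly:  echo DOUBLE_FAULT > /sys/kernel/debug/provoke-crash/DIRECT */
  static void example_double_fault(void)
  {
          /* INT $8 invokes vector 8 (#DF) directly - a decent approximation of a
           * real double fault for exercising the handler */
          asm volatile("int $8");
  }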
2019-11-26selftests/x86/single_step_syscall: Check SYSENTER directlyAndy Lutomirski1-9/+85
We used to test SYSENTER only through the vDSO. Test it directly too, just in case. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26x86/mm/32: Sync only to VMALLOC_END in vmalloc_sync_all()Joerg Roedel1-1/+1
The job of vmalloc_sync_all() is to help the lazy freeing of vmalloc() ranges: before such vmap ranges are reused we make sure that they are unmapped from every task's page tables.

This is really easy on pagetable setups where the kernel page tables are shared between all tasks - this is the case on 32-bit kernels with SHARED_KERNEL_PMD = 1.

But on !SHARED_KERNEL_PMD 32-bit kernels this involves iterating over the pgd_list and clearing all pmd entries in the pgds that are cleared in the init_mm.pgd, which is the reference pagetable that the vmalloc() code uses.

In that context the current practice of vmalloc_sync_all() iterating until FIX_ADDR_TOP is buggy:

  for (address = VMALLOC_START & PMD_MASK;
       address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
       address += PMD_SIZE) {
          struct page *page;

Because iterating up to FIXADDR_TOP will involve a lot of non-vmalloc address ranges:

  VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR

This is mostly harmless for the FIX_ADDR and CPU_ENTRY_AREA ranges that don't clear their pmds, but it's lethal for the LDT range, which relies on having different mappings in different processes, and 'synchronizing' them in the vmalloc sense corrupts those pagetable entries (clearing them).

This got particularly prominent with PTI, which turns SHARED_KERNEL_PMD off and makes this the dominant mapping mode on 32-bit.

To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes. So the correct check in vmalloc_sync_all() is "address < VMALLOC_END" to make sure the VMALLOC areas are synchronized and the LDT mapping is not falsely overwritten. The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but this is not really a problem since their PMDs get established during bootup and never change.

This change fixes the ldt_gdt selftest in my setup.

[ mingo: Fixed up the changelog to explain the logic and modified the copying to only happen up until VMALLOC_END. ]

Reported-by: Borislav Petkov <[email protected]>
Tested-by: Borislav Petkov <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Cc: <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 7757d607c6b3: ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
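The described bound change, sketched against the loop quoted above (the body that syncs each pmd into every pgd on pgd_list is elided):

  for (address = VMALLOC_START & PMD_MASK;
       address >= TASK_SIZE_MAX && address < VMALLOC_END;
       address += PMD_SIZE) {
          struct page *page;
          /* ... sync this pmd into every pgd on pgd_list ... */
  }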
2019-11-26x86/iopl: Make 'struct tss_struct' constant size againIngo Molnar1-2/+0
After the following commit:

  05b042a19443: ("x86/pti/32: Calculate the various PTI cpu_entry_area sizes correctly, make the CPU_ENTRY_AREA_PAGES assert precise")

'struct cpu_entry_area' has to be Kconfig invariant, so that we always have a matching CPU_ENTRY_AREA_PAGES size.

This commit added a CONFIG_X86_IOPL_IOPERM dependency to tss_struct:

  111e7b15cf10: ("x86/ioperm: Extend IOPL config to control ioperm() as well")

Which, if CONFIG_X86_IOPL_IOPERM is turned off, reduces the size of cpu_entry_area by two pages, triggering the assert:

  ./include/linux/compiler.h:391:38: error: call to ‘__compiletime_assert_202’ declared with attribute error: BUILD_BUG_ON failed: (CPU_ENTRY_AREA_PAGES+1)*PAGE_SIZE != CPU_ENTRY_AREA_MAP_SIZE

Simplify the Kconfig dependencies and make cpu_entry_area constant size on 32-bit kernels again.

Fixes: 05b042a19443: ("x86/pti/32: Calculate the various PTI cpu_entry_area sizes correctly, make the CPU_ENTRY_AREA_PAGES assert precise")
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2019-11-26libfdt: define INT32_MAX and UINT32_MAX in libfdt_env.hMasahiro Yamada3-1/+8
The DTC v1.5.1 added references to (U)INT32_MAX. This is no problem for user-space programs since <stdint.h> defines (U)INT32_MAX along with (u)int32_t. For the kernel space, libfdt_env.h needs to be adjusted before we pull in the changes. In the kernel, we usually use s/u32 instead of (u)int32_t for the fixed-width types. Accordingly, we already have S/U32_MAX for their max values. So, we should not add (U)INT32_MAX to <linux/limits.h> any more. Instead, add them to the in-kernel libfdt_env.h to compile the latest libfdt. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Rob Herring <[email protected]>
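A plausible sketch of the shape of the additions described, mapping the libfdt-expected names onto the kernel's existing limits; treat this as an assumption about the change, not a quote of it:

  /* in the in-kernel libfdt_env.h, which already pulls in the kernel's
   * fixed-width limits, the libfdt-expected names can simply alias them: */
  #define INT32_MAX   S32_MAX
  #define UINT32_MAX  U32_MAX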
2019-11-26sr_vendor: support Beurer GL50 evo CD-on-a-chip devices.Diego Elio Pettenò1-0/+18
The Beurer GL50 evo uses a Cygnal-manufactured CD-on-a-chip that only accepts a subset of SCSI commands, and supports neither audio commands nor generic packet commands. Actually sending those commands brings the device to an unrecoverable state that causes it to hang and reset. To: Jens Axboe <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Diego Elio Pettenò <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26cdrom: respect device capabilities during opening actionDiego Elio Pettenò1-1/+11
Reading the TOC only works if the device can play audio, otherwise these commands fail (and possibly bring the device to an unhealthy state.) Similarly, cdrom_mmc3_profile() should only be called if the device supports generic packet commands. To: Jens Axboe <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Diego Elio Pettenò <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2019-11-26xtensa: fix syscall_set_return_valueMax Filippov1-1/+1
The syscall return value is in register a2, not a0. Cc: [email protected] # v5.0+ Fixes: 9f24f3c1067c ("xtensa: implement tracehook functions and enable HAVE_ARCH_TRACEHOOK") Signed-off-by: Max Filippov <[email protected]>
2019-11-26Revert "vfs: properly and reliably lock f_pos in fdget_pos()"Linus Torvalds3-2/+8
This reverts commit 0be0ee71816b2b6725e2b4f32ad6726c9d729777.

I was hoping it would be benign to switch over entirely to FMODE_STREAM, and we'd have just a couple of small fixups we'd need, but it looks like we're not quite there yet.

While it worked fine on both my desktop and laptop, they are fairly similar in other respects, and run mostly the same loads. Kenneth Crudup reports that it seems to break both his vmware installation and the KDE upower service. In both cases apparently leading to timeouts due to waiting for the f_pos lock.

There are a number of character devices in particular that definitely want stream-like behavior, but that currently don't get marked as streams, and as a result get the exclusion between concurrent read()/write() on the same file descriptor. Which doesn't work well for them.

The most obvious example of this is /dev/console and /dev/tty, which use console_fops and tty_fops respectively (and ptmx_fops for the pty master side). It may be that it's just this that causes problems, but we clearly weren't ready yet.

Because there's a number of other likely common cases that don't have llseek implementations and would seem to act as stream devices:

  /dev/fuse (fuse_dev_operations)
  /dev/mcelog (mce_chrdev_ops)
  /dev/mei0 (mei_fops)
  /dev/net/tun (tun_fops)
  /dev/nvme0 (nvme_dev_fops)
  /dev/tpm0 (tpm_fops)
  /proc/self/ns/mnt (ns_file_operations)
  /dev/snd/pcm* (snd_pcm_f_ops[])

and while some of these could be trivially automatically detected by the vfs layer when the character device is opened by just noticing that they have no read or write operations either, it often isn't that obvious.

Some character devices most definitely do use the file position, even if they don't allow seeking: the firmware update code, for example, uses simple_read_from_buffer() that does use f_pos, but doesn't allow seeking back and forth.

We'll revisit this when there's a better way to detect the problem and fix it (possibly with a coccinelle script to do more of the FMODE_STREAM annotations).

Reported-by: Kenneth R. Crudup <[email protected]>
Cc: Kirill Smelkov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-11-26xtensa: drop unneeded headers from coprocessor.SMax Filippov1-9/+1
A bunch of irrelevant headers are included in coprocessor.S. Remove them and add the necessary asm/regs.h. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: entry: Remove unneeded need_resched() loopValentin Schneider1-1/+1
Since the enabling and disabling of IRQs within preempt_schedule_irq() is contained in a need_resched() loop, we don't need the outer arch code loop. Acked-by: Max Filippov <[email protected]> Signed-off-by: Valentin Schneider <[email protected]> Message-Id: <[email protected]> Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use MEMBLOCK_ALLOC_ANYWHERE for KASAN shadow mapMax Filippov1-1/+3
KASAN shadow map doesn't need to be accessible through the linear kernel mapping, allocate its pages with MEMBLOCK_ALLOC_ANYWHERE so that high memory can be used. This frees up to ~100MB of low memory on xtensa configurations with KASAN and high memory. Cc: [email protected] # v5.1+ Fixes: f240ec09bb8a ("memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc") Reviewed-by: Mike Rapoport <[email protected]> Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: fix TLB sanity checkerMax Filippov1-2/+2
Virtual and translated addresses retrieved by the xtensa TLB sanity checker must be consistent, i.e. correspond to the same state of the checked TLB entry. KASAN shadow memory is mapped dynamically using auto-refill TLB entries and thus may change TLB state between the virtual and translated address retrieval, resulting in false TLB insanity report. Move read_xtlb_translation close to read_xtlb_virtual to make sure that read values are consistent. Cc: [email protected] Fixes: a99e07ee5e88 ("xtensa: check TLB sanity on return to userspace") Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: get rid of __ARCH_USE_5LEVEL_HACKMike Rapoport6-10/+24
xtensa has 2-level page tables and already uses pgtable-nopmd for page table folding. Add walks of p4d level where appropriate and drop usage of __ARCH_USE_5LEVEL_HACK. Signed-off-by: Mike Rapoport <[email protected]> Message-Id: <[email protected]> Signed-off-by: Max Filippov <[email protected]> [fix up arch/xtensa/include/asm/fixmap.h and arch/xtensa/mm/tlb.c]
2019-11-26xtensa: mm: fix PMD folding implementationMike Rapoport5-9/+19
There was a definition of pmd_offset() in arch/xtensa/include/asm/pgtable.h that shadowed the generic implementation defined in include/asm-generic/pgtable-nopmd.h. As a result, xtensa had shortcuts in page table traversal in several places instead of doing level unfolding. Remove the local override for pmd_offset() and add page table unfolding where necessary. Signed-off-by: Mike Rapoport <[email protected]> Message-Id: <[email protected]> Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: make stack dump size configurableMax Filippov2-1/+8
Introduce Kconfig symbol PRINT_STACK_DEPTH and use it to initialize kstack_depth_to_print. Reviewed-by: Petr Mladek <[email protected]> Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: improve stack dumpingMax Filippov1-16/+11
Calculate printable stack size and use print_hex_dump instead of opencoding it. Drop extra newline output in show_trace as its output format does not depend on CONFIG_KALLSYMS. Reviewed-by: Petr Mladek <[email protected]> Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use "m" constraint instead of "r" in futex.h assemblyMax Filippov1-5/+5
Use "m" constraint instead of "r" for the address, as "m" allows compiler to access adjacent locations using base + offset, while "r" requires updating the base register every time. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use "m" constraint instead of "a" in cmpxchg.h assemblyMax Filippov1-15/+16
Use "m" constraint instead of "r" for the address, as "m" allows compiler to access adjacent locations using base + offset, while "r" requires updating the base register every time. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use named assembly arguments in cmpxchg.hMax Filippov1-35/+35
Numeric assembly arguments are hard to understand and assembly code that uses them is hard to modify. Use named arguments in __cmpxchg_u32 and xchg_u32. Signed-off-by: Max Filippov <[email protected]>
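A generic illustration of the readability difference (again plain x86 GCC asm, not the kernel's xtensa macros):

  /* numeric operands: %0/%1 must be matched against the constraint list by eye */
  static inline int add_numeric(int acc, int addend)
  {
          asm("addl %1, %0" : "+r" (acc) : "r" (addend));
          return acc;
  }

  /* named operands: the template reads like the C it came from */
  static inline int add_named(int acc, int addend)
  {
          asm("addl %[addend], %[acc]" : [acc] "+r" (acc) : [addend] "r" (addend));
          return acc;
  }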
2019-11-26xtensa: use "m" constraint instead of "a" in atomic.h assemblyMax Filippov1-24/+28
Use "m" constraint instead of "r" for the address, as "m" allows compiler to access adjacent locations using base + offset, while "r" requires updating the base register every time. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use named assembly arguments in atomic.hMax Filippov1-60/+60
Numeric assembly arguments are hard to understand and assembly code that uses them is hard to modify. Use named arguments in ATOMIC_OP, ATOMIC_OP_RETURN and ATOMIC_FETCH_OP macros. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use "m" constraint instead of "a" in bitops.h assemblyMax Filippov1-8/+10
Use "m" constraint instead of "r" for the address, as "m" allows compiler to access adjacent locations using base + offset, while "r" requires updating the base register every time. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use named assembly arguments in bitops.hMax Filippov1-28/+28
Numeric assembly arguments are hard to understand and assembly code that uses them is hard to modify. Use named arguments in BIT_OP and TEST_AND_BIT_OP macros. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: use macros to generate *_bit and test_and_*_bit functionsMax Filippov1-229/+92
Parameterize macros with function name, opcode and inversion pattern. This reduces code duplication removing 2/3 of definitions. Signed-off-by: Max Filippov <[email protected]>
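The parameterization idea, reduced to a non-atomic toy version; the real xtensa macros additionally select the exclusive-access instruction sequence and handle the inverted mask for the clear case:

  #define EXAMPLE_BITS_PER_LONG (8 * sizeof(unsigned long))

  /* one macro stamps out set_bit()/clear_bit()/change_bit()-style helpers */
  #define EXAMPLE_BIT_OP(name, expr)                                        \
  static inline void example_##name##_bit(unsigned int bit,                 \
                                          volatile unsigned long *p)        \
  {                                                                         \
          unsigned long mask = 1UL << (bit % EXAMPLE_BITS_PER_LONG);        \
          p += bit / EXAMPLE_BITS_PER_LONG;                                 \
          *p = (expr);                                                      \
  }

  EXAMPLE_BIT_OP(set,    *p |  mask)
  EXAMPLE_BIT_OP(clear,  *p & ~mask)
  EXAMPLE_BIT_OP(change, *p ^  mask)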
2019-11-26xtensa: use "m" constraint instead of "a" in uaccess.h assemblyMax Filippov1-8/+8
Use "m" constraint instead of "r" for the address, as "m" allows compiler to access adjacent locations using base + offset, while "r" requires updating the base register every time. Use %[mem] * 0 + v to replace offset part of %[mem] expansion with v. It is impossible to change address alignment through the offset part on xtensa, so just ignore offset in alignment checks. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: merge .fixup with .textMax Filippov1-5/+1
Section .fixup contains pieces of code, merge it with the rest of the .text section. Signed-off-by: Max Filippov <[email protected]>
2019-11-26xtensa: add XIP kernel supportMax Filippov10-5/+257
XIP (eXecute In Place) kernel image is the image that can be run directly from ROM, using RAM only for writable data.

XIP xtensa kernel differs from regular xtensa kernel in the following ways:
 - it has exception/IRQ vectors merged into text section. No vectors relocation takes place at kernel startup.
 - .data/.bss location must be specified in the kernel configuration, its content is copied there in the _startup function.
 - .init.text is merged with the rest of text and is executed from ROM.
 - when MMU is used the virtual address where the kernel will be mapped must be specified in the kernel configuration. It may be in the KSEG or in the KIO, __pa macro is adjusted to be able to handle both.

Signed-off-by: Max Filippov <[email protected]>
2019-11-26dt-bindings: arm: Remove leftover axentia.txtRob Herring2-29/+0
The bindings described in axentia.txt are already covered by atmel-at91.yaml, so remove the file. Cc: Peter Rosin <[email protected]> Cc: Nicolas Ferre <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Ludovic Desroches <[email protected]> Signed-off-by: Rob Herring <[email protected]>
2019-11-26of: unittest: fix memory leak in attach_node_and_childrenErhard Furtner1-1/+3
In attach_node_and_children memory is allocated for full_name via kasprintf. If the condition of the 1st if is not met the function returns early without freeing the memory. Add a kfree() to fix that. This has been detected with kmemleak: Link: https://bugzilla.kernel.org/show_bug.cgi?id=205327 It looks like the leak was introduced by this commit: Fixes: 5babefb7f7ab ("of: unittest: allow base devicetree to have symbol metadata") Signed-off-by: Erhard Furtner <[email protected]> Reviewed-by: Michael Ellerman <[email protected]> Reviewed-by: Tyrel Datwyler <[email protected]> Signed-off-by: Rob Herring <[email protected]>
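The shape of the leak and of the fix, in a simplified hypothetical form (the real attach_node_and_children() does more than this):

  static int example_attach(struct device_node *np)
  {
          struct device_node *dup;
          char *full_name;

          full_name = kasprintf(GFP_KERNEL, "%pOF", np);
          if (!full_name)
                  return -ENOMEM;

          dup = of_find_node_by_path(full_name);
          if (dup) {
                  of_node_put(dup);
                  kfree(full_name);       /* the early-return path now frees, too */
                  return 0;
          }

          /* ... attach the node ... */
          kfree(full_name);
          return 0;
  }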