path: root/arch/ia64/kernel
Age  Commit message  (Author; files, lines -/+)
2019-05-21  treewide: Add SPDX license identifier for missed files  (Thomas Gleixner; 7 files, -0/+7)
Add SPDX license identifiers to all files which: - Have no license information of any form - Have EXPORT_.*_SYMBOL_GPL inside which was used in the initial scan/conversion to ignore the file These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
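For reference, the identifier these commits add is a single top-of-file comment; a minimal illustration for a C source file covered by this sweep (header files use the /* */ comment form instead):
  // SPDX-License-Identifier: GPL-2.0-only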
2019-05-18  ia64: require -Wl,--hash-style=sysv  (Nick Desaulniers; 1 file, -1/+1)
Towards the goal of removing cc-ldoption, it seems that --hash-style= was added to binutils 2.17.50.0.2 in 2006. The minimal required version of binutils for the kernel according to Documentation/process/changes.rst is 2.20. Link: https://gcc.gnu.org/ml/gcc/2007-01/msg01141.html Cc: [email protected] Suggested-by: Masahiro Yamada <[email protected]> Signed-off-by: Nick Desaulniers <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]>
2019-05-17  Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 1 file, -0/+6)
Pull more vfs mount updates from Al Viro: "Propagation of new syscalls to other architectures + cosmetic change from Christian (fscontext didn't follow the convention for anon inode names)" * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: uapi: Wire up the mount API syscalls on non-x86 arches [ver #2] uapi, x86: Fix the syscall numbering of the mount API syscalls [ver #2] uapi, fsopen: use square brackets around "fscontext" [ver #2]
2019-05-16  uapi: Wire up the mount API syscalls on non-x86 arches [ver #2]  (David Howells; 1 file, -0/+6)
Wire up the mount API syscalls on non-x86 arches. Reported-by: Arnd Bergmann <[email protected]> Signed-off-by: David Howells <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Signed-off-by: Al Viro <[email protected]>
2019-05-15  ia64: Make sure that we have a mmiowb function real early  (Tony Luck; 1 file, -1/+3)
Generic kernels feed many operations through the "machvec" logic to get the correct form of the operation for the current system. "mmiowb()" is one of those operations. Although machvec is initialized very early in boot, it isn't early enough for a recent upstream kernel change that added mmiowb to the spin_unlock() path. Statically initialize the mmiowb field of machvec so that we won't die with a call through a NULL pointer. This should be safe because we do the real initialization of machvec before bringing up any additional CPUs or doing any I/O. Fixes: 49ca6462fc9e ("ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()") Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
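A minimal sketch of the idea, with the struct layout and default handler name treated as assumptions rather than the exact ia64 code:
  /* hypothetical sketch: give the machine vector a compile-time default so that
   * mmiowb() dispatch never goes through a NULL pointer before machvec setup */
  struct ia64_machine_vector ia64_mv = {
          .mmiowb = generic_mmiowb,       /* assumed name of the default handler */
  };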
2019-05-07  Merge tag 'driver-core-5.2-rc1' of ↵  (Linus Torvalds; 1 file, -7/+7)
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core Pull driver core/kobject updates from Greg KH: "Here is the "big" set of driver core patches for 5.2-rc1 There are a number of ACPI patches in here as well, as Rafael said they should go through this tree due to the driver core changes they required. They have all been acked by the ACPI developers. There are also a number of small subsystem-specific changes in here, due to some changes to the kobject core code. Those too have all been acked by the various subsystem maintainers. As for content, it's pretty boring outside of the ACPI changes: - spdx cleanups - kobject documentation updates - default attribute groups for kobjects - other minor kobject/driver core fixes All have been in linux-next for a while with no reported issues" * tag 'driver-core-5.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (47 commits) kobject: clean up the kobject add documentation a bit more kobject: Fix kernel-doc comment first line kobject: Remove docstring reference to kset firmware_loader: Fix a typo ("syfs" -> "sysfs") kobject: fix dereference before null check on kobj Revert "driver core: platform: Fix the usage of platform device name(pdev->name)" init/config: Do not select BUILD_BIN2C for IKCONFIG Provide in-kernel headers to make extending kernel easier kobject: Improve doc clarity kobject_init_and_add() kobject: Improve docs for kobject_add/del driver core: platform: Fix the usage of platform device name(pdev->name) livepatch: Replace klp_ktype_patch's default_attrs with groups cpufreq: schedutil: Replace default_attrs field with groups padata: Replace padata_attr_type default_attrs field with groups irqdesc: Replace irq_kobj_type's default_attrs field with groups net-sysfs: Replace ktype default_attrs field with groups block: Replace all ktype default_attrs with groups samples/kobject: Replace foo_ktype's default_attrs field with groups kobject: Add support for default attribute groups to kobj_type driver core: Postpone DMA tear-down until after devres release for probe failure ...
2019-05-06  Merge branch 'efi-core-for-linus' of ↵  (Linus Torvalds; 1 file, -3/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull EFI updates from Ingo Molnar: "The changes in this cycle were: - Squash a spurious warning when using the EFI framebuffer on a non-EFI boot - Use DMI data to annotate RAS memory errors on ARM just like we do on Intel - Followup cleanups for DMI - libstub Makefile cleanups" * 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: efi/libstub/arm: Omit unneeded stripping of ksymtab/kcrctab sections efi: Unify DMI setup code over the arm/arm64, ia64 and x86 architectures efi/arm: Show SMBIOS bank/device location in CPER and GHES error logs efifb: Omit memory map check on legacy boot efi/libstub: Refactor the cmd_stubcopy Makefile command
2019-04-23  Merge tag 'syscalls-5.1' of ↵  (Linus Torvalds; 1 file, -0/+4)
git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic Pull syscall numbering updates from Arnd Bergmann: "arch: add pidfd and io_uring syscalls everywhere This comes a bit late, but should be in 5.1 anyway: we want the newly added system calls to be synchronized across all architectures in the release. I hope that in the future, any newly added system calls can be added to all architectures at the same time, and tested there while they are in linux-next, avoiding dependencies between the architecture maintainer trees and the tree that contains the new system call" * tag 'syscalls-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: arch: add pidfd and io_uring syscalls everywhere
2019-04-15  arch: add pidfd and io_uring syscalls everywhere  (Arnd Bergmann; 1 file, -0/+4)
Add the io_uring and pidfd_send_signal system calls to all architectures. These system calls are designed to handle both native and compat tasks, so all entries are the same across architectures; only arm-compat and the generic table still use an old format. Acked-by: Michael Ellerman <[email protected]> (powerpc) Acked-by: Heiko Carstens <[email protected]> (s390) Acked-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
2019-04-05  syscalls: Remove start and number from syscall_set_arguments() args  (Steven Rostedt (VMware); 1 file, -4/+3)
After removing the start and count arguments of syscall_get_arguments() it seems reasonable to remove them from syscall_set_arguments(). Note, as of today, there are no users of syscall_set_arguments(). But we are told that there will be soon. But for now, at least make it consistent with syscall_get_arguments(). Link: http://lkml.kernel.org/r/[email protected] Cc: Oleg Nesterov <[email protected]> Cc: Kees Cook <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Dominik Brodowski <[email protected]> Cc: Dave Martin <[email protected]> Cc: "Dmitry V. Levin" <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Acked-by: Max Filippov <[email protected]> # For xtensa changes Acked-by: Will Deacon <[email protected]> # For the arm64 bits Reviewed-by: Thomas Gleixner <[email protected]> # for x86 Reviewed-by: Dmitry V. Levin <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
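In effect the helper loses its index/count pair and always copies all six syscall arguments; a sketch of the before/after prototypes (parameter names assumed):
  /* before (assumed shape) */
  void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
                             unsigned int i, unsigned int n,
                             const unsigned long *args);
  /* after: always operates on the full argument array */
  void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
                             const unsigned long *args);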
2019-04-04  acpi: Create subtable parsing infrastructure  (Keith Busch; 1 file, -7/+7)
Parsing entries in an ACPI table had assumed a generic header structure. There is no standard ACPI header, though, so less common layouts with different field sizes required custom parsers to go through their subtable entry list. Create the infrastructure for adding different table types so parsing the entries array can be reused for all ACPI system tables and the common code doesn't need to be duplicated. Reviewed-by: Rafael J. Wysocki <[email protected]> Acked-by: Jonathan Cameron <[email protected]> Tested-by: Jonathan Cameron <[email protected]> Signed-off-by: Keith Busch <[email protected]> Tested-by: Brice Goglin <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-03-29  efi: Unify DMI setup code over the arm/arm64, ia64 and x86 architectures  (Robert Richter; 1 file, -3/+1)
All architectures (arm/arm64, ia64 and x86) do the same here, so unify the code. Note: We do not need to call dump_stack_set_arch_desc() in case of !dmi_available. Both strings, dmi_ids_string and dump_stack_arch_ desc_str are initialized zero and thus nothing would change. Signed-off-by: Robert Richter <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Jean Delvare <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2019-03-12  ia64: add checks for the return value of memblock_alloc*()  (Mike Rapoport; 1 file, -6/+14)
Add panic() calls if memblock_alloc*() returns NULL. Most of the changes are simply addition of if(!ptr) panic(); statements after the calls to memblock_alloc*() variants. Exceptions are create_mem_map_page_table() and ia64_log_init() that were slightly refactored to accommodate the change. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
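The added check follows the pattern below (allocation size and message are illustrative):
  ptr = memblock_alloc(size, SMP_CACHE_BYTES);
  if (!ptr)
          panic("%s: Failed to allocate %zu bytes\n", __func__, size);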
2019-03-12  arch: don't memset(0) memory returned by memblock_alloc()  (Mike Rapoport; 1 file, -2/+0)
memblock_alloc() already clears the allocated memory, no point in doing it twice. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> [m68k] Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-03-12  arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)  (Mike Rapoport; 1 file, -2/+1)
The last parameter of memblock_alloc_from() is the lower limit for the memory allocation. When it is 0, the call is equivalent to memblock_alloc(). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Paul Burton <[email protected]> # MIPS part Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-03-06  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds; 2 files, -56/+5)
Merge misc updates from Andrew Morton: - a few misc things - ocfs2 updates - most of MM * emailed patches from Andrew Morton <[email protected]>: (159 commits) tools/testing/selftests/proc/proc-self-syscall.c: remove duplicate include proc: more robust bulk read test proc: test /proc/*/maps, smaps, smaps_rollup, statm proc: use seq_puts() everywhere proc: read kernel cpu stat pointer once proc: remove unused argument in proc_pid_lookup() fs/proc/thread_self.c: code cleanup for proc_setup_thread_self() fs/proc/self.c: code cleanup for proc_setup_self() proc: return exit code 4 for skipped tests mm,mremap: bail out earlier in mremap_to under map pressure mm/sparse: fix a bad comparison mm/memory.c: do_fault: avoid usage of stale vm_area_struct writeback: fix inode cgroup switching comment mm/huge_memory.c: fix "orig_pud" set but not used mm/hotplug: fix an imbalance with DEBUG_PAGEALLOC mm/memcontrol.c: fix bad line in comment mm/cma.c: cma_declare_contiguous: correct err handling mm/page_ext.c: fix an imbalance with kmemleak mm/compaction: pass pgdat to too_many_isolated() instead of zone mm: remove zone_lru_lock() function, access ->lru_lock directly ...
2019-03-05  ia64: perfmon: don't mark buffer pages as PG_reserved  (David Hildenbrand; 1 file, -55/+4)
In the old days, remap_pfn_range() required pages to be marked as PG_reserved, so they would e.g. never get swapped out. This was required for special mappings. Nowadays, this is fully handled via the VMA (VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP inside remap_pfn_range() to be precise). PG_reserved is no longer required but only a relic from the past. So only architecture specific MM handling might require it (e.g. to detect them as MMIO pages). As there are no architecture specific checks for PageReserved() apart from MCA handling in ia64 code, this can go. Use simple vzalloc()/vfree() instead. Note that before calling vzalloc(), size has already been aligned to PAGE_SIZE, no need to align again. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Cc: Tony Luck <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: David Howells <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
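A minimal sketch of the replacement, assuming a buffer pointer and the already page-aligned size mentioned above:
  void *buf = vzalloc(size);      /* zeroed, virtually contiguous, no PG_reserved marking */
  if (!buf)
          return -ENOMEM;
  /* ... use the buffer as before ... */
  vfree(buf);                     /* teardown replaces the old page-by-page free */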
2019-03-05  mm: replace all open encodings for NUMA_NO_NODE  (Anshuman Khandual; 1 file, -1/+1)
Patch series "Replace all open encodings for NUMA_NO_NODE", v3. All these places for replacement were found by running the following grep patterns on the entire kernel code. Please let me know if this might have missed some instances. This might also have replaced some false positives. I will appreciate suggestions, inputs and review. 1. git grep "nid == -1" 2. git grep "node == -1" 3. git grep "nid = -1" 4. git grep "node = -1" This patch (of 2): At present there are multiple places where invalid node number is encoded as -1. Even though implicitly understood it is always better to have macros in there. Replace these open encodings for an invalid node number with the global macro NUMA_NO_NODE. This helps remove NUMA related assumptions like 'invalid node' from various places redirecting them to a common definition. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Acked-by: Jeff Kirsher <[email protected]> [ixgbe] Acked-by: Jens Axboe <[email protected]> [mtip32xx] Acked-by: Vinod Koul <[email protected]> [dmaengine.c] Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Doug Ledford <[email protected]> [drivers/infiniband] Cc: Joseph Qi <[email protected]> Cc: Hans Verkuil <[email protected]> Cc: Stephen Rothwell <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
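The substitution itself is mechanical; an illustrative fragment:
  #include <linux/numa.h>

  int nid = NUMA_NO_NODE;          /* was: int nid = -1;  */
  /* ... */
  if (nid == NUMA_NO_NODE)         /* was: if (nid == -1) */
          nid = 0;                 /* fall back to node 0 (illustrative) */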
2019-02-07  y2038: add 64-bit time_t syscalls to all 32-bit architectures  (Arnd Bergmann; 1 file, -0/+1)
This adds 21 new system calls on each ABI that has 32-bit time_t today. All of these have the exact same semantics as their existing counterparts, and the new ones all have macro names that end in 'time64' for clarification. This gets us to the point of being able to safely use a C library that has 64-bit time_t in user space. There are still a couple of loose ends to tie up in various areas of the code, but this is the big one, and should be entirely uncontroversial at this point. In particular, there are four system calls (getitimer, setitimer, waitid, and getrusage) that don't have a 64-bit counterpart yet, but these can all be safely implemented in the C library by wrapping around the existing system calls because the 32-bit time_t they pass only counts elapsed time, not time since the epoch. They will be dealt with later. Signed-off-by: Arnd Bergmann <[email protected]> Acked-by: Heiko Carstens <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Catalin Marinas <[email protected]>
2019-01-25  arch: add pkey and rseq syscall numbers everywhere  (Arnd Bergmann; 1 file, -0/+4)
Most architectures define system call numbers for the rseq and pkey system calls, even when they don't support the features, and perhaps never will. Only a few architectures are missing these, so just define them anyway for consistency. If we decide to add them later to one of these, the system call numbers won't get out of sync then. Signed-off-by: Arnd Bergmann <[email protected]> Acked-by: Heiko Carstens <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]>
2019-01-25  ia64: assign syscall numbers for perf and seccomp  (Arnd Bergmann; 1 file, -0/+2)
Most architectures have assigned numbers for both seccomp and perf_event_open, even when they do not implement either. ia64 is an exception here, so for consistency lets add numbers for both of them. Unless CONFIG_PERF_EVENTS and CONFIG_SECCOMP are implemented, the system calls just return -ENOSYS. Signed-off-by: Arnd Bergmann <[email protected]>
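Assigning a number without an implementation is safe because unimplemented entries point at the kernel's generic stub, which is essentially:
  /* shape of the stub in kernel/sys_ni.c */
  asmlinkage long sys_ni_syscall(void)
  {
          return -ENOSYS;
  }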
2019-01-25  ia64: add statx and io_pgetevents syscalls  (Arnd Bergmann; 1 file, -0/+2)
All architectures should implement these two, so assign numbers and hook them up on ia64. Signed-off-by: Arnd Bergmann <[email protected]>
2019-01-25  ia64: add __NR_umount2 definition  (Arnd Bergmann; 1 file, -1/+1)
Other architectures commonly use __NR_umount2 for sys_umount, only ia64 and alpha use __NR_umount here. In order to synchronize the generated tables, use umount2 like everyone else, and add back the old name from asm/unistd.h for compatibility. The __IGNORE_* lines are now all obsolete and can be removed as a side-effect. Signed-off-by: Arnd Bergmann <[email protected]>
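A sketch of the compatibility define this describes; the exact location is an assumption, and the number itself comes from the generated table:
  /* arch/ia64/include/uapi/asm/unistd.h (assumed form) */
  #define __NR_umount __NR_umount2        /* keep the historical ia64 name */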
2019-01-03  Remove 'type' argument from access_ok() function  (Linus Torvalds; 2 files, -4/+4)
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand. It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact. A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all. This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases: - csky still had the old "verify_area()" name as an alias. - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it) - microblaze used the type argument for a debug printout but other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though. Signed-off-by: Linus Torvalds <[email protected]>
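The bulk of the sed conversion is the argument drop shown here (buffer and length names illustrative):
  /* before */
  if (!access_ok(VERIFY_WRITE, ubuf, len))
          return -EFAULT;
  /* after: the type argument is gone */
  if (!access_ok(ubuf, len))
          return -EFAULT;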
2018-12-29  Merge tag 'kbuild-v4.21' of ↵  (Linus Torvalds; 1 file, -5/+2)
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild Pull Kbuild updates from Masahiro Yamada: "Kbuild core: - remove unneeded $(call cc-option,...) switches - consolidate Clang compiler flags into CLANG_FLAGS - announce the deprecation of SUBDIRS - fix single target build for external module - simplify the dependencies of 'prepare' stage targets - allow fixdep to directly write to .*.cmd files - simplify dependency generation for CONFIG_TRIM_UNUSED_KSYMS - change if_changed_rule to accept multi-line recipe - move .SECONDARY special target to scripts/Kbuild.include - remove redundant 'set -e' - improve parallel execution for CONFIG_HEADERS_CHECK - misc cleanups Treewide fixes and cleanups - set Clang flags correctly for PowerPC boot images - fix UML build error with CONFIG_GCC_PLUGINS - remove unneeded patterns from .gitignore files - refactor firmware/Makefile - remove unneeded rules for *offsets.s - avoid unneeded regeneration of intermediate .s files - clean up ./Kbuild Modpost: - remove unused -M, -K options - fix false positive warnings about section mismatch - use simple devtable lookup instead of linker magic - misc cleanups Coccinelle: - relax boolinit.cocci checks for overall consistency - fix warning messages of boolinit.cocci Other tools: - improve -dirty check of scripts/setlocalversion - add a tool to generate compile_commands.json from .*.cmd files" * tag 'kbuild-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (51 commits) kbuild: remove unused cmd_gentimeconst kbuild: remove $(obj)/ prefixes in ./Kbuild treewide: add intermediate .s files to targets treewide: remove explicit rules for *offsets.s firmware: refactor firmware/Makefile firmware: remove unnecessary patterns from .gitignore scripts: remove unnecessary ihex2fw and check-lc_ctypes from .gitignore um: remove unused filechk_gen_header in Makefile scripts: add a tool to produce a compile_commands.json file kbuild: add -Werror=implicit-int flag unconditionally kbuild: add -Werror=strict-prototypes flag unconditionally kbuild: add -fno-PIE flag unconditionally scripts: coccinelle: Correct warning message scripts: coccinelle: only suggest true/false in files that already use them kbuild: handle part-of-module correctly for *.ll and *.symtypes kbuild: refactor part-of-module kbuild: refactor quiet_modtag kbuild: remove redundant quiet_modtag for $(obj-m) kbuild: refactor Makefile.asm-generic user/Makefile: Fix typo and capitalization in comment section ...
2018-12-28  Merge tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds; 1 file, -2/+19)
Pull DMA mapping updates from Christoph Hellwig: "A huge update this time, but a lot of that is just consolidating or removing code: - provide a common DMA_MAPPING_ERROR definition and avoid indirect calls for dma_map_* error checking - use direct calls for the DMA direct mapping case, avoiding huge retpoline overhead for high performance workloads - merge the swiotlb dma_map_ops into dma-direct - provide a generic remapping DMA consistent allocator for architectures that have devices that perform DMA that is not cache coherent. Based on the existing arm64 implementation and also used for csky now. - improve the dma-debug infrastructure, including dynamic allocation of entries (Robin Murphy) - default to providing chaining scatterlist everywhere, with opt-outs for the few architectures (alpha, parisc, most arm32 variants) that can't cope with it - misc sparc32 dma-related cleanups - remove the dma_mark_clean arch hook used by swiotlb on ia64 and replace it with the generic noncoherent infrastructure - fix the return type of dma_set_max_seg_size (Niklas Söderlund) - move the dummy dma ops for not DMA capable devices from arm64 to common code (Robin Murphy) - ensure dma_alloc_coherent returns zeroed memory to avoid kernel data leaks through userspace. We already did this for most common architectures, but this ensures we do it everywhere. dma_zalloc_coherent has been deprecated and can hopefully be removed after -rc1 with a coccinelle script" * tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits) dma-mapping: fix inverted logic in dma_supported dma-mapping: deprecate dma_zalloc_coherent dma-mapping: zero memory returned from dma_alloc_* sparc/iommu: fix ->map_sg return value sparc/io-unit: fix ->map_sg return value arm64: default to the direct mapping in get_arch_dma_ops PCI: Remove unused attr variable in pci_dma_configure ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled dma-mapping: bypass indirect calls for dma-direct vmd: use the proper dma_* APIs instead of direct methods calls dma-direct: merge swiotlb_dma_ops into the dma_direct code dma-direct: use dma_direct_map_page to implement dma_direct_map_sg dma-direct: improve addressability error reporting swiotlb: remove dma_mark_clean swiotlb: remove SWIOTLB_MAP_ERROR ACPI / scan: Refactor _CCA enforcement dma-mapping: factor out dummy DMA ops dma-mapping: always build the direct mapping code dma-mapping: move dma_cache_sync out of line dma-mapping: move various slow path functions out of line ...
2018-12-26  Merge tag 'please-pull-y2038prep' of ↵  (Linus Torvalds; 5 files, -328/+448)
git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux Pull ia64 updates from Tony Luck: "Change ia64 to make it easier to update system call tables (in line with some other architectures). This is in preparation for y2038 changes which are forecasted to add several new system calls" * tag 'please-pull-y2038prep' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux: ia64: generate uapi header and system call table files ia64: add system call table generation support ia64: add an offset for system call number ia64: add __NR_syscalls along with NR_syscalls ia64: add __NR_old_getpagesize in uapi header file
2018-12-23  treewide: add intermediate .s files to targets  (Masahiro Yamada; 1 file, -0/+2)
Avoid unneeded recreation of these in the incremental build. Signed-off-by: Masahiro Yamada <[email protected]>
2018-12-23  treewide: remove explicit rules for *offsets.s  (Masahiro Yamada; 1 file, -5/+0)
These explicit rules are unneeded because scripts/Makefile.build provides a pattern rule to create %.s from %.c Signed-off-by: Masahiro Yamada <[email protected]>
2018-12-13  dma-mapping: bypass indirect calls for dma-direct  (Christoph Hellwig; 1 file, -1/+0)
Avoid expensive indirect calls in the fast path DMA mapping operations by directly calling the dma_direct_* ops if we are using the directly mapped DMA operations. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Tony Luck <[email protected]>
2018-12-13  dma-direct: merge swiotlb_dma_ops into the dma_direct code  (Christoph Hellwig; 1 file, -1/+1)
While the dma-direct code is (relatively) clean and simple we actually have to use the swiotlb ops for the mapping on many architectures due to devices with addressing limits. Instead of keeping two implementations around this commit allows the dma-direct implementation to call the swiotlb bounce buffering functions and thus share the guts of the mapping implementation. This also simplifies the dma-mapping setup on a few architectures where we don't have to differentiate which implementation to use. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Tony Luck <[email protected]>
2018-12-13  swiotlb: remove dma_mark_clean  (Christoph Hellwig; 1 file, -1/+19)
Instead of providing a special dma_mark_clean hook just for ia64, switch ia64 to use the normal arch_sync_dma_for_cpu hooks instead. This means that we now also set the PG_arch_1 bit for pages in the swiotlb buffer, which isn't strictly needed as we will never execute code out of the swiotlb buffer, but is otherwise harmless. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Tony Luck <[email protected]>
2018-11-26  ia64: export node_distance function  (Matias Bjørling; 1 file, -3/+3)
The numa_slit variable used by node_distance is available to a module as long as it is linked at compile-time. However, it is not available to loadable modules, leading to errors such as: ERROR: "numa_slit" [drivers/nvme/host/nvme-core.ko] undefined! The error above is caused by the nvme multipath code that makes use of node_distance for its path calculation. When the patch was added, the lightnvm subsystem would select nvme and always compile it in, so the node_distance call would always succeed. However, when this requirement was removed, nvme could be compiled as a module, which exposed this bug. This patch extracts node_distance to a function and exports it. Since ACPI depends on node_distance being a simple lookup into numa_slit, the previous behavior is exposed as slit_distance and its users updated. Fixes: f333444708f82 "nvme: take node locality into account when selecting a path" Fixes: 73569e11032f "lightnvm: remove dependencies on BLK_DEV_NVME and PCI" Signed-off-by: Matias Bjørling <[email protected]> Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
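A hedged sketch of the split described above; the SLIT indexing and helper names are assumptions based on the description, not the exact patch:
  int slit_distance(int from, int to)
  {
          return numa_slit[from * MAX_NUMNODES + to];     /* assumed lookup */
  }

  int node_distance(int from, int to)
  {
          return slit_distance(from, to);
  }
  EXPORT_SYMBOL(node_distance);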
2018-11-13  ia64: generate uapi header and system call table files  (Firoz Khan; 1 file, -328/+3)
The system call table generation script must be run to generate the unistd_64.h and syscall_table.h files. This patch adds the changes that invoke the script: unistd_64.h and syscall_table.h are generated by the script, invoked from ia64/Makefile, and the generated files must be identical to the files they replace. The generated uapi header is included by uapi/asm/unistd.h and the generated system call table header is included by kernel/entry.S. Signed-off-by: Firoz Khan <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-11-13  ia64: add system call table generation support  (Firoz Khan; 4 files, -0/+445)
The system call tables are in different formats across architectures, which makes it difficult to manually add, modify or delete syscall table entries in the respective files. To make this easier, keep a script that generates the uapi header and the syscall table file; this also helps unify the implementation across all architectures. The system call table generation scripts are added in the kernel/syscalls directory, which contains the scripts to generate both the uapi header file and the system call table file. syscall.tbl is the input for the scripts: it lists the available system calls along with the system call number and the corresponding entry point. Adding a new system call on this architecture then only requires adding a new entry to syscall.tbl, consisting of: - System call number. - ABI. - System call name. - Entry point name. syscallhdr.sh and syscalltbl.sh generate the uapi header unistd_64.h and syscall_table.h respectively; both parse the content of syscall.tbl to produce the header and table files. unistd_64.h is included by uapi/asm/unistd.h and syscall_table.h is included by kernel/entry.S - the real system call table. ARM, s390 and x86 have similar support; their implementation was leveraged to come up with a generic solution. Signed-off-by: Firoz Khan <[email protected]> Signed-off-by: Tony Luck <[email protected]>
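An illustrative (hypothetical) row with the four fields listed above, and what each generator script turns it into:
  # arch/ia64/kernel/syscalls/syscall.tbl -- <number> <abi> <name> <entry point>
  1337    common  examplecall             sys_examplecall
  #
  # syscallhdr.sh -> unistd_64.h:     #define __NR_examplecall 1337
  # syscalltbl.sh -> syscall_table.h: table slot wired to sys_examplecall for entry.S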
2018-10-31  memblock: stop using implicit alignment to SMP_CACHE_BYTES  (Mike Rapoport; 1 file, -2/+2)
When a memblock allocation APIs are called with align = 0, the alignment is implicitly set to SMP_CACHE_BYTES. Implicit alignment is done deep in the memblock allocator and it can come as a surprise. Not that such an alignment would be wrong even when used incorrectly but it is better to be explicit for the sake of clarity and the prinicple of the least surprise. Replace all such uses of memblock APIs with the 'align' parameter explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment in the memblock internal allocation functions. For the case when memblock APIs are used via helper functions, e.g. like iommu_arena_new_node() in Alpha, the helper functions were detected with Coccinelle's help and then manually examined and updated where appropriate. The direct memblock APIs users were updated using the semantic patch below: @@ expression size, min_addr, max_addr, nid; @@ ( | - memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid) + memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid) | - memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid) + memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid) | - memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid) + memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid) | - memblock_alloc(size, 0) + memblock_alloc(size, SMP_CACHE_BYTES) | - memblock_alloc_raw(size, 0) + memblock_alloc_raw(size, SMP_CACHE_BYTES) | - memblock_alloc_from(size, 0, min_addr) + memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr) | - memblock_alloc_nopanic(size, 0) + memblock_alloc_nopanic(size, SMP_CACHE_BYTES) | - memblock_alloc_low(size, 0) + memblock_alloc_low(size, SMP_CACHE_BYTES) | - memblock_alloc_low_nopanic(size, 0) + memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES) | - memblock_alloc_from_nopanic(size, 0, min_addr) + memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr) | - memblock_alloc_node(size, 0, nid) + memblock_alloc_node(size, SMP_CACHE_BYTES, nid) ) [[email protected]: changelog update] [[email protected]: coding-style fixes] [[email protected]: fix missed uses of implicit alignment] Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Suggested-by: Michal Hocko <[email protected]> Acked-by: Paul Burton <[email protected]> [MIPS] Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michal Simek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31  mm: remove include/linux/bootmem.h  (Mike Rapoport; 10 files, -10/+9)
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below and then semi-automated removal of duplicated '#include <linux/memblock.h> @@ @@ - #include <linux/bootmem.h> + #include <linux/memblock.h> [[email protected]: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31  memblock: replace alloc_bootmem with memblock_alloc  (Mike Rapoport; 1 file, -2/+2)
The alloc_bootmem(size) is a shortcut for allocation of SMP_CACHE_BYTES aligned memory. When the align parameter of memblock_alloc() is 0, the alignment is implicitly set to SMP_CACHE_BYTES and thus alloc_bootmem(size) and memblock_alloc(size, 0) are equivalent. The conversion is done using the following semantic patch: @@ expression size; @@ - alloc_bootmem(size) + memblock_alloc(size, 0) Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31  memblock: replace __alloc_bootmem with memblock_alloc_from  (Mike Rapoport; 1 file, -2/+2)
The functions are equivalent, just the later does not require nobootmem translation layer. The conversion is done using the following semantic patch: @@ expression size, align, goal; @@ - __alloc_bootmem(size, align, goal) + memblock_alloc_from(size, align, goal) Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31  kernel/signal.c: fix a comment error  (Weikang Shi; 1 file, -2/+2)
Because get_signal_to_deliver() was renamed to get_signal() the comment should be fixed. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Weikang Shi <[email protected]> Reported-by: Christian Brauner <[email protected]> Cc: Eric W. Biederman <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Anna-Maria Gleixner <[email protected]> Cc: "Luck, Tony" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-24  Merge branch 'siginfo-linus' of ↵  (Linus Torvalds; 4 files, -187/+60)
git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace Pull siginfo updates from Eric Biederman: "I have been slowly sorting out siginfo and this is the culmination of that work. The primary result is in several ways the signal infrastructure has been made less error prone. The code has been updated so that manually specifying SEND_SIG_FORCED is never necessary. The conversion to the new siginfo sending functions is now complete, which makes it difficult to send a signal without filling in the proper siginfo fields. At the tail end of the patchset comes the optimization of decreasing the size of struct siginfo in the kernel from 128 bytes to about 48 bytes on 64bit. The fundamental observation that enables this is by definition none of the known ways to use struct siginfo uses the extra bytes. This comes at the cost of a small user space observable difference. For the rare case of siginfo being injected into the kernel only what can be copied into kernel_siginfo is delivered to the destination, the rest of the bytes are set to 0. For cases where the signal and the si_code are known this is safe, because we know those bytes are not used. For cases where the signal and si_code combination is unknown the bits that won't fit into struct kernel_siginfo are tested to verify they are zero, and the send fails if they are not. I made an extensive search through userspace code and I could not find anything that would break because of the above change. If it turns out I did break something it will take just the revert of a single change to restore kernel_siginfo to the same size as userspace siginfo. Testing did reveal dependencies on preferring the signo passed to sigqueueinfo over si->signo, so bit the bullet and added the complexity necessary to handle that case. Testing also revealed bad things can happen if a negative signal number is passed into the system calls. Something no sane application will do but something a malicious program or a fuzzer might do. So I have fixed the code that performs the bounds checks to ensure negative signal numbers are handled" * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (80 commits) signal: Guard against negative signal numbers in copy_siginfo_from_user32 signal: Guard against negative signal numbers in copy_siginfo_from_user signal: In sigqueueinfo prefer sig not si_signo signal: Use a smaller struct siginfo in the kernel signal: Distinguish between kernel_siginfo and siginfo signal: Introduce copy_siginfo_from_user and use it's return value signal: Remove the need for __ARCH_SI_PREABLE_SIZE and SI_PAD_SIZE signal: Fail sigqueueinfo if si_signo != sig signal/sparc: Move EMT_TAGOVF into the generic siginfo.h signal/unicore32: Use force_sig_fault where appropriate signal/unicore32: Generate siginfo in ucs32_notify_die signal/unicore32: Use send_sig_fault where appropriate signal/arc: Use force_sig_fault where appropriate signal/arc: Push siginfo generation into unhandled_exception signal/ia64: Use force_sig_fault where appropriate signal/ia64: Use the force_sig(SIGSEGV,...) in ia64_rt_sigreturn signal/ia64: Use the generic force_sigsegv in setup_frame signal/arm/kvm: Use send_sig_mceerr signal/arm: Use send_sig_fault where appropriate signal/arm: Use force_sig_fault where appropriate ...
2018-09-27  signal/ia64: Use force_sig_fault where appropriate  (Eric W. Biederman; 3 files, -140/+47)
Acked-by: Tony Luck <[email protected]> Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-09-27  signal/ia64: Use the force_sig(SIGSEGV,...) in ia64_rt_sigreturn  (Eric W. Biederman; 1 file, -9/+1)
The ia64 handling of failure to return from a signal frame has been trying to set overlapping fields in struct siginfo since 2.3.43. The si_code corresponds to the fields that were stomped (not the field that is actually written), so I can not imagine a piece of userspace code making sense of the signal frame if it looks closely. In practice failure to return from a signal frame is a rare event that almost never happens. Someone using an alternate signal stack to recover and looking in detail is even more rare. So I presume no one has ever noticed and reported this ia64 nonsense. Sort this out by causing ia64 to use force_sig(SIGSEGV) like other architectures. Fixes: 2.3.43 Cc: Tony Luck <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: [email protected] Acked-by: Tony Luck <[email protected]> Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-09-27  signal/ia64: Use the generic force_sigsegv in setup_frame  (Eric W. Biederman; 1 file, -38/+12)
The ia64 handling of failure to setup a signal frame has been trying to set overlapping fields in struct siginfo since 2.3.43. The si_pid and si_uid fields are stomped when the si_addr field is set. The si_code of SI_KERNEL indicates that si_pid and si_uid should be valid, and that si_addr does not exist. Being at odds with the definition of SI_KERNEL and with nothing to indicate that this was a signal frame setup failure there is no way for userspace to know that si_addr was filled out instead. In practice failure to setup a signal frame is rare, and si_pid and si_uid are always set to 0 when si_code is SI_KERNEL so I expect no one has looked closely enough before to see this weirdness. Further the only difference between force_sigsegv_info and the generic force_sigsegv other than the return code is that force_sigsegv_info stomps the si_uid and si_pid fields. Remove the bug and simplify the code by using force_sigsegv in this case just like other architectures. Fixes: 2.3.43 Cc: Tony Luck <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: [email protected] Acked-by: Tony Luck <[email protected]> Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-09-17  intel-iommu: mark intel_dma_ops static  (Christoph Hellwig; 1 file, -4/+0)
ia64 currently explicitly assigns it to dma_ops, but that same work is already done by intel_iommu_init a little later, so we can remove the duplicate assignment and mark the variable static. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-09-17  ia64: remove machvec_dma_sync_{single,sg}  (Christoph Hellwig; 2 files, -21/+0)
The original form of these was added (to the HP zx1 platform only) by the following bitkeeper commit (by the way of the historic.git tree): commit 66b99421d118a5ddd98a72913670b0fcf0a38d45 Author: Andrew Morton <[email protected]> Date: Sat Mar 13 17:05:37 2004 -0800 [PATCH] DMA: Fill gaping hole in DMA API interfaces. From: "David S. Miller" <[email protected]> The commit does not explain why we'd need the memory barrier on ia64, it never included the swiotlb or SGI IOMMU based platforms, and also failed to address the map/unmap parts of the dma mapping interface, which should provide the same ordering semantics and actually are commonly used. The conclusion of this is that they were added in error and should be removed. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-09-17  ia64: remove the unused iommu_dma_init function  (Christoph Hellwig; 1 file, -6/+0)
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-09-17  ia64: remove the unused pci_iommu_shutdown function  (Christoph Hellwig; 1 file, -5/+0)
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-09-17  ia64: remove the unused bad_dma_address symbol  (Christoph Hellwig; 1 file, -3/+0)
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>
2018-09-17  ia64: remove iommu_dma_supported  (Christoph Hellwig; 1 file, -13/+0)
The generic dma_direct_supported helper already used by intel-iommu on x86 does a better job than the ia64 reimplementation. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Tony Luck <[email protected]>