path: root/arch/arc
Age | Commit message | Author | Files | Lines
2019-02-21 | ARCv2: support manual regfile save on interrupts | Vineet Gupta | 5 | -1/+68
There's a hardware bug which affects the HSDK platform, triggered by micro-ops for auto-saving regfile on taken interrupt. The workaround is to inhibit autosave. Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARC: uacces: remove lp_start, lp_end from clobber list | Vineet Gupta | 1 | -4/+4
Newer ARC gcc handles lp_start, lp_end in a different way and doesn't like them in the clobber list. Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARC: fix actionpoints configuration detection | Eugeniy Paltsev | 1 | -1/+1
Fix reversed logic in actionpoints configuration (full/min) detection. Fixes: 7dd380c338f1e ("ARC: boot log: print Action point details") Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARCv2: lib: memcpy: fix doing prefetchw outside of buffer | Eugeniy Paltsev | 1 | -14/+0
ARCv2 optimized memcpy uses the PREFETCHW instruction for prefetching the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line ownership and marks it dirty, which can cause data corruption if this area is used for DMA IO. Fix the issue by avoiding the PREFETCHW. This leads to performance degradation, but it is OK as we'll introduce a new memcpy implementation optimized for unaligned memory access. We also cut off all PREFETCH instructions as they are quite useless here:
 * we call PREFETCH right before the LOAD instruction.
 * we copy 16 or 32 bytes of data (depending on CONFIG_ARC_HAS_LL64) in the main logical loop, so we call PREFETCH 4 times (or 2 times) for each L1 cache line (in case of the default 64B L1 cache line). Obviously this is not optimal.
Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-02-21 | ARCv2: Enable unaligned access in early ASM code | Eugeniy Paltsev | 1 | -0/+10
It is currently done in arc_init_IRQ() which might be too late considering gcc 7.3.1 onwards (GNU 2018.03) generates unaligned memory accesses by default Cc: [email protected] #4.4+ Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]> [vgupta: rewrote changelog]
2019-02-20 | dma-mapping: improve selection of dma_declare_coherent availability | Christoph Hellwig | 1 | -1/+0
This API is primarily used through DT entries, but two architectures and two drivers call it directly. So instead of selecting the config symbol for random architectures pull it in implicitly for the actual users. Also rename the Kconfig option to describe the feature better. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Paul Burton <[email protected]> # MIPS Acked-by: Lee Jones <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]>
2019-02-19 | asm-generic: Make time32 syscall numbers optional | Arnd Bergmann | 1 | -0/+1
We don't want new architectures to even provide the old 32-bit time_t based system calls any more, or define the syscall number macros. Add a new __ARCH_WANT_TIME32_SYSCALLS macro that gets enabled for all existing 32-bit architectures using the generic system call table, so we don't change any current behavior. Since this symbol is evaluated in user space as well, we cannot use a Kconfig CONFIG_* macro but have to define it in uapi/asm/unistd.h. On 64-bit architectures, the same system call numbers mostly refer to the system calls we want to keep, as they already pass 64-bit time_t. As new architectures no longer provide these, we need new exceptions in checksyscalls.sh. Signed-off-by: Arnd Bergmann <[email protected]>
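A minimal sketch of the opt-in shape described above; the macro placement is taken from the changelog, but the exact guard form and the sample syscall entry are assumptions, not the verbatim patch:

    /* arch/arc/include/uapi/asm/unistd.h (sketch): an existing 32-bit arch
     * keeps the old 32-bit time_t syscall numbers */
    #define __ARCH_WANT_TIME32_SYSCALLS

    /* include/uapi/asm-generic/unistd.h (sketch of one guarded entry; the
     * real list of affected syscalls is much longer) */
    #if defined(__ARCH_WANT_TIME32_SYSCALLS) || __BITS_PER_LONG != 32
    __SYSCALL(__NR_nanosleep, sys_nanosleep)
    #endif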
2019-02-19 | asm-generic: Drop getrlimit and setrlimit syscalls from default list | Yury Norov | 1 | -0/+1
The newer prlimit64 syscall provides all the functionality of getrlimit and setrlimit syscalls and adds the pid of target process, so future architectures won't need to include getrlimit and setrlimit. Therefore drop getrlimit and setrlimit syscalls from the generic syscall list unless __ARCH_WANT_SET_GET_RLIMIT is defined by the architecture's unistd.h prior to including asm-generic/unistd.h, and adjust all architectures using the generic syscall list to define it so that no in-tree architectures are affected. Cc: Arnd Bergmann <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Signed-off-by: Yury Norov <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Acked-by: Mark Salter <[email protected]> [c6x] Acked-by: James Hogan <[email protected]> [metag] Acked-by: Ley Foon Tan <[email protected]> [nios2] Acked-by: Stafford Horne <[email protected]> [openrisc] Acked-by: Will Deacon <[email protected]> [arm64] Acked-by: Vineet Gupta <[email protected]> #arch/arc bits Signed-off-by: Yury Norov <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
2019-02-19 | 32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option | Yury Norov | 1 | -0/+1
All new 32-bit architectures should have a 64-bit userspace off_t type, but existing architectures have 32-bit ones. To enforce the rule, a new config option is added to arch/Kconfig that defaults ARCH_32BIT_OFF_T to be disabled for new 32-bit architectures. All existing 32-bit architectures enable it explicitly. The new option affects force_o_largefile() behaviour. Namely, if userspace off_t is 64 bits long, we have no reason to stop users from opening big files. Note that even if an architecture has only 64-bit off_t in the kernel (arc, c6x, h8300, hexagon, nios2, openrisc, and unicore32), a libc may use 32-bit off_t, and therefore want to limit the file size to 4GB unless specified differently in the open flags. Signed-off-by: Yury Norov <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Signed-off-by: Yury Norov <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
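A sketch of how such a switch can drive force_o_largefile(); the macro shape is an assumption based on the changelog, not the verbatim patch:

    /* include/linux/fcntl.h (sketch): only architectures with a 32-bit
     * userspace off_t need O_LARGEFILE to be requested explicitly */
    #ifndef force_o_largefile
    #define force_o_largefile() (!IS_ENABLED(CONFIG_ARCH_32BIT_OFF_T))
    #endif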
2019-02-13 | of: select OF_RESERVED_MEM automatically | Christoph Hellwig | 1 | -1/+0
The OF_RESERVED_MEM code can be used if we have either CMA or the generic declare coherent code built and we support the early flattened DT. So don't bother making it a user visible option that is selected by most configs that fit the above category, but just select it when the requirements are met. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Rob Herring <[email protected]>
2019-02-13 | dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability | Christoph Hellwig | 3 | -13/+2
Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Paul Burton <[email protected]> # MIPS Acked-by: Catalin Marinas <[email protected]> # arm64
2019-01-17 | ARCv2: lib: memeset: fix doing prefetchw outside of buffer | Eugeniy Paltsev | 1 | -8/+32
ARCv2 optimized memset uses the PREFETCHW instruction for prefetching the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line ownership and marks it dirty, which can cause issues in SMP config when the next line was already owned by another core. Fix the issue by avoiding the PREFETCHW. Some more details: the current code has 3 logical loops (ignoring the unaligned part):
 (a) Big loop for doing aligned 64 bytes per iteration with PREALLOC
 (b) Loop for 32 x 2 bytes with PREFETCHW
 (c) any left over bytes
Loop (a) was already eliding the last 64 bytes, so PREALLOC was safe. The fix was removing PREFETCHW from (b). Another potential issue (applicable to configs with 32 or 128 byte L1 cache lines) is that PREALLOC assumes a 64 byte cache line and may not do the right thing, especially for 32B. While it would be easy to adapt, there are no known configs with those line sizes, so for now, just compile out PREALLOC in such cases. Signed-off-by: Eugeniy Paltsev <[email protected]> Cc: [email protected] #4.4+ Signed-off-by: Vineet Gupta <[email protected]> [vgupta: rewrote changelog, used asm .macro vs. "C" macro]
2019-01-17 | ARC: mm: do_page_fault fixes #1: relinquish mmap_sem if signal arrives while handle_mm_fault | Vineet Gupta | 1 | -4/+9
do_page_fault() forgot to relinquish mmap_sem if a signal came while handling handle_mm_fault() - due to say a ctrl+c or oom etc. This would later cause a deadlock by acquiring it twice. This came to light when running the libc testsuite tst-tls3-malloc test but is likely also the cause for prior seen LTP failures. Using lockdep clearly showed what the issue was.

| # while true; do ./tst-tls3-malloc ; done
| Didn't expect signal from child: got `Segmentation fault'
| ^C
| ============================================
| WARNING: possible recursive locking detected
| 4.17.0+ #25 Not tainted
| --------------------------------------------
| tst-tls3-malloc/510 is trying to acquire lock:
| 606c7728 (&mm->mmap_sem){++++}, at: __might_fault+0x28/0x5c
|
| but task is already holding lock:
| 606c7728 (&mm->mmap_sem){++++}, at: do_page_fault+0x9c/0x2a0
|
| other info that might help us debug this:
|  Possible unsafe locking scenario:
|
|   CPU0
|   ----
|   lock(&mm->mmap_sem);
|   lock(&mm->mmap_sem);
|
| *** DEADLOCK ***
| ------------------------------------------------------------

What the change does is not obvious (note to myself): prior code was

| do_page_fault
|   down_read()        <-- lock taken
|   handle_mm_fault    <-- signal pending as this runs
|   if fatal_signal_pending
|     if VM_FAULT_ERROR
|       up_read
|     if user_mode
|       return         <-- lock still held, this was the BUG

New code

| do_page_fault
|   down_read()        <-- lock taken
|   handle_mm_fault    <-- signal pending as this runs
|   if fatal_signal_pending
|     if VM_FAULT_RETRY
|       return         <-- not same case as above, but still OK since
|                          core mm already relinq lock for FAULT_RETRY
|   ...
|
|   < Now falls through for bug case above >
|
|   up_read()          <-- lock relinquished

Cc: [email protected] Signed-off-by: Vineet Gupta <[email protected]>
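A compressed C sketch of the fixed flow described above; this is illustrative only and not the exact arch/arc/mm/fault.c diff:

    /* sketch of the fixed do_page_fault() tail */
    fault = handle_mm_fault(vma, address, flags);

    if (fatal_signal_pending(current)) {
        /*
         * For VM_FAULT_RETRY the core mm already dropped mmap_sem, so it
         * is safe to bail; every other path must fall through so that the
         * common up_read() below always releases the lock.
         */
        if (fault & VM_FAULT_RETRY) {
            if (!user_mode(regs))
                goto no_context;
            return;
        }
    }
    /* ... error handling ... */
    up_read(&mm->mmap_sem);    /* single exit: lock always relinquished */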
2019-01-17 | ARC: show_regs: lockdep: re-enable preemption | Vineet Gupta | 1 | -0/+8
The signal handling core calls show_regs() with preemption disabled, which on ARC takes mmap_sem for mm/vma access, causing a lockdep splat.

| [ARCLinux]# ./segv-null-ptr
| potentially unexpected fatal signal 11.
| BUG: sleeping function called from invalid context at kernel/fork.c:1011
| in_atomic(): 1, irqs_disabled(): 0, pid: 70, name: segv-null-ptr
| no locks held by segv-null-ptr/70.
| CPU: 0 PID: 70 Comm: segv-null-ptr Not tainted 4.18.0+ #69
|
| Stack Trace:
|  arc_unwind_core+0xcc/0x100
|  ___might_sleep+0x17a/0x190
|  mmput+0x16/0xb8
|  show_regs+0x52/0x310
|  get_signal+0x5ee/0x610
|  do_signal+0x2c/0x218
|  resume_user_mode_begin+0x90/0xd8

Work around this by re-enabling preemption temporarily. Note that the preemption disabling in core code around show_regs() was introduced by commit 3a9f84d354ce ("signals, debug: fix BUG: using smp_processor_id() in preemptible code in print_fatal_signal()") to silence a different lockdep splat seen on x86 back in 2009. Cc: <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
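A minimal sketch of the workaround's shape (assumed, not the verbatim patch):

    void show_regs(struct pt_regs *regs)
    {
        /*
         * The signal core calls us with preemption disabled, but dumping
         * mm/vma details below may sleep on mmap_sem, so re-enable
         * preemption around the body of the function.
         */
        preempt_enable();

        /* ... dump registers and fault address details ... */

        preempt_disable();
    }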
2019-01-17 | ARC: show_regs: lockdep: avoid page allocator... | Vineet Gupta | 1 | -14/+12
...and use a smaller/on-stack buffer instead. The motivation for this change was a lockdep splat like below.

| potentially unexpected fatal signal 11.
| BUG: sleeping function called from invalid context at ../mm/page_alloc.c:4317
| in_atomic(): 1, irqs_disabled(): 0, pid: 57, name: segv
| no locks held by segv/57.
| Preemption disabled at:
| [<8182f17e>] get_signal+0x4a6/0x7c4
| CPU: 0 PID: 57 Comm: segv Not tainted 4.17.0+ #23
|
| Stack Trace:
|  arc_unwind_core.constprop.1+0xd0/0xf4
|  __might_sleep+0x1f6/0x234
|  __get_free_pages+0x174/0xca0
|  show_regs+0x22/0x330
|  get_signal+0x4ac/0x7c4     # print_fatal_signals() -> preempt_disable()
|  do_signal+0x30/0x224
|  resume_user_mode_begin+0x90/0xd8

So the signal handling core calls show_regs() with preemption disabled, but an ensuing GFP_KERNEL page allocator call is flagged by lockdep. We could have switched to GFP_NOWAIT, but it turns out that is not enough anyway, and eliding the page allocator call leads to less code and fewer instruction traces to sift through when debugging pesky crashes. FWIW, this patch doesn't cure the lockdep splat (which the next patch does). Reviewed-by: William Kucharski <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: perf: avoid kernel killing where it is possible | Eugeniy Paltsev | 1 | -2/+4
No, not gonna die tonight. Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: perf: move HW events mapping to separate function | Eugeniy Paltsev | 1 | -15/+33
Move HW events mapping to separate function to make code more readable. Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: perf: introduce Kernel PMU events support | Eugeniy Paltsev | 1 | -1/+105
Export all available ARC architected hardware events as kernel PMU events to make non-generic events accessible. ARC PMU HW allows us to read the list of all available event names, so we generate the kernel PMU event list dynamically in arc_pmu_device_probe() using the human-readable event names we got from HW instead of using a pre-defined events list.

-------------------------->8--------------------------
$ perf list
  [snip]
  arc_pmu/bdata64/      [Kernel PMU event]
  arc_pmu/bdcstall/     [Kernel PMU event]
  arc_pmu/bdslot/       [Kernel PMU event]
  arc_pmu/bfbmp/        [Kernel PMU event]
  arc_pmu/bfirqex/      [Kernel PMU event]
  arc_pmu/bflgstal/     [Kernel PMU event]
  arc_pmu/bflush/       [Kernel PMU event]
-------------------------->8--------------------------

Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: perf: trivial code cleanup | Eugeniy Paltsev | 1 | -43/+42
 * Use BIT(), lower_32_bits(), upper_32_bits() macros; fix code style violations.
 * Use u32, u64, s64 instead of uint32_t, uint64_t, int64_t.
 * Fix the description comment as this code doesn't belong only to ARC700 anymore.
 * Use the SPDX License Identifier.
 * Remove useless ifdefs. The ifdef around the 'arc_pmu_match' structure declaration is useless as we refer to 'arc_pmu_match' in several places which aren't guarded with ifdef. Nevertheless the 'ARC' option selects 'OF' unconditionally, so we can simply get rid of this ifdef.
Acked-by: Vineet Gupta <[email protected]> Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: perf: map generic branches to correct hardware condition | Eugeniy Paltsev | 1 | -1/+2
So far we've mapped branches to "ijmp", which also counts conditional branches NOT taken. This makes us different from other architectures such as ARM, which seem to be counting only taken branches. So use the "ijmptak" hardware condition, which counts only jump instructions that are taken. The 'ijmptak' event is available on both ARCompact and ARCv2 ISA based cores. Signed-off-by: Eugeniy Paltsev <[email protected]> Cc: [email protected] Signed-off-by: Vineet Gupta <[email protected]> [vgupta: reworked changelog]
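A sketch of what such a mapping entry looks like; the table name and its other entries are assumptions, only the "ijmptak" condition comes from the changelog:

    /* sketch (not the full table): generic perf branch event mapped to the
     * "taken jump" hardware condition instead of plain "ijmp" */
    static const char * const arc_pmu_ev_hw_map[PERF_COUNT_HW_MAX] = {
        [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = "ijmptak",    /* was "ijmp" */
    };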
2019-01-17 | ARC: adjust memblock_reserve of kernel memory | Eugeniy Paltsev | 1 | -1/+2
In setup_arch_memory we reserve the memory area wherein the kernel is located. The current implementation may reserve more memory than actually required in case CONFIG_LINUX_LINK_BASE is not equal to CONFIG_LINUX_RAM_BASE. This happens because we calculate the start of the reserved region relative to CONFIG_LINUX_RAM_BASE and the end of the region relative to CONFIG_LINUX_LINK_BASE. For example, in case of the HSDK board we wasted 256MiB of physical memory:

------------------->8------------------------------
Memory: 770416K/1048576K available (5496K kernel code, 240K rwdata, 1064K rodata, 2200K init, 275K bss, 278160K reserved, 0K cma-reserved)
------------------->8------------------------------

Fix that. Fixes: 9ed68785f7f2b ("ARC: mm: Decouple RAM base address from kernel link addr") Cc: [email protected] #4.14+ Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
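A sketch of the intended reservation derived from the changelog; the symbol usage is assumed and is not the verbatim diff:

    /* sketch: reserve only [LINK_BASE, _end), i.e. the kernel image itself,
     * not the gap between CONFIG_LINUX_RAM_BASE and CONFIG_LINUX_LINK_BASE */
    memblock_reserve(CONFIG_LINUX_LINK_BASE,
                     __pa(_end) - CONFIG_LINUX_LINK_BASE);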
2019-01-17 | arc: remove redundant kernel-space generic-y | Masahiro Yamada | 1 | -4/+0
This commit removes redundant generic-y defines in arch/arc/include/asm/Kbuild. It is redundant to define generic-y when arch-specific implementation exists in arch/$(ARCH)/include/asm/*.h Remove the following generic-y: dma-mapping.h fb.h kmap_types.h pci.h Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARC: fix __ffs return value to avoid build warnings | Eugeniy Paltsev | 1 | -3/+3
| CC      mm/nobootmem.o
| In file included from ./include/asm-generic/bug.h:18:0,
|                  from ./arch/arc/include/asm/bug.h:32,
|                  from ./include/linux/bug.h:5,
|                  from ./include/linux/mmdebug.h:5,
|                  from ./include/linux/gfp.h:5,
|                  from ./include/linux/slab.h:15,
|                  from mm/nobootmem.c:14:
| mm/nobootmem.c: In function '__free_pages_memory':
| ./include/linux/kernel.h:845:29: warning: comparison of distinct pointer types lacks a cast
|   (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
|                             ^
| ./include/linux/kernel.h:859:4: note: in expansion of macro '__typecheck'
|   (__typecheck(x, y) && __no_side_effects(x, y))
|   ^~~~~~~~~~~
| ./include/linux/kernel.h:869:24: note: in expansion of macro '__safe_cmp'
|   __builtin_choose_expr(__safe_cmp(x, y), \
|                         ^~~~~~~~~~
| ./include/linux/kernel.h:878:19: note: in expansion of macro '__careful_cmp'
|  #define min(x, y) __careful_cmp(x, y, <)
|                    ^~~~~~~~~~~~~
| mm/nobootmem.c:104:11: note: in expansion of macro 'min'
|   order = min(MAX_ORDER - 1UL, __ffs(start));

Change __ffs return value from 'int' to 'unsigned long' as it is done in other implementations (like asm-generic, x86, etc...) to avoid build-time warnings in places where type is strictly checked. As __ffs may return values in the [0-31] interval, changing the return type to unsigned is valid. Signed-off-by: Eugeniy Paltsev <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
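A sketch of the resulting prototype; the body here is illustrative, not the ARC assembly implementation:

    /* sketch: return type widened from 'int' to 'unsigned long' so that
     * min(MAX_ORDER - 1UL, __ffs(start)) compares identical types */
    static inline __attribute__ ((const)) unsigned long __ffs(unsigned long word)
    {
        if (!word)
            return word;
        return ffs(word) - 1;    /* illustrative body only */
    }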
2019-01-17 | ARC: boot log: print Action point details | Vineet Gupta | 2 | -8/+24
This now prints the number of action points {2,4,8} and {min,full} targets supported. Signed-off-by: Vineet Gupta <[email protected]>
2019-01-17 | ARCv2: boot log: BPU return stack depth | Vineet Gupta | 2 | -3/+4
Signed-off-by: Vineet Gupta <[email protected]>
2019-01-06 | arch: remove redundant UAPI generic-y defines | Masahiro Yamada | 1 | -24/+0
Now that Kbuild automatically creates asm-generic wrappers for missing mandatory headers, it is redundant to list the same headers in generic-y and mandatory-y. Suggested-by: Sam Ravnborg <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]> Acked-by: Sam Ravnborg <[email protected]>
2019-01-06 | arch: remove stale comments "UAPI Header export list" | Masahiro Yamada | 1 | -1/+0
These comments are leftovers of commit fcc8487d477a ("uapi: export all headers under uapi directories"). Prior to that commit, exported headers must be explicitly added to header-y. Now, all headers under the uapi/ directories are exported. Signed-off-by: Masahiro Yamada <[email protected]>
2019-01-05 | Merge branch 'mount.part1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs | Linus Torvalds | 1 | -0/+1
Pull vfs mount API prep from Al Viro:
 "Mount API prereqs. Mostly that's LSM mount options cleanups. There are several minor fixes in there, but nothing earth-shattering (leaks on failure exits, mostly)"
* 'mount.part1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (27 commits)
  mount_fs: suppress MAC on MS_SUBMOUNT as well as MS_KERNMOUNT
  smack: rewrite smack_sb_eat_lsm_opts()
  smack: get rid of match_token()
  smack: take the guts of smack_parse_opts_str() into a new helper
  LSM: new method: ->sb_add_mnt_opt()
  selinux: rewrite selinux_sb_eat_lsm_opts()
  selinux: regularize Opt_... names a bit
  selinux: switch away from match_token()
  selinux: new helper - selinux_add_opt()
  LSM: bury struct security_mnt_opts
  smack: switch to private smack_mnt_opts
  selinux: switch to private struct selinux_mnt_opts
  LSM: hide struct security_mnt_opts from any generic code
  selinux: kill selinux_sb_get_mnt_opts()
  LSM: turn sb_eat_lsm_opts() into a method
  nfs_remount(): don't leak, don't ignore LSM options quietly
  btrfs: sanitize security_mnt_opts use
  selinux; don't open-code a loop in sb_finish_set_opts()
  LSM: split ->sb_set_mnt_opts() out of ->sb_kern_mount()
  new helper: security_sb_eat_lsm_opts()
  ...
2019-01-05 | Merge branch 'akpm' (patches from Andrew) | Linus Torvalds | 3 | -6/+5
Merge more updates from Andrew Morton:
 - procfs updates
 - various misc bits
 - lib/ updates
 - epoll updates
 - autofs
 - fatfs
 - a few more MM bits
* emailed patches from Andrew Morton <[email protected]>: (58 commits)
  mm/page_io.c: fix polled swap page in
  checkpatch: add Co-developed-by to signature tags
  docs: fix Co-Developed-by docs
  drivers/base/platform.c: kmemleak ignore a known leak
  fs: don't open code lru_to_page()
  fs/: remove caller signal_pending branch predictions
  mm/: remove caller signal_pending branch predictions
  arch/arc/mm/fault.c: remove caller signal_pending_branch predictions
  kernel/sched/: remove caller signal_pending branch predictions
  kernel/locking/mutex.c: remove caller signal_pending branch predictions
  mm: select HAVE_MOVE_PMD on x86 for faster mremap
  mm: speed up mremap by 20x on large regions
  mm: treewide: remove unused address argument from pte_alloc functions
  initramfs: cleanup incomplete rootfs
  scripts/gdb: fix lx-version string output
  kernel/kcov.c: mark write_comp_data() as notrace
  kernel/sysctl: add panic_print into sysctl
  panic: add options to print system info when panic happens
  bfs: extra sanity checking and static inode bitmap
  exec: separate MM_ANONPAGES and RLIMIT_STACK accounting
  ...
2019-01-04 | arch/arc/mm/fault.c: remove caller signal_pending_branch predictions | Davidlohr Bueso | 1 | -1/+1
This is already done for us internally by the signal machinery. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Davidlohr Bueso <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Cc: Vineet Gupta <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-01-04 | mm: treewide: remove unused address argument from pte_alloc functions | Joel Fernandes (Google) | 1 | -3/+2
Patch series "Add support for fast mremap". This series speeds up the mremap(2) syscall by copying page tables at the PMD level even for non-THP systems. There is concern that the extra 'address' argument that mremap passes to pte_alloc may do something subtle architecture related in the future that may make the scheme not work. Also we find that there is no point in passing the 'address' to pte_alloc since its unused. This patch therefore removes this argument tree-wide resulting in a nice negative diff as well. Also ensuring along the way that the enabled architectures do not do anything funky with the 'address' argument that goes unnoticed by the optimization. Build and boot tested on x86-64. Build tested on arm64. The config enablement patch for arm64 will be posted in the future after more testing. The changes were obtained by applying the following Coccinelle script. (thanks Julia for answering all Coccinelle questions!). Following fix ups were done manually:
 * Removal of address argument from pte_fragment_alloc
 * Removal of pte_alloc_one_fast definitions from m68k and microblaze.

// Options: --include-headers --no-includes
// Note: I split the 'identifier fn' line, so if you are manually
// running it, please unsplit it so it runs for you.

virtual patch

@pte_alloc_func_def depends on patch exists@
identifier E2;
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
type T2;
@@

fn(...
- , T2 E2
 )
{ ... }

@pte_alloc_func_proto_noarg depends on patch exists@
type T1, T2, T3, T4;
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@

(
- T3 fn(T1, T2);
+ T3 fn(T1);
|
- T3 fn(T1, T2, T4);
+ T3 fn(T1, T2);
)

@pte_alloc_func_proto depends on patch exists@
identifier E1, E2, E4;
type T1, T2, T3, T4;
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@

(
- T3 fn(T1 E1, T2 E2);
+ T3 fn(T1 E1);
|
- T3 fn(T1 E1, T2 E2, T4 E4);
+ T3 fn(T1 E1, T2 E2);
)

@pte_alloc_func_call depends on patch exists@
expression E2;
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@

fn(...
-, E2
 )

@pte_alloc_macro depends on patch exists@
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
identifier a, b, c;
expression e;
position p;
@@

(
- #define fn(a, b, c) e
+ #define fn(a, b) e
|
- #define fn(a, b) e
+ #define fn(a) e
)

Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Joel Fernandes (Google) <[email protected]> Suggested-by: Kirill A. Shutemov <[email protected]> Acked-by: Kirill A. Shutemov <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Julia Lawall <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: William Kucharski <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-01-04 | fls: change parameter to unsigned int | Matthew Wilcox | 1 | -2/+2
When testing in userspace, UBSAN pointed out that shifting into the sign bit is undefined behaviour. It doesn't really make sense to ask for the highest set bit of a negative value, so just turn the argument type into an unsigned int. Some architectures (eg ppc) already had it declared as an unsigned int, so I don't expect too many problems. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
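A sketch of the resulting generic prototype, with an illustrative portable body (the real per-arch versions typically use a builtin or an instruction):

    /* sketch: an unsigned argument avoids UB from shifting into the sign bit
     * while scanning for the highest set bit; returns 0 for x == 0 */
    static inline int fls(unsigned int x)
    {
        int r = 32;

        if (!x)
            return 0;
        if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
        if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
        if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
        if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
        if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
        return r;
    }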
2019-01-03 | Remove 'type' argument from access_ok() function | Linus Torvalds | 3 | -4/+4
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand. It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact. A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all. This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases:
 - csky still had the old "verify_area()" name as an alias.
 - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)
 - microblaze used the type argument for a debug printout
but other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though. Signed-off-by: Linus Torvalds <[email protected]>
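A sketch of the call-site change; the buffer names here are hypothetical:

    /* before: access_ok(VERIFY_WRITE, ubuf, len) */
    if (!access_ok(ubuf, len))          /* after: the type argument is gone */
        return -EFAULT;
    if (copy_to_user(ubuf, kbuf, len))
        return -EFAULT;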
2019-01-01 | Merge tag 'kgdb-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/danielt/linux | Linus Torvalds | 1 | -9/+3
Pull kgdb updates from Daniel Thompson:
 "Mostly clean ups, although while Doug was chasing down an odd lockdep warning he also did some work to improve debugger resilience when some CPUs fail to respond to the round up request. The main changes are:
  - Fixing a lockdep warning on architectures that cannot use an NMI for the round up, plus related changes to make CPU round up and all-CPU backtrace more resilient.
  - Constify the arch ops tables
  - A couple of other small clean ups
  Two of the three patchsets here include changes that spill over into arch/. Changes in the arch space are relatively narrow in scope (and directly related to kgdb). Didn't get comprehensive acks but all impacted maintainers were Cc:ed in good time"
* tag 'kgdb-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/danielt/linux:
  kgdb/treewide: constify struct kgdb_arch arch_kgdb_ops
  mips/kgdb: prepare arch_kgdb_ops for constness
  kdb: use bool for binary state indicators
  kdb: Don't back trace on a cpu that didn't round up
  kgdb: Don't round up a CPU that failed rounding up before
  kgdb: Fix kgdb_roundup_cpus() for arches who used smp_call_function()
  kgdb: Remove irq flags from roundup
2018-12-30 | kgdb/treewide: constify struct kgdb_arch arch_kgdb_ops | Christophe Leroy | 1 | -1/+1
checkpatch.pl reports the following:
  WARNING: struct kgdb_arch should normally be const
  #28: FILE: arch/mips/kernel/kgdb.c:397:
  +struct kgdb_arch arch_kgdb_ops = {
This report makes sense: as with all other ops structs, this one should also be const. This patch does the change. Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Michal Simek <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: James Hogan <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Rich Felker <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: [email protected] Acked-by: Daniel Thompson <[email protected]> Acked-by: Paul Burton <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Acked-by: Borislav Petkov <[email protected]> Acked-by: Michael Ellerman <[email protected]> (powerpc) Signed-off-by: Daniel Thompson <[email protected]>
2018-12-30 | kgdb: Fix kgdb_roundup_cpus() for arches who used smp_call_function() | Douglas Anderson | 1 | -8/+2
When I had lockdep turned on and dropped into kgdb I got a nice splat on my system. Specifically it hit: DEBUG_LOCKS_WARN_ON(current->hardirq_context). Specifically it looked like this:

  sysrq: SysRq : DEBUG
  ------------[ cut here ]------------
  DEBUG_LOCKS_WARN_ON(current->hardirq_context)
  WARNING: CPU: 0 PID: 0 at .../kernel/locking/lockdep.c:2875 lockdep_hardirqs_on+0xf0/0x160
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.19.0 #27
  pstate: 604003c9 (nZCv DAIF +PAN -UAO)
  pc : lockdep_hardirqs_on+0xf0/0x160
  ...
  Call trace:
   lockdep_hardirqs_on+0xf0/0x160
   trace_hardirqs_on+0x188/0x1ac
   kgdb_roundup_cpus+0x14/0x3c
   kgdb_cpu_enter+0x53c/0x5cc
   kgdb_handle_exception+0x180/0x1d4
   kgdb_compiled_brk_fn+0x30/0x3c
   brk_handler+0x134/0x178
   do_debug_exception+0xfc/0x178
   el1_dbg+0x18/0x78
   kgdb_breakpoint+0x34/0x58
   sysrq_handle_dbg+0x54/0x5c
   __handle_sysrq+0x114/0x21c
   handle_sysrq+0x30/0x3c
   qcom_geni_serial_isr+0x2dc/0x30c
  ...
  ...
  irq event stamp: ...45
  hardirqs last enabled at (...44): [...] __do_softirq+0xd8/0x4e4
  hardirqs last disabled at (...45): [...] el1_irq+0x74/0x130
  softirqs last enabled at (...42): [...] _local_bh_enable+0x2c/0x34
  softirqs last disabled at (...43): [...] irq_exit+0xa8/0x100
  ---[ end trace adf21f830c46e638 ]---

Looking closely at it, it seems like a really bad idea to be calling local_irq_enable() in kgdb_roundup_cpus(). If nothing else that seems like it could violate spinlock semantics and cause a deadlock. Instead, let's use a private csd alongside smp_call_function_single_async() to round up the other CPUs. Using smp_call_function_single_async() doesn't require interrupts to be enabled so we can remove the offending bit of code. In order to avoid duplicating this across all the architectures that use the default kgdb_roundup_cpus(), we'll add a "weak" implementation to debug_core.c. Looking at all the people who previously had copies of this code, there were a few variants. I've attempted to keep the variants working like they used to. Specifically:
 * For arch/arc we passed NULL to kgdb_nmicallback() instead of get_irq_regs().
 * For arch/mips there was a bit of extra code around kgdb_nmicallback()
NOTE: In this patch we will still get into trouble if we try to round up a CPU that failed to round up before. We'll try to round it up again and potentially hang when we try to grab the csd lock. That's not new behavior but we'll still try to do better in a future patch. Suggested-by: Daniel Thompson <[email protected]> Signed-off-by: Douglas Anderson <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: James Hogan <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Daniel Thompson <[email protected]>
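A sketch of the round-up mechanism described above, as it might look in the weak helper added to debug_core.c; the callback name kgdb_call_nmi_hook is an assumption:

    /* sketch: a per-cpu csd lets us IPI the other CPUs without ever having
     * to enable interrupts inside the debugger */
    static DEFINE_PER_CPU(call_single_data_t, kgdb_roundup_csd);

    void __weak kgdb_roundup_cpus(void)
    {
        call_single_data_t *csd;
        int this_cpu = raw_smp_processor_id();
        int cpu;

        for_each_online_cpu(cpu) {
            if (cpu == this_cpu)
                continue;
            csd = &per_cpu(kgdb_roundup_csd, cpu);
            csd->func = kgdb_call_nmi_hook;    /* assumed callback name */
            smp_call_function_single_async(cpu, csd);
        }
    }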
2018-12-30 | kgdb: Remove irq flags from roundup | Douglas Anderson | 1 | -1/+1
The function kgdb_roundup_cpus() was passed a parameter that was documented as:
> the flags that will be used when restoring the interrupts. There is
> local_irq_save() call before kgdb_roundup_cpus().
Nobody used those flags. Anyone who wanted to temporarily turn on interrupts just did local_irq_enable() and local_irq_disable() without looking at them. So we can definitely remove the flags. Signed-off-by: Douglas Anderson <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: James Hogan <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Daniel Thompson <[email protected]>
2018-12-29 | Merge tag 'kconfig-v4.21-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild | Linus Torvalds | 3 | -25/+3
Pull Kconfig file consolidation from Masahiro Yamada:
 "Consolidation of bus (PCI, PCMCIA, EISA, RapidIO) config entries by Christoph Hellwig. Currently, every architecture that wants to provide common peripheral busses needs to add some boilerplate code and include the right Kconfig files. This series instead just selects the presence (when needed) and then handles everything in the bus-specific Kconfig file under drivers/"
* tag 'kconfig-v4.21-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  pcmcia: remove per-arch PCMCIA config entry
  eisa: consolidate EISA Kconfig entry in drivers/eisa
  rapidio: consolidate RAPIDIO config entry in drivers/rapidio
  pcmcia: allow PCMCIA support independent of the architecture
  PCI: consolidate the PCI_SYSCALL symbol
  PCI: consolidate the PCI_DOMAINS and PCI_DOMAINS_GENERIC config options
  PCI: consolidate PCI config entry in drivers/pci
  MIPS: remove the HT_PCI config option
2018-12-28 | Merge tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux | Linus Torvalds | 1 | -20/+5
Pull Devicetree updates from Rob Herring:
 "The biggest highlight here is the start of using json-schema for DT bindings. Being able to validate bindings has been discussed for years with little progress.
  - Initial support for DT bindings using json-schema language. This is the start of converting DT bindings from free-form text to a structured format.
  - Reworking of initrd address initialization. This moves to using the phys address instead of virt addr in the DT parsing code. This rework was motivated by CONFIG_DEV_BLK_INITRD causing unnecessary rebuilding of lots of files.
  - Fix stale phandle entries in phandle cache
  - DT overlay validation improvements. This exposed several memory leak bugs which have been fixed.
  - Use node name and device_type helper functions in DT code
  - Last remaining conversions to using %pOFn printk specifier instead of device_node.name directly
  - Create new common RTC binding doc and move all trivial RTC devices out of trivial-devices.txt.
  - New bindings for Freescale MAG3110 magnetometer, Cadence Sierra PHY, and Xen shared memory
  - Update dtc to upstream version v1.4.7-57-gf267e674d145"
* tag 'devicetree-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (68 commits)
  of: __of_detach_node() - remove node from phandle cache
  of: of_node_get()/of_node_put() nodes held in phandle cache
  gpio-omap.txt: add reg and interrupts properties
  dt-bindings: mrvl,intc: fix a trivial typo
  dt-bindings: iio: magnetometer: add dt-bindings for freescale mag3110
  dt-bindings: Convert trivial-devices.txt to json-schema
  dt-bindings: arm: mrvl: amend Browstone compatible string
  dt-bindings: arm: Convert Tegra board/soc bindings to json-schema
  dt-bindings: arm: Convert ZTE board/soc bindings to json-schema
  dt-bindings: arm: Add missing Xilinx boards
  dt-bindings: arm: Convert Xilinx board/soc bindings to json-schema
  dt-bindings: arm: Convert VIA board/soc bindings to json-schema
  dt-bindings: arm: Convert ST STi board/soc bindings to json-schema
  dt-bindings: arm: Convert SPEAr board/soc bindings to json-schema
  dt-bindings: arm: Convert CSR SiRF board/soc bindings to json-schema
  dt-bindings: arm: Convert QCom board/soc bindings to json-schema
  dt-bindings: arm: Convert TI nspire board/soc bindings to json-schema
  dt-bindings: arm: Convert TI davinci board/soc bindings to json-schema
  dt-bindings: arm: Convert Calxeda board/soc bindings to json-schema
  dt-bindings: arm: Convert Altera board/soc bindings to json-schema
  ...
2018-12-28 | Merge tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 3 | -4/+2
Pull DMA mapping updates from Christoph Hellwig:
 "A huge update this time, but a lot of that is just consolidating or removing code:
  - provide a common DMA_MAPPING_ERROR definition and avoid indirect calls for dma_map_* error checking
  - use direct calls for the DMA direct mapping case, avoiding huge retpoline overhead for high performance workloads
  - merge the swiotlb dma_map_ops into dma-direct
  - provide a generic remapping DMA consistent allocator for architectures that have devices that perform DMA that is not cache coherent. Based on the existing arm64 implementation and also used for csky now.
  - improve the dma-debug infrastructure, including dynamic allocation of entries (Robin Murphy)
  - default to providing chaining scatterlist everywhere, with opt-outs for the few architectures (alpha, parisc, most arm32 variants) that can't cope with it
  - misc sparc32 dma-related cleanups
  - remove the dma_mark_clean arch hook used by swiotlb on ia64 and replace it with the generic noncoherent infrastructure
  - fix the return type of dma_set_max_seg_size (Niklas Söderlund)
  - move the dummy dma ops for not DMA capable devices from arm64 to common code (Robin Murphy)
  - ensure dma_alloc_coherent returns zeroed memory to avoid kernel data leaks through userspace. We already did this for most common architectures, but this ensures we do it everywhere. dma_zalloc_coherent has been deprecated and can hopefully be removed after -rc1 with a coccinelle script"
* tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits)
  dma-mapping: fix inverted logic in dma_supported
  dma-mapping: deprecate dma_zalloc_coherent
  dma-mapping: zero memory returned from dma_alloc_*
  sparc/iommu: fix ->map_sg return value
  sparc/io-unit: fix ->map_sg return value
  arm64: default to the direct mapping in get_arch_dma_ops
  PCI: Remove unused attr variable in pci_dma_configure
  ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
  dma-mapping: bypass indirect calls for dma-direct
  vmd: use the proper dma_* APIs instead of direct methods calls
  dma-direct: merge swiotlb_dma_ops into the dma_direct code
  dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
  dma-direct: improve addressability error reporting
  swiotlb: remove dma_mark_clean
  swiotlb: remove SWIOTLB_MAP_ERROR
  ACPI / scan: Refactor _CCA enforcement
  dma-mapping: factor out dummy DMA ops
  dma-mapping: always build the direct mapping code
  dma-mapping: move dma_cache_sync out of line
  dma-mapping: move various slow path functions out of line
  ...
2018-12-25 | Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -0/+1
Pull timer updates from Thomas Gleixner:
 "The timer department delivers the following christmas presents:
  Core code:
   - Use proper seqcount initializer to make lockdep happy
   - SPDX annotations and cleanup of license boilerplates
   - Use DEFINE_SHOW_ATTRIBUTE() instead of open coding it
   - Minor cleanups
  Driver code:
   - Add the sched_clock for the arc timer (Alexey Brodkin)
   - Change the file timer names for riscv, rockchip, tegra20, sun4i and meson6 (Daniel Lezcano)
   - Add the DT bindings for r8a7796, r8a77470 and r8a774a1 (Biju Das)
   - Remove the early platform driver registration for timer-ti-dm (Bartosz Golaszewski)
   - Provide the sched_clock for the riscv timer (Anup Patel)
   - Add support for ARM64 for the imx-gpt and convert the imx-tpm to the timer-of API (Anson Huang)
   - Remove useless irq protection for the imx-gpt (Clément Péron)
   - Remove a duplicate function name for the vt8500 (Dan Carpenter)
   - Remove obsolete inclusion of <asm/smp_twd.h> for the tegra20 (Geert Uytterhoeven)
   - Demote the prcmu and the custom sched_clock for the dbx500 and the ux500 (Linus Walleij)
   - Add a new timer clock for the RDA8810PL (Manivannan Sadhasivam)
   - Rename the macro to stick to the register name and add the delay timer (Martin Blumenstingl)
   - Switch the bcm2835 to the SPDX identifier (Stefan Wahren)
   - Fix the interrupt register access on the fttmr010 (Tao Ren)
   - Add missing of_node_put in the initialization path on the integrator-ap (Yangtao Li)"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  dt-bindings: timer: Document RDA8810PL SoC timer
  clocksource/drivers/rda: Add clock driver for RDA8810PL SoC
  clocksource/drivers/meson6: Change name meson6_timer timer-meson6
  clocksource/drivers/sun4i: Change name sun4i_timer to timer-sun4i
  clocksource/drivers/tegra20: Change name tegra20_timer to timer-tegra20
  clocksource/drivers/rockchip: Change name rockchip_timer to timer-rockchip
  clocksource/drivers/riscv: Change name riscv_timer to timer-riscv
  clocksource/drivers/riscv_timer: Provide the sched_clock
  clocksource/drivers/timer-imx-tpm: Specify clock name for timer-of
  clocksource/drivers/fttmr010: Fix invalid interrupt register access
  clocksource/drivers/integrator-ap: Add missing of_node_put()
  clocksource/drivers/bcm2835: Switch to SPDX identifier
  dt-bindings: timer: renesas, cmt: Document r8a774a1 CMT support
  clocksource/drivers/timer-imx-tpm: Convert the driver to timer-of
  clocksource/drivers/arc_timer: Utilize generic sched_clock
  dt-bindings: timer: renesas, cmt: Document r8a77470 CMT support
  dt-bindings: timer: renesas, cmt: Document r8a7796 CMT support
  clocksource/drivers/imx-gpt: Remove unnecessary irq protection
  clocksource/drivers/imx-gpt: Add support for ARM64
  clocksource/drivers/meson6_timer: Implement the ARM delay timer
  ...
2018-12-20 | vfs: Suppress MS_* flag defs within the kernel unless explicitly enabled | David Howells | 1 | -0/+1
Only the mount namespace code that implements mount(2) should be using the MS_* flags. Suppress them inside the kernel unless uapi/linux/mount.h is included. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]> Reviewed-by: David Howells <[email protected]>
2018-12-20 | dma-mapping: zero memory returned from dma_alloc_* | Christoph Hellwig | 1 | -1/+1
If we want to map memory from the DMA allocator to userspace it must be zeroed at allocation time to prevent stale data leaks. We already do this on most common architectures, but some architectures don't do this yet, fix them up, either by passing GFP_ZERO when we use the normal page allocator or doing a manual memset otherwise. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> [m68k] Acked-by: Sam Ravnborg <[email protected]> [sparc]
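A sketch of the kind of fix-up applied to an architecture that allocates straight from the page allocator; the hook name and signature follow the generic arch_dma_alloc() interface of that era, but this is illustrative rather than a specific file's diff:

    /* sketch: request pre-zeroed pages so no stale kernel data can leak to
     * userspace when the buffer is later mapped there */
    void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
                         gfp_t gfp, unsigned long attrs)
    {
        struct page *page;

        page = alloc_pages(gfp | __GFP_ZERO, get_order(size));
        if (!page)
            return NULL;
        *handle = page_to_phys(page);
        return page_address(page);
    }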
2018-12-18 | clocksource/drivers/arc_timer: Utilize generic sched_clock | Alexey Brodkin | 1 | -0/+1
It turned out we used to use the default implementation of sched_clock() from kernel/sched/clock.c which was as precise as 1/HZ, i.e. by default we had 10 msec granularity of time measurement. Now given ARC built-in timers are clocked with the same frequency as CPU cores we may get much higher precision of time tracking. Thus we switch to generic sched_clock which really reads ARC hardware counters. This is especially helpful for measuring short events. That's what we used to have:

------------------------------>8------------------------
$ perf stat /bin/sh -c /root/lmbench-master/bin/arc/hello > /dev/null

 Performance counter stats for '/bin/sh -c /root/lmbench-master/bin/arc/hello':

      10.000000   task-clock (msec)    #  2.832 CPUs utilized
              1   context-switches     #  0.100 K/sec
              1   cpu-migrations       #  0.100 K/sec
             63   page-faults          #  0.006 M/sec
        3049480   cycles               #  0.305 GHz
        1091259   instructions         #  0.36  insn per cycle
         256828   branches             # 25.683 M/sec
          27026   branch-misses        # 10.52% of all branches

    0.003530687 seconds time elapsed
    0.000000000 seconds user
    0.010000000 seconds sys
------------------------------>8------------------------

And now we'll see:

------------------------------>8------------------------
$ perf stat /bin/sh -c /root/lmbench-master/bin/arc/hello > /dev/null

 Performance counter stats for '/bin/sh -c /root/lmbench-master/bin/arc/hello':

       3.004322   task-clock (msec)    #  0.865 CPUs utilized
              1   context-switches     #  0.333 K/sec
              1   cpu-migrations       #  0.333 K/sec
             63   page-faults          #  0.021 M/sec
        2986734   cycles               #  0.994 GHz
        1087466   instructions         #  0.36  insn per cycle
         255209   branches             # 84.947 M/sec
          26002   branch-misses        # 10.19% of all branches

    0.003474829 seconds time elapsed
    0.003519000 seconds user
    0.000000000 seconds sys
------------------------------>8------------------------

Note how much more meaningful the second output is - time spent for execution pretty much matches the number of cycles spent (we're running @ 1GHz here). Signed-off-by: Alexey Brodkin <[email protected]> Cc: Daniel Lezcano <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Acked-by: Vineet Gupta <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
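A sketch of the driver-side hook-up; the read-callback name and the counter register used here are assumptions, not the verbatim driver change:

    #include <linux/sched_clock.h>

    static u64 notrace arc_timer_sched_clock_read(void)     /* assumed name */
    {
        return (u64)read_aux_reg(ARC_REG_TIMER1_CNT);        /* assumed counter */
    }

    static void __init arc_timer_setup_sched_clock(unsigned long freq)
    {
        /* register the 32-bit free-running counter, clocked at the CPU
         * frequency, as the kernel's sched_clock source */
        sched_clock_register(arc_timer_sched_clock_read, 32, freq);
    }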
2018-12-13 | dma-mapping: bypass indirect calls for dma-direct | Christoph Hellwig | 1 | -1/+1
Avoid expensive indirect calls in the fast path DMA mapping operations by directly calling the dma_direct_* ops if we are using the directly mapped DMA operations. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Tony Luck <[email protected]>
2018-12-13 | dma-mapping: always build the direct mapping code | Christoph Hellwig | 1 | -1/+0
All architectures except for sparc64 use the dma-direct code in some form, and even for sparc64 we had the discussion of a direct mapping mode a while ago. In preparation for directly calling the direct mapping code don't bother having it optionally but always build the code in. This is a minor hardship for some powerpc and arm configs that don't pull it in yet (although they should in a relase ot two), and sparc64 which currently doesn't need it at all, but it will reduce the ifdef mess we'd otherwise need significantly. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Tony Luck <[email protected]>
2018-12-06 | arch: switch the default on ARCH_HAS_SG_CHAIN | Christoph Hellwig | 1 | -1/+0
These days architectures are mostly out of the business of dealing with struct scatterlist at all, unless they have architecture specific iommu drivers. Replace the ARCH_HAS_SG_CHAIN symbol with a ARCH_NO_SG_CHAIN one only enabled for architectures with horrible legacy iommu drivers like alpha and parisc, and conditionally for arm which wants to keep it disable for legacy platforms. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Palmer Dabbelt <[email protected]>
2018-11-30 | ARC: io.h: Implement reads{x}()/writes{x}() | Jose Abreu | 1 | -0/+72
Some ARC CPU's do not support unaligned loads/stores. Currently, generic implementation of reads{b/w/l}()/writes{b/w/l}() is being used with ARC. This can lead to misfunction of some drivers as generic functions do a plain dereference of a pointer that can be unaligned. Let's use {get/put}_unaligned() helpers instead of plain dereference of pointer in order to fix. The helpers allow to get and store data from an unaligned address whilst preserving the CPU internal alignment. According to [1], the use of these helpers are costly in terms of performance so we added an initial check for a buffer already aligned so that the usage of the helpers can be avoided, when possible. [1] Documentation/unaligned-memory-access.txt Cc: Alexey Brodkin <[email protected]> Cc: Joao Pinto <[email protected]> Cc: David Laight <[email protected]> Tested-by: Vitor Soares <[email protected]> Signed-off-by: Jose Abreu <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
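A sketch of the shape of such a routine; this is illustrative (note the hypothetical name) and not the exact arch/arc/include/asm/io.h code:

    /* sketch: bulk-read 'count' 32-bit values from MMIO 'addr' into 'buffer',
     * tolerating an unaligned destination buffer; the alignment is checked
     * once up front so the aligned fast path avoids the unaligned helpers */
    static inline void readsl_sketch(const volatile void __iomem *addr,
                                     void *buffer, unsigned int count)
    {
        if (((unsigned long)buffer & 0x3) == 0) {
            u32 *buf = buffer;

            while (count--)
                *buf++ = readl(addr);               /* aligned fast path */
        } else {
            u32 *buf = buffer;

            while (count--) {
                put_unaligned(readl(addr), buf);    /* safe for unaligned dst */
                buf++;
            }
        }
    }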
2018-11-30 | ARC: change defconfig defaults to ARCv2 | Kevin Hilman | 7 | -2/+7
Change the default defconfig (used with 'make defconfig') to the ARCv2 nsim_hs_defconfig, and also switch the default Kconfig ISA selection to ARCv2. This allows several default defconfigs (e.g. make defconfig, make allnoconfig, make tinyconfig) to all work with ARCv2 by default. Note since we change default architecture from ARCompact to ARCv2 it's required to explicitly mention architecture type in ARCompact defconfigs otherwise ARCv2 will be implied and binaries will be generated for ARCv2. Cc: <[email protected]> # 4.4.x Signed-off-by: Kevin Hilman <[email protected]> Signed-off-by: Alexey Brodkin <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2018-11-26 | arch: Move initrd= parsing into do_mounts_initrd.c | Florian Fainelli | 1 | -20/+5
ARC, ARM, ARM64 and Unicore32 are all capable of parsing the "initrd=" command line parameter to allow specifying the physical address and size of an initrd. Move that parsing into init/do_mounts_initrd.c such that we no longer duplicate that logic. Signed-off-by: Florian Fainelli <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Signed-off-by: Rob Herring <[email protected]>