Age | Commit message | Author | Files | Lines
2017-08-29Revert "libata: quirk read log on no-name M.2 SSD"Tejun Heo2-5/+0
This reverts commit 35f0b6a779b8b7a98faefd7c1c660b4dac9a5c26. We now conditionalize issuing of READ LOG PAGE on the TRUSTED COMPUTING SUPPORTED bit in the identity data and this shouldn't be necessary. Signed-off-by: Tejun Heo <[email protected]>
2017-08-29libata: check for trusted computing in IDENTIFY DEVICE dataChristoph Hellwig2-1/+12
ATA-8 and later mirror the TRUSTED COMPUTING SUPPORTED bit in word 48 of the IDENTIFY DEVICE data. Check this before issuing a READ LOG PAGE command to avoid issues with buggy devices. The only downside is that we can't support Security Send / Receive for a device with an older revision, due to the conflicting use of this field in earlier specifications. tj: The reason we need this is that some devices which don't support READ LOG PAGE lock up after being issued that command. Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: David Ahern <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
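The check boils down to reading one bit of the identify buffer. A minimal sketch of what such a helper can look like (the helper name and the version guard here are assumptions based on this changelog, not a verbatim copy of the patch):

    /* IDENTIFY DEVICE word 48: TRUSTED COMPUTING SUPPORTED (ATA-8+) */
    #define ATA_ID_TRUSTED	48

    static inline bool ata_id_has_trusted(const u16 *id)
    {
    	/* Earlier specs used word 48 for something else entirely,
    	 * so refuse to interpret it on pre-ATA-8 devices. */
    	if (ata_id_major_version(id) <= 7)
    		return false;
    	return id[ATA_ID_TRUSTED] & (1 << 0);
    }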
2017-08-29perf symbols: Fix plt entry calculation for ARM and AARCH64Li Bin1-5/+22
On x86, the plt header size is the same as the plt entry size, and both can be identified from the plt shdr's sh_entsize. But we can't assume that the sh_entsize of the plt shdr is always the plt entry size on every architecture, and the plt header size may differ from the plt entry size on some architectures. On ARM, referring to the binutils implementation, the plt header size is 20 bytes and the plt entry size is 12 bytes (not considering the FOUR_WORD_PLT case). The plt section is as follows:

    Disassembly of section .plt:
    000004a0 <__cxa_finalize@plt-0x14>:
     4a0: e52de004  push {lr}            ; (str lr, [sp, #-4]!)
     4a4: e59fe004  ldr lr, [pc, #4]     ; 4b0 <_init+0x1c>
     4a8: e08fe00e  add lr, pc, lr
     4ac: e5bef008  ldr pc, [lr, #8]!
     4b0: 00008424  .word 0x00008424

    000004b4 <__cxa_finalize@plt>:
     4b4: e28fc600  add ip, pc, #0, 12
     4b8: e28cca08  add ip, ip, #8, 20   ; 0x8000
     4bc: e5bcf424  ldr pc, [ip, #1060]! ; 0x424

    000004c0 <printf@plt>:
     4c0: e28fc600  add ip, pc, #0, 12
     4c4: e28cca08  add ip, ip, #8, 20   ; 0x8000
     4c8: e5bcf41c  ldr pc, [ip, #1052]! ; 0x41c

On AARCH64, the plt header size is 32 bytes and the plt entry size is 16 bytes. The plt section is as follows:

    Disassembly of section .plt:
    0000000000000560 <__cxa_finalize@plt-0x20>:
     560: a9bf7bf0  stp x16, x30, [sp,#-16]!
     564: 90000090  adrp x16, 10000 <__FRAME_END__+0xf8a8>
     568: f944be11  ldr x17, [x16,#2424]
     56c: 9125e210  add x16, x16, #0x978
     570: d61f0220  br x17
     574: d503201f  nop
     578: d503201f  nop
     57c: d503201f  nop

    0000000000000580 <__cxa_finalize@plt>:
     580: 90000090  adrp x16, 10000 <__FRAME_END__+0xf8a8>
     584: f944c211  ldr x17, [x16,#2432]
     588: 91260210  add x16, x16, #0x980
     58c: d61f0220  br x17

    0000000000000590 <__gmon_start__@plt>:
     590: 90000090  adrp x16, 10000 <__FRAME_END__+0xf8a8>
     594: f944c611  ldr x17, [x16,#2440]
     598: 91262210  add x16, x16, #0x988
     59c: d61f0220  br x17

NOTES: In addition to ARM and AARCH64, other architectures, such as s390/alpha/mips/parisc/powerpc/sh/sparc/xtensa, also need to consider this issue.

Signed-off-by: Li Bin <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexis Berlemont <[email protected]> Cc: David Tolnay <[email protected]> Cc: Hanjun Guo <[email protected]> Cc: Hemant Kumar <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Milian Wolff <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Wang Nan <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
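In practice the fix means perf can no longer derive both sizes from sh_entsize and has to hard-code the per-architecture values quoted above. A hypothetical sketch of that dispatch (variable names invented for illustration):

    u64 plt_header_size, plt_entry_size;

    if (!strcmp(arch, "arm")) {		/* ignoring FOUR_WORD_PLT */
    	plt_header_size = 20;
    	plt_entry_size = 12;
    } else if (!strcmp(arch, "aarch64")) {
    	plt_header_size = 32;
    	plt_entry_size = 16;
    } else {
    	/* e.g. x86: header and entries share sh_entsize */
    	plt_header_size = shdr_plt.sh_entsize;
    	plt_entry_size = shdr_plt.sh_entsize;
    }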
2017-08-29Merge branch 'stable/for-jens-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-linusJens Axboe1-2/+8
Pull xen-blkback fix from Konrad: "[...] A bug-fix when shutting down xen block backend driver with multiple queues and the driver not clearing all of them."
2017-08-29perf probe: Fix kprobe blacklist checking conditionLi Bin1-1/+1
Commit 9aaf5a5f479b ("perf probe: Check kprobes blacklist when adding new events") made 'perf probe' check the blacklist of functions which cannot be probed. But the checking condition is wrong: the end_addr of a symbol, which is the start_addr of the next symbol, must not be included.

Committer notes: IOW make it match its kernel counterpart in kernel/kprobes.c: bool within_kprobe_blacklist(unsigned long addr). Each entry has as its end address not its last address, but the first address _outside_ that symbol, which for related functions is the first address of the next symbol, like these from kernel/trace/trace_probe.c:

    0xffffffffbd198df0-0xffffffffbd198e40 print_type_u8
    0xffffffffbd198e40-0xffffffffbd198e90 print_type_u16
    0xffffffffbd198e90-0xffffffffbd198ee0 print_type_u32
    0xffffffffbd198ee0-0xffffffffbd198f30 print_type_u64
    0xffffffffbd198f30-0xffffffffbd198f80 print_type_s8
    0xffffffffbd198f80-0xffffffffbd198fd0 print_type_s16
    0xffffffffbd198fd0-0xffffffffbd199020 print_type_s32
    0xffffffffbd199020-0xffffffffbd199070 print_type_s64
    0xffffffffbd199070-0xffffffffbd1990c0 print_type_x8
    0xffffffffbd1990c0-0xffffffffbd199110 print_type_x16
    0xffffffffbd199110-0xffffffffbd199160 print_type_x32
    0xffffffffbd199160-0xffffffffbd1991b0 print_type_x64

But not always:

    0xffffffffbd1997b0-0xffffffffbd1997c0 fetch_kernel_stack_address (kernel/trace/trace_probe.c)
    0xffffffffbd1c57f0-0xffffffffbd1c58b0 __context_tracking_enter (kernel/context_tracking.c)

Signed-off-by: Li Bin <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Wang Nan <[email protected]> Cc: [email protected] Fixes: 9aaf5a5f479b ("perf probe: Check kprobes blacklist when adding new events") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
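In other words, the blacklist check must treat each entry as a half-open interval. A sketch of the corrected condition (not the literal patch hunk):

    static bool within_blacklist(unsigned long addr,
    			     unsigned long start, unsigned long end)
    {
    	/* end is the first address _outside_ the symbol, so it
    	 * must be excluded; the old code used addr <= end */
    	return addr >= start && addr < end;
    }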
2017-08-29MIPS: Remove pt_regs adjustments in indirect syscall handlerJames Cowgill2-17/+0
If a restartable syscall is called using the indirect o32 syscall handler - eg: syscall(__NR_waitid, ...), then it is possible for the incorrect arguments to be passed to the syscall after it has been restarted. This is because the syscall handler tries to shift all the registers down one place in pt_regs so that when the syscall is restarted, the "real" syscall is called instead. Unfortunately it only shifts the arguments passed in registers, not the arguments on the user stack. This causes the 4th argument to be duplicated when the syscall is restarted. Fix by removing all the pt_regs shifting so that the indirect syscall handler is called again when the syscall is restarted. The comment "some syscalls like execve get their arguments from struct pt_regs" is long out of date so this should now be safe. Signed-off-by: James Cowgill <[email protected]> Reviewed-by: James Hogan <[email protected]> Tested-by: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15856/ Signed-off-by: Ralf Baechle <[email protected]>
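For reference, the failing pattern is an indirect, restartable syscall whose later arguments live on the user stack, e.g. (userspace sketch):

    #include <signal.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int waitid_indirect(siginfo_t *info)
    {
    	/* On o32, arguments beyond the registers are passed on the
    	 * user stack, which the old register shifting never moved. */
    	return syscall(__NR_waitid, P_ALL, 0, info, WEXITED, NULL);
    }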
2017-08-29MIPS: seccomp: Fix indirect syscall argsJames Hogan1-6/+4
Since commit 669c4092225f ("MIPS: Give __secure_computing() access to syscall arguments."), upon syscall entry when seccomp is enabled, syscall_trace_enter() passes a carefully prepared struct seccomp_data containing syscall arguments to __secure_computing(). Unfortunately it directly uses mips_get_syscall_arg() and fails to take into account the indirect O32 system calls (i.e. syscall(2)) which put the system call number in a0 and have the arguments shifted up by one entry. We can't just revert that commit as samples/bpf/tracex5 would break again, so use syscall_get_arguments() which already takes indirect syscalls into account instead of directly using mips_get_syscall_arg(), similar to what populate_seccomp_data() does. This also removes the redundant error checking of the mips_get_syscall_arg() return value (get_user() already zeroes the result if an argument from the stack can't be loaded). Reported-by: James Cowgill <[email protected]> Fixes: 669c4092225f ("MIPS: Give __secure_computing() access to syscall arguments.") Signed-off-by: James Hogan <[email protected]> Reviewed-by: Kees Cook <[email protected]> Cc: David Daney <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Will Drewry <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16994/ Signed-off-by: Ralf Baechle <[email protected]>
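The gist of the fix, as a sketch (the surrounding syscall_trace_enter() context is abbreviated, and the exact hunk may differ):

    struct seccomp_data sd;
    unsigned long args[6];
    int i;

    sd.nr = syscall;
    sd.arch = syscall_get_arch();
    /* Unlike raw mips_get_syscall_arg(), this helper already
     * accounts for indirect o32 syscalls shifting the args. */
    syscall_get_arguments(current, regs, 0, 6, args);
    for (i = 0; i < 6; i++)
    	sd.args[i] = args[i];
    sd.instruction_pointer = KSTK_EIP(current);

    if (__secure_computing(&sd) == -1)
    	return -1;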
2017-08-29clocksource/drivers/bcm2835: Remove message for a memory allocation failureMarkus Elfring1-1/+0
The bcm2835_timer_init() function emits an error message in case of a memory allocation failure. This is pointless as the mm core does that already. Remove this message. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29locking/lockdep/selftests: Fix mixed read-write ABBA testsPeter Zijlstra1-0/+6
Commit: e91498589746 ("locking/lockdep/selftests: Add mixed read-write ABBA tests") adds an explicit FAILURE to the locking selftest but overlooks the fact that this kills lockdep. Fudge the test to avoid this. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29sched/completion: Avoid unnecessary stack allocation for COMPLETION_INITIALIZER_ONSTACK()Boqun Feng1-1/+1
In theory, COMPLETION_INITIALIZER_ONSTACK() should never affect the stack allocation of the caller. However, on some compilers, a temporary structure is allocated for the return value of COMPLETION_INITIALIZER_ONSTACK(). For example in write_journal() with LOCKDEP_COMPLETIONS=y (GCC 7.1.1):

    io_comp.comp = COMPLETION_INITIALIZER_ONSTACK(io_comp.comp);
        2462:  e8 00 00 00 00        callq  2467 <write_journal+0x47>
        2467:  48 8d 85 80 fd ff ff  lea    -0x280(%rbp),%rax
        246e:  48 c7 c6 00 00 00 00  mov    $0x0,%rsi
        2475:  48 c7 c2 00 00 00 00  mov    $0x0,%rdx
    x->done = 0;
        247c:  c7 85 90 fd ff ff 00  movl   $0x0,-0x270(%rbp)
        2483:  00 00 00
    init_waitqueue_head(&x->wait);
        2486:  48 8d 78 18           lea    0x18(%rax),%rdi
        248a:  e8 00 00 00 00        callq  248f <write_journal+0x6f>
    if (commit_start + commit_sections <= ic->journal_sections) {
        248f:  41 8b 87 a8 00 00 00  mov    0xa8(%r15),%eax
    io_comp.comp = COMPLETION_INITIALIZER_ONSTACK(io_comp.comp);
        2496:  48 8d bd e8 f9 ff ff  lea    -0x618(%rbp),%rdi
        249d:  48 8d b5 90 fd ff ff  lea    -0x270(%rbp),%rsi
        24a4:  b9 17 00 00 00        mov    $0x17,%ecx
        24a9:  f3 48 a5              rep movsq %ds:(%rsi),%es:(%rdi)
    if (commit_start + commit_sections <= ic->journal_sections) {
        24ac:  41 39 c6              cmp    %eax,%r14d
    io_comp.comp = COMPLETION_INITIALIZER_ONSTACK(io_comp.comp);
        24af:  48 8d bd 90 fd ff ff  lea    -0x270(%rbp),%rdi
        24b6:  48 8d b5 e8 f9 ff ff  lea    -0x618(%rbp),%rsi
        24bd:  b9 17 00 00 00        mov    $0x17,%ecx
        24c2:  f3 48 a5              rep movsq %ds:(%rsi),%es:(%rdi)

We can clearly see the temporary structure being allocated, and the compiler also performing two meaningless memcpy's with "rep movsq". According to https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html#Statement-Exprs the value of a statement expression is returned by value, so the temporary variable is created in COMPLETION_INITIALIZER_ONSTACK(), and that is why the temporary structures are allocated. To fix this, make the brace block in COMPLETION_INITIALIZER_ONSTACK() return a pointer and dereference it outside the block rather than return the whole structure; this way we teach the compiler not to do the unnecessary stack allocation. This also reduces the stack size even with !LOCKDEP; for example in write_journal(), compiled with GCC 7.1.1, the result of

    objdump -d drivers/md/dm-integrity.o | ./scripts/checkstack.pl x86

changes as follows:

    before: 0x0000246a write_journal [dm-integrity.o]: 696
    after:  0x00002b7a write_journal [dm-integrity.o]: 296

Reported-by: Arnd Bergmann <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Byungchul Park <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
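Putting the before/after side by side (reconstructed from this changelog and the error message quoted in the next entry, so treat it as a sketch):

    /* before: the statement expression yields the whole struct by
     * value, so the compiler materializes a stack temporary */
    #define COMPLETION_INITIALIZER_ONSTACK(work) \
    	({ init_completion(&work); work; })

    /* after: yield a pointer and dereference it outside the block;
     * no temporary struct completion is ever created */
    #undef COMPLETION_INITIALIZER_ONSTACK
    #define COMPLETION_INITIALIZER_ONSTACK(work) \
    	(*({ init_completion(&work); &work; }))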
2017-08-29acpi/nfit: Fix COMPLETION_INITIALIZER_ONSTACK() abuseBoqun Feng1-1/+1
COMPLETION_INITIALIZER_ONSTACK() is supposed to be used as an initializer, in other words, it should only be used in assignment expressions or compound literals. So the usage in drivers/acpi/nfit/core.c:

    COMPLETION_INITIALIZER_ONSTACK(flush.cmp);

... is inappropriate. Besides, this usage could also break the build for another fix that reduces stack sizes caused by COMPLETION_INITIALIZER_ONSTACK(), because that fix changes COMPLETION_INITIALIZER_ONSTACK() from rvalue to lvalue, and usage as above will report the following error:

    drivers/acpi/nfit/core.c: In function 'acpi_nfit_flush_probe':
    include/linux/completion.h:77:3: error: value computed is not used [-Werror=unused-value]
       (*({ init_completion(&work); &work; }))

This patch fixes this by replacing COMPLETION_INITIALIZER_ONSTACK() with init_completion() in acpi_nfit_flush_probe(), which does the same initialization without any other problems. Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Dan Williams <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Byungchul Park <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rafael J. Wysocki <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29locking/pvqspinlock: Relax cmpxchg's to improve performance on some architecturesWaiman Long1-7/+17
All the locking related cmpxchg's in the following functions are replaced with the _acquire variants:

    - pv_queued_spin_steal_lock()
    - trylock_clear_pending()

This change should help performance on architectures that use LL/SC. The cmpxchg in pv_kick_node() is replaced with a relaxed version plus an explicit memory barrier to make sure that the write of next->lock and the read of pn->state are fully ordered whether the cmpxchg succeeds or fails, without affecting performance on non-LL/SC architectures. On a 2-socket 12-core 96-thread Power8 system with pvqspinlock explicitly enabled, the performance of a locking microbenchmark with and without this patch on a 4.13-rc4 kernel with Xinhui's PPC qspinlock patch was as follows:

    # of threads   w/o patch      with patch     % Change
    -----------    ---------      ----------     --------
         8         5054.8 Mop/s   5209.4 Mop/s     +3.1%
        16         3985.0 Mop/s   4015.0 Mop/s     +0.8%
        32         2378.2 Mop/s   2396.0 Mop/s     +0.7%

Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Waiman Long <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andrea Parri <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Pan Xinhui <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
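A sketch of the resulting pattern (abridged; names follow the pvqspinlock code but the exact hunks may differ):

    /* pv_queued_spin_steal_lock(): the lock-word handoff only needs
     * ACQUIRE ordering, not a full barrier */
    if (cmpxchg_acquire(&l->locked, 0, _Q_LOCKED_VAL) == 0)
    	return true;

    /* pv_kick_node(): relaxed cmpxchg plus an explicit barrier, so
     * the next->lock write vs. pn->state read stays ordered whether
     * the cmpxchg succeeds or fails */
    smp_mb__before_atomic();
    if (cmpxchg_relaxed(&pn->state, vcpu_halted, vcpu_hashed)
        != vcpu_halted)
    	return;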
2017-08-29smp: Avoid using two cache lines for struct call_single_dataYing Huang12-33/+39
struct call_single_data is used in IPIs to transfer information between CPUs. Its size is bigger than sizeof(unsigned long) and less than the cache line size. Currently it is not allocated with any explicit alignment requirement. This makes it possible for an allocated call_single_data to cross two cache lines, which doubles the number of cache lines that need to be transferred among CPUs. This can be fixed by requiring call_single_data to be aligned to the size of call_single_data. Currently the size of call_single_data is a power of 2. If we add new fields to call_single_data, we may need to add padding to make sure the size of the new definition is a power of 2 as well. Fortunately, this is enforced by GCC, which will report bad sizes. To set the alignment requirement of call_single_data to its size, a struct definition and a typedef are used. To test the effect of the patch, I used the vm-scalability multiple-thread swap test case (swap-w-seq-mt). The test creates multiple threads, and each thread eats memory until all RAM and part of swap is used, so that a huge number of IPIs are triggered when unmapping memory. In the test, the throughput of memory writing improves by ~5% compared with misaligned call_single_data, because of faster IPIs. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Huang, Ying <[email protected]> [ Add call_single_data_t and align with size of call_single_data. ] Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Aaron Lu <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
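Per the changelog, the alignment is expressed with a struct definition plus a typedef aligned to the struct's own size; a sketch (the field list is assumed from the 4.13-era struct):

    struct __call_single_data {
    	struct llist_node llist;
    	smp_call_func_t func;
    	void *info;
    	unsigned int flags;
    };

    /* sizeof() is a power of 2, so an aligned instance can never
     * straddle two cache lines */
    typedef struct __call_single_data call_single_data_t
    	__aligned(sizeof(struct __call_single_data));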
2017-08-29locking/lockdep: Untangle xhlock history save/restore from task independencePeter Zijlstra4-51/+48
Where XHLOCK_{SOFT,HARD} are save/restore points in xhlocks[] to ensure the temporal IRQ events don't interact with task state, XHLOCK_PROC is a fundamentally different beast that just happens to share the interface. The purpose of XHLOCK_PROC is to annotate independent execution inside one task. For example workqueues: each work should appear to run in its own 'pristine' 'task'. Remove XHLOCK_PROC in favour of its own interface to avoid confusion. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Byungchul Park <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29perf/x86: Fix caps/ for !IntelPeter Zijlstra2-17/+30
Move the 'max_precise' capability into generic x86 code where it belongs. This fixes a sysfs splat on !Intel systems where we fail to set x86_pmu_caps_group.atts. Reported-and-tested-by: Borislav Petkov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: 22688d1c20f5 ("x86/perf: Export some PMU attributes in caps/ directory") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29perf/core, x86: Add PERF_SAMPLE_PHYS_ADDRKan Liang6-4/+55
For understanding how the workload maps to memory channels and hardware behavior, it's very important to collect address maps with physical addresses. For example, 3D XPoint access can only be found by filtering the physical address. Add a new sample type for the physical address. perf already has a facility to collect the data virtual address. This patch introduces a function to convert the virtual address to a physical address. The function is quite generic and can be extended to any architecture as long as a virtual address is provided.

 - For kernel direct mapping addresses, virt_to_phys is used to convert the virtual addresses to physical addresses.
 - For user virtual addresses, __get_user_pages_fast is used to walk the page tables for the user physical address.
 - This does not work for vmalloc addresses right now. These are not resolved, but code to do that could be added.

The new sample type requires collecting the virtual address. The virtual address will not be output unless SAMPLE_ADDR is applied. For security, the physical address can only be exposed to root or a privileged user. Tested-by: Madhavan Srinivasan <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
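A condensed sketch of the conversion path described above (the guards follow the changelog; the real function may differ in detail):

    static u64 perf_virt_to_phys(u64 virt)
    {
    	struct page *p = NULL;
    	u64 phys_addr = 0;

    	if (!virt)
    		return 0;

    	if (virt >= TASK_SIZE) {
    		/* kernel direct-map addresses translate directly;
    		 * vmalloc addresses are left unresolved */
    		if (virt_addr_valid((void *)(uintptr_t)virt))
    			phys_addr = (u64)virt_to_phys((void *)(uintptr_t)virt);
    	} else if (current->mm &&
    		   __get_user_pages_fast(virt, 1, 0, &p) == 1) {
    		/* user addresses: lockless page-table walk */
    		phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
    	}

    	if (p)
    		put_page(p);
    	return phys_addr;
    }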
2017-08-29perf/core, pt, bts: Get rid of itrace_startedAlexander Shishkin4-7/+12
I just noticed that hw.itrace_started and hw.config are aliased to the same location. Now, the PT driver happens to use both, which works out fine by sheer luck:

 - STORE(hw.itrace_start) is ordered before STORE(hw.config) in program order, although there are no compiler barriers to ensure that,
 - to perf_log_itrace_start(), hw.itrace_start looks set at the same time as when it is intended to be set because both stores happen in the same path,
 - hw.config is never reset to zero in the PT driver.

Now, the use of hw.config by the PT driver makes more sense (it being a HW PMU) than messing around with itrace_started, which is an awkward API to begin with. This patch replaces hw.itrace_started with an attach_state bit and an API call for the PMU drivers to use to communicate the condition. Signed-off-by: Alexander Shishkin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29Merge branch 'perf/urgent' into perf/core, to pick up fixesIngo Molnar116-388/+818
Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29Documentation: Hardware tag matchingArtemy Kovalyov1-0/+64
Add a document providing definitions of terms and core explanations for the tag matching (TM) protocols (eager and rendezvous), the TM application header, tag list manipulations, and the matching process. Signed-off-by: Artemy Kovalyov <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/mlx5: Support IB_SRQT_TMArtemy Kovalyov2-5/+22
Pass to mlx5_core a flag to enable rendezvous offload, the list_size, and the CQ when an SRQ is created with IB_SRQT_TM. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29net/mlx5: Add XRQ supportArtemy Kovalyov3-10/+146
Add support for the new XRQ (eXtended shared Receive Queue) hardware object. It supports SRQ semantics with the addition of extended receive buffer topologies and offloads. It currently supports the tag matching topology and rendezvous offload. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/mlx5: Fill XRQ capabilitiesArtemy Kovalyov2-0/+15
Provide driver specific values for XRQ capabilities. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/uverbs: Expose XRQ capabilitiesArtemy Kovalyov2-0/+25
Make XRQ capabilities available via ibv_query_device() verb. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/uverbs: Add new SRQ type IB_SRQT_TMArtemy Kovalyov1-1/+5
Add a new SRQ type capable of the new tag matching feature. When an SRQ receives a message, it searches the matching list for the corresponding posted receive buffer. The process of searching the matching list is called tag matching. If the tag matching results in a match, the received message is placed in the address specified by the receive buffer. If no match is found, the message is placed in a generic buffer until the corresponding receive buffer is posted. Such messages are called unexpected and their set is called an unexpected list. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/uverbs: Add XRQ creation parameter to UAPIArtemy Kovalyov1-1/+1
Add a tm_list_size parameter to struct ib_uverbs_create_xsrq. If the SRQ type is tag matching, this field defines the maximum size of the tag matching list. Otherwise, it is expected to be zero. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/core: Add new SRQ type IB_SRQT_TMArtemy Kovalyov1-2/+8
This patch adds a new SRQ type - IB_SRQT_TM. The new SRQ type supports tag matching and rendezvous offloads for MPI applications. When an SRQ receives a message, it searches the matching list for the corresponding posted receive buffer. The process of searching the matching list is called tag matching. If the tag matching results in a match, the received message is placed in the address specified by the receive buffer. If no match is found, the message is placed in a generic buffer until the corresponding receive buffer is posted. Such messages are called unexpected and their set is called an unexpected list. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
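In pseudo-C, the receive path described above looks roughly like this (every name here is invented for illustration; none of them exist in the kernel):

    void tm_srq_receive(struct tm_srq *srq, struct tm_message *msg)
    {
    	struct recv_buf *rb = tag_list_match(&srq->matching, msg->tag);

    	if (rb) {
    		/* tag match: deliver into the posted buffer */
    		copy_to_recv_buf(rb, msg);
    	} else {
    		/* no match: park it on the unexpected list until a
    		 * matching receive buffer is posted */
    		list_add_tail(&msg->node, &srq->unexpected);
    	}
    }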
2017-08-29IB/core: Separate CQ handle in SRQ contextArtemy Kovalyov6-39/+60
Before this change, the CQ attached to an SRQ was part of the XRC-specific extension. Moving the CQ handle out makes it available to other types extending SRQ functionality. Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29IB/core: Add XRQ capabilitiesArtemy Kovalyov1-0/+19
This patch adds the following TM XRQ capabilities:

 * max_rndv_hdr_size - Max size of rendezvous request message
 * max_num_tags - Max number of entries in tag matching list
 * max_ops - Max number of outstanding list operations
 * max_sge - Max number of SGE in tag matching entry
 * flags - the following flags are currently defined:
   - IB_TM_CAP_RC - Support tag matching on RC transport

Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
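Expressed as a struct, those capabilities map onto something like the following (the struct name and field types are assumptions; the field set is straight from the list above):

    struct ib_tm_caps {
    	u32 max_rndv_hdr_size;	/* max rendezvous request size */
    	u32 max_num_tags;	/* max tag matching list entries */
    	u32 flags;		/* e.g. IB_TM_CAP_RC */
    	u32 max_ops;		/* max outstanding list operations */
    	u32 max_sge;		/* max SGEs per tag matching entry */
    };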
2017-08-29net/mlx5: Update HW layout definitionsArtemy Kovalyov1-2/+7
 * add offload_type field to mlx5_ifc_qpc_bits
 * update mlx5_ifc_xrqc_bits layout

Signed-off-by: Artemy Kovalyov <[email protected]> Reviewed-by: Yossi Itigin <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2017-08-29x86/boot: Prevent faulty bootparams.screeninfo from causing harmJan H. Schönherr1-2/+1
If a zero for the number of lines manages to slip through, scroll() may underflow some offset calculations, causing accesses outside the video memory. Make the check in __putstr() more pessimistic to prevent that. Signed-off-by: Jan H. Schönherr <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29x86/boot: Provide more slack space during decompressionJan H. Schönherr1-1/+7
The current slack space is not enough for LZ4, which has a worst case overhead of 0.4% for data that cannot be further compressed. With an LZ4 compressed kernel with an embedded initrd, the output is likely to overwrite the input. Increase the slack space to avoid that. Signed-off-by: Jan H. Schönherr <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29perf/ftrace: Fix double traces of perf on ftrace:functionZhou Chengming7-13/+20
When running perf on the ftrace:function tracepoint, there is a bug which can be reproduced by:

    perf record -e ftrace:function -a sleep 20 &
    perf record -e ftrace:function ls
    perf script

    ls 10304 [005] 171.853235: ftrace:function: perf_output_begin
    ls 10304 [005] 171.853237: ftrace:function: perf_output_begin
    ls 10304 [005] 171.853239: ftrace:function: task_tgid_nr_ns
    ls 10304 [005] 171.853240: ftrace:function: task_tgid_nr_ns
    ls 10304 [005] 171.853242: ftrace:function: __task_pid_nr_ns
    ls 10304 [005] 171.853244: ftrace:function: __task_pid_nr_ns

We can see that all the function traces are doubled. The problem is caused by an inconsistency between the register function perf_ftrace_event_register() and the probe function perf_ftrace_function_call(): the former registers one probe for every perf_event, while the latter handles all perf_events on the current cpu. So with two perf_events on the current cpu, their traces are doubled. This patch therefore adds an extra "event" parameter to perf_tp_event, and sends sample data only to this event when it is not NULL. Signed-off-by: Zhou Chengming <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Acked-by: Steven Rostedt (VMware) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
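The gist of the fix in perf_tp_event(), as a sketch (abridged from the description above, not the exact hunk):

    if (event) {
    	/* targeted delivery: only the event this probe was
    	 * registered for gets the sample */
    	if (perf_tp_event_match(event, &data, regs))
    		perf_swevent_event(event, count, &data, regs);
    } else {
    	/* old behaviour, kept for callers passing NULL: deliver
    	 * to every matching event on this CPU's hlist */
    	hlist_for_each_entry_rcu(event, head, hlist_entry) {
    		if (perf_tp_event_match(event, &data, regs))
    			perf_swevent_event(event, count, &data, regs);
    	}
    }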
2017-08-29perf/core: Fix potential double-fetch bugMeng Xu1-0/+2
While examining the kernel source code, I found a dangerous operation that could turn into a double-fetch situation (a race-condition bug) where the same userspace memory region is fetched twice into the kernel, with sanity checks after the first fetch but no checks after the second fetch.

1. The first fetch happens in line 9573: get_user(size, &uattr->size).
2. Subsequently the 'size' variable undergoes a few sanity checks and transformations (lines 9577 to 9584).
3. The second fetch happens in line 9610: copy_from_user(attr, uattr, size).
4. Given that 'uattr' can be fully controlled in userspace, an attacker can race to override 'uattr->size' with an arbitrary value (say, 0xFFFFFFFF) after the first fetch but before the second. The changed value will be copied to 'attr->size'.
5. There are no further checks on 'attr->size' until the end of this function, and once the function returns we lose the context to verify that 'attr->size' conforms to the sanity checks performed in step 2 (lines 9577 to 9584).
6. My manual analysis shows that 'attr->size' is not used elsewhere later, so there is no working exploit against it right now. However, this could easily turn into an exploitable one if careless developers start to use 'attr->size' later.

To fix this, override 'attr->size' from the second fetch with the one from the first fetch, regardless of what is actually copied in. In this way, it is assured that 'attr->size' is consistent with the checks performed after the first fetch. Signed-off-by: Meng Xu <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
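The fix itself is one line after the second fetch; a sketch of the relevant tail of the attribute-copy path:

    ret = copy_from_user(attr, uattr, size);
    if (ret)
    	return -EFAULT;

    /* pin attr->size to the sanitized first-fetch value, so a
     * concurrent userspace write between the two fetches cannot
     * smuggle in an unchecked size */
    attr->size = size;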
2017-08-29x86/entry/64: Use ENTRY() instead of ALIGN+GLOBAL for stub32_clone()Jiri Slaby1-2/+2
ALIGN+GLOBAL is effectively what ENTRY() does, so use ENTRY() which is dedicated for exactly this purpose -- global functions. Note that stub32_clone() is a C-like leaf function -- it has a standard call frame -- it only switches one argument and continues by jumping into C. Since each ENTRY() should be balanced by some END*() marker, we add a corresponding ENDPROC() to stub32_clone() too. Besides that, x86's custom GLOBAL macro is going to die very soon. Signed-off-by: Jiri Slaby <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29x86/fpu/math-emu: Add ENDPROC to functionsJiri Slaby14-4/+21
Functions in math-emu are annotated as ENTRY() symbols, but their ends are not annotated at all. But these are standard functions called from C, with proper stack register update etc. Omitting the ends means:

 * the annotations are not paired and we cannot deal with such functions e.g. in objtool
 * the symbols are not marked as functions in the object file
 * there are no sizes of the functions in the object file

So fix this by adding ENDPROC() to each such case in math-emu. Signed-off-by: Jiri Slaby <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29x86/boot/64: Extract efi_pe_entry() from startup_64()Jiri Slaby1-59/+53
Similarly to the 32-bit code, the efi_pe_entry() body is somehow squashed into startup_64(). In the old days, we forced startup_64() to start at offset 0x200 and efi_pe_entry() to start at 0x210. But this requirement was removed a long time ago, in: 99f857db8857 ("x86, build: Dynamically find entry points in compressed startup code"). The way it is now makes the code less readable and illogical. Given we can now safely extract the inlined efi_pe_entry() body from startup_64() into a separate function, we do so. We also annotate the function appropriately with ENTRY+ENDPROC. ABI offsets are preserved:

    0000000000000000 T startup_32
    0000000000000200 T startup_64
    0000000000000390 T efi64_stub_entry

At the top level, it looked like:

    .org 0x200
    ENTRY(startup_64)
    #ifdef CONFIG_EFI_STUB		; start of inlined
    	jmp	preferred_addr
    GLOBAL(efi_pe_entry)
    	...				; a lot of assembly (efi_pe_entry)
    	leaq	preferred_addr(%rax), %rax
    	jmp	*%rax
    preferred_addr:
    #endif				; end of inlined
    	...				; a lot of assembly (startup_64)
    ENDPROC(startup_64)

and it is now converted into:

    .org 0x200
    ENTRY(startup_64)
    	...				; a lot of assembly (startup_64)
    ENDPROC(startup_64)

    #ifdef CONFIG_EFI_STUB
    ENTRY(efi_pe_entry)
    	...				; a lot of assembly (efi_pe_entry)
    	leaq	startup_64(%rax), %rax
    	jmp	*%rax
    ENDPROC(efi_pe_entry)
    #endif

Signed-off-by: Jiri Slaby <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29x86/boot/32: Extract efi_pe_entry() from startup_32()Jiri Slaby1-65/+64
The efi_pe_entry() body is somehow squashed into startup_32(). In the old days, we forced startup_32() to start at offset 0x00 and efi_pe_entry() to start at 0x10. But this requirement was removed a long time ago, in: 99f857db8857 ("x86, build: Dynamically find entry points in compressed startup code"). The way it is now makes the code less readable and illogical. Given we can now safely extract the inlined efi_pe_entry() body from startup_32() into a separate function, we do so, splitting it into the two functions as they are already marked: efi_pe_entry() + efi32_stub_entry(). We also annotate the functions appropriately with ENTRY+ENDPROC. ABI offset is preserved:

    0000   128 FUNC GLOBAL DEFAULT 6 startup_32
    0080    60 FUNC GLOBAL DEFAULT 6 efi_pe_entry
    00bc    68 FUNC GLOBAL DEFAULT 6 efi32_stub_entry

At the top level, it looked like this:

    ENTRY(startup_32)
    #ifdef CONFIG_EFI_STUB		; start of inlined
    	jmp	preferred_addr
    ENTRY(efi_pe_entry)
    	...				; a lot of assembly (efi_pe_entry)
    ENTRY(efi32_stub_entry)
    	...				; a lot of assembly (efi32_stub_entry)
    	leal	preferred_addr(%eax), %eax
    	jmp	*%eax
    preferred_addr:
    #endif				; end of inlined
    	...				; a lot of assembly (startup_32)
    ENDPROC(startup_32)

and it is now converted into:

    ENTRY(startup_32)
    	...				; a lot of assembly (startup_32)
    ENDPROC(startup_32)

    #ifdef CONFIG_EFI_STUB
    ENTRY(efi_pe_entry)
    	...				; a lot of assembly (efi_pe_entry)
    ENDPROC(efi_pe_entry)

    ENTRY(efi32_stub_entry)
    	...				; a lot of assembly (efi32_stub_entry)
    	leal	startup_32(%eax), %eax
    	jmp	*%eax
    ENDPROC(efi32_stub_entry)
    #endif

Signed-off-by: Jiri Slaby <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29locking/refcounts, x86/asm: Disable CONFIG_ARCH_HAS_REFCOUNT for the time beingIngo Molnar1-1/+2
Mike Galbraith bisected a boot crash back to the following commit: 7a46ec0e2f48 ("locking/refcounts, x86/asm: Implement fast refcount overflow protection"). The crash/hang pattern is:

 > Symptom is a few splats as below, with box finally hanging. Network
 > comes up, but neither ssh nor console login is possible.
 >
 > ------------[ cut here ]------------
 > WARNING: CPU: 4 PID: 0 at net/netlink/af_netlink.c:374 netlink_sock_destruct+0x82/0xa0
 > ...
 > __sk_destruct()
 > rcu_process_callbacks()
 > __do_softirq()
 > irq_exit()
 > smp_apic_timer_interrupt()
 > apic_timer_interrupt()

We are at -rc7 already, and the code has grown some dependencies, so instead of a plain revert disable the config temporarily, in the hope of getting real fixes. Reported-by: Mike Galbraith <[email protected]> Tested-by: Mike Galbraith <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Elena Reshetova <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-08-29x86/ldt: Fix off by one in get_segment_base()Dan Carpenter1-5/+2
ldt->entries[] is allocated in alloc_ldt_struct(). It has ldt->nr_entries elements and ldt->nr_entries is capped at LDT_ENTRIES. So if "idx" is == ldt->nr_entries then we're reading beyond the end of the buffer. It seems duplicative to have two limit checks when one would work just as well so I removed the check against LDT_ENTRIES. The gdt_page.gdt[] array has GDT_ENTRIES entries. Signed-off-by: Dan Carpenter <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: d07bdfd322d3 ("perf/x86: Fix USER/KERNEL tagging of samples properly") Link: http://lkml.kernel.org/r/20170818102516.gqwm4xdvvuvjw5ho@mwanda Signed-off-by: Ingo Molnar <[email protected]>
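A sketch of the corrected bound in get_segment_base() (reconstructed from the description above, not the literal hunk):

    /* ldt->entries[] has exactly ldt->nr_entries elements, capped
     * at LDT_ENTRIES, so one >= check replaces both old checks */
    if (!ldt || idx >= ldt->nr_entries)
    	return 0;

    desc = &ldt->entries[idx];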
2017-08-29devicetree: bindings: Remove deprecated propertiesMagnus Damm1-4/+0
The deprecated DT properties are part of the GIT history, no need to keep them around any longer. Signed-off-by: Magnus Damm <[email protected]> Acked-by: Rob Herring <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29devicetree: bindings: Remove unused 32-bit CMT bindingsMagnus Damm1-16/+0
Remove the 32-bit CMT compat strings to reduce maintenance burden. It should be fine to break DT compatibility because the 32-bit CMT DT binding was never part of any upstream DTS file. Signed-off-by: Magnus Damm <[email protected]> Acked-by: Rob Herring <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29devicetree: bindings: Deprecate property, update exampleMagnus Damm1-7/+17
Deprecate "renesas,channels-mask" and update the r8a7790 CMT example. Signed-off-by: Magnus Damm <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Laurent Pinchart <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29devicetree: bindings: r8a73a4 and R-Car Gen2 CMT bindingsMagnus Damm1-10/+14
Update SoC-specific bindings for r8a73a4 and R-Car Gen2 CMT0 and CMT1. Signed-off-by: Magnus Damm <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Laurent Pinchart <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29devicetree: bindings: R-Car Gen2 CMT0 and CMT1 bindingsMagnus Damm1-0/+3
Add documentation for new separate CMT0 and CMT1 DT compatible strings for R-Car Gen2. These compat strings allow us to enable CMT1-specific features in the driver. The old compat strings will be deprecated in the not so distant future. Signed-off-by: Magnus Damm <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Laurent Pinchart <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29devicetree: bindings: Remove sh7372 CMT bindingMagnus Damm1-9/+3
Remove the sh7372 CMT compat string to reduce maintenance burden. It should be fine to break DT compatibility because:

 1) The sh7372 SoC support has been removed from upstream
 2) The sh7372 CMT DT binding was never part of upstream DTS
 3) The CMT driver never matches on the sh7372 binding

Signed-off-by: Magnus Damm <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Laurent Pinchart <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29clocksource/drivers/imx-tpm: Add imx tpm timer supportDong Aisheng3-0/+248
The IMX Timer/PWM Module (TPM) supports both timer and PWM functions; this patch only adds the timer support. PWM support will be added later. The TPM counter, compare and capture registers are clocked by an asynchronous clock that can remain enabled in low power modes. NOTE: We observed, with a very small probability, that bus fabric contention between the GPU and the A7 may delay writes to the CNT registers by a few cycles, which may cause the min_delta event to be missed, so we need to add an ETIME check here in case that happens. Cc: Daniel Lezcano <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Shawn Guo <[email protected]> Cc: Anson Huang <[email protected]> Cc: Bai Ping <[email protected]> Signed-off-by: Dong Aisheng <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
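A sketch of the ETIME guard mentioned in the NOTE (the accessor names here are hypothetical):

    static int tpm_set_next_event(unsigned long delta,
    			      struct clock_event_device *evt)
    {
    	unsigned long next, now;

    	next = tpm_read_counter() + delta;
    	tpm_write_compare(next);
    	now = tpm_read_counter();

    	/* if a delayed CNT/compare write already let the deadline
    	 * slip past, report -ETIME so the core can retry */
    	return (int)(next - now) <= 0 ? -ETIME : 0;
    }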
2017-08-29dt-bindings: timer: Add nxp tpm timer binding docDong Aisheng1-0/+28
Adding NXP Low Power Timer/Pulse Width Modulation Module (TPM) binding doc. Cc: Mark Rutland <[email protected]> Cc: [email protected] Cc: Daniel Lezcano <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Shawn Guo <[email protected]> Cc: Bai Ping <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Dong Aisheng <[email protected]> Signed-off-by: Daniel Lezcano <[email protected]>
2017-08-29x86/microcode/intel: Improve microcode patches saving flowBorislav Petkov1-13/+14
Avoid potentially dereferencing a NULL pointer when saving a microcode patch for early loading on the application processors. While at it, drop the IS_ERR() checking in favor of simpler, NULL-ptr checks which are sufficient and rename __alloc_microcode_buf() to memdup_patch() to more precisely denote what it does. No functionality change. Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
2017-08-29x86/mm: Fix SME encryption stack ptr handlingBorislav Petkov1-3/+3
sme_encrypt_execute() stashes the stack pointer on entry into %rbp because it allocates a one-page stack in the non-encrypted area for the encryption routine to use. When the latter is done, it restores the stack pointer from %rbp again before returning. However, it uses the FRAME_* macros only partially, and restores %rsp from %rbp explicitly with a MOV. This is fine as long as the macros *actually* do something. Unless you do a !CONFIG_FRAME_POINTER build, where those macros are empty. Then we still restore %rsp from %rbp, but %rbp contains *something*, and this leads to stack corruption. The manifestation is a triple-fault during early boot when testing SME. Good luck to me debugging this with the clumsy endless-loop-in-asm method and narrowing it down gradually. :-( So, long story short, open-code the frame macros so that there's no monkey business and we avoid subtly breaking SME depending on the .config. Fixes: 6ebcb060713f ("x86/mm: Add support to encrypt the kernel in-place") Signed-off-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Tom Lendacky <[email protected]> Cc: Brijesh Singh <[email protected]> Link: http://lkml.kernel.org/r/[email protected]
2017-08-28net: dsa: Don't dereference dst->cpu_dp->netdevFlorian Fainelli1-1/+1
If we do not have a master network device attached, dst->cpu_dp will be NULL, and accessing cpu_dp->netdev will create a trace similar to the one below. The correct check is on dst->cpu_dp, period.

    [    1.004650] DSA: switch 0 0 parsed
    [    1.008078] Unable to handle kernel NULL pointer dereference at virtual address 00000010
    [    1.016195] pgd = c0003000
    [    1.018918] [00000010] *pgd=80000000004003, *pmd=00000000
    [    1.024349] Internal error: Oops: 206 [#1] SMP ARM
    [    1.029157] Modules linked in:
    [    1.032228] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc6-00071-g45b45afab9bd-dirty #7
    [    1.040772] Hardware name: Broadcom STB (Flattened Device Tree)
    [    1.046704] task: ee08f840 task.stack: ee090000
    [    1.051258] PC is at dsa_register_switch+0x5e0/0x9dc
    [    1.056234] LR is at dsa_register_switch+0x5d0/0x9dc
    [    1.061211] pc : [<c08fb28c>]  lr : [<c08fb27c>]  psr: 60000213
    [    1.067491] sp : ee091d88  ip : 00000000  fp : 0000000c
    [    1.072728] r10: 00000000  r9 : 00000001  r8 : ee208010
    [    1.077965] r7 : ee2b57b0  r6 : ee2b5780  r5 : 00000000  r4 : ee208e0c
    [    1.084506] r3 : 00000000  r2 : 00040d00  r1 : 2d1b2000  r0 : 00000016
    [    1.091050] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
    [    1.098199] Control: 32c5387d  Table: 00003000  DAC: fffffffd
    [    1.103957] Process swapper/0 (pid: 1, stack limit = 0xee090210)

Reported-by: Dan Carpenter <[email protected]> Fixes: 6d3c8c0dd88a ("net: dsa: Remove master_netdev and use dst->cpu_dp->netdev") Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: David S. Miller <[email protected]>