path: root/kernel
2018-10-17  locking/qspinlock_stat: Count instances of nested lock slowpaths  (Waiman Long, 2 files, -0/+11)

Queued spinlock supports up to 4 levels of lock slowpath nesting - user context, soft IRQ, hard IRQ and NMI. However, we are not sure how often the nesting happens. So add 3 more per-CPU stat counters to track the number of instances where nesting index goes to 1, 2 and 3 respectively.

On a dual-socket 64-core 128-thread Zen server, the following were the new stat counter values under different circumstances:

  State                            slowpath     index1  index2  index3
  -----                            --------     ------  ------  ------
  After bootup                     1,012,150    82      0       0
  After parallel build + perf-top  125,195,009  82      0       0

So the chance of having more than 2 levels of nesting is extremely low.

[ mingo: Minor changelog edits. ]

Signed-off-by: Waiman Long <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
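A minimal sketch of how such counters can be bumped in the slowpath, assuming the existing per-CPU MCS node index; the qstat counter names here are illustrative, not necessarily the ones the patch adds:

    /*
     * Hedged sketch: 'idx' is the existing per-CPU MCS node nesting index
     * in queued_spin_lock_slowpath(); the counter names are illustrative.
     */
    idx = node->count++;                    /* 0 on a non-nested entry */

    qstat_inc(qstat_lock_idx1, idx == 1);   /* first nested slowpath  */
    qstat_inc(qstat_lock_idx2, idx == 2);   /* second nested slowpath */
    qstat_inc(qstat_lock_idx3, idx == 3);   /* third nested slowpath  */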
2018-10-16  locking/qspinlock, x86: Provide liveness guarantee  (Peter Zijlstra, 1 file, -1/+15)

On x86 we cannot do fetch_or() with a single instruction and thus end up using a cmpxchg loop, this reduces determinism. Replace the fetch_or() with a composite operation: tas-pending + load.

Using two instructions of course opens a window we previously did not have. Consider the scenario:

        CPU0            CPU1            CPU2

     1) lock
          trylock -> (0,0,1)

     2)                 lock
                          trylock /* fail */

     3) unlock -> (0,0,0)

     4)                                 lock
                                          trylock -> (0,0,1)

     5)                 tas-pending -> (0,1,1)
                        load-val <- (0,1,0) from 3

     6)                 clear-pending-set-locked -> (0,0,1)

                        FAIL: _2_ owners

where 5) is our new composite operation. When we consider each part of the qspinlock state as a separate variable (as we can when _Q_PENDING_BITS == 8) then the above is entirely possible, because tas-pending will only RmW the pending byte, so the later load is able to observe prior tail and lock state (but not earlier than its own trylock, which operates on the whole word, due to coherence).

To avoid this we need 2 things:

 - the load must come after the tas-pending (obviously, otherwise it can trivially observe prior state).

 - the tas-pending must be a full word RmW instruction, it cannot be an XCHGB for example, such that we cannot observe other state prior to setting pending.

On x86 we can realize this by using "LOCK BTS m32, r32" for tas-pending followed by a regular load.

Note that observing later state is not a problem:

 - if we fail to observe a later unlock, we'll simply spin-wait for that store to become visible.

 - if we observe a later xchg_tail(), there is no difference from that xchg_tail() having taken place before the tas-pending.

Suggested-by: Will Deacon <[email protected]>
Reported-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: [email protected]
Fixes: 59fb586b4a07 ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
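A hedged sketch of what the composite operation can look like; the function name is illustrative, the _Q_PENDING_* constants come from the qspinlock headers, and the flag-output asm constraint assumes a reasonably recent GCC:

    static __always_inline u32 fetch_set_pending_acquire(atomic_t *lock)
    {
            u32 pending;

            /*
             * LOCK BTS is a RmW on the *whole* 32-bit word, so the load
             * below cannot observe state older than our own trylock; a
             * byte-sized XCHGB would not give that guarantee.
             */
            asm volatile(LOCK_PREFIX "btsl %2, %0"
                         : "+m" (lock->counter), "=@ccc" (pending)
                         : "Ir" (_Q_PENDING_OFFSET)
                         : "memory");

            /* Plain load after the RmW; seeing *later* state is harmless. */
            return (pending * _Q_PENDING_VAL) |
                   (atomic_read(lock) & ~_Q_PENDING_MASK);
    }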
2018-10-16  locking/qspinlock: Rework some comments  (Peter Zijlstra, 1 file, -10/+26)

While working my way through the code again, I felt the comments could use help.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-16  locking/qspinlock: Re-order code  (Peter Zijlstra, 1 file, -29/+27)

Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL) such that we lose the indent. This also results in a more natural code flow IMO.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-16  Merge branch 'x86/build' into locking/core, to pick up dependent patches and unify jump-label work  (Ingo Molnar, 17 files, -120/+180)

Signed-off-by: Ingo Molnar <[email protected]>
2018-10-16  sched/fair: Fix the min_vruntime update logic in dequeue_entity()  (Song Muchun, 1 file, -1/+1)

The comment and the code around the update_min_vruntime() call in dequeue_entity() are not in agreement.

From commit:

  b60205c7c558 ("sched/fair: Fix min_vruntime tracking")

I think that we want to update min_vruntime when a task is sleeping/migrating. So, the check is inverted there - fix it.

Signed-off-by: Song Muchun <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: b60205c7c558 ("sched/fair: Fix min_vruntime tracking")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-16  locking/lockdep: Remove duplicated 'lock_class_ops' percpu array  (Waiman Long, 1 file, -1/+0)

Remove the duplicated 'lock_class_ops' percpu array that is not used anywhere.

Signed-off-by: Waiman Long <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Fixes: 8ca2b56cd7da ("locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-15  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller, 8 files, -2644/+29)

Daniel Borkmann says:

====================
pull-request: bpf-next 2018-10-16

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Convert BPF sockmap and kTLS to both use a new sk_msg API and enable sk_msg BPF integration for the latter, from Daniel and John.

2) Enable BPF syscall side to indicate for maps that they do not support a map lookup operation as opposed to just missing key, from Prashant.

3) Add bpftool map create command which after map creation pins the map into bpf fs for further processing, from Jakub.

4) Add bpftool support for attaching programs to maps allowing sock_map and sock_hash to be used from bpftool, from John.

5) Improve syscall BPF map update/delete path for map-in-map types to wait a RCU grace period for pending references to complete, from Daniel.

6) Couple of follow-up fixes for the BPF socket lookup to get it enabled also when IPv6 is compiled as a module, from Joe.

7) Fix a generic-XDP bug to handle the case when the Ethernet header was mangled and thus update skb's protocol and data, from Jesper.

8) Add a missing BTF header length check between header copies from user space, from Wenwen.

9) Minor fixups in libbpf to use __u32 instead u32 types and include proper perf_event.h uapi header instead of perf internal one, from Yonghong.

10) Allow to pass user-defined flags through EXTRA_CFLAGS and EXTRA_LDFLAGS to bpftool's build, from Jiri.

11) BPF kselftest tweaks to add LWTUNNEL to config fragment and to install with_addr.sh script from flow dissector selftest, from Anders.
====================

Signed-off-by: David S. Miller <[email protected]>
2018-10-15  bpf, sockmap: convert to generic sk_msg interface  (Daniel Borkmann, 4 files, -2620/+3)

Add a generic sk_msg layer, and convert current sockmap and later kTLS over to make use of it. While sk_buff handles network packet representation from netdevice up to socket, sk_msg handles data representation from application to socket layer.

This means that sk_msg framework spans across ULP users in the kernel, and enables features such as introspection or filtering of data with the help of BPF programs that operate on this data structure. The latter becomes particularly useful for kTLS, where data encryption is deferred into the kernel, thus enabling the kernel to perform L7 introspection and policy based on BPF for TLS connections where the record is encrypted after BPF has run and come to a verdict.

In order to get there, the first step is to transform the open-coded scatter-gather list handling into a common core framework that subsystems can use. The code itself has been split and refactored into three bigger pieces: i) the generic sk_msg API which deals with managing the scatter gather ring, providing helpers for walking and mangling, transferring application data from user space into it, and preparing it for BPF pre/post-processing, ii) the plain sock map itself where sockets can be attached to or detached from; these bits are independent of i) which can now be used also without sock map, and iii) the integration with plain TCP as one protocol to be used for processing L7 application data (later this could e.g. also be extended to other protocols like UDP).

The semantics are the same as with the old sock map code, and therefore there is no change of user-facing behavior or APIs. While pursuing this work it also helped finding a number of bugs in the old sockmap code that we've fixed already in earlier commits. The test_sockmap kselftest suite passes through fine as well.

Joint work with John.

Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: John Fastabend <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-15  tcp, ulp: remove ulp bits from sockmap  (Daniel Borkmann, 1 file, -29/+10)

In order to prepare sockmap logic to be used in combination with kTLS we need to detangle it from ULP, and further split it in later commits into a generic API.

Joint work with John.

Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: John Fastabend <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-14  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller, 1 file, -8/+2)

Daniel Borkmann says:

====================
pull-request: bpf 2018-10-14

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) Fix xsk map update and delete operation to not call synchronize_net() but to piggy back on SOCK_RCU_FREE for sockets instead as we are not allowed to sleep under RCU, from Björn.

2) Do not change RLIMIT_MEMLOCK in reuseport_bpf selftest if the process already has unlimited RLIMIT_MEMLOCK, from Eric.
====================

Signed-off-by: David S. Miller <[email protected]>
2018-10-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller, 2 files, -9/+22)

Conflicts were easy to resolve using immediate context mostly, except the cls_u32.c one where I simply took the entire HEAD chunk.

Signed-off-by: David S. Miller <[email protected]>
2018-10-12  bpf: wait for running BPF programs when updating map-in-map  (Daniel Colascione, 1 file, -0/+13)

The map-in-map frequently serves as a mechanism for atomic snapshotting of state that a BPF program might record. The current implementation is dangerous to use in this way, however, since userspace has no way of knowing when all programs that might have retrieved the "old" value of the map may have completed.

This change ensures that map update operations on map-in-map map types always wait for all references to the old map to drop before returning to userspace.

Signed-off-by: Daniel Colascione <[email protected]>
Reviewed-by: Joel Fernandes (Google) <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
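A hedged sketch of the idea (the helper name is illustrative): after swapping in a new inner map, wait one RCU grace period before the syscall returns, since BPF programs read map-in-map slots under rcu_read_lock():

    static void maybe_wait_bpf_programs(struct bpf_map *map)
    {
            /*
             * BPF programs dereference the old inner map under
             * rcu_read_lock(); once a grace period has elapsed, no
             * program can still hold a reference to it.
             */
            if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
                map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
                    synchronize_rcu();
    }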
2018-10-12  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input  (Greg Kroah-Hartman, 1 file, -0/+6)

Dmitry writes:

"Input updates for v4.19-rc7

- we added a few scheduling points into various input interfaces to ensure that large writes will not cause RCU stalls
- fixed configuring PS/2 keyboards as wakeup devices on newer platforms
- added a new Xbox gamepad ID."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Input: uinput - add a schedule point in uinput_inject_events()
  Input: evdev - add a schedule point in evdev_write()
  Input: mousedev - add a schedule point in mousedev_write()
  Input: i8042 - enable keyboard wakeups by default when s2idle is used
  Input: xpad - add support for Xbox1 PDP Camo series gamepad
2018-10-12  printk: fix integer overflow in setup_log_buf()  (Sergey Senozhatsky, 1 file, -1/+1)

The way we calculate logbuf free space percentage overflows signed integer:

    int free;

    free = __LOG_BUF_LEN - log_next_idx;
    pr_info("early log buf free: %u(%u%%)\n",
            free, (free * 100) / __LOG_BUF_LEN);

We support LOG_BUF_LEN of up to 1<<25 bytes. Since setup_log_buf() is called during early init, logbuf is mostly empty, so __LOG_BUF_LEN - log_next_idx is close to 1<<25. Thus when we multiply it by 100, we overflow signed integer value range: 100 is 2^6 + 2^5 + 2^2.

Example, booting with LOG_BUF_LEN 1<<25 and log_buf_len=2G boot param:

    [    0.075317] log_buf_len: -2147483648 bytes
    [    0.075319] early log buf free: 33549896(-28%)

Make "free" unsigned integer and use appropriate printk() specifier.

Link: http://lkml.kernel.org/r/[email protected]
To: Steven Rostedt <[email protected]>
Cc: [email protected]
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
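A standalone illustration of the overflow, assuming 32-bit int as on the affected configurations:

    #include <stdio.h>

    int main(void)
    {
            /* 32 MiB buffer, nearly empty at early init */
            int s_free = (1 << 25) - 100;
            unsigned int u_free = (1u << 25) - 100;

            /* s_free * 100 ~ 3.35e9 > INT_MAX: signed overflow (UB in ISO C),
             * typically wrapping negative, hence the "-28%" above */
            printf("signed:   %d%%\n", (s_free * 100) / (1 << 25));

            /* fits in 32-bit unsigned arithmetic (max ~4.29e9) */
            printf("unsigned: %u%%\n", (u_free * 100u) / (1u << 25));
            return 0;
    }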
2018-10-12  printk: do not preliminary split up cont buffer  (Sergey Senozhatsky, 1 file, -3/+0)

We have a proper 'overflow' check which tells us that we need to split up existing cont buffer in separate records:

    if (cont.len + len > sizeof(cont.buf))
            cont_flush();

At the same time we also have one extra flush: "if cont buffer is 80% full then split it up" in cont_add():

    if (cont.len > (sizeof(cont.buf) * 80) / 100)
            cont_flush();

This looks to be redundant, since the existing "overflow" check should work just fine, so remove this 80% check and wait for either a normal cont termination \n, for preliminary flush due to possible buffer overflow or for preliminary flush due to cont race.

Link: http://lkml.kernel.org/r/[email protected]
To: Steven Rostedt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Dmitriy Vyukov <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: LKML <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
2018-10-12  printk: lock/unlock console only for new logbuf entries  (Sergey Senozhatsky, 1 file, -3/+7)

Prior to commit 5c2992ee7fd8a29 ("printk: remove console flushing special cases for partial buffered lines") we would do console_cont_flush() for each pr_cont() to print cont fragments, so console_unlock() would actually print data:

    pr_cont();
     console_lock();
     console_unlock()
      console_cont_flush(); // print cont fragment
    ...
    pr_cont();
     console_lock();
     console_unlock()
      console_cont_flush(); // print cont fragment

We don't do console_cont_flush() anymore, so when we do pr_cont() console_unlock() does nothing (unless we flushed the cont buffer):

    pr_cont();
     console_lock();
     console_unlock();      // noop
    ...
    pr_cont();
     console_lock();
     console_unlock();      // noop
    ...
    pr_cont();
     cont_flush();
     console_lock();
     console_unlock();      // print data

We also wakeup klogd purposelessly for pr_cont() output - un-flushed cont buffer is not stored in log_buf; there is nothing to pull.

Thus we can console_lock()/console_unlock()/wake_up_klogd() only when we know that we log_store()-ed a message and there is something to print to the consoles/syslog.

Link: http://lkml.kernel.org/r/[email protected]
To: Steven Rostedt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Dmitriy Vyukov <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: LKML <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
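A hedged sketch of the resulting vprintk_emit() flow, with illustrative variable names: only take the console lock and wake klogd when the log sequence actually advanced:

    /* inside vprintk_emit(), hedged sketch */
    curr_log_seq = log_next_seq;
    printed_len = vprintk_store(facility, level, dict, dictlen, fmt, args);
    pending_output = (curr_log_seq != log_next_seq);  /* log_store()-ed? */

    if (!in_sched && pending_output) {
            if (console_trylock_spinning())
                    console_unlock();       /* actually prints */
    }
    if (pending_output)
            wake_up_klogd();                /* nothing to pull otherwise */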
2018-10-12  printk: keep kernel cont support always enabled  (Sergey Senozhatsky, 1 file, -18/+4)

Since commit 5c2992ee7fd8a29 ("printk: remove console flushing special cases for partial buffered lines") we don't print cont fragments to the consoles; cont lines are now proper log_buf entries and there is no "consecutive continuation flag" anymore: we either have 'c' entries that mark continuation lines without fragments; or '-' entries that mark normal logbuf entries. There are no '+' entries anymore.

However, we still have a small leftover - presence of ext_console drivers disables kernel cont support and we flush each pr_cont() and store it as a separate log_buf entry. Previously, it worked because msg_print_ext_header() had that "an optional external merge of the records" functionality:

    if (msg->flags & LOG_CONT)
            cont = (prev_flags & LOG_CONT) ? '+' : 'c';

We don't do this as of now, so keep kernel cont always enabled.

Note from pmladek:

The original purpose was to get full information including the metadata and dictionary via extended console drivers, see commit 6fe29354befe4c46e ("printk: implement support for extended console drivers").

The dictionary probably was the most important part but it was actually lost:

    static void cont_flush(void)
    {
    [...]
            log_store(cont.facility, cont.level, cont.flags, cont.ts_nsec,
                      NULL, 0, cont.buf, cont.len);

Nobody noticed because the only dictionary user is dev_printk() and dev_cont() is _not_ defined.

Link: http://lkml.kernel.org/r/[email protected]
To: Steven Rostedt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Dmitriy Vyukov <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: LKML <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Sergey Senozhatsky <[email protected]>
[[email protected]: Updated commit message]
Signed-off-by: Petr Mladek <[email protected]>
2018-10-11  Merge branch 'for-4.19-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup  (Greg Kroah-Hartman, 1 file, -9/+16)

Tejun writes:

"cgroup fixes for v4.19-rc7

One cgroup2 threaded mode fix for v4.19-rc7. While threaded mode isn't used widely (yet) and the bug requires somewhat convoluted sequence of operations, it causes a userland visible malfunction - EINVAL on a valid attempt to enable threaded mode. This pull request contains the fix"

* 'for-4.19-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cgroup: Fix dom_cgrp propagation when enabling threaded mode
2018-10-11  sched/fair: Fix throttle_list starvation with low CFS quota  (Phil Auld, 2 files, -3/+21)

With a very low cpu.cfs_quota_us setting, such as the minimum of 1000, distribute_cfs_runtime may not empty the throttled_list before it runs out of runtime to distribute. In that case, due to the change from c06f04c7048 to put throttled entries at the head of the list, later entries on the list will starve. Essentially, the same X processes will get pulled off the list, given CPU time and then, when expired, get put back on the head of the list where distribute_cfs_runtime will give runtime to the same set of processes leaving the rest.

Fix the issue by setting a bit in struct cfs_bandwidth when distribute_cfs_runtime is running, so that the code in throttle_cfs_rq can decide to put the throttled entry on the tail or the head of the list. The bit is set/cleared by the callers of distribute_cfs_runtime while they hold cfs_bandwidth->lock.

This is easy to reproduce with a handful of CPU consumers. I use 'crash' on the live system. In some cases you can simply look at the throttled list and see the later entries are not changing:

  crash> list cfs_rq.throttled_list -H 0xffff90b54f6ade40 -s cfs_rq.runtime_remaining | paste - - | awk '{print $1" "$4}' | pr -t -n3
    1     ffff90b56cb2d200  -976050
    2     ffff90b56cb2cc00  -484925
    3     ffff90b56cb2bc00  -658814
    4     ffff90b56cb2ba00  -275365
    5     ffff90b166a45600  -135138
    6     ffff90b56cb2da00  -282505
    7     ffff90b56cb2e000  -148065
    8     ffff90b56cb2fa00  -872591
    9     ffff90b56cb2c000   -84687
   10     ffff90b56cb2f000   -87237
   11     ffff90b166a40a00  -164582

  crash> list cfs_rq.throttled_list -H 0xffff90b54f6ade40 -s cfs_rq.runtime_remaining | paste - - | awk '{print $1" "$4}' | pr -t -n3
    1     ffff90b56cb2d200  -994147
    2     ffff90b56cb2cc00  -306051
    3     ffff90b56cb2bc00  -961321
    4     ffff90b56cb2ba00   -24490
    5     ffff90b166a45600  -135138
    6     ffff90b56cb2da00  -282505
    7     ffff90b56cb2e000  -148065
    8     ffff90b56cb2fa00  -872591
    9     ffff90b56cb2c000   -84687
   10     ffff90b56cb2f000   -87237
   11     ffff90b166a40a00  -164582

Sometimes it is easier to see by finding a process getting starved and looking at the sched_info:

  crash> task ffff8eb765994500 sched_info
  PID: 7800   TASK: ffff8eb765994500  CPU: 16  COMMAND: "cputest"
    sched_info = {
      pcount = 8,
      run_delay = 697094208,
      last_arrival = 240260125039,
      last_queued = 240260327513
    },

  crash> task ffff8eb765994500 sched_info
  PID: 7800   TASK: ffff8eb765994500  CPU: 16  COMMAND: "cputest"
    sched_info = {
      pcount = 8,
      run_delay = 697094208,
      last_arrival = 240260125039,
      last_queued = 240260327513
    },

Signed-off-by: Phil Auld <[email protected]>
Reviewed-by: Ben Segall <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: c06f04c70489 ("sched: Fix potential near-infinite distribute_cfs_runtime() loop")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
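A hedged sketch of the described fix; the field name follows the changelog rather than necessarily the final patch:

    /* in throttle_cfs_rq(), with cfs_b->lock held: */
    if (cfs_b->distribute_running)
            /* distribution in flight: queue behind entries still waiting */
            list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
    else
            list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);

    /* in the callers of distribute_cfs_runtime(), with cfs_b->lock held: */
    cfs_b->distribute_running = 1;
    raw_spin_unlock(&cfs_b->lock);
    runtime = distribute_cfs_runtime(cfs_b, runtime, expires);
    raw_spin_lock(&cfs_b->lock);
    cfs_b->distribute_running = 0;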
2018-10-11  xsk: do not call synchronize_net() under RCU read lock  (Björn Töpel, 1 file, -8/+2)

The XSKMAP update and delete functions called synchronize_net(), which can sleep. It is not allowed to sleep during an RCU read section.

Instead we need to make sure that the sock sk_destruct (xsk_destruct) function is asynchronously called after an RCU grace period. Setting the SOCK_RCU_FREE flag for XDP sockets takes care of this.

Fixes: fbfc504a24f5 ("bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP")
Reported-by: Eric Dumazet <[email protected]>
Signed-off-by: Björn Töpel <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-10  signal: Guard against negative signal numbers in copy_siginfo_from_user32  (Eric W. Biederman, 1 file, -1/+1)

While fixing an out of bounds array access in known_siginfo_layout reported by the kernel test robot, it became apparent that the same bug exists in siginfo_layout and affects copy_siginfo_from_user32.

The straightforward fix, which guards against making this mistake in the future and should keep the code size small, is to just take an unsigned signal number instead of a signed signal number, as I did to fix known_siginfo_layout.

Cc: [email protected]
Fixes: cc731525f26a ("signal: Remove kernel interal si_code magic")
Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-10-10  signal: Guard against negative signal numbers in copy_siginfo_from_user  (Eric W. Biederman, 1 file, -1/+1)

The bounds check in known_siginfo_layout only guards against positive numbers that are too large; large negative numbers can slip through and can cause out of bounds accesses.

Ordinarily this is not a concern because early in signal processing the signal number is filtered with valid_signal which ensures it is a small positive signal number, but copy_siginfo_from_user is called before this check is performed.

    [   73.031126] BUG: unable to handle kernel paging request at ffffffff6281bcb6
    [   73.032038] PGD 3014067 P4D 3014067 PUD 0
    [   73.032565] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
    [   73.033287] CPU: 0 PID: 732 Comm: trinity-c3 Tainted: G        W T  4.19.0-rc1-00077-g4ce5f9c #1
    [   73.034423] RIP: 0010:copy_siginfo_from_user+0x4d/0xd0
    [   73.034908] Code: 00 8b 53 08 81 fa 80 00 00 00 0f 84 90 00 00 00 85 d2 7e 2d 48 63 0b 83 f9 1f 7f 1c 8d 71 ff bf d8 04 01 50 48 0f a3 f7 73 0e <0f> b6 8c 09 20 bb 81 82 39 ca 7f 15 eb 68 31 c0 83 fa 06 7f 0c eb
    [   73.036665] RSP: 0018:ffff88001b8f7e20 EFLAGS: 00010297
    [   73.037160] RAX: 0000000000000000 RBX: ffff88001b8f7e90 RCX: fffffffff00000cb
    [   73.037865] RDX: 0000000000000001 RSI: 00000000f00000ca RDI: 00000000500104d8
    [   73.038546] RBP: ffff88001b8f7e80 R08: 0000000000000000 R09: 0000000000000000
    [   73.039201] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000008
    [   73.039874] R13: 00000000000002dc R14: 0000000000000000 R15: 0000000000000000
    [   73.040613] FS:  000000000104a880(0000) GS:ffff88001f000000(0000) knlGS:0000000000000000
    [   73.041649] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [   73.042405] CR2: ffffffff6281bcb6 CR3: 000000001cb52003 CR4: 00000000001606b0
    [   73.043351] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [   73.044286] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
    [   73.045221] Call Trace:
    [   73.045556]  __x64_sys_rt_tgsigqueueinfo+0x34/0xa0
    [   73.046199]  do_syscall_64+0x1a4/0x390
    [   73.046708]  ? vtime_user_enter+0x61/0x80
    [   73.047242]  ? __context_tracking_enter+0x4e/0x60
    [   73.047714]  ? __context_tracking_enter+0x4e/0x60
    [   73.048278]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Therefore fix known_siginfo_layout to take an unsigned signal number instead of a signed signal number. All valid signal numbers are small positive numbers so they will not be affected, but invalid negative signal numbers will now become large positive signal numbers and will not be used as indices into the sig_sicodes array.

Making the signal number unsigned makes it difficult for similar mistakes to happen in the future.

Fixes: 4ce5f9c9e754 ("signal: Use a smaller struct siginfo in the kernel")
Inspired-by: Sean Christopherson <[email protected]>
Reported-by: kernel test robot <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
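A small standalone illustration of why an unsigned parameter closes the hole; the function names are only stand-ins for known_siginfo_layout and its bounds check:

    #include <stdbool.h>

    #define KNOWN_CODES 32  /* stand-in for ARRAY_SIZE(sig_sicodes) */

    /* signed version: sig = -1 passes the check and indexes out of bounds */
    static bool known_signed(int sig)            { return sig < KNOWN_CODES; }

    /* unsigned version: -1 becomes 4294967295u and the same single
     * upper-bound comparison rejects it */
    static bool known_unsigned(unsigned int sig) { return sig < KNOWN_CODES; }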
2018-10-10  tick/sched : Remove redundant cpu_online() check  (Peng Hao, 1 file, -1/+1)

can_stop_idle_tick() checks cpu_online() twice. The first check leaves the function when the CPU is not online, so the second check is redundant. Remove it.

Signed-off-by: Peng Hao <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2018-10-09  bpf: return EOPNOTSUPP when map lookup isn't supported  (Prashant Bhole, 4 files, -4/+4)

Return ERR_PTR(-EOPNOTSUPP) from map_lookup_elem() methods of below map types:
- BPF_MAP_TYPE_PROG_ARRAY
- BPF_MAP_TYPE_STACK_TRACE
- BPF_MAP_TYPE_XSKMAP
- BPF_MAP_TYPE_SOCKMAP/BPF_MAP_TYPE_SOCKHASH

Signed-off-by: Prashant Bhole <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-09  bpf: error handling when map_lookup_elem isn't supported  (Prashant Bhole, 1 file, -2/+7)

The error value returned by map_lookup_elem doesn't differentiate whether the lookup failed because of an invalid key or because lookup is not supported.

Let's add handling for the -EOPNOTSUPP return value of a map's map_lookup_elem() method, with the expectation that a map's implementation returns -EOPNOTSUPP if lookup is not supported. The errno for the bpf syscall's BPF_MAP_LOOKUP_ELEM command will be set to EOPNOTSUPP if map lookup is not supported.

Signed-off-by: Prashant Bhole <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-09  bpf: btf: Fix a missing check bug  (Wenwen Wang, 1 file, -0/+3)

In btf_parse_hdr(), the length of the btf data header is firstly copied from the user space to 'hdr_len' and checked to see whether it is larger than 'btf_data_size'. If yes, an error code EINVAL is returned. Otherwise, the whole header is copied again from the user space to 'btf->hdr'. However, after the second copy, there is no check between 'btf->hdr->hdr_len' and 'hdr_len' to confirm that the two copies get the same value. Given that the btf data is in the user space, a malicious user can race to change the data between the two copies. By doing so, the user can provide malicious data to the kernel and cause undefined behavior.

This patch adds a necessary check after the second copy, to make sure 'btf->hdr->hdr_len' has the same value as 'hdr_len'. Otherwise, an error code EINVAL will be returned.

Signed-off-by: Wenwen Wang <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
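A hedged, simplified sketch of this kind of double-fetch guard; 'u_hdr_len_ptr' is a hypothetical pointer to the hdr_len field in the user buffer, and the intermediate size validation is elided:

    struct btf_header hdr;
    u32 hdr_len;

    /* first fetch: just the claimed header length */
    if (copy_from_user(&hdr_len, u_hdr_len_ptr, sizeof(hdr_len)))
            return -EFAULT;
    /* ... hdr_len is validated against btf_data_size here ... */

    /* second fetch: the whole header */
    if (copy_from_user(&hdr, btf_data, min_t(u32, hdr_len, sizeof(hdr))))
            return -EFAULT;

    /* user memory may have been modified between the two copies: re-check */
    if (hdr.hdr_len != hdr_len)
            return -EINVAL;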
2018-10-09  resource: Clean it up a bit  (Borislav Petkov, 1 file, -29/+26)

- Drop BUG_ON()s and do normal error handling instead, in find_next_iomem_res().
- Align function arguments on opening braces.
- Get rid of local var sibling_only in find_next_iomem_res().
- Shorten unnecessarily long first_level_children_only arg name.

Signed-off-by: Borislav Petkov <[email protected]>
CC: Andrew Morton <[email protected]>
CC: Bjorn Helgaas <[email protected]>
CC: Brijesh Singh <[email protected]>
CC: Dan Williams <[email protected]>
CC: H. Peter Anvin <[email protected]>
CC: Lianbo Jiang <[email protected]>
CC: Takashi Iwai <[email protected]>
CC: Thomas Gleixner <[email protected]>
CC: Tom Lendacky <[email protected]>
CC: Vivek Goyal <[email protected]>
CC: Yaowei Bai <[email protected]>
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
Link: <new submission>
2018-10-09  resource: Fix find_next_iomem_res() iteration issue  (Bjorn Helgaas, 1 file, -54/+42)

Previously find_next_iomem_res() used "*res" as both an input parameter for the range to search and the type of resource to search for, and an output parameter for the resource we found, which makes the interface confusing.

The current callers use find_next_iomem_res() incorrectly because they allocate a single struct resource and use it for repeated calls to find_next_iomem_res(). When find_next_iomem_res() returns a resource, it overwrites the start, end, flags, and desc members of the struct. If we call find_next_iomem_res() again, we must update or restore these fields. The previous code restored res.start and res.end, but not res.flags or res.desc.

Since the callers did not restore res.flags, if they searched for flags IORESOURCE_MEM | IORESOURCE_BUSY and found a resource with flags IORESOURCE_MEM | IORESOURCE_BUSY | IORESOURCE_SYSRAM, the next search would incorrectly skip resources unless they were also marked as IORESOURCE_SYSRAM.

Fix this by restructuring the interface so it takes explicit "start, end, flags" parameters and uses "*res" only as an output parameter.

Based on a patch by Lianbo Jiang <[email protected]>.

[ bp: While at it:
  - make comments kernel-doc style.
  - Originally-by: http://lore.kernel.org/lkml/[email protected] ]

Signed-off-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
CC: Andrew Morton <[email protected]>
CC: Brijesh Singh <[email protected]>
CC: Dan Williams <[email protected]>
CC: H. Peter Anvin <[email protected]>
CC: Lianbo Jiang <[email protected]>
CC: Takashi Iwai <[email protected]>
CC: Thomas Gleixner <[email protected]>
CC: Tom Lendacky <[email protected]>
CC: Vivek Goyal <[email protected]>
CC: Yaowei Bai <[email protected]>
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: x86-ml <[email protected]>
Link: http://lkml.kernel.org/r/153805812916.1157.177580438135143788.stgit@bhelgaas-glaptop.roam.corp.google.com
2018-10-09  resource: Include resource end in walk_*() interfaces  (Bjorn Helgaas, 1 file, -2/+2)

find_next_iomem_res() finds an iomem resource that covers part of a range described by "start, end". All callers expect that range to be inclusive, i.e., both start and end are included, but find_next_iomem_res() doesn't handle the end address correctly.

If it finds an iomem resource that contains exactly the end address, it skips it, e.g., if "start, end" is [0x0-0x10000] and there happens to be an iomem resource [mem 0x10000-0x10000] (the single byte at 0x10000), we skip it:

    find_next_iomem_res(...)
    {
        start = 0x0;
        end = 0x10000;
        for (p = next_resource(...)) {
            # p->start = 0x10000;
            # p->end = 0x10000;
            # we *should* return this resource, but this condition is false:
            if ((p->end >= start) && (p->start < end))
                break;

Adjust find_next_iomem_res() so it allows a resource that includes the single byte at the end of the range. This is a corner case that we probably don't see in practice.

Fixes: 58c1b5b07907 ("[PATCH] memory hotadd fixes: find_next_system_ram catch range fix")
Signed-off-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
CC: Andrew Morton <[email protected]>
CC: Brijesh Singh <[email protected]>
CC: Dan Williams <[email protected]>
CC: H. Peter Anvin <[email protected]>
CC: Lianbo Jiang <[email protected]>
CC: Takashi Iwai <[email protected]>
CC: Thomas Gleixner <[email protected]>
CC: Tom Lendacky <[email protected]>
CC: Vivek Goyal <[email protected]>
CC: Yaowei Bai <[email protected]>
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: x86-ml <[email protected]>
Link: http://lkml.kernel.org/r/153805812254.1157.16736368485811773752.stgit@bhelgaas-glaptop.roam.corp.google.com
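The one-character nature of the change, reconstructed (hedged) from the snippet quoted above:

    /* before: a resource occupying exactly the single byte at 'end'
     * fails "p->start < end" and is skipped */
    if ((p->end >= start) && (p->start < end))
            break;

    /* after: treat [start, end] as inclusive on both sides */
    if ((p->end >= start) && (p->start <= end))
            break;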
2018-10-09  smp,cpumask: introduce on_each_cpu_cond_mask  (Rik van Riel, 2 files, -7/+24)

Introduce a variant of on_each_cpu_cond that iterates only over the CPUs in a cpumask, in order to avoid making callbacks for every single CPU in the system when we only need to test a subset.

Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Rik van Riel <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
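A hedged sketch of the new variant's signature; passing cpu_online_mask as the mask should recover the old on_each_cpu_cond() behavior:

    void on_each_cpu_cond_mask(bool (*cond_func)(int cpu, void *info),
                               smp_call_func_t func, void *info, bool wait,
                               gfp_t gfp_flags, const struct cpumask *mask);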
2018-10-09  smp: use __cpumask_set_cpu in on_each_cpu_cond  (Rik van Riel, 1 file, -1/+1)

The code in on_each_cpu_cond sets CPUs in a locally allocated bitmask, which should never be used by other CPUs simultaneously. There is no need to use locked memory accesses to set the bits in this bitmap. Switch to __cpumask_set_cpu.

Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Andy Lutomirski <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
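A hedged sketch of the loop in question; since 'cpus' is allocated a few lines earlier in the same function, no other CPU can touch it concurrently:

    preempt_disable();
    for_each_online_cpu(cpu)
            if (cond_func(cpu, info))
                    __cpumask_set_cpu(cpu, cpus);   /* was: cpumask_set_cpu(),
                                                     * a needless locked RmW */
    on_each_cpu_mask(cpus, func, info, wait);
    preempt_enable();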
2018-10-09  dma-direct: respect DMA_ATTR_NO_WARN  (Christoph Hellwig, 1 file, -0/+3)

Respect the DMA_ATTR_NO_WARN flags for allocations in dma-direct.

Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Robin Murphy <[email protected]>
2018-10-09  printk: Give error on attempt to set log buffer length to over 2G  (He Zhe, 1 file, -6/+12)

The current printk() is ready to handle log buffer sizes up to 2G. Give an explicit error for users who want to use a larger log buffer. Also fix printk formatting to show the 2G as a positive number.

Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Cc: [email protected]
Suggested-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: He Zhe <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
[pmladek: Fixed to the really safe limit 2GB.]
Signed-off-by: Petr Mladek <[email protected]>
2018-10-09  futex: Replace spin_is_locked() with lockdep  (Lance Roy, 1 file, -2/+2)

lockdep_assert_held() is better suited for checking locking requirements, since it won't get confused when the lock is held by some other task. This is also a step towards possibly removing spin_is_locked().

Signed-off-by: Lance Roy <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Darren Hart <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
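The shape of the substitution, as a hedged example on a futex hash-bucket lock:

    /* before: passes whenever *any* task holds the lock */
    BUG_ON(!spin_is_locked(q->lock_ptr));

    /* after: asserts that the *current* task holds it, and compiles
     * away entirely when lockdep is disabled */
    lockdep_assert_held(q->lock_ptr);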
2018-10-09  locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y  (Waiman Long, 3 files, -4/+36)

A sizable portion of the CPU cycles spent in __lock_acquire() is used up by the atomic increment of the class->ops stat counter. By taking it out from the lock_class structure and changing it to a per-cpu per-lock-class counter, we can reduce the amount of cacheline contention on the class structure when multiple CPUs are trying to acquire locks of the same class simultaneously.

To limit the increase in memory consumption because of the percpu nature of that counter, it is now put back under the CONFIG_DEBUG_LOCKDEP config option. So the memory consumption increase will only occur if CONFIG_DEBUG_LOCKDEP is defined. The lock_class structure, however, is reduced in size by 16 bytes on 64-bit archs after ops removal and a minor restructuring of the fields.

This patch also fixes a bug in the increment code as the counter is of the 'unsigned long' type, but atomic_inc() was used to increment it.

Signed-off-by: Waiman Long <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
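A hedged sketch of a per-CPU per-class counter guarded by CONFIG_DEBUG_LOCKDEP; the helper and array names are illustrative:

    #ifdef CONFIG_DEBUG_LOCKDEP
    DEFINE_PER_CPU(unsigned long [MAX_LOCKDEP_KEYS], lock_class_ops);

    static inline void debug_class_ops_inc(struct lock_class *class)
    {
            int idx = class - lock_classes;      /* index into the class table */

            /* plain per-CPU increment: no atomic RmW, no cross-CPU
             * cacheline bouncing on the shared lock_class structure */
            __this_cpu_inc(lock_class_ops[idx]);
    }
    #else
    static inline void debug_class_ops_inc(struct lock_class *class) { }
    #endif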
2018-10-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller, 7 files, -184/+767)

Alexei Starovoitov says:

====================
pull-request: bpf-next 2018-10-08

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) sk_lookup_[tcp|udp] and sk_release helpers from Joe Stringer which allow BPF programs to perform lookups for sockets in a network namespace. This would allow programs to determine early on in processing whether the stack is expecting to receive the packet, and perform some action (eg drop, forward somewhere) based on this information.

2) per-cpu cgroup local storage from Roman Gushchin. Per-cpu cgroup local storage is very similar to simple cgroup storage except all the data is per-cpu. The main goal of per-cpu variant is to implement super fast counters (e.g. packet counters), which don't require neither lookups, neither atomic operations in a fast path. The example of these hybrid counters is in selftests/bpf/netcnt_prog.c

3) allow HW offload of programs with BPF-to-BPF function calls from Quentin Monnet

4) support more than 64-byte key/value in HW offloaded BPF maps from Jakub Kicinski

5) rename of libbpf interfaces from Andrey Ignatov. libbpf is maturing as a library and should follow good practices in library design and implementation to play well with other libraries. This patch set brings consistent naming convention to global symbols.

6) relicense libbpf as LGPL-2.1 OR BSD-2-Clause from Alexei Starovoitov to let Apache2 projects use libbpf

7) various AF_XDP fixes from Björn and Magnus
====================

Signed-off-by: David S. Miller <[email protected]>
2018-10-09  genirq: Fix grammar s/an /a /  (Geert Uytterhoeven, 1 file, -1/+1)

Fix a grammar mistake in <linux/interrupt.h>.

[ mingo: While at it also fix another similar error in another comment as well. ]

Signed-off-by: Geert Uytterhoeven <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/20181008111726.26286-1-geert%[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-10-09  dma-direct: document the zone selection logic  (Christoph Hellwig, 1 file, -1/+8)

What we are doing here isn't quite obvious, so add a comment explaining it.

Signed-off-by: Christoph Hellwig <[email protected]>
2018-10-08  bpf: allow offload of programs with BPF-to-BPF function calls  (Quentin Monnet, 1 file, -7/+3)

Now that there is at least one driver supporting BPF-to-BPF function calls, lift the restriction, in the verifier, on hardware offload of eBPF programs containing such calls. But prevent jit_subprogs(), still in the verifier, from being run for offloaded programs.

Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jiong Wang <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-08  bpf: add verifier callback to get stack usage info for offloaded progs  (Quentin Monnet, 2 files, -0/+21)

In preparation for BPF-to-BPF calls in offloaded programs, add a new function attribute to the struct bpf_prog_offload_ops so that drivers supporting eBPF offload can hook at the end of program verification, and potentially extract information collected by the verifier.

Implement a minimal callback (returning 0) in the drivers providing the structs, namely netdevsim and nfp. This will be useful in the nfp driver, in later commits, to extract the number of subprograms as well as the stack depth for those subprograms.

Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jiong Wang <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-08  dma-debug: Check for drivers mapping invalid addresses in dma_map_single()  (Stephen Boyd, 1 file, -0/+16)

I recently debugged a DMA mapping oops where a driver was trying to map a buffer returned from request_firmware() with dma_map_single(). Memory returned from request_firmware() is mapped into the vmalloc region and this isn't a valid region to map with dma_map_single() per the DMA documentation's "What memory is DMA'able?" section. Unfortunately, we don't really check that in the DMA debugging code, so enabling DMA debugging doesn't help catch this problem.

Let's add a new DMA debug function to check for a vmalloc address or an invalid virtual address and print a warning if this happens. This makes it a little easier to debug these sorts of problems, instead of seeing odd behavior or crashes when drivers attempt to map the vmalloc space for DMA.

Cc: Marek Szyprowski <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
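A hedged sketch of such a check hooked into dma_map_single(); the exact hook name and message wording may differ from the patch:

    void debug_dma_map_single(struct device *dev, const void *addr,
                              unsigned long len)
    {
            if (unlikely(dma_debug_disabled()))
                    return;

            /* lowmem/linear-map addresses are the only valid input here */
            if (!virt_addr_valid(addr))
                    err_printk(dev, NULL,
                               "device driver maps memory from invalid area [addr=%p] [len=%lu]\n",
                               addr, len);

            /* e.g. buffers from request_firmware() land here */
            if (is_vmalloc_addr(addr))
                    err_printk(dev, NULL,
                               "device driver maps memory from vmalloc area [addr=%p] [len=%lu]\n",
                               addr, len);
    }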
2018-10-08  signal: In sigqueueinfo prefer sig not si_signo  (Eric W. Biederman, 1 file, -57/+84)

Andrei Vagin <[email protected]> reported:

> According to the man page, the user should not set si_signo, it has to be set
> by kernel.
>
> $ man 2 rt_sigqueueinfo
>
>   The uinfo argument specifies the data to accompany the signal. This
>   argument is a pointer to a structure of type siginfo_t, described in
>   sigaction(2) (and defined by including <sigaction.h>). The caller
>   should set the following fields in this structure:
>
>   si_code
>       This must be one of the SI_* codes in the Linux kernel source
>       file include/asm-generic/siginfo.h, with the restriction that
>       the code must be negative (i.e., cannot be SI_USER, which is
>       used by the kernel to indicate a signal sent by kill(2)) and
>       cannot (since Linux 2.6.39) be SI_TKILL (which is used by the
>       kernel to indicate a signal sent using tgkill(2)).
>
>   si_pid This should be set to a process ID, typically the process ID of
>       the sender.
>
>   si_uid This should be set to a user ID, typically the real user ID of
>       the sender.
>
>   si_value
>       This field contains the user data to accompany the signal. For
>       more information, see the description of the last (union sigval)
>       argument of sigqueue(3).
>
>   Internally, the kernel sets the si_signo field to the value specified
>   in sig, so that the receiver of the signal can also obtain the signal
>   number via that field.
>
> On Tue, Sep 25, 2018 at 07:19:02PM +0200, Eric W. Biederman wrote:
>>
>> If there is some application that calls sigqueueinfo directly that has
>> a problem with this added sanity check we can revisit this when we see
>> what kind of crazy that application is doing.
>
> I already know two "applications" ;)
>
> https://github.com/torvalds/linux/blob/master/tools/testing/selftests/ptrace/peeksiginfo.c
> https://github.com/checkpoint-restore/criu/blob/master/test/zdtm/static/sigpending.c
>
> Disclaimer: I'm the author of both of them.

Looking at the kernel code the historical behavior has always been to prefer the signal number passed in by the kernel. So sigh. Implement __copy_siginfo_from_user and __copy_siginfo_from_user32 to take that signal number and prefer it. The user of ptrace will still use copy_siginfo_from_user and copy_siginfo_from_user32 as they do not and never have had a signal number there.

Luckily this change has never made it farther than linux-next.

Fixes: e75dc036c445 ("signal: Fail sigqueueinfo if si_signo != sig")
Reported-by: Andrei Vagin <[email protected]>
Tested-by: Andrei Vagin <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-10-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller, 8 files, -29/+118)
2018-10-06  Merge branch 'core/core' into x86/build, to prevent conflicts  (Ingo Molnar, 2 files, -55/+54)

Signed-off-by: Ingo Molnar <[email protected]>
2018-10-06  kexec: Allocate decrypted control pages for kdump if SME is enabled  (Lianbo Jiang, 1 file, -0/+6)

When SME is enabled in the first kernel, it needs to allocate decrypted pages for kdump because when the kdump kernel boots, these pages need to be accessed decrypted in the initial boot stage, before SME is enabled.

[ bp: clean up text. ]

Signed-off-by: Lianbo Jiang <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2018-10-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (Greg Kroah-Hartman, 2 files, -2/+13)

Dave writes:

"Networking fixes:

1) Fix truncation of 32-bit right shift in bpf, from Jann Horn.
2) Fix memory leak in wireless wext compat, from Stefan Seyfried.
3) Use after free in cfg80211's reg_process_hint(), from Yu Zhao.
4) Need to cancel pending work when unbinding in smsc75xx otherwise we oops, also from Yu Zhao.
5) Don't allow enslaving a team device to itself, from Ido Schimmel.
6) Fix backwards compat with older userspace for rtnetlink FDB dumps. From Mauricio Faria.
7) Add validation of tc policy netlink attributes, from David Ahern.
8) Fix RCU locking in rawv6_send_hdrinc(), from Wei Wang."

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (26 commits)
  net: mvpp2: Extract the correct ethtype from the skb for tx csum offload
  ipv6: take rcu lock in rawv6_send_hdrinc()
  net: sched: Add policy validation for tc attributes
  rtnetlink: fix rtnl_fdb_dump() for ndmsg header
  yam: fix a missing-check bug
  net: bpfilter: Fix type cast and pointer warnings
  net: cxgb3_main: fix a missing-check bug
  bpf: 32-bit RSH verification must truncate input before the ALU op
  net: phy: phylink: fix SFP interface autodetection
  be2net: don't flip hw_features when VXLANs are added/deleted
  net/packet: fix packet drop as of virtio gso
  net: dsa: b53: Keep CPU port as tagged in all VLANs
  openvswitch: load NAT helper
  bnxt_en: get the reduced max_irqs by the ones used by RDMA
  bnxt_en: free hwrm resources, if driver probe fails.
  bnxt_en: Fix enables field in HWRM_QUEUE_COS2BW_CFG request
  bnxt_en: Fix VNIC reservations on the PF.
  team: Forbid enslaving team device to itself
  net/usb: cancel pending work when unbinding smsc75xx
  mlxsw: spectrum: Delete RIF when VLAN device is removed
  ...
2018-10-05  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman, 1 file, -7/+4)

Ingo writes:

"perf fixes:

- fix a CPU#0 hot unplug bug and a PCI enumeration bug in the x86 Intel uncore PMU driver
- fix a CPU event enumeration bug in the x86 AMD PMU driver
- fix a perf ring-buffer corruption bug when using tracepoints
- fix a PMU unregister locking bug"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/amd/uncore: Set ThreadMask and SliceMask for L3 Cache perf events
  perf/x86/intel/uncore: Fix PCI BDF address of M3UPI on SKX
  perf/ring_buffer: Prevent concurent ring buffer access
  perf/x86/intel/uncore: Use boot_cpu_data.phys_proc_id instead of hardcorded physical package ID 0
  perf/core: Fix perf_pmu_unregister() locking
2018-10-05  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman, 4 files, -16/+95)

Ingo writes:

"scheduler fixes:

These fixes address a rather involved performance regression between v4.17->v4.19 in the sched/numa auto-balancing code. Since distros really need this fix we accelerated it to sched/urgent for a faster upstream merge.

NUMA scheduling and balancing performance is now largely back to v4.17 levels, without reintroducing the NUMA placement bugs that v4.18 and v4.19 fixed.

Many thanks to Srikar Dronamraju, Mel Gorman and Jirka Hladky, for reporting, testing, re-testing and solving this rather complex set of bugs."

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/numa: Migrate pages to local nodes quicker early in the lifetime of a task
  mm, sched/numa: Remove rate-limiting of automatic NUMA balancing migration
  sched/numa: Avoid task migration for small NUMA improvement
  mm/migrate: Use spin_trylock() while resetting rate limit
  sched/numa: Limit the conditions where scan period is reset
  sched/numa: Reset scan rate whenever task moves across nodes
  sched/numa: Pass destination CPU as a parameter to migrate_task_rq
  sched/numa: Stop multiple tasks from moving to the CPU at the same time
2018-10-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller, 2 files, -2/+13)

Daniel Borkmann says:

====================
pull-request: bpf 2018-10-05

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) Fix to truncate input on ALU operations in 32 bit mode, from Jann.

2) Fixes for cgroup local storage to reject reserved flags on element update and rejection of map allocation with zero-sized value, from Roman.
====================

Signed-off-by: David S. Miller <[email protected]>