path: root/kernel
2021-05-26  libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h  (Florent Revest, 1 file, -1/+0)
These macros are convenient wrappers around the bpf_seq_printf and bpf_snprintf helpers. They are currently provided by bpf_tracing.h, which targets low-level tracing primitives; bpf_helpers.h is a better fit. The __bpf_narg and __bpf_apply macros are needed in both files and are therefore provided twice. __bpf_empty isn't used anywhere and is removed from bpf_tracing.h.
Reported-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Florent Revest <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
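A minimal sketch of using one of the moved macros from a BPF program; the tracepoint, buffer size and format string here are illustrative, not taken from the patch:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	char out[64];

	SEC("tp/syscalls/sys_enter_execve")
	int show_pid(void *ctx)
	{
		long pid = bpf_get_current_pid_tgid() >> 32;

		/* expands to a bpf_snprintf() call with a packed argument array */
		BPF_SNPRINTF(out, sizeof(out), "pid: %ld", pid);
		return 0;
	}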
2021-05-26  module: correctly exit module_kallsyms_on_each_symbol when fn() != 0  (Jon Mediero, 1 file, -1/+2)
Commit 013c1667cf78 ("kallsyms: refactor {,module_}kallsyms_on_each_symbol") replaced the return inside the nested loop with a break, changing the semantics of the function: the break only exits the innermost loop, so the code continues iterating the symbols of the next module instead of exiting.
Fixes: 013c1667cf78 ("kallsyms: refactor {,module_}kallsyms_on_each_symbol")
Reviewed-by: Petr Mladek <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Jon Mediero <[email protected]>
Signed-off-by: Jessica Yu <[email protected]>
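A standalone illustration of the bug class (simplified, not the actual kernel code): with "break", a non-zero fn() only leaves the inner symbol loop and the walk continues with the next module, so the result has to escape both loops.

	static int on_each_symbol(int (*fn)(void *data, int sym), void *data,
				  int nmods, int nsyms)
	{
		int ret = 0;

		for (int m = 0; m < nmods; m++) {
			for (int s = 0; s < nsyms; s++) {
				ret = fn(data, s);
				if (ret != 0)
					goto out; /* was: break, which only exited this loop */
			}
		}
	out:
		return ret;
	}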
2021-05-26  xdp: Extend xdp_redirect_map with broadcast support  (Hangbin Liu, 2 files, -3/+183)
This patch adds two flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS, to extend xdp_redirect_map for broadcast support. With BPF_F_BROADCAST the packet is broadcast to all the interfaces in the map; with BPF_F_EXCLUDE_INGRESS the ingress interface is excluded when broadcasting.

When getting the devices in a dev hash map via dev_map_hash_get_next_key(), there is a possibility that we fall back to the first key when a device was removed. This would duplicate packets on some interfaces, so just walk the whole set of buckets to avoid the issue. For the dev array map, we also walk the whole map to find valid interfaces.

Function bpf_clear_redirect_map() was removed in commit ee75aef23afe ("bpf, xdp: Restructure redirect actions"). Add it back as we need to use ri->map again.

Test topology:

	+-------------------+             +-------------------+
	| Host A (i40e 10G) |  ---------- | eno1(i40e 10G)    |
	+-------------------+             |                   |
	                                  |   Host B          |
	+-------------------+             |                   |
	| Host C (i40e 10G) |  ---------- | eno2(i40e 10G)    |
	+-------------------+             |                   |
	                                  |          +------+ |
	                                  | veth0 -- | Peer | |
	                                  | veth1 -- |      | |
	                                  | veth2 -- |  NS  | |
	                                  |          +------+ |
	                                  +-------------------+

On Host A:
# pktgen/pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -s 64

On Host B (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, 128G memory), use xdp_redirect_map and xdp_redirect_map_multi in samples/bpf for testing. All the veth peers in the NS have an XDP_DROP program loaded. The forward_map max_entries in xdp_redirect_map_multi is modified to 4.

Testing the performance impact on the regular xdp_redirect path with and without the patch (to check the impact of the additional check for broadcast mode):

	5.12 rc4         | redirect_map i40e->i40e | 2.0M | 9.7M
	5.12 rc4         | redirect_map i40e->veth | 1.7M | 11.8M
	5.12 rc4 + patch | redirect_map i40e->i40e | 2.0M | 9.6M
	5.12 rc4 + patch | redirect_map i40e->veth | 1.7M | 11.7M

Testing the performance when cloning packets with the redirect_map_multi test, using a redirect map size of 4, filled with 1-3 devices:

	5.12 rc4 + patch | redirect_map multi i40e->veth (x1) | 1.7M | 11.4M
	5.12 rc4 + patch | redirect_map multi i40e->veth (x2) | 1.1M | 4.3M
	5.12 rc4 + patch | redirect_map multi i40e->veth (x3) | 0.8M | 2.6M

Signed-off-by: Hangbin Liu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Toke Høiland-Jørgensen <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Jesper Dangaard Brouer <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
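A sketch of an XDP program using the new flags; the map definition and sizes are illustrative. With BPF_F_BROADCAST the key argument (0 here) is ignored and the frame goes to every interface in the map, minus the ingress one when BPF_F_EXCLUDE_INGRESS is set:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
		__uint(key_size, sizeof(int));
		__uint(value_size, sizeof(int));
		__uint(max_entries, 4);
	} forward_map SEC(".maps");

	SEC("xdp")
	int xdp_broadcast(struct xdp_md *ctx)
	{
		return bpf_redirect_map(&forward_map, 0,
					BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
	}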
2021-05-26  bpf: Run devmap xdp_prog on flush instead of bulk enqueue  (Jesper Dangaard Brouer, 1 file, -51/+76)
This changes the devmap XDP program support to run the program when the bulk queue is flushed instead of before the frame is enqueued. This has a couple of benefits:

- It "sorts" the packets by destination devmap entry and then runs the same BPF program on all the packets in sequence. This ensures that we keep the XDP program and destination device properties hot in the I-cache.

- It makes the multicast implementation simpler because it can just enqueue packets using bq_enqueue() without having to deal with the devmap program at all.

The drawback is that if the devmap program drops the packet, the enqueue step is redundant. However, arguably this is mostly visible in a micro-benchmark, and with more mixed traffic the I-cache benefit should win out.

The performance impact of just this patch is as follows. Using two 10Gb i40e NICs, redirecting one to another, or into a veth interface whose peer does XDP_DROP. With xdp_redirect_map in samples/bpf, send packets via the pktgen cmd:
./pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -t 10 -s 64

There is about +/- 0.1M deviation for native testing; the performance improved for the base case, but dropped back somewhat with an xdp devmap prog attached:

	Version          | Test                        | Generic | Native | Native + 2nd xdp_prog
	5.12 rc4         | xdp_redirect_map i40e->i40e | 1.9M    | 9.6M   | 8.4M
	5.12 rc4         | xdp_redirect_map i40e->veth | 1.7M    | 11.7M  | 9.8M
	5.12 rc4 + patch | xdp_redirect_map i40e->i40e | 1.9M    | 9.8M   | 8.0M
	5.12 rc4 + patch | xdp_redirect_map i40e->veth | 1.7M    | 12.0M  | 9.4M

When bq_xmit_all() is called from bq_enqueue(), another packet will always be enqueued immediately after, so clearing dev_rx, xdp_prog and flush_node in bq_xmit_all() is redundant. Move the clearing to __dev_flush(), and only check the fields once in bq_enqueue() since they are all modified together.

This change also has the side effect of extending the lifetime of the RCU-protected xdp_prog that lives inside the devmap entries: instead of just living for the duration of the XDP program invocation, the reference now lives all the way until the bq is flushed. This is safe because the bq flush happens at the end of the NAPI poll loop, so everything happens between a local_bh_disable()/local_bh_enable() pair. However, this is by no means obvious from looking at the call sites; in particular, some drivers have an additional rcu_read_lock() around only the XDP program invocation, which only confuses matters further. Cleaning this up will be done in a separate patch series.

Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: Hangbin Liu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Toke Høiland-Jørgensen <[email protected]>
Acked-by: John Fastabend <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-26  sched: Stop PF_NO_SETAFFINITY from being inherited by various init system threads  (Frederic Weisbecker, 1 file, -1/+1)
Commit 00b89fe0197f ("sched: Make the idle task quack like a per-CPU kthread") added PF_KTHREAD | PF_NO_SETAFFINITY to the idle kernel threads. Unfortunately these properties are inherited by the init/0 children through kernel_thread() calls: init/1 and kthreadd. There are several side effects to that:

1) kthreadd's affinity can not be reset anymore from userspace. Also PF_NO_SETAFFINITY propagates to all of kthreadd's children, including the unbound kthreads, so it's no longer possible to overwrite the affinity of any of them. Here is an example of a warning reported by rcutorture:

	WARNING: CPU: 0 PID: 116 at kernel/rcu/tree_nocb.h:1306 rcu_bind_current_to_nocb+0x31/0x40
	Call Trace:
	 rcu_torture_fwd_prog+0x62/0x730
	 kthread+0x122/0x140
	 ret_from_fork+0x22/0x30

2) init/1 does an exec() in the end, which clears both PF_KTHREAD and PF_NO_SETAFFINITY, so we are fine once kernel_init() escapes to userspace. But until then, no initcall or init code can successfully call sched_setaffinity() on init/1. Also PF_KTHREAD looks legit on init/1 before it calls exec(), but we had better be careful about unknown introduced side effects.

One way to solve the PF_NO_SETAFFINITY issue is to not inherit this flag in copy_process() at all (a sketch follows this entry). The cases where it matters are:

* fork_idle(): already sets the flag explicitly.
* fork() syscalls: userspace tasks that shouldn't be concerned by that.
* create_io_thread(): the callers explicitly attribute the flag to the newly created tasks.
* kernel_thread():
  - Fixes the issues on init/1 and kthreadd.
  - Fixes the issues on kthreadd's children.
  - Usermode helpers created by an unbound workqueue: this shouldn't matter. In the worst case it gives userspace more control over setting the affinity of these short-lived tasks, although this can already be tuned with inherited unbound workqueue affinity.

Fixes: 00b89fe0197f ("sched: Make the idle task quack like a per-CPU kthread")
Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Paul E. McKenney <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
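The sketch referenced above: as I read the change, copy_process() simply adds PF_NO_SETAFFINITY to the flags a child never inherits; the exact flag set shown is an assumption, not verified against the tree.

	/* kernel/fork.c, in copy_process(); PF_NO_SETAFFINITY is the addition */
	p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE | PF_NO_SETAFFINITY);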
2021-05-25  bpf: No need to simulate speculative domain for immediates  (Daniel Borkmann, 1 file, -1/+5)
In 801c6058d14a ("bpf: Fix leakage of uninitialized bpf stack under speculation") we replaced masking logic with direct loads of immediates if the register is a known constant. Given we do not apply any masking in this case, there is also no reason for the operation to be truncated under the speculative domain. Therefore, there is zero reason for the verifier to branch off and simulate this case; it only needs to do so for unknown but bounded scalars. As a side effect, this also enables a few test cases that were previously rejected due to simulation under zero truncation.
Signed-off-by: Daniel Borkmann <[email protected]>
Reviewed-by: Piotr Krysiuk <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
2021-05-25  bpf: Fix mask direction swap upon off reg sign change  (Daniel Borkmann, 1 file, -10/+12)
The masking direction as indicated via mask_to_left is considered to be calculated once and then used to derive pointer limits. Thus, it needs to be placed into bpf_sanitize_info instead, so we can pass it to the sanitize_ptr_alu() call after the pointer move. Piotr noticed a corner case where the off reg causes a masking direction change which then results in an incorrect final aux->alu_limit.
Fixes: 7fedb63a8307 ("bpf: Tighten speculative pointer arithmetic mask")
Reported-by: Piotr Krysiuk <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Reviewed-by: Piotr Krysiuk <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
2021-05-25  bpf: Wrap aux data inside bpf_sanitize_info container  (Daniel Borkmann, 1 file, -7/+11)
Add a container structure struct bpf_sanitize_info which holds the current aux info, and update call sites of sanitize_ptr_alu() to pass it in. This is needed for passing in additional state later on.
Signed-off-by: Daniel Borkmann <[email protected]>
Reviewed-by: Piotr Krysiuk <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
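A sketch of the container as described here and extended by the mask-direction fix further up this log; the field set is inferred from the two changelogs, not verified against the tree:

	struct bpf_sanitize_info {
		struct bpf_insn_aux_data aux;
		bool mask_to_left;	/* added by the follow-up fix above */
	};

	/* call sites then pass &info to sanitize_ptr_alu() instead of bare aux */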
2021-05-25  bpf: Fix BPF_LSM kconfig symbol dependency  (Daniel Borkmann, 1 file, -0/+2)
Similarly to 6bdacdb48e94 ("bpf: Fix BPF_JIT kconfig symbol dependency"), we need to untangle the hard BPF_LSM dependency on NET. This was previously implicit via its dependency on BPF_JIT, which itself was dependent on NET (but without any actual/real hard code-wise dependency). Given the latter was lifted, so should be the former, as BPF_LSM could well exist on net-less systems. This therefore also fixes a randconfig build error recently reported by Randy:

	ld: kernel/bpf/bpf_lsm.o: in function `bpf_lsm_func_proto':
	bpf_lsm.c:(.text+0x1a0): undefined reference to `bpf_sk_storage_get_proto'
	ld: bpf_lsm.c:(.text+0x1b8): undefined reference to `bpf_sk_storage_delete_proto'
	[...]

Fixes: b24abcff918a ("bpf, kconfig: Add consolidated menu entry for bpf with core options")
Reported-by: Randy Dunlap <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Tested-by: Randy Dunlap <[email protected]>
2021-05-25  futex: Deduplicate cond_resched() invocation in futex_wake_op()  (Pavel Begunkov, 1 file, -5/+2)
After pagefaulting in futex_wake_op(), both branches do cond_resched() before retrying. Deduplicate it, as compilers cannot figure it out themselves.
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Link: https://lore.kernel.org/r/9b2588c1fd33c91fb01c4e348a3b647ab2c8baab.1621258128.git.asml.silence@gmail.com
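The deduplication in sketch form (simplified control flow, not verbatim futex.c):

	/* before: both retry branches after the page fault resched */
	if (!(flags & FLAGS_SHARED)) {
		cond_resched();
		goto retry_private;
	}
	cond_resched();
	goto retry;

	/* after: hoist the common call out of the branch */
	cond_resched();
	if (!(flags & FLAGS_SHARED))
		goto retry_private;
	goto retry;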
2021-05-25  cpu/hotplug: Simplify access to percpu cpuhp_state  (Yuan ZhaoXiong, 1 file, -2/+2)
It is unnecessary to invoke per_cpu_ptr() every time to access cpuhp_state. Use the already available pointer instead.
Signed-off-by: Yuan ZhaoXiong <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
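The pattern in miniature (illustrative field name, not the actual cpuhp code):

	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

	/* before: per_cpu_ptr(&cpuhp_state, cpu)->some_field = val; */
	st->some_field = val;	/* after: reuse the pointer already in hand */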
2021-05-24  bpf: Fix spelling mistakes  (Zhen Lei, 7 files, -16/+16)
Fix some spelling mistakes in comments:
aother ==> another
Netiher ==> Neither
desribe ==> describe
intializing ==> initializing
funciton ==> function
wont ==> won't (and move the word 'the' at the end to the next line)
accross ==> across
pathes ==> paths
triggerred ==> triggered
excute ==> execute
ether ==> either
conervative ==> conservative
convetion ==> convention
markes ==> marks
interpeter ==> interpreter
Signed-off-by: Zhen Lei <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-24  bpf: Add lookup_and_delete_elem support to hashtab  (Denis Salopek, 2 files, -4/+128)
Extend the existing bpf_map_lookup_and_delete_elem() functionality to hashtab map types, in addition to stacks and queues. Create a new hashtab bpf_map_ops function that does lookup and deletion of the element under the same bucket lock, and add the created map_ops to bpf.h.
Signed-off-by: Denis Salopek <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/4d18480a3e990ffbf14751ddef0325eed3be2966.1620763117.git.denis.salopek@sartura.hr
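A userspace sketch with libbpf: after this change the call below also works for hash maps, not just queues and stacks; the key/value types are illustrative and error handling is trimmed.

	#include <stdio.h>
	#include <bpf/bpf.h>

	void drain_one(int map_fd)
	{
		long key = 42, value;

		/* lookup and delete happen under the same bucket lock */
		if (bpf_map_lookup_and_delete_elem(map_fd, &key, &value) == 0)
			printf("drained: %ld\n", value);
	}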
2021-05-24  Merge branch 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup  (Linus Torvalds, 5 files, -16/+13)
Pull cgroup fixes from Tejun Heo:
 - The "cgroup_disable=" boot param was being applied too late, confusing some subsystems. Fix it by moving application to __setup() time.
 - Comment spelling fixes. Included here to lower the chance of trivial future merge conflicts.
* 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cgroup: fix spelling mistakes
  cgroup: disable controllers at parse time
2021-05-24  Merge branch 'for-5.13-fixes' into for-5.14  (Tejun Heo, 10 files, -24/+60)
2021-05-24  Merge branch 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  (Linus Torvalds, 1 file, -2/+10)
Pull workqueue fix from Tejun Heo:
 "One commit to fix spurious workqueue stall warnings across VM suspensions"
* 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  wq: handle VM suspension in stall detection
2021-05-24  cgroup: fix spelling mistakes  (Zhen Lei, 5 files, -8/+8)
Fix some spelling mistakes in comments:
hierarhcy ==> hierarchy
automtically ==> automatically
overriden ==> overridden
In absense of .. or ==> In absence of .. and
assocaited ==> associated
taget ==> target
initate ==> initiate
succeded ==> succeeded
curremt ==> current
udpated ==> updated
Signed-off-by: Zhen Lei <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
2021-05-24  PM: hibernate: fix spelling mistakes  (Zhen Lei, 2 files, -5/+5)
Fix some spelling mistakes in comments:
corresonds ==> corresponds
alocated ==> allocated
unitialized ==> uninitialized
Deompression ==> Decompression
Signed-off-by: Zhen Lei <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
2021-05-24  block_dump: remove block_dump feature  (zhangyi (F), 1 file, -8/+0)
We have already deleted the block_dump feature in mark_inode_dirty() because it can be replaced by tracepoints; now we also remove the part in submit_bio() for the same reason. The block_dump code in submit_bio() dumps the write process, write region and sectors on the target disk into the kernel message log. It can be replaced by the block_bio_queue tracepoint in submit_bio_checks(), so we do not need block_dump anymore; remove the whole block_dump feature.
Signed-off-by: zhangyi (F) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
2021-05-24  Merge tag 'irq-export-set-affinity' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into for-next/perf  (Will Deacon, 1 file, -1/+32)
Export irq_set_affinity() for cleaning up drivers/perf. Pull the export of irq_set_affinity() from Thomas Gleixner, so we can convert all new and existing Arm PMU drivers to the new interface.
* tag 'irq-export-set-affinity' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Export affinity setter for modules
2021-05-23  Merge tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 5 files, -13/+19)
Pull locking fixes from Thomas Gleixner:
 "Two locking fixes:
  - Invoke the lockdep tracepoints in the correct place so the ordering is correct again
  - Don't leave the mutex WAITER bit stale when the last waiter is dropping out early due to a signal, as that forces all subsequent lock operations needlessly into the slowpath until it's cleaned up again"
* tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
  locking/lockdep: Correct calling tracepoints
2021-05-23  kprobes: Allow architectures to override optinsn page allocation  (Christophe Leroy, 1 file, -2/+12)
Some architectures, like powerpc, require a non-standard allocation of the optinsn page, because module pages are too far from the kernel for direct branches. Define weak alloc_optinsn_page() and free_optinsn_page() that fall back on alloc_insn_page() and free_insn_page() when not overridden by the architecture.
Suggested-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Naveen N. Rao <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/40a43d6df1fdf41ade36e9a46e60a4df774ca9f6.1620896780.git.christophe.leroy@csgroup.eu
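The fallback pair as described (sketch; modulo exact placement in kernel/kprobes.c): weak definitions that reuse the regular insn page allocator unless an architecture overrides them.

	void __weak *alloc_optinsn_page(void)
	{
		return alloc_insn_page();
	}

	void __weak free_optinsn_page(void *page)
	{
		free_insn_page(page);
	}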
2021-05-22  watchdog: reliable handling of timestamps  (Petr Mladek, 1 file, -14/+20)
Commit 9bf3bc949f8a ("watchdog: cleanup handling of false positives") tried to handle a virtual host stopped by the host in a more straightforward and cleaner way. But it introduced a risk of false softlockup reports. The virtual host might be stopped at any time, for example between kvm_check_and_clear_guest_paused() and is_softlockup(). As a result, is_softlockup() might read the updated jiffies and detect a softlockup.

A solution might be to put kvm_check_and_clear_guest_paused() back after is_softlockup() and detect it there. But that would bring back the cycle that complicates the logic. In fact, the handling of all the timestamps is not reliable. The code does not guarantee when and how many times the timestamps are read. For example, "period_ts" might be touched at any time, also from NMI, and re-read in is_softlockup(). It works just by chance.

Fix all the problems by making the code even more explicit:

1. Make sure that the "now" and "period_ts" timestamps are read only once. They might be changed at any time by NMI or when the virtual guest is stopped by the host. Note that the "now" timestamp does this implicitly, because "jiffies" is marked volatile.

2. The "now" time must be read first. The state of "period_ts" will decide whether it will be used or the period will get restarted.

3. kvm_check_and_clear_guest_paused() must be called before reading "period_ts". It touches the variable when the guest was stopped.

As a result, the "now" timestamp is used only when the watchdog was not touched and the guest was not stopped in the meantime. "period_ts" is restarted in all other situations.

Link: https://lkml.kernel.org/r/YKT55gw+RZfyoFf7@alley
Fixes: 9bf3bc949f8a ("watchdog: cleanup handling of false positives")
Signed-off-by: Petr Mladek <[email protected]>
Reported-by: Sergey Senozhatsky <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
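A sketch of the resulting ordering; names approximate kernel/watchdog.c and are not verified:

	static bool is_softlockup_sketch(void)
	{
		unsigned long now = jiffies;	/* read "now" first, exactly once */
		unsigned long period_ts;

		/* clear a guest-stop before reading the period timestamp */
		kvm_check_and_clear_guest_paused();

		/* read only once; NMI may touch the variable at any time */
		period_ts = READ_ONCE(*this_cpu_ptr(&watchdog_report_ts));

		return time_after(now, period_ts + softlockup_thresh);
	}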
2021-05-21  Merge branch 'for-v5.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace  (Linus Torvalds, 2 files, -34/+36)
Pull siginfo fix from Eric Biederman:
 "During the merge window an issue with si_perf and the siginfo ABI came up. The alpha and sparc siginfo structure layout had changed with the addition of SIGTRAP TRAP_PERF and the new field si_perf. The reason only alpha and sparc were affected is that they are the only architectures that use si_trapno. Looking deeper it was discovered that si_trapno is used for only a few select signals on alpha and sparc, and that none of the other _sigfault fields past si_addr are used at all, which means technically no regression on alpha and sparc.

 While the alignment concerns might be dismissed, the abuse of si_errno by SIGTRAP TRAP_PERF does have the potential to cause regressions in existing userspace. While we still have time before userspace starts using and depending on the new definition of siginfo for SIGTRAP TRAP_PERF, this set of changes cleans up siginfo_t:

 - The si_trapno field is demoted from magic alpha and sparc status and made an ordinary union member of the _sigfault member of siginfo_t, without moving it of course.
 - si_perf is replaced with si_perf_data and si_perf_type, ending the abuse of si_errno.
 - Unnecessary additions to signalfd_siginfo are removed"
* 'for-v5.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  signalfd: Remove SIL_PERF_EVENT fields from signalfd_siginfo
  signal: Deliver all of the siginfo perf data in _perf
  signal: Factor force_sig_perf out of perf_sigtrap
  signal: Implement SIL_FAULT_TRAPNO
  siginfo: Move si_trapno inside the union inside _si_fault
2021-05-21  Merge tag 'modules-for-v5.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux  (Linus Torvalds, 1 file, -6/+11)
Pull module fix from Jessica Yu:
 "When CONFIG_MODULE_UNLOAD=n, module exit sections get sorted into the init region of the module in order to satisfy the requirements of jump_labels and static_calls. Previously, the exit section check was done in module_init_section(), but the solution there is not completely arch-independent, as ARM is a special case and supplies its own module_init_section() function. Instead of pushing this logic further into the arch-specific code, switch to an arch-independent solution to check for module exit sections in the core module loader code, in layout_sections() instead"
* tag 'modules-for-v5.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux:
  module: check for exit sections in layout_sections() instead of module_init_section()
2021-05-20  Merge branch 'urgent.2021.05.20a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu  (Linus Torvalds, 1 file, -1/+2)
Pull kcsan fix from Paul McKenney:
 "Fix for a regression introduced in this merge window by commit e36299efe7d7 ("kcsan, debugfs: Move debugfs file creation out of early init"). The regression is not easy to trigger, requiring a KCSAN build using clang with CONFIG_LTO_CLANG=y. The fix is to simply make the kcsan_debugfs_init() function's type initcall-compatible. This has been posted to the relevant mailing lists."
* 'urgent.2021.05.20a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  kcsan: Fix debugfs initcall return type
2021-05-20  bpf, offload: Reorder offload callback 'prepare' in verifier  (Yinjun Zhang, 1 file, -6/+6)
Commit 4976b718c355 ("bpf: Introduce pseudo_btf_id") switched the order of resolve_pseudo_ldimm(), in which some pseudo instructions are rewritten. Thus those rewritten instructions cannot be passed to the driver via the 'prepare' offload callback. Reorder the 'prepare' offload callback to fix it.
Fixes: 4976b718c355 ("bpf: Introduce pseudo_btf_id")
Signed-off-by: Yinjun Zhang <[email protected]>
Signed-off-by: Simon Horman <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-20  bpf: Avoid using ARRAY_SIZE on an uninitialized pointer  (Florent Revest, 1 file, -2/+3)
The cppcheck static code analysis reported the following error:

	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
	                                         ^

ARRAY_SIZE is a macro that expands to a sizeof expression, so bufs is not actually dereferenced at runtime, and the code is actually safe. But to keep things tidy, this patch removes the need for a call to ARRAY_SIZE by extracting the size of the array into a macro. Cppcheck should no longer be confused and the code ends up being a bit cleaner.
Fixes: e2d5b2bb769f ("bpf: Fix nested bpf_bprintf_prepare with more per-cpu buffers")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Florent Revest <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
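The shape of the cleanup; the macro and struct names follow the pattern the changelog describes and should be treated as assumptions, not the exact identifiers:

	#define MAX_BPRINTF_NEST_LEVEL 3	/* assumed name */

	struct bpf_bprintf_buffers {
		char tmp_bufs[MAX_BPRINTF_NEST_LEVEL][MAX_BPRINTF_BUF_LEN];
	};

	/* the check no longer needs ARRAY_SIZE on the uninitialized bufs */
	if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL))
		return -EBUSY;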
2021-05-20  bpf: Clarify a bpf_bprintf_prepare macro  (Florent Revest, 1 file, -4/+5)
The per-cpu buffers contain bprintf data rather than printf arguments. The macro name and comment were a bit confusing; this rewords them in a clearer way.
Signed-off-by: Florent Revest <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-20  bpf: Fix BPF_JIT kconfig symbol dependency  (Daniel Borkmann, 1 file, -0/+1)
Randy reported a randconfig build error recently on i386:

	ld: arch/x86/net/bpf_jit_comp32.o: in function `do_jit':
	bpf_jit_comp32.c:(.text+0x28c9): undefined reference to `__bpf_call_base'
	ld: arch/x86/net/bpf_jit_comp32.o: in function `bpf_int_jit_compile':
	bpf_jit_comp32.c:(.text+0x3694): undefined reference to `bpf_jit_blind_constants'
	ld: bpf_jit_comp32.c:(.text+0x3719): undefined reference to `bpf_jit_binary_free'
	ld: bpf_jit_comp32.c:(.text+0x3745): undefined reference to `bpf_jit_binary_alloc'
	ld: bpf_jit_comp32.c:(.text+0x37d3): undefined reference to `bpf_jit_prog_release_other'
	[...]

The cause was that b24abcff918a ("bpf, kconfig: Add consolidated menu entry for bpf with core options") moved BPF_JIT from net/Kconfig into kernel/bpf/Kconfig, and previously BPF_JIT was guarded by an 'if NET'. However, there is no actual dependency on NET; it's just that menuconfig NET selects BPF, and the latter in turn causes kernel/bpf/core.o to be built, which contains the above symbols. Randy's randconfig didn't have NET set, and BPF wasn't either, but BPF_JIT on the other hand was. Untangle this by making BPF_JIT depend on BPF instead. arm64 was the only arch that pulled in its JIT in net/ via obj-$(CONFIG_NET); all others unconditionally pull this dir in via obj-y. Do the same, since the CONFIG_NET guard there is really useless as we compile the JIT via obj-$(CONFIG_BPF_JIT) += bpf_jit_comp.o anyway.
Fixes: b24abcff918a ("bpf, kconfig: Add consolidated menu entry for bpf with core options")
Reported-by: Randy Dunlap <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Tested-by: Randy Dunlap <[email protected]>
2021-05-20  wq: handle VM suspension in stall detection  (Sergey Senozhatsky, 1 file, -2/+10)
If a VCPU is suspended (VM suspend) in wq_watchdog_timer_fn(), then once this VCPU resumes it will see the new jiffies value, while it may take a while before IRQ detects PVCLOCK_GUEST_STOPPED on this VCPU and updates all the watchdogs via pvclock_touch_watchdogs(). There is a small chance of misreported WQ stalls in the meantime, because the new jiffies is time_after() the old 'ts + thresh':

	wq_watchdog_timer_fn()
	{
		for_each_pool(pool, pi) {
			if (time_after(jiffies, ts + thresh)) {
				pr_emerg("BUG: workqueue lockup - pool");
			}
		}
	}

Save jiffies at the beginning of this function and use that value for stall detection. If the VM gets suspended, then we continue using the "old" jiffies value and old WQ touch timestamps. If the IRQ at some point restarts the stall detection cycle (pvclock_touch_watchdogs()), then the old jiffies will always be before the new 'ts + thresh'.
Signed-off-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
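The fix in sketch form (fragments, simplified):

	/* before: each pool check re-reads the volatile jiffies */
	if (time_after(jiffies, ts + thresh)) ...

	/* after: compare every pool against one snapshot from function entry */
	unsigned long now = jiffies;
	...
	if (time_after(now, ts + thresh)) ...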
2021-05-20  cgroup: disable controllers at parse time  (Shakeel Butt, 1 file, -8/+5)
This patch effectively reverts commit a3e72739b7a7 ("cgroup: fix too early usage of static_branch_disable()"). Commit 6041186a3258 ("init: initialize jump labels before command line option parsing") moved jump_label_init() before parse_args(), which has made commit a3e72739b7a7 unnecessary. On the other hand, there are consequences of disabling the controllers later, as there are subsystems doing controller checks for different decisions. One such incident, regarding the memory controller and its impact on memory reclaim code, is reported in [1].
[1] https://lore.kernel.org/linux-mm/[email protected]
Signed-off-by: Shakeel Butt <[email protected]>
Reported-by: NOMURA JUNICHI(野村 淳一) <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Tested-by: Jun'ichi Nomura <[email protected]>
2021-05-19  bpf: Make some symbols static  (Pu Lehui, 1 file, -2/+2)
The sparse tool complains as follows:

	kernel/bpf/syscall.c:4567:29: warning: symbol 'bpf_sys_bpf_proto' was not declared. Should it be static?
	kernel/bpf/syscall.c:4592:29: warning: symbol 'bpf_sys_close_proto' was not declared. Should it be static?

These symbols are not used outside of syscall.c, so mark them static.
Signed-off-by: Pu Lehui <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-19  sched: Fix a stale comment in pick_next_task()  (Masahiro Yamada, 1 file, -1/+1)
fair_sched_class->next no longer exists since commit a87e749e8fa1 ("sched: Remove struct sched_class::next field"). Now the sched_class order is specified by the linker script. Rewrite the comment in a more generic way.
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-05-19  Merge branch 'irq/affinity' into irq/core  (Thomas Gleixner, 1 file, -1/+32)
Merge the export of irq_set_affinity(), which is a standalone commit so it can be pulled into other trees.
2021-05-19  genirq: Export affinity setter for modules  (Thomas Gleixner, 1 file, -1/+32)
Perf modules abuse irq_set_affinity_hint() to set the affinity of system PMU interrupts just because irq_set_affinity() was not exported. The fact that irq_set_affinity_hint() actually sets the affinity is a non-documented side effect, and the name clearly says it's a hint. To clean this up, export the real affinity setter.
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
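What a PMU driver module can now do directly (sketch; the function name is illustrative and the surrounding driver glue is omitted):

	#include <linux/interrupt.h>
	#include <linux/cpumask.h>

	static int pmu_bind_irq(unsigned int irq, unsigned int cpu)
	{
		/* the real affinity setter, not the advisory *_hint() variant */
		return irq_set_affinity(irq, cpumask_of(cpu));
	}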
2021-05-19  sched/uclamp: Fix locking around cpu_util_update_eff()  (Qais Yousef, 1 file, -0/+7)
cpu_cgroup_css_online() calls cpu_util_update_eff() without holding the uclamp_mutex or rcu_read_lock() like other call sites, which is a mistake. The uclamp_mutex is required to protect against concurrent reads and writes that could update the cgroup hierarchy. The rcu_read_lock() is required to traverse the cgroup data structures in cpu_util_update_eff(). Surround the caller with the required locks and add some asserts to better document the dependency in cpu_util_update_eff().
Fixes: 7226017ad37a ("sched/uclamp: Fix a bug in propagating uclamp value in new cgroups")
Reported-by: Quentin Perret <[email protected]>
Signed-off-by: Qais Yousef <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
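The locking shape the changelog calls for, as a sketch of the css_online path mirroring the other call sites:

	mutex_lock(&uclamp_mutex);
	rcu_read_lock();
	cpu_util_update_eff(css);
	rcu_read_unlock();
	mutex_unlock(&uclamp_mutex);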
2021-05-19  sched/uclamp: Fix wrong implementation of cpu.uclamp.min  (Qais Yousef, 1 file, -4/+17)
cpu.uclamp.min is a protection, as described in the cgroup-v2 Resource Distribution Model (Documentation/admin-guide/cgroup-v2.rst), which means we try our best to preserve the minimum performance point of tasks in this group. See the full description of cpu.uclamp.min in cgroup-v2.rst. But the current implementation makes it a limit, which is not what was intended. For example:

	tg->cpu.uclamp.min = 20%
	p0->uclamp[UCLAMP_MIN] = 0
	p1->uclamp[UCLAMP_MIN] = 50%

	Previous Behavior (limit):
	p0->effective_uclamp = 0
	p1->effective_uclamp = 20%

	New Behavior (Protection):
	p0->effective_uclamp = 20%
	p1->effective_uclamp = 50%

which is in line with how protections should work. With this change the cgroup and per-task behaviors are the same, as expected.

Additionally, we remove the confusing relationship between cgroup and the !user_defined flag. We don't want, for example, RT tasks that are boosted by default to max to change their boost value when they attach to a cgroup. If a cgroup wants to limit the max performance point of tasks attached to it, then cpu.uclamp.max must be set accordingly. Or if they want to set a different boost value based on cgroup, then sysctl_sched_util_clamp_min_rt_default must be used to NOT boost to max, and the right cpu.uclamp.min set for each group to let the RT tasks obtain the desired boost value when attached to that group. As it stands, the dependency on the !user_defined flag adds an extra layer of complexity that is not required now that cpu.uclamp.min behaves properly as a protection.

The propagation model of effective cpu.uclamp.min in child cgroups as implemented by cpu_util_update_eff() is still correct. The parent protection sets an upper limit on what the child cgroups will effectively get.
Fixes: 3eac870a3247 (sched/uclamp: Use TG's clamps to restrict TASK's clamps)
Signed-off-by: Qais Yousef <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2021-05-19  bpf: Add bpf_sys_close() helper.  (Alexei Starovoitov, 1 file, -0/+19)
Add the bpf_sys_close() helper to be used by the syscall/loader program to close intermediate FDs and do other cleanup. Note this helper must never be allowed inside fdget/fdput bracketing.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-19  bpf: Add bpf_btf_find_by_name_kind() helper.  (Alexei Starovoitov, 2 files, -0/+64)
Add new helper:

	long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags)
		Description
			Find BTF type with given name and kind in vmlinux BTF or in module's BTFs.
		Return
			Returns btf_id and btf_obj_fd in lower and upper 32 bits.

It will be used by the loader program to find the btf_id to attach the program to and to find the btf_ids of ksyms.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
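Decoding the packed return value from a loader program (sketch; the type name and the use of BTF_KIND_STRUCT are illustrative):

	long res = bpf_btf_find_by_name_kind("task_struct", sizeof("task_struct"),
					     BTF_KIND_STRUCT, 0);
	if (res >= 0) {
		int btf_id     = (int)res;		/* lower 32 bits */
		int btf_obj_fd = (int)(res >> 32);	/* upper 32 bits, module BTF fd */
	}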
2021-05-19  bpf: Introduce fd_idx  (Alexei Starovoitov, 2 files, -11/+38)
A typical program loading sequence involves creating bpf maps and applying map FDs into bpf instructions in various places in the bpf program. This job is done by libbpf, which uses compiler-generated ELF relocations to patch certain instructions after maps are created and BTFs are loaded. The goal of fd_idx is to allow bpf instructions to stay immutable after compilation. At load time libbpf would still create maps as usual, but it wouldn't need to patch instructions. It would store map_fds into __u32 fd_array[] and pass that pointer to sys_bpf(BPF_PROG_LOAD).
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
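A hedged userspace-side sketch: instructions reference a map by index and the FDs are supplied separately at load time. The field and flag names (fd_array, BPF_PSEUDO_MAP_IDX) reflect my reading of the patch; treat them as assumptions.

	__u32 fd_array[] = { map_fd };	/* slot 0 */
	union bpf_attr attr = {};	/* insns, license etc. set elsewhere */

	/* an ld_imm64 with src_reg = BPF_PSEUDO_MAP_IDX and imm = 0 refers
	 * to fd_array[0]; the instruction itself never changes after compile */
	attr.fd_array = (__u64)(unsigned long)fd_array;
	syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));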
2021-05-19  bpf: Make btf_load command to be bpfptr_t compatible.  (Alexei Starovoitov, 2 files, -7/+8)
Similar to prog_load, make the btf_load command available to the bpf_prog_type_syscall program.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-19  bpf: Prepare bpf syscall to be used from kernel and user space.  (Alexei Starovoitov, 3 files, -61/+99)
With the help of bpfptr_t, prepare the relevant bpf syscall commands to be usable from both kernel and user space.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2021-05-19  bpf: Introduce bpf_sys_bpf() helper and program type.  (Alexei Starovoitov, 2 files, -0/+61)
Add placeholders for the bpf_sys_bpf() helper and the new program type. Make sure to check that expected_attach_type is zero for future extensibility. Allow tracing helper functions to be used in this program type, since they will only execute from user context via bpf_prog_test_run.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
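A sketch of the new program type's use; 'struct args' and its attr member are user-defined in this illustration, and the program only runs via bpf_prog_test_run as the changelog notes:

	SEC("syscall")
	int bpf_prog_loader(struct args *ctx)
	{
		/* forward a prepared attribute struct to the bpf syscall */
		return bpf_sys_bpf(BPF_PROG_LOAD, &ctx->attr, sizeof(ctx->attr));
	}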
2021-05-18  signal: Deliver all of the siginfo perf data in _perf  (Eric W. Biederman, 1 file, -8/+13)
Don't abuse si_errno; deliver all of the perf data in the _perf member of siginfo_t.

Note: the data field in the perf data structures is a u64, to allow a pointer to be encoded without needing to implement a 32-bit and a 64-bit version of the same structure. There already exist 32-bit and 64-bit versions of siginfo_t, and the 32-bit version can not include a 64-bit member, as it only has 32-bit alignment. So unsigned long is used in siginfo_t instead of a u64, as unsigned long can encode a pointer on all architectures Linux supports.

v1: https://lkml.kernel.org/r/[email protected]
v2: https://lkml.kernel.org/r/[email protected]
v3: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
2021-05-18  signal: Factor force_sig_perf out of perf_sigtrap  (Eric W. Biederman, 2 files, -9/+15)
Separate filling in siginfo for TRAP_PERF from deciding that the signal needs to be sent. There are enough little details that need to be correct when properly filling in siginfo_t that it is easy to make mistakes if filling in the siginfo_t sits in the same function as other logic. So factor out force_sig_perf to reduce the cognitive load on reviewers, maintainers and implementors.
v1: https://lkml.kernel.org/r/[email protected]
v2: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Reviewed-by: Marco Elver <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
2021-05-18  signal: Implement SIL_FAULT_TRAPNO  (Eric W. Biederman, 1 file, -22/+12)
Now that si_trapno is part of the union in _si_fault and available on all architectures, add SIL_FAULT_TRAPNO and update siginfo_layout to return SIL_FAULT_TRAPNO when the code assumes si_trapno is valid. There is room for future changes to reduce when si_trapno is valid, but this is all that is needed to make si_trapno and the other members of the union in _sigfault mutually exclusive. Update the code that uses siginfo_layout to deal with SIL_FAULT_TRAPNO and have the same code ignore si_trapno in all other cases.
v1: https://lkml.kernel.org/r/[email protected]
v2: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
2021-05-18  siginfo: Move si_trapno inside the union inside _si_fault  (Eric W. Biederman, 1 file, -0/+1)
It turns out that Linux uses si_trapno very sparingly, and as such it can be considered extra information for a very narrow selection of signals, rather than information that is present with every fault reported in siginfo. As such, move si_trapno inside the union inside of _si_fault. This results in no change in placement, and makes it easier to extend _si_fault in the future, as it reduces the number of special cases. In particular, with si_trapno included in the union it is no longer a concern that the union must be pointer-aligned on most architectures, because the union follows immediately after si_addr, which is a pointer.

This change results in a difference in siginfo field placement on sparc and alpha for the fields si_addr_lsb, si_lower, si_upper, si_pkey, and si_perf. These architectures do not implement the signals that would use si_addr_lsb, si_lower, si_upper, si_pkey, and si_perf. Further, these architectures have not yet implemented the userspace that would use si_perf. The point of this change is in fact to correct these placement issues before sparc or alpha grow userspace that cares. This change was discussed [1] and the agreement is that it is currently safe.

[1]: https://lkml.kernel.org/r/CAK8P3a0+uKYwL1NhY6Hvtieghba2hKYGD6hcKx5n8=4Gtt+pHA@mail.gmail.com
Acked-by: Marco Elver <[email protected]>
v1: https://lkml.kernel.org/r/[email protected]
v2: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: "Eric W. Biederman" <[email protected]>
2021-05-18  kcsan: Report observed value changes  (Mark Rutland, 3 files, -9/+33)
When a thread detects that a memory location was modified without its watchpoint being hit, the report notes that a change was detected, but does not provide concrete values for the change. Knowing the concrete values can be very helpful in tracking down any racy writers (e.g. as specific values may only be written in some portions of code, or under certain conditions). When we detect a modification, let's report the concrete old/new values, along with the access's mask of relevant bits (and which relevant bits were modified). This can make it easier to identify potential racy writers. As the snapshots are at most 8 bytes, we can only report values for accesses up to this size, but this appears to cater for the common case.

When we detect a race via a watchpoint, we may or may not have concrete values for the modification. To be helpful, let's attempt to log them when we do, as they can be ignored where irrelevant.

The resulting report appears as follows, with values zero-padded to the access width:

	| ==================================================================
	| BUG: KCSAN: data-race in el0_svc_common+0x34/0x25c arch/arm64/kernel/syscall.c:96
	|
	| race at unknown origin, with read to 0xffff00007ae6aa00 of 8 bytes by task 223 on cpu 1:
	|  el0_svc_common+0x34/0x25c arch/arm64/kernel/syscall.c:96
	|  do_el0_svc+0x48/0xec arch/arm64/kernel/syscall.c:178
	|  el0_svc arch/arm64/kernel/entry-common.c:226 [inline]
	|  el0_sync_handler+0x1a4/0x390 arch/arm64/kernel/entry-common.c:236
	|  el0_sync+0x140/0x180 arch/arm64/kernel/entry.S:674
	|
	| value changed: 0x0000000000000000 -> 0x0000000000000002
	|
	| Reported by Kernel Concurrency Sanitizer on:
	| CPU: 1 PID: 223 Comm: syz-executor.1 Not tainted 5.8.0-rc3-00094-ga73f923ecc8e-dirty #3
	| Hardware name: linux,dummy-virt (DT)
	| ==================================================================

If an access mask is set, it is shown underneath the "value changed" line as "bits changed: 0x<bits changed> with mask 0x<non-zero mask>".
Signed-off-by: Mark Rutland <[email protected]>
[ [email protected]: align "value changed" and "bits changed" lines, which required massaging the message; do not print bits+mask if no mask set. ]
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
2021-05-18  kcsan: Remove kcsan_report_type  (Mark Rutland, 2 files, -42/+20)
Now that the reporting code has been refactored, it's clear by construction that print_report() can only be passed KCSAN_REPORT_RACE_SIGNAL or KCSAN_REPORT_RACE_UNKNOWN_ORIGIN, and these can also be distinguished by the presence of `other_info`. Let's simplify things and remove the report type enum, and instead check `other_info` to distinguish these cases. This allows us to remove code for cases which are impossible and generally makes the code simpler. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
[ [email protected]: add updated comments to kcsan_report_*() functions ]
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>