Age | Commit message | Author | Files | Lines
2024-07-11 | netfilter: nf_tables: prefer nft_chain_validate | Florian Westphal | 1 file | -145/+13

nft_chain_validate already performs loop detection because a cycle will
result in a call stack overflow (ctx->level >= NFT_JUMP_STACK_SIZE).

It also follows maps via the ->validate callback in nft_lookup, so there
appears to be no reason to iterate the maps again.

nf_tables_check_loops() and all its helper functions can be removed.
This improves ruleset load time significantly, from 23s down to 12s.

This also fixes a crash bug. The old loop detection code can result in
unbounded recursion:

    BUG: TASK stack guard page was hit at ....
    Oops: stack guard page: 0000 [#1] PREEMPT SMP KASAN
    CPU: 4 PID: 1539 Comm: nft Not tainted 6.10.0-rc5+ #1
    [..]

with a suitable ruleset during validation of register stores.

I can't see any actual reason to attempt to check for this from
nft_validate_register_store(); at this point the transaction is still
in progress, so we don't have a full picture of the rule graph.

For nf-next it might make sense to either remove it or make this depend
on table->validate_state in case we could catch an error earlier (for
improved error reporting to userspace).

Fixes: 20a69341f2d0 ("netfilter: nf_tables: add netlink set API")
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>

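For illustration, a minimal sketch of depth-bounded validation along the
lines described above (the types and the -EMLINK choice are assumptions
for the sketch, not the kernel's exact code). A ruleset cycle simply
keeps deepening the walk until the bound trips, so no separate cycle
tracking is needed:

    #include <errno.h>

    #define NFT_JUMP_STACK_SIZE 16

    struct chain;

    struct rule {
        const struct chain *jump_target;  /* NULL if the rule doesn't jump */
    };

    struct chain {
        const struct rule *rules;
        unsigned int nrules;
    };

    struct nft_ctx {
        unsigned int level;               /* current jump depth */
    };

    static int chain_validate(struct nft_ctx *ctx, const struct chain *chain)
    {
        unsigned int i;
        int err;

        /* a cycle manifests as ever-growing depth, caught here */
        if (ctx->level >= NFT_JUMP_STACK_SIZE)
            return -EMLINK;

        for (i = 0; i < chain->nrules; i++) {
            const struct rule *rule = &chain->rules[i];

            if (!rule->jump_target)
                continue;

            ctx->level++;
            err = chain_validate(ctx, rule->jump_target);
            ctx->level--;
            if (err)
                return err;
        }

        return 0;
    }
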
2024-07-11 | netfilter: nfnetlink_queue: drop bogus WARN_ON | Florian Westphal | 1 file | -1/+1

Happens when rules get flushed/deleted while a packet is out, so remove
this WARN_ON.

This WARN has existed in one form or another since v4.14; there is no
need to backport this to older releases, hence use a more recent fixes
tag.

Fixes: 3f8019688894 ("netfilter: move nf_reinject into nfnetlink_queue modules")
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-lkp/[email protected]
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>

2024-07-11 | ethtool: netlink: do not return SQI value if link is down | Oleksij Rempel | 1 file | -13/+28

Do not attach the SQI value if the link is down: "SQI values are only
valid if link-up condition is present" per the OpenAlliance
specification of the 100Base-T1 Interoperability Test suite [1]. The
same rule would apply for other link types.

[1] https://opensig.org/automotive-ethernet-specifications/#

Fixes: 806602191592 ("ethtool: provide UAPI for PHY Signal Quality Index (SQI)")
Signed-off-by: Oleksij Rempel <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Reviewed-by: Woojung Huh <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Paolo Abeni <[email protected]>

2024-07-11 | ppp: reject claimed-as-LCP but actually malformed packets | Dmitry Antipov | 1 file | -0/+15

Since 'ppp_async_encode()' assumes valid LCP packets (with code from 1
to 7 inclusive), add 'ppp_check_packet()' to ensure that an LCP packet
has an actual body beyond the PPP_LCP header bytes, and reject
claimed-as-LCP but actually malformed data otherwise.

Reported-by: [email protected]
Closes: https://syzkaller.appspot.com/bug?extid=ec0723ba9605678b14bf
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Dmitry Antipov <[email protected]>
Signed-off-by: Paolo Abeni <[email protected]>

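A hedged sketch of that validation (the buffer layout, constants, and
exact bounds here are assumptions for illustration, not the actual
patch):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PPP_LCP    0xc021
    #define LCP_HDRLEN 4          /* code (1) + id (1) + length (2) */

    static bool ppp_check_packet(const uint8_t *data, size_t count)
    {
        /* need at least the 2-byte protocol field */
        if (count < 2)
            return false;

        if ((((uint16_t)data[0] << 8) | data[1]) != PPP_LCP)
            return true;          /* only LCP frames are constrained here */

        /* LCP codes 1..7 must carry a full code/id/length body */
        if (count >= 3 && data[2] >= 1 && data[2] <= 7) {
            uint16_t len;

            if (count < 2 + LCP_HDRLEN)
                return false;

            len = ((uint16_t)data[4] << 8) | data[5];
            if (len < LCP_HDRLEN || len > count - 2)
                return false;
        }

        return true;
    }
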
2024-07-11 | selftests/bpf: Add timer lockup selftest | Kumar Kartikeya Dwivedi | 2 files | -0/+178

Add a selftest that tries to trigger a situation where two timer
callbacks are attempting to cancel each other's timer. By running them
continuously, we hit a condition where both run in parallel and cancel
each other. Without the fix in the previous patch, this would cause a
lockup as hrtimer_cancel on either side will wait for forward progress
from the callback. Ensure that this situation leads to an EDEADLK
error.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2024-07-11 | net: ethernet: mtk-star-emac: set mac_managed_pm when probing | Jian Hui Lee | 1 file | -0/+7

The commit below introduced a warning message when the PHY state is not
one of PHY_HALTED, PHY_READY, or PHY_UP:

    commit 744d23c71af3 ("net: phy: Warn about incorrect mdio_bus_phy_resume() state")

mtk-star-emac doesn't need mdiobus suspend/resume. To fix the warning
message during resume, indicate that PHY resume/suspend is managed by
the MAC when probing.

Fixes: 744d23c71af3 ("net: phy: Warn about incorrect mdio_bus_phy_resume() state")
Signed-off-by: Jian Hui Lee <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Paolo Abeni <[email protected]>

2024-07-10 | Merge branch 'ice-support-to-dump-phy-config-fec' | Jakub Kicinski | 7 files | -15/+662

Tony Nguyen says:

====================
ice: Support to dump PHY config, FEC

Anil Samal says:

Implementation to dump PHY configuration and FEC statistics to
facilitate link-level debugging of customer issues. The implementation
has two parts.

a. Serdes equalization

   # ethtool -d eth0
   Output:
   Offset          Values
   ------          ------
   0x0000:         00 00 00 00 03 00 00 00 05 00 00 00 01 08 00 40
   0x0010:         01 00 00 40 00 00 39 3c 01 00 00 00 00 00 00 00
   0x0020:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   ...
   0x01f0:         01 00 00 00 ef be ad de 8f 00 00 00 00 00 00 00
   0x0200:         00 00 00 00 ef be ad de 00 00 00 00 00 00 00 00
   0x0210:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x0220:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x0230:         00 00 00 00 00 00 00 00 00 00 00 00 fa ff 00 00
   0x0240:         06 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
   0x0250:         0f b0 0f b0 00 00 00 00 00 00 00 00 00 00 00 00
   0x0260:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x0270:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x0280:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x0290:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x02a0:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x02b0:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x02c0:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x02d0:         00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
   0x02e0:         00 00 00 00 00 00 00 00 00 00 00 00

   The current implementation appends 176 bytes, i.e. 44 bytes * 4
   serdes lanes. For a port with 2 serdes lanes, the first 88 bytes are
   valid values and the remaining 88 bytes are filled with zeros.
   Similarly, for a port with 1 serdes lane, the first 44 bytes are
   valid and the remaining 132 bytes are marked zero.

   Each set of serdes equalizer parameters (i.e. each set of 44 bytes)
   follows the order below:

   a. rx_equalization_pre2
   b. rx_equalization_pre1
   c. rx_equalization_post1
   d. rx_equalization_bflf
   e. rx_equalization_bfhf
   f. rx_equalization_drate
   g. tx_equalization_pre1
   h. tx_equalization_pre3
   i. tx_equalization_atten
   j. tx_equalization_post1
   k. tx_equalization_pre2

   Each individual equalizer parameter is 4 bytes. As ethtool prints
   values as individual bytes, on a little-endian machine these values
   will be in reverse byte order.

b. FEC block counts

   # ethtool -I --show-fec eth0
   Output:
   FEC parameters for eth0:
   Supported/Configured FEC encodings: Auto RS BaseR
   Active FEC encoding: RS
   Statistics:
     corrected_blocks: 0
     uncorrectable_blocks: 0

This series does the following:

Patch 1 - Implementation to support a user-provided flag for Sideband
          Queue commands.
Patch 2 - The driver currently has no way to derive the serdes lane
          number, PCS quad, and PCS port from the port number, so
          introduce a mechanism to derive that info, and extend the
          ethtool interface to include FEC statistics counters.
Patch 3 - Ethtool interface extension to include serdes equalizer
          output.

v1: https://lore.kernel.org/netdev/[email protected]/
====================

Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | ice: Implement driver functionality to dump serdes equalizer values | Anil Samal | 5 files | -2/+245

To debug link issues in the field, serdes Tx/Rx equalizer values help
to determine the health of a serdes lane. Extend the 'ethtool -d'
option to dump serdes Tx/Rx equalizers. The following equalizer
parameters are supported:

a. rx_equalization_pre2
b. rx_equalization_pre1
c. rx_equalization_post1
d. rx_equalization_bflf
e. rx_equalization_bfhf
f. rx_equalization_drate
g. tx_equalization_pre1
h. tx_equalization_pre3
i. tx_equalization_atten
j. tx_equalization_post1
k. tx_equalization_pre2

Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Jesse Brandeburg <[email protected]>
Signed-off-by: Anil Samal <[email protected]>
Tested-by: Pucha Himasekhar Reddy <[email protected]> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | ice: Implement driver functionality to dump fec statistics | Anil Samal | 5 files | -0/+403

To debug link issues in the field, it is paramount to dump FEC
corrected/uncorrected block counts from firmware. Firmware requires the
PCS quad number and PCS port number to read FEC statistics, but the
current driver implementation does not maintain these physical
properties of a port. Add a new driver API to derive the physical
properties of an input port. These properties include the PCS quad
number, PCS port number, serdes lane count, and primary serdes lane
number.

Extend the ethtool option '--show-fec' to support FEC statistics. The
IEEE standard mandates two sets of counters:

- 30.5.1.1.17 aFECCorrectedBlocks
- 30.5.1.1.18 aFECUncorrectableBlocks

The standard defines these statistics per lane, but the current
implementation supports total FEC statistics per port, i.e. the sum
over all lanes of a port. Find sample output below:

    FEC parameters for ens21f0np0:
    Supported/Configured FEC encodings: Auto RS BaseR
    Active FEC encoding: RS
    Statistics:
      corrected_blocks: 0
      uncorrectable_blocks: 0

Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Jesse Brandeburg <[email protected]>
Signed-off-by: Anil Samal <[email protected]>
Tested-by: Pucha Himasekhar Reddy <[email protected]> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

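As a sketch of how such counters typically reach '--show-fec': the
standard ethtool callback fills struct ethtool_fec_stats (the callback
and field names are the in-tree ethtool API; the firmware-query helper
here is a hypothetical stand-in for the driver's internals):

    static void ice_get_fec_stats_sketch(struct net_device *netdev,
                                         struct ethtool_fec_stats *fec_stats)
    {
        u64 corrected = 0, uncorrectable = 0;

        /* hypothetical helper: sums the per-lane firmware counters
         * (PCS quad/port derived from the input port) for this port */
        if (read_fw_fec_counters(netdev, &corrected, &uncorrectable))
            return;

        fec_stats->corrected_blocks.total = corrected;
        fec_stats->uncorrectable_blocks.total = uncorrectable;
    }
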
2024-07-10 | ice: Extend Sideband Queue command to support flags | Anil Samal | 3 files | -13/+14

The current driver implementation for the Sideband Queue supports a
fixed flag (ICE_AQ_FLAG_RD). To retrieve FEC statistics from firmware,
the Sideband Queue command is used with a different flag. Extend the
API for Sideband Queue commands to take 'flags' as an input argument.

Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Jesse Brandeburg <[email protected]>
Signed-off-by: Anil Samal <[email protected]>
Tested-by: Pucha Himasekhar Reddy <[email protected]> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | e1000e: fix force smbus during suspend flow | Vitaly Lifshits | 1 file | -20/+53

Commit 861e8086029e ("e1000e: move force SMBUS from enable ulp function
to avoid PHY loss issue") resolved a PHY access loss during suspend on
Meteor Lake consumer platforms, but it affected corporate systems
incorrectly. A better fix, working for both consumer and corporate
systems, was proposed in commit bfd546a552e1 ("e1000e: move force SMBUS
near the end of enable_ulp function"). However, it introduced a
regression on older devices, such as [8086:15B8], [8086:15F9], and
[8086:15BE].

This patch aims to fix the secondary regression by limiting the scope
of the changes to Meteor Lake platforms only.

Fixes: bfd546a552e1 ("e1000e: move force SMBUS near the end of enable_ulp function")
Reported-by: Todd Brandt <[email protected]>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218940
Reported-by: Dieter Mummenschanz <[email protected]>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218936
Signed-off-by: Vitaly Lifshits <[email protected]>
Tested-by: Mor Bar-Gabay <[email protected]> (A Contingent Worker at Intel)
Signed-off-by: Tony Nguyen <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | dt-bindings: net: convert enetc to yaml | Frank Li | 4 files | -119/+161

Convert the enetc device binding file to yaml, split into 3 yaml files:
'fsl,enetc.yaml', 'fsl,enetc-mdio.yaml', and 'fsl,enetc-ierb.yaml'.

Additional changes:
- Add pci<vendor id>,<device id> to the compatible string.
- Reference the common ethernet-controller.yaml and mdio.yaml.
- Add Wei Fang, Vladimir, and Claudiu as maintainers.
- Update the ENETC description.
- Remove the fixed-link part.

Signed-off-by: Frank Li <[email protected]>
Reviewed-by: Rob Herring (Arm) <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | tcp: avoid too many retransmit packets | Eric Dumazet | 1 file | -2/+13

If a TCP socket is using TCP_USER_TIMEOUT and the other peer retracted
its window to zero, tcp_retransmit_timer() can retransmit a packet
every two jiffies (2 ms for HZ=1000) for about 4 minutes after
TCP_USER_TIMEOUT has 'expired'.

The fix is to make sure tcp_rtx_probe0_timed_out() takes
icsk->icsk_user_timeout into account.

Before the blamed commit, the socket would not time out after
icsk->icsk_user_timeout, but would use standard exponential backoff for
the retransmits. Also worth noting: before commit e89688e3e978 ("net:
tcp: fix unexcepted socket die when snd_wnd is 0"), the issue would
last 2 minutes instead of 4.

Fixes: b701a99e431d ("tcp: Add tcp_clamp_rto_to_user_timeout() helper to improve accuracy")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Neal Cardwell <[email protected]>
Reviewed-by: Jason Xing <[email protected]>
Reviewed-by: Jon Maxwell <[email protected]>
Reviewed-by: Kuniyuki Iwashima <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

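A simplified sketch of the bound being added (the real helper operates
on kernel-internal state; the jiffies/msecs math here is an assumption
for illustration, not the exact hunk):

    /* true once zero-window probing has exceeded the user timeout */
    static bool rtx_probe0_timed_out_sketch(const struct sock *sk,
                                            u32 rtx_delta)
    {
        const struct inet_connection_sock *icsk = inet_csk(sk);
        u32 user_timeout = READ_ONCE(icsk->icsk_user_timeout);

        /* TCP_USER_TIMEOUT, when set, bounds how long we keep probing,
         * instead of letting the probe fire every other jiffy forever */
        if (user_timeout && rtx_delta >= msecs_to_jiffies(user_timeout))
            return true;

        /* ... the existing RTO-based bound continues to apply ... */
        return false;
    }
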
2024-07-10 | dt-bindings: net: realtek,rtl82xx: Document RTL8211F LED support | Marek Vasut | 1 file | -3/+14

The RTL8211F PHY supports LED configuration; document support for LEDs
in the binding document.

Signed-off-by: Marek Vasut <[email protected]>
Reviewed-by: Rob Herring (Arm) <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2024-07-10 | Merge branch 'fixes-for-bpf-timer-lockup-and-uaf' | Alexei Starovoitov | 1 file | -17/+82

Kumar Kartikeya Dwivedi says:

====================
Fixes for BPF timer lockup and UAF

The following patches contain fixes for timer lockups and a
use-after-free scenario.

This set proposes to fix the following lockup situation for BPF timers.

    CPU 1                                  CPU 2
    bpf_timer_cb                           bpf_timer_cb
      timer_cb1                              timer_cb2
      bpf_timer_cancel(timer_cb2)            bpf_timer_cancel(timer_cb1)
        hrtimer_cancel                         hrtimer_cancel

In this case, both callbacks will continue waiting for each other to
finish synchronously, causing a lockup.

The proposed fix adds support for tracking in-flight cancellations
*begun by other timer callbacks* for a particular BPF timer. Whenever
preparing to call hrtimer_cancel, a callback will increment the target
timer's counter, then inspect its own in-flight cancellations, and if
that count is non-zero, return -EDEADLK to avoid situations where the
target timer's callback is waiting for its completion.

This does mean that in cases where a callback is fired and cancelled,
it will be unable to cancel any timers in that execution. This can be
alleviated by maintaining the list of waiting callbacks in bpf_hrtimer
and searching through it to avoid interdependencies, but this may
introduce additional delays in bpf_timer_cancel, in addition to
requiring extra state at runtime which may need to be allocated or
reused from bpf_hrtimer storage. Moreover, extra synchronization is
needed to delete these elements from the list of waiting callbacks once
hrtimer_cancel has finished.

The second patch addresses a similar deadlock situation in
bpf_timer_cancel_and_free, plus a UAF scenario that can occur if the
timer is armed before entering it, when the hrtimer_running check
causes the hrtimer_cancel call to be skipped. As seen above,
synchronous hrtimer_cancel would lead to a deadlock (if the same
callback tries to free its timer, or two timers free each other),
therefore we queue work onto the global workqueue to ensure outstanding
timers are cancelled before bpf_hrtimer state is freed.

Further details are in the patches.
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2024-07-10 | bpf: Defer work in bpf_timer_cancel_and_free | Kumar Kartikeya Dwivedi | 1 file | -14/+47

Currently, the same case as in the previous patch (two timer callbacks
trying to cancel each other) can be invoked through bpf_map_update_elem
as well, or more precisely, when freeing map elements containing
timers. Since this relies on hrtimer_cancel as well, it is prone to the
same deadlock situation as the previous patch.

It would be sufficient to use hrtimer_try_to_cancel to fix this
problem, as the timer cannot be enqueued after async_cancel_and_free.
Once async_cancel_and_free has been done, the timer must be
reinitialized before it can be armed again. A callback running in
parallel trying to arm the timer will fail, freeing bpf_hrtimer without
waiting is sufficient (given kfree_rcu), and bpf_timer_cb will return
HRTIMER_NORESTART, preventing the timer from being rearmed again.

However, there exists a UAF scenario where the callback arms the timer
before entering this function and cancellation fails (because the timer
callback is invoking this routine, or the target timer callback is
running concurrently). In such a case, if the timer expiration is
significantly far in the future, the RCU grace period may expire before
it, freeing the bpf_hrtimer state and, along with it, the struct
hrtimer that is enqueued.

Hence, it is clear cancellation needs to occur after
async_cancel_and_free, and yet it cannot be done inline due to the
deadlock issues. We thus modify bpf_timer_cancel_and_free to defer work
to the global workqueue, adding a work_struct alongside rcu_head (both
used at _different_ points of time, so they can share space).

Update existing code comments to reflect the new state of affairs.

Fixes: b00628b1c7d5 ("bpf: Introduce bpf timers.")
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

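A minimal sketch of the deferral pattern described above (the struct
layout and names are assumptions, not the actual kernel definitions;
only the rcu_head/work_struct space sharing and the workqueue hand-off
are the point):

    #include <linux/hrtimer.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct bpf_hrtimer_sketch {
        struct hrtimer timer;
        union {
            struct rcu_head rcu;      /* used once nothing can touch us */
            struct work_struct work;  /* used to run the cancel off-path */
        };
    };

    static void timer_cancel_work(struct work_struct *work)
    {
        struct bpf_hrtimer_sketch *t =
            container_of(work, struct bpf_hrtimer_sketch, work);

        /* safe here: a workqueue context cannot be one of the timer
         * callbacks, so waiting cannot deadlock */
        hrtimer_cancel(&t->timer);
        kfree_rcu(t, rcu);
    }

    /* called when the map element holding the timer is freed */
    static void timer_cancel_and_free_sketch(struct bpf_hrtimer_sketch *t)
    {
        /* a synchronous hrtimer_cancel() here could deadlock against a
         * callback that is busy cancelling us; defer it instead */
        INIT_WORK(&t->work, timer_cancel_work);
        queue_work(system_unbound_wq, &t->work);
    }
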
2024-07-10 | bpf: Fail bpf_timer_cancel when callback is being cancelled | Kumar Kartikeya Dwivedi | 1 file | -3/+35

Given a schedule:

    timer1 cb                       timer2 cb
    bpf_timer_cancel(timer2);       bpf_timer_cancel(timer1);

Both bpf_timer_cancel calls would wait for the other callback to finish
executing, introducing a lockup.

Add an atomic_t count named 'cancelling' in bpf_hrtimer. This keeps
track of all in-flight cancellation requests for a given BPF timer.
Whenever cancelling a BPF timer, we must check if we have outstanding
cancellation requests, and if so, we must fail the operation with an
error (-EDEADLK), since cancellation is synchronous and waits for the
callback to finish executing. This implies that we can enter a deadlock
situation involving two or more timer callbacks executing in parallel
and attempting to cancel one another.

Note that we avoid incrementing the cancelling counter for the target
timer (the one being cancelled) if bpf_timer_cancel is not invoked from
a callback, to avoid spurious errors. The whole point of detecting
cur->cancelling and returning -EDEADLK is to not enter a busy wait loop
(which may or may not lead to a lockup). This does not apply in case
the caller is in a non-callback context; the other side can continue to
cancel as it sees fit without running into errors.

Background on prior attempts: earlier versions of this patch used a
bool 'cancelling' bit and used the following pattern under timer->lock
to publish cancellation status:

    lock(t->lock);
    t->cancelling = true;
    mb();
    if (cur->cancelling)
        return -EDEADLK;
    unlock(t->lock);
    hrtimer_cancel(t->timer);
    t->cancelling = false;

The store outside the critical section could overwrite a parallel
request's t->cancelling assignment to true, which was meant to ensure
the callback executing in parallel observes its cancellation status. It
would be necessary to clear this cancelling bit once hrtimer_cancel is
done, but the lack of serialization introduced races. Another option
was explored where bpf_timer_start would clear the bit when
(re)starting the timer under timer->lock. This would ensure serialized
access to the cancelling bit, but may allow it to be cleared before the
in-flight hrtimer_cancel has finished executing, such that lockups can
occur again.

Thus, we choose an atomic counter to keep track of all outstanding
cancellation requests and use it to prevent lockups in case callbacks
attempt to cancel each other while executing in parallel.

Reported-by: Dohyun Kim <[email protected]>
Reported-by: Neel Natu <[email protected]>
Fixes: b00628b1c7d5 ("bpf: Introduce bpf timers.")
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

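A hedged sketch of the counter scheme (simplified; the types, locking,
and the non-callback special case are reduced to their essentials and
do not mirror the kernel function exactly):

    #include <linux/atomic.h>
    #include <linux/hrtimer.h>

    struct timer_sketch {
        struct hrtimer timer;
        atomic_t cancelling;    /* in-flight cancellations of this timer */
    };

    /* cur is the timer whose callback we are running in, or NULL when
     * bpf_timer_cancel is not invoked from a timer callback */
    static int bpf_timer_cancel_sketch(struct timer_sketch *t,
                                       struct timer_sketch *cur)
    {
        int ret;

        if (cur) {
            /* publish our in-flight cancel of t first ... */
            atomic_inc(&t->cancelling);
            /* ... then bail out if someone is busy cancelling us:
             * waiting for them while they wait for us is a lockup */
            if (atomic_read(&cur->cancelling)) {
                atomic_dec(&t->cancelling);
                return -EDEADLK;
            }
        }

        ret = hrtimer_cancel(&t->timer);

        if (cur)
            atomic_dec(&t->cancelling);
        return ret;
    }
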
2024-07-10 | bpf: fix order of args in call to bpf_map_kvcalloc | Mohammad Shehar Yaar Tausif | 1 file | -2/+2

The original function call passed the size of smap->bucket before the
number of buckets, which raises the 'calloc-transposed-args' error on
compilation.

Vlastimil Babka added:

The order of parameters can be traced back all the way to 6ac99e8f23d4
("bpf: Introduce bpf sk local storage") across several refactorings,
and that's why the commit is used as a Fixes: tag.

In v6.10-rc1, a different commit 2c321f3f70bc ("mm: change inlined
allocation helpers to account at the call site") however exposed the
order of args in a way that gcc-14 has enough visibility to start
warning about it, because (in the !CONFIG_MEMCG case) bpf_map_kvcalloc
is then a macro alias for kvcalloc instead of a static inline wrapper.

To sum up, the warning happens when the following conditions are all
met:

- gcc-14 is used (didn't see it with gcc-13)
- commit 2c321f3f70bc is present
- CONFIG_MEMCG is not enabled in .config
- CONFIG_WERROR turns this from a compiler warning to an error

Fixes: 6ac99e8f23d4 ("bpf: Introduce bpf sk local storage")
Reviewed-by: Andrii Nakryiko <[email protected]>
Tested-by: Christian Kujau <[email protected]>
Signed-off-by: Mohammad Shehar Yaar Tausif <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

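The calloc convention is (count, element_size); a sketch of the
corrected call shape (the gfp flags and variable names are illustrative
of the pattern, not a verbatim quote of the hunk):

    /* before: sizeof() first, count second -- what gcc-14 flags */
    buckets = bpf_map_kvcalloc(&smap->map, sizeof(*smap->buckets),
                               nbuckets, GFP_USER | __GFP_NOWARN);

    /* after: count first, element size second, matching kvcalloc() */
    buckets = bpf_map_kvcalloc(&smap->map, nbuckets,
                               sizeof(*smap->buckets),
                               GFP_USER | __GFP_NOWARN);
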
2024-07-10 | Merge tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm | Linus Torvalds | 25 files | -286/+339

Pull misc fixes from Andrew Morton:
"21 hotfixes, 15 of which are cc:stable. No identifiable theme here -
all are singleton patches, 19 are for MM"

* tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (21 commits)
  mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio
  mm/hugetlb: fix potential race in __update_and_free_hugetlb_folio()
  filemap: replace pte_offset_map() with pte_offset_map_nolock()
  arch/xtensa: always_inline get_current() and current_thread_info()
  sched.h: always_inline alloc_tag_{save|restore} to fix modpost warnings
  MAINTAINERS: mailmap: update Lorenzo Stoakes's email address
  mm: fix crashes from deferred split racing folio migration
  lib/build_OID_registry: avoid non-destructive substitution for Perl < 5.13.2 compat
  mm: gup: stop abusing try_grab_folio
  nilfs2: fix kernel bug on rename operation of broken directory
  mm/hugetlb_vmemmap: fix race with speculative PFN walkers
  cachestat: do not flush stats in recency check
  mm/shmem: disable PMD-sized page cache if needed
  mm/filemap: skip to create PMD-sized page cache if needed
  mm/readahead: limit page cache size in page_cache_ra_order()
  mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray
  mm/damon/core: merge regions aggressively when max_nr_regions is unmet
  Fix userfaultfd_api to return EINVAL as expected
  mm: vmalloc: check if a hash-index is in cpu_possible_mask
  mm: prevent derefencing NULL ptr in pfn_section_valid()
  ...

2024-07-10 | Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi | Linus Torvalds | 3 files | -8/+10

Pull SCSI fixes from James Bottomley:
"One core change that moves a disk start message to a location where it
will only be printed once instead of twice, plus a couple of error
handling race fixes in the ufs driver"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: sd: Do not repeat the starting disk message
  scsi: ufs: core: Fix ufshcd_abort_one racing issue
  scsi: ufs: core: Fix ufshcd_clear_cmd racing issue

2024-07-10 | Merge branch 'BPF selftests misc fixes' | Martin KaFai Lau | 2 files | -5/+13

Geliang Tang says:

====================
v2:
- only check the first "link" (link_nl) in test_mixed_links().
- drop patch 2 in v1.

This patchset fixes a segfault and a bpf object leak in test_progs. It
is a resend of patch 1 out of the "skip ENOTSUPP BPF selftests" set, as
Eduard suggested, together with another fix for xdp_adjust_tail.
====================

Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Close obj in error path in xdp_adjust_tail | Geliang Tang | 1 file | -1/+1

If bpf_object__load() fails in test_xdp_adjust_frags_tail_grow(), "obj"
opened before this should be closed. So use "goto out" to close it
instead of using "return" here.

Fixes: 110221081aac ("bpf: selftests: update xdp_adjust_tail selftest to include xdp frags")
Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/f282a1ed2d0e3fb38cceefec8e81cabb69cab260.1720615848.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Null checks for links in bpf_tcp_ca | Geliang Tang | 1 file | -4/+12

When running the bpf_tcp_ca selftests (./test_progs -t bpf_tcp_ca) on a
LoongArch platform, some "Segmentation fault" errors occur:

'''
test_dctcp:PASS:bpf_dctcp__open_and_load 0 nsec
test_dctcp:FAIL:bpf_map__attach_struct_ops unexpected error: -524
#29/1    bpf_tcp_ca/dctcp:FAIL
test_cubic:PASS:bpf_cubic__open_and_load 0 nsec
test_cubic:FAIL:bpf_map__attach_struct_ops unexpected error: -524
#29/2    bpf_tcp_ca/cubic:FAIL
test_dctcp_fallback:PASS:dctcp_skel 0 nsec
test_dctcp_fallback:PASS:bpf_dctcp__load 0 nsec
test_dctcp_fallback:FAIL:dctcp link unexpected error: -524
#29/4    bpf_tcp_ca/dctcp_fallback:FAIL
test_write_sk_pacing:PASS:open_and_load 0 nsec
test_write_sk_pacing:FAIL:attach_struct_ops unexpected error: -524
#29/6    bpf_tcp_ca/write_sk_pacing:FAIL
test_update_ca:PASS:open 0 nsec
test_update_ca:FAIL:attach_struct_ops unexpected error: -524
settcpca:FAIL:setsockopt unexpected setsockopt: \
    actual -1 == expected -1 \
    (network_helpers.c:99: errno: No such file or directory) \
    Failed to call post_socket_cb
start_test:FAIL:start_server_str unexpected start_server_str: \
    actual -1 == expected -1
test_update_ca:FAIL:ca1_ca1_cnt unexpected ca1_ca1_cnt: \
    actual 0 <= expected 0
#29/9    bpf_tcp_ca/update_ca:FAIL
#29      bpf_tcp_ca:FAIL
Caught signal #11!
Stack trace:
./test_progs(crash_handler+0x28)[0x5555567ed91c]
linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x7ffffee408b0]
./test_progs(bpf_link__update_map+0x80)[0x555556824a78]
./test_progs(+0x94d68)[0x5555564c4d68]
./test_progs(test_bpf_tcp_ca+0xe8)[0x5555564c6a88]
./test_progs(+0x3bde54)[0x5555567ede54]
./test_progs(main+0x61c)[0x5555567efd54]
/usr/lib64/libc.so.6(+0x22208)[0x7ffff2aaa208]
/usr/lib64/libc.so.6(__libc_start_main+0xac)[0x7ffff2aaa30c]
./test_progs(_start+0x48)[0x55555646bca8]
Segmentation fault
'''

This is because a BPF trampoline is not implemented on LoongArch yet,
so the "link" returned by bpf_map__attach_struct_ops() is NULL.
test_progs crashes when this NULL link is passed to
bpf_link__update_map(). This patch adds NULL checks for all links in
bpf_tcp_ca to fix these errors. If a "link" is NULL, goto the newly
added label "out" to destroy the skel.

v2:
- use "goto out" instead of "return" as Eduard suggested.

Fixes: 06da9f3bd641 ("selftests/bpf: Test switching TCP Congestion Control algorithms.")
Signed-off-by: Geliang Tang <[email protected]>
Reviewed-by: Alan Maguire <[email protected]>
Link: https://lore.kernel.org/r/b4c841492bd4ed97964e4e61e92827ce51bf1dc9.1720615848.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

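A sketch of the guard pattern the fix adds (the skeleton and map names
are illustrative stand-ins, not the exact test code):

    struct bpf_link *link;

    link = bpf_map__attach_struct_ops(skel->maps.ca_update_1);
    /* NULL here on arches without a BPF trampoline */
    if (!ASSERT_OK_PTR(link, "attach_struct_ops"))
        goto out;

    /* ... exercise the link, e.g.: */
    ASSERT_OK(bpf_link__update_map(link, skel->maps.ca_update_2),
              "update_map");

    bpf_link__destroy(link);
    out:
        tcp_ca_update__destroy(skel);
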
2024-07-10 | Merge tag 'vfio-v6.10' of https://github.com/awilliam/linux-vfio | Linus Torvalds | 1 file | -1/+1

Pull VFIO fix from Alex Williamson:

- Recent stable backports are exposing a bug introduced in the v6.10
  development cycle where a counter value is uninitialized. This leads
  to regressions in userspace drivers like QEMU, where the kernel might
  ask for an arbitrary buffer size or return out of memory itself based
  on a bogus value. Zero initialize the counter. (Yi Liu)

* tag 'vfio-v6.10' of https://github.com/awilliam/linux-vfio:
  vfio/pci: Init the count variable in collecting hot-reset devices

2024-07-10 | Merge branch 'use network helpers, part 8' | Martin KaFai Lau | 4 files | -43/+60

Geliang Tang says:

====================
v11:
- new patches 2, 4, 6.
- drop expect_errno from network_helper_opts as Eduard and Martin
  suggested.
- drop sockmap_ktls patches from this set.

v10:
- a new patch 10 is added.
- patches 1-6, 8-9 unchanged, only commit logs updated.
- "err = -errno" is used in patches 7, 11, 12 to get the real error
  number before checking the value of "err".

v9:
- new patches 5-7, new struct member expect_errno for
  network_helper_opts.
- patches 1-4, 8-9 unchanged.
- update patches 10-11 to make sure all tests pass.

v8:
- only patch 8 updated, to fix errors reported by CI.

v7:
- address Martin's comments in v6. (thanks)
- use MAX(opts->backlog, 0) instead of opts->backlog.
- use connect_to_fd_opts instead of connect_to_fd.
- more ASSERT_* to check errors.

v6:
- update patch 6 as Daniel suggested. (thanks)

v5:
- keep make_server and make_client as Eduard suggested.

v4:
- a new patch to use make_sockaddr in sockmap_ktls.
- a new patch to close fd in error path in drop_on_reuseport.
- drop make_server() in patch 7.
- drop make_client() too in patch 9.

v3:
- a new patch to add backlog for network_helper_opts.
- use start_server_str in sockmap_ktls now, not start_server.

v2:
- address Eduard's comments in v1. (thanks)
- fix errors reported by CI.

This patch set uses network helpers in sk_lookup and drops the local
helpers inetaddr_len() and make_socket().
====================

Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Use connect_fd_to_fd in sk_lookup | Geliang Tang | 1 file | -9/+3

This patch uses the public helper connect_fd_to_fd() exported in
network_helpers.h instead of using getsockname() + connect() in
run_lookup_prog() in prog_tests/sk_lookup.c. This simplifies the code.

Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/7077c277cde5a1864cdc244727162fb75c8bb9c5.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Use start_server_addr in sk_lookup | Geliang Tang | 1 file | -8/+2

This patch uses the public helper start_server_addr() in
udp_recv_send() in prog_tests/sk_lookup.c to simplify the code, and
uses ASSERT_OK_FD() to check the fd returned by start_server_addr().

Acked-by: Eduard Zingerman <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/f11cabfef4a2170ecb66a1e8e2e72116d8f621b3.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Use start_server_str in sk_lookup | Geliang Tang | 1 file | -24/+34

This patch uses the public helper start_server_str() to simplify
make_server() in prog_tests/sk_lookup.c. It adds a callback
setsockopts() to set all the needed socket options, assigns it to the
post_socket_cb pointer of struct network_helper_opts, and adds a new
struct cb_opts to hold the data that needs to be passed to the
callback. This network_helper_opts is then passed to
start_server_str(). Also use ASSERT_OK_FD() to check the fd returned by
start_server_str().

Acked-by: Eduard Zingerman <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/5981539f5591d2c4998c962ef2bf45f34c940548.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

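A sketch of the callback wiring (the cb_opts contents and the SO_MARK
example are illustrative; the helper fields follow the commit's
description of network_helpers.h, not necessarily its exact layout):

    struct cb_opts {
        int mark;                       /* hypothetical per-test data */
    };

    static int setsockopts(int fd, void *opts)
    {
        struct cb_opts *co = opts;

        return setsockopt(fd, SOL_SOCKET, SO_MARK, &co->mark,
                          sizeof(co->mark));
    }

    /* ... in the test ... */
    struct cb_opts cb_opts = { .mark = 42 };
    struct network_helper_opts opts = {
        .post_socket_cb = setsockopts,
        .cb_opts        = &cb_opts,
    };
    int fd = start_server_str(AF_INET, SOCK_STREAM, "127.0.0.1", 0, &opts);

    if (!ASSERT_OK_FD(fd, "start_server_str"))
        return;
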
2024-07-10 | selftests/bpf: Close fd in error path in drop_on_reuseport | Geliang Tang | 1 file | -1/+1

In the error path when update_lookup_map() fails in drop_on_reuseport
in prog_tests/sk_lookup.c, "server1", the fd of server 1, should be
closed. This patch fixes this by using the "goto close_srv1" label
instead of "detach" to close "server1" in this case.

Fixes: 0ab5539f8584 ("selftests/bpf: Tests for BPF_SK_LOOKUP attach point")
Acked-by: Eduard Zingerman <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/86aed33b4b0ea3f04497c757845cff7e8e621a2d.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

2024-07-10 | selftests/bpf: Add ASSERT_OK_FD macro | Geliang Tang | 1 file | -0/+9

Add a new dedicated ASSERT macro, ASSERT_OK_FD, to test whether a
socket fd is valid or not. It can be used to replace the macros
ASSERT_GT(fd, 0, ""), ASSERT_NEQ(fd, -1, "") or the statements
(fd < 0), (fd != -1).

Suggested-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/ded75be86ac630a3a5099739431854c1ec33f0ea.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

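A plausible shape for such a macro, following the existing CHECK/ASSERT
style in the selftests' test_progs.h (a sketch, not necessarily the
exact definition that landed):

    #define ASSERT_OK_FD(fd, name) ({                                    \
        static int duration = 0;                                         \
        int ___fd = (fd);                                                \
        bool ___ok = ___fd >= 0;                                         \
        /* prints file:line and the failing value, like other ASSERTs */ \
        CHECK(!___ok, (name), "unexpected fd: %d (errno %d)\n",          \
              ___fd, errno);                                             \
        ___ok;                                                           \
    })
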
2024-07-10 | selftests/bpf: Add backlog for network_helper_opts | Geliang Tang | 2 files | -1/+11

Some callers expect the __start_server() helper to pass their own
"backlog" value to listen() instead of the default of 1. So this patch
adds a struct member "backlog" to network_helper_opts to allow callers
to set the "backlog" value via the start_server_str() helper.

listen(fd, 0 /* backlog */) can be used to enforce syncookies, meaning
backlog 0 is a legit value. Using 0 as the default and changing it to 1
here is fine: it makes the test program easier to write for the common
case. Enforcing syncookie mode by using backlog 0 is a niche use case,
but there should at least be a way for the caller to do that. Thus, a
negative backlog value is used for the syncookie use case; see the
sketch below and the comment in network_helpers.h for the details.

Signed-off-by: Geliang Tang <[email protected]>
Link: https://lore.kernel.org/r/1660229659b66eaad07aa2126e9c9fe217eba0dd.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <[email protected]>

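A sketch of that mapping (assumed from the semantics described above,
not the exact helper):

    /* opts->backlog == 0  -> default of 1 (common case)
     * opts->backlog  > 0  -> used as-is
     * opts->backlog  < 0  -> listen(fd, 0), forcing syncookie mode */
    static int server_listen_sketch(int fd,
                                    const struct network_helper_opts *opts)
    {
        int backlog = 1;

        if (opts->backlog)
            backlog = opts->backlog > 0 ? opts->backlog : 0;

        return listen(fd, backlog);
    }
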
2024-07-10 | Merge tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs | Linus Torvalds | 25 files | -142/+266

Pull bcachefs fixes from Kent Overstreet:

- Switch some asserts to WARN()

- Fix a few "transaction not locked" asserts in the data read retry
  paths and backpointers gc

- Fix a race that would cause the journal to get stuck on a flush
  commit

- Add missing fsck checks for the fragmentation LRU

- The usual assorted syzbot fixes

* tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs: (22 commits)
  bcachefs: Add missing bch2_trans_begin()
  bcachefs: Fix missing error check in journal_entry_btree_keys_validate()
  bcachefs: Warn on attempting a move with no replicas
  bcachefs: bch2_data_update_to_text()
  bcachefs: Log mount failure error code
  bcachefs: Fix undefined behaviour in eytzinger1_first()
  bcachefs: Mark bch_inode_info as SLAB_ACCOUNT
  bcachefs: Fix bch2_inode_insert() race path for tmpfiles
  closures: fix closure_sync + closure debugging
  bcachefs: Fix journal getting stuck on a flush commit
  bcachefs: io clock: run timer fns under clock lock
  bcachefs: Repair fragmentation_lru in alloc_write_key()
  bcachefs: add check for missing fragmentation in check_alloc_to_lru_ref()
  bcachefs: bch2_btree_write_buffer_maybe_flush()
  bcachefs: Add missing printbuf_tabstops_reset() calls
  bcachefs: Fix loop restart in bch2_btree_transactions_read()
  bcachefs: Fix bch2_read_retry_nodecode()
  bcachefs: Don't use the new_fs() bucket alloc path on an initialized fs
  bcachefs: Fix shift greater than integer size
  bcachefs: Change bch2_fs_journal_stop() BUG_ON() to warning
  ...

2024-07-10 | selftests/bpf: fix compilation failure when CONFIG_NF_FLOW_TABLE=m | Alan Maguire | 1 file | -3/+7

In many cases, kernel netfilter functionality is built as modules. If
CONFIG_NF_FLOW_TABLE=m in particular, progs/xdp_flowtable.c (and hence
selftests) will fail to compile, so add a ___local version of
"struct flow_ports".

Fixes: c77e572d3a8c ("selftests/bpf: Add selftest for bpf_xdp_flow_lookup kfunc")
Signed-off-by: Alan Maguire <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

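For context, a sketch of the BPF-selftest convention referenced here: a
___local (triple-underscore) suffix lets the program carry its own copy
of a kernel struct, since libbpf's CO-RE logic ignores everything from
the triple underscore on when matching types, so the copy still
relocates correctly even when the original type isn't in vmlinux BTF
(the field list is assumed from the struct's known layout):

    /* local mirror of the kernel's struct flow_ports */
    struct flow_ports___local {
        __be16 source;
        __be16 dest;
    } __attribute__((preserve_access_index));
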
2024-07-10 | Merge tag 'usb-serial-6.10-rc8' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial into usb-linus | Greg Kroah-Hartman | 2 files | -0/+73

Johan writes:

USB-serial fixes for 6.10-rc8

Here's a fix for a long-standing issue in the mos7840 driver that can
trigger a crash when resuming from system suspend. Included are also
some new modem device ids.

All have been in linux-next with no reported issues.

* tag 'usb-serial-6.10-rc8' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial:
  USB: serial: mos7840: fix crash on resume
  USB: serial: option: add Rolling RW350-GL variants
  USB: serial: option: add support for Foxconn T99W651
  USB: serial: option: add Netprisma LCUK54 series modules

2024-07-10 | idpf: use libeth Rx buffer management for payload buffer | Alexander Lobakin | 6 files | -246/+120

idpf uses Page Pool for data buffers with hardcoded buffer lengths of
4k for "classic" buffers and 2k for "short" ones. This is not flexible
and does not ensure optimal memory usage. Why would you need 4k buffers
when the MTU is 1500?

Use libeth for the data buffers and don't hardcode any buffer sizes.
Let them be calculated from the MTU for "classics" and then divide the
truesize by 2 for "short" ones. The memory usage is now greatly reduced
and 2 buffer queues start to make sense: with frames <= 1024, you'll
recycle (and resync) a page only after 4 HW writes rather than two.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: convert header split mode to libeth + napi_build_skb() | Alexander Lobakin | 4 files | -116/+204

Currently, idpf uses the following model for the header buffers:

* buffers are allocated via dma_alloc_coherent();
* when receiving, napi_alloc_skb() is called and then the header is
  copied to the newly allocated linear part.

This is far from optimal, as the DMA coherent zone is slow on many
systems and memcpy() neutralizes the idea and benefits of the header
split. Not to mention that XDP can't be run on DMA coherent buffers,
and at the same time the idea of allocating an skb just to run an XDP
program is ill-conceived. Instead, use libeth to create page_pools for
the header buffers, allocate them dynamically, and then build an skb
via napi_build_skb() around them with no memory copy. With one
exception...

When you enable header split, you expect you'll always have a separate
header buffer, so that you could reserve headroom and tailroom only
there and then use full buffers for the data. For example, this is how
TCP zerocopy works -- you have to have the payload aligned to
PAGE_SIZE. The current hardware running idpf does *not* guarantee that
you'll always have headers placed separately. For example, on my setup,
even ICMP packets are written as one piece to the data buffers. You
can't build a valid skb around a data buffer in this case. To not
complicate things and not lose TCP zerocopy etc., when such a thing
happens, use the empty header buffer and pull either the full frame (if
it's short) or the Ethernet header there and build an skb around it.
The GRO layer will pull more from the data buffer later. This W/A will
hopefully be removed one day.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

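A minimal sketch of the no-copy build path described above (the helper
names are core-kernel API; the surrounding driver plumbing is assumed):

    /* turn a page_pool-backed header buffer into the skb head, no memcpy */
    static struct sk_buff *hdr_build_skb_sketch(void *hdr_va, u32 hdr_len,
                                                u32 truesize)
    {
        struct sk_buff *skb;

        skb = napi_build_skb(hdr_va, truesize);
        if (unlikely(!skb))
            return NULL;

        skb_mark_for_recycle(skb);      /* head comes from a page_pool */
        __skb_put(skb, hdr_len);        /* headers are already in place */

        return skb;
    }
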
2024-07-10 | libeth: support different types of buffers for Rx | Alexander Lobakin | 2 files | -11/+140

Unlike previous generations, idpf requires more buffer types for
optimal performance. This includes: header buffers, short buffers, and
no-overhead buffers (w/o headroom and tailroom, for TCP zerocopy when
the header split is enabled).

Introduce a libeth Rx buffer type and calculate the page_pool params
accordingly. All the HW-related details like buffer alignment are still
accounted for. For the header buffers, pick 256 bytes, as in most
places in the kernel (have you ever seen frames with bigger headers?).

Reviewed-by: Przemek Kitszel <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: remove legacy Page Pool Ethtool stats | Alexander Lobakin | 2 files | -29/+1

Page Pool Ethtool stats have been deprecated since the introduction of
the Netlink Page Pool interface. idpf is receiving big changes in Rx
buffer management, including the &page_pool layout, so keeping these
deprecated stats does only harm, not to mention that CONFIG_IDPF
selects CONFIG_PAGE_POOL_STATS unconditionally, while the latter is
often turned off for better performance.

Remove all the references to PP stats from the Ethtool code. The stats
are still available in full via the generic Netlink interface.

Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: reuse libeth's definitions of parsed ptype structures | Alexander Lobakin | 8 files | -252/+153

idpf's in-kernel parsed ptype structure is almost identical to the one
used in the previous Intel drivers, which means it can be converted to
use libeth's definitions and even helpers. The only difference is that
it doesn't use a constant table (as libie does), but one obtained from
the device.

Remove the driver counterpart and use libeth's helpers for hashes and
checksums. This slightly optimizes skb fields processing due to faster
checks. Also don't define a big static array of ptypes in &idpf_vport
-- allocate them dynamically. The pointer to it is cached in
&idpf_rx_queue anyway.

Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: compile singleq code only under default-n CONFIG_IDPF_SINGLEQ | Alexander Lobakin | 6 files | -19/+44

Currently, all HW supporting idpf supports the singleq model, but none
of it advertises it by default, as splitq is supported and preferred
for multiple reasons. Still, this almost-dead code oftentimes adds
hotpath branches and redundant cacheline accesses.

While it can't currently be removed, add CONFIG_IDPF_SINGLEQ and build
the singleq code only when it's enabled manually. This corresponds to
-10 Kb of object code size and a good bunch of hotpath checks.
idpf_is_queue_model_split() works as a gate and compiles out to `true`
when the config option is disabled.

Reviewed-by: Przemek Kitszel <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: merge singleq and splitq &net_device_ops | Alexander Lobakin | 4 files | -63/+20

It makes no sense to have a second &net_device_ops struct (800 bytes of
rodata) with only one difference in .ndo_start_xmit, which can easily
be just one `if`. This `if` is a drop in the ocean and you won't see
any difference.

Define a unified idpf_xmit_start(). The preparation for sending is the
same; just call either idpf_tx_splitq_frame() or
idpf_tx_singleq_frame() depending on the active model to actually map
and send the skb.

Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: strictly assert cachelines of queue and queue vector structures | Alexander Lobakin | 1 file | -67/+118

Now that the queue and queue vector structures are separated and laid
out optimally, group the fields as read-mostly, read-write, and cold
cachelines, and add size assertions to make sure new features won't
push something out of its place and provoke a perf regression. Despite
looking innocent, this gives up to 2% of perf bump on Rx.

Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: avoid bloating &idpf_q_vector with big %NR_CPUS | Alexander Lobakin | 3 files | -18/+22

With CONFIG_MAXSMP, sizeof(cpumask_t) is 1 Kb. The queue vector
structure has one embedded, which means 1 additional Kb of not really
hotpath data. We have cpumask_var_t, which is either an embedded
cpumask or a pointer for allocating it dynamically when it's big. Use
it instead of a plain cpumask and put &idpf_q_vector on a good diet.

Also remove the redundant pointer to the interrupt name from the
structure. request_irq() saves it and free_irq() returns it on deinit,
so that you can free the memory.

Reviewed-by: Przemek Kitszel <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

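A sketch of the pattern (cpumask_var_t and its helpers are the real
kernel API; the struct is illustrative). With CONFIG_CPUMASK_OFFSTACK=y
the mask costs one pointer in the struct and is allocated separately:

    struct q_vector_sketch {
        cpumask_var_t affinity_mask;    /* pointer-sized when offstack */
    };

    static int q_vector_init_sketch(struct q_vector_sketch *qv)
    {
        /* allocates only when the mask doesn't fit inline */
        if (!zalloc_cpumask_var(&qv->affinity_mask, GFP_KERNEL))
            return -ENOMEM;
        return 0;
    }

    static void q_vector_release_sketch(struct q_vector_sketch *qv)
    {
        free_cpumask_var(qv->affinity_mask);
    }
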
2024-07-10 | idpf: split &idpf_queue into 4 strictly-typed queue structures | Alexander Lobakin | 7 files | -728/+1018

Currently, sizeof(struct idpf_queue) is 32 Kb. This is due to the
12-bit hashtable declaration at the end of the queue. This HT is needed
only for Tx queues when the flow scheduling mode is enabled. But
&idpf_queue is unified for all of the queue types, provoking excessive
memory usage.

The unified structure in general makes the code less efficient via
suboptimal field placement. You can't avoid that unless you turn each
couple of fields into a union. Even then, different field alignment
etc. doesn't allow you to optimize things to the limit.

Split &idpf_queue into 4 structures corresponding to the queue types:
RQ (Rx queue), SQ (Tx queue), FQ (buffer queue), and CQ (completion
queue). Place only the needed fields there, plus shortcuts handy for
the hotpath. Allocate the abovementioned hashtable dynamically and only
when needed, keeping &idpf_tx_queue relatively short (192 bytes, same
as Rx). This HT is used only for OOO completions, which aren't really
hotpath anyway.

Note that this change must be done atomically, otherwise it's really
easy to get lost and miss something.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | idpf: stop using macros for accessing queue descriptors | Alexander Lobakin | 5 files | -50/+52

In C, we have structures and unions. Casting `void *` via macros is not
only error-prone, but also looks confusing and awful in general. In
preparation for splitting the queue structs, replace it with a union
and direct array dereferences.

Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Mina Almasry <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | libeth: add cacheline / struct layout assertion helpers | Alexander Lobakin | 1 file | -0/+66

Add helpers to assert struct field layout, a bit more crazy and
networking-specific than in <linux/cache.h>. They assume you have 3
CL-aligned groups (read-mostly, read-write, cold) in a struct you want
to assert, and nothing besides them. For 64-bit with 64-byte
cachelines, the assertions are as strict as possible, as the size can
then be easily predicted. For the rest, make sure they don't cross the
specified bound.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

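A plausible shape for one such assertion, built from static_assert()
over the group markers (a sketch assuming the marker naming from the
cache.h patch below, not the exact libeth macro):

    /* a group may not exceed sz bytes between its begin/end markers */
    #define cacheline_group_assert_sketch(type, grp, sz)                 \
        static_assert(offsetof(type, __cacheline_group_end__##grp) -    \
                      offsetofend(type, __cacheline_group_begin__##grp) \
                      <= (sz))
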
2024-07-10 | page_pool: use __cacheline_group_{begin, end}_aligned() | Alexander Lobakin | 2 files | -11/+14

Instead of doing __cacheline_group_begin() __aligned(), use the new
__cacheline_group_{begin,end}_aligned(), so that it will take care of
the group alignment itself. Also replace open-coded `4 * sizeof(long)`
in two places with a definition.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

2024-07-10 | cache: add __cacheline_group_{begin, end}_aligned() (+ couple more) | Alexander Lobakin | 1 file | -0/+59

__cacheline_group_begin(), unfortunately, doesn't align the group
anyhow. If that is wanted, then you need to do something like

    __cacheline_group_begin(grp) __aligned(ALIGN)

which isn't really convenient nor compact. Add the _aligned()
counterparts to align the groups automatically to either the specified
alignment (optional) or ``SMP_CACHE_BYTES``.

Note that the actual struct layout will then be (on x64 with 64-byte
CL):

    struct x {
        u32 y;                          // offset 0, size 4, padding 56
        __cacheline_group_begin__grp;   // offset 64, size 0
        u32 z;                          // offset 64, size 4, padding 4
        __cacheline_group_end__grp;     // offset 72, size 0
        __cacheline_group_pad__grp;     // offset 72, size 0, padding 56
        u32 w;                          // offset 128
    };

The end marker is aligned to long, so that you can assert the struct
size more strictly, but the offset of the next field in the structure
will be aligned to the group alignment, so that the next field won't
fall into the group it's not intended to.

Add a __LARGEST_ALIGN definition and a LARGEST_ALIGN() macro.
__LARGEST_ALIGN is the value to which the compilers align fields when
__aligned_largest is specified. Sometimes, it might be needed to get
this value outside of variable definitions. LARGEST_ALIGN() is a macro
which just aligns a value to __LARGEST_ALIGN.

Also add SMP_CACHE_ALIGN(), similar to L1_CACHE_ALIGN(), but using
``SMP_CACHE_BYTES`` instead of ``L1_CACHE_BYTES``, as the former also
accounts for L2, which is needed in some cases.

Signed-off-by: Alexander Lobakin <[email protected]>
Reviewed-by: Przemek Kitszel <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>

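For reference, plausible definitions of the small helpers named above,
following the L1 variants in <linux/cache.h> (a sketch of the shape,
not a verbatim quote of the patch):

    /* what the compiler uses for __aligned_largest members */
    #define __LARGEST_ALIGN    sizeof(struct { long l; } __aligned_largest)
    #define LARGEST_ALIGN(x)   ALIGN(x, __LARGEST_ALIGN)

    /* like L1_CACHE_ALIGN(), but accounts for L2 via SMP_CACHE_BYTES */
    #define SMP_CACHE_ALIGN(x) ALIGN(x, SMP_CACHE_BYTES)
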
2024-07-10 | bcachefs: fix scheduling while atomic in break_cycle() | Kent Overstreet | 3 files | -4/+24

Signed-off-by: Kent Overstreet <[email protected]>
2024-07-10 | bpf: Remove tst_run from lwt_seg6local_prog_ops. | Sebastian Andrzej Siewior | 1 file | -1/+0

The syzbot reported that the lwt_seg6 related BPF ops can be invoked via bpf_test_run() without without entering input_action_end_bpf() first. Martin KaFai Lau said that self test for BPF_PROG_TYPE_LWT_SEG6LOCAL probably didn't work since it was introduced in commit 04d4b274e2a ("ipv6: sr: Add seg6local action End.BPF"). The reason is that the per-CPU variable seg6_bpf_srh_states::srh is never assigned in the self test case but each BPF function expects it. Remove test_run for BPF_PROG_TYPE_LWT_SEG6LOCAL. Suggested-by: Martin KaFai Lau <[email protected]> Reported-by: [email protected] Fixes: d1542d4ae4df ("seg6: Use nested-BH locking for seg6_bpf_srh_states.") Fixes: 004d4b274e2a ("ipv6: sr: Add seg6local action End.BPF") Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Acked-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>