path: root/include
2023-10-24iommufd: Add IOMMU_HWPT_SET_DIRTY_TRACKINGJoao Martins1-0/+28
Every IOMMU driver should be able to implement the needed iommu domain ops to control dirty tracking. Connect a hw_pagetable to the IOMMU core dirty tracking ops, specifically the ability to enable/disable dirty tracking on an IOMMU domain (hw_pagetable id). To that end add an io_pagetable kernel API to toggle dirty tracking:

  * iopt_set_dirty_tracking(iopt, [domain], state)

The intended caller of this is the hw_pagetable object that is created. Internally it ensures the leftover dirty state is cleared /right before/ dirty tracking starts. This is also useful for iommu drivers which may decide that dirty tracking is always enabled at boot without wanting to toggle it dynamically via the corresponding iommu domain op. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joao Martins <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
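For orientation, a small userspace sketch of driving the new toggle through the iommufd ioctl; the struct layout and the IOMMU_HWPT_DIRTY_TRACKING_ENABLE flag name are written from memory rather than quoted from the patch, so treat them as assumptions to check against linux/iommufd.h:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/iommufd.h>	/* assumes uapi headers that carry HWPT dirty tracking */

    /* Hedged sketch: enable/disable dirty tracking on an existing hw_pagetable. */
    static int hwpt_set_dirty_tracking(int iommufd, uint32_t hwpt_id, int enable)
    {
    	struct iommu_hwpt_set_dirty_tracking cmd = {
    		.size = sizeof(cmd),
    		.flags = enable ? IOMMU_HWPT_DIRTY_TRACKING_ENABLE : 0,
    		.hwpt_id = hwpt_id,
    	};

    	return ioctl(iommufd, IOMMU_HWPT_SET_DIRTY_TRACKING, &cmd);
    }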
2023-10-24iommufd: Add a flag to enforce dirty tracking on attachJoao Martins1-0/+3
For an IOMMU domain that wants to use dirty tracking throughout its lifetime, some guarantees are needed such that any device attached to the iommu_domain supports dirty tracking. The idea is to handle the case where IOMMUs in the system are asymmetric feature-wise and thus the capability may not be supported for all devices. The enforcement is done by adding a flag into HWPT_ALLOC, namely IOMMU_HWPT_ALLOC_DIRTY_TRACKING, passed in the HWPT_ALLOC ioctl() flags. The enforcement is done by creating an iommu_domain via domain_alloc_user() and validating the requested flags against what the device IOMMU supports (and failing accordingly). Advertising the new IOMMU domain feature flag requires that the individual iommu driver capability is supported when a future device attachment happens. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joao Martins <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2023-10-24iommu: Add iommu_domain ops for dirty trackingJoao Martins2-0/+74
Add to iommu domain operations a set of callbacks to perform dirty tracking, particularly to start and stop tracking and to read and clear the dirty data. Drivers are generally expected to dynamically change their translation structures to toggle the tracking and to flush some form of control state structure that stands in the IOVA translation path. Though it's not mandatory, as drivers can also enable dirty tracking at boot and just clear the dirty bits before setting dirty tracking. For each of the newly added IOMMU core APIs:

iommu_cap::IOMMU_CAP_DIRTY_TRACKING: new device iommu_capable value when probing for capabilities of the device.

.set_dirty_tracking(): an iommu driver is expected to change its translation structures and enable dirty tracking for the devices in the iommu_domain. For drivers making dirty tracking always-enabled, it should just return 0.

.read_and_clear_dirty(): an iommu driver is expected to walk the pagetables for the iova range passed in and use iommu_dirty_bitmap_record() to record dirty info per IOVA. When detecting that a given IOVA is dirty it should also clear its dirty state from the PTE, *unless* the flag IOMMU_DIRTY_NO_CLEAR is passed in -- flushing is steered from the caller of the domain_op via iotlb_gather. The iommu core APIs use the same data structure already in use for VFIO device dirty tracking (struct iova_bitmap), abstracted by the iommu_dirty_bitmap_record() helper function.

domain::dirty_ops: IOMMU domains will store the dirty ops depending on whether the iommu device supports dirty tracking or not. iommu drivers can then use this field to figure out whether dirty tracking is supported and enforced on attach. The enforcement is enabled via domain_alloc_user(), which is done via an IOMMUFD hwpt flag introduced later.

Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joao Martins <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Lu Baolu <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
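To make the callback shapes concrete, here is a hedged driver-side sketch; the signatures are my approximation of what the message describes (the authoritative definitions live in include/linux/iommu.h), and to_mydrv_domain(), mydrv_hw_toggle_dbm() and pte_test_and_clear_dirty() are invented stand-ins for driver internals:

    static int mydrv_set_dirty_tracking(struct iommu_domain *domain, bool enable)
    {
    	/* Flip hardware dirty-bit logging for every device in the domain;
    	 * a driver with always-on tracking can simply return 0 here. */
    	return mydrv_hw_toggle_dbm(to_mydrv_domain(domain), enable);
    }

    static int mydrv_read_and_clear_dirty(struct iommu_domain *domain,
    				      unsigned long iova, size_t size,
    				      unsigned long flags,
    				      struct iommu_dirty_bitmap *dirty)
    {
    	unsigned long end = iova + size;

    	for (; iova < end; iova += SZ_4K) {
    		/* Hypothetical PTE walk: record dirty IOVAs and clear the
    		 * PTE dirty bit unless IOMMU_DIRTY_NO_CLEAR was passed. */
    		if (pte_test_and_clear_dirty(domain, iova,
    					     !(flags & IOMMU_DIRTY_NO_CLEAR)))
    			iommu_dirty_bitmap_record(dirty, iova, SZ_4K);
    	}
    	return 0;
    }

    static const struct iommu_dirty_ops mydrv_dirty_ops = {
    	.set_dirty_tracking   = mydrv_set_dirty_tracking,
    	.read_and_clear_dirty = mydrv_read_and_clear_dirty,
    };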
2023-10-24vfio: Move iova_bitmap into iommufdJoao Martins1-0/+26
Both VFIO and IOMMUFD will need the iova bitmap for storing dirties and walking the user bitmaps, so move the common dependency into IOMMUFD. In doing so, create the symbol IOMMUFD_DRIVER which designates the builtin code that will be used by drivers when selected. Today this means MLX5_VFIO_PCI and PDS_VFIO_PCI. IOMMU drivers will do the same (in future patches) when supporting dirty tracking and select IOMMUFD_DRIVER accordingly. Given that the symbol may be disabled, add header definitions in iova_bitmap.h for when IOMMUFD_DRIVER=n. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joao Martins <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Brett Creeley <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Reviewed-by: Alex Williamson <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2023-10-24arm64, irqchip/gic-v3, ACPI: Move MADT GICC enabled check into a helperJames Morse1-0/+5
ACPI, irqchip and the architecture code all inspect the MADT enabled bit for a GICC entry in the MADT. The addition of an 'online capable' bit means all these sites need updating. Move the current checks behind a helper to make future updates easier. Signed-off-by: James Morse <[email protected]> Reviewed-by: Jonathan Cameron <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Signed-off-by: "Russell King (Oracle)" <[email protected]> Acked-by: "Rafael J. Wysocki" <[email protected]> Reviewed-by: Sudeep Holla <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
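A sketch of the kind of helper this introduces (the helper name and exact location are my guess, not quoted from the patch); the point is that the flag test lives in one place so the upcoming 'online capable' bit only has to be handled here:

    #include <linux/acpi.h>

    /* Hypothetical name; centralizes the MADT GICC "enabled" test. */
    static inline bool acpi_gicc_is_usable(struct acpi_madt_generic_interrupt *gicc)
    {
    	return gicc->flags & ACPI_MADT_ENABLED;
    }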
2023-10-24netfilter: nf_tables: set->ops->insert returns opaque set element in case of EEXISTPablo Neira Ayuso1-1/+1
Return struct nft_elem_priv instead of struct nft_set_ext for consistency with ("netfilter: nf_tables: expose opaque set element as struct nft_elem_priv") and to prepare the introduction of element timeout updates from the control path. Signed-off-by: Pablo Neira Ayuso <[email protected]>
2023-10-24netfilter: nf_tables: shrink memory consumption of set elementsPablo Neira Ayuso1-9/+9
Instead of copying struct nft_set_elem into struct nft_trans_elem, store the pointer to the opaque set element object in the transaction. Adapt the set backend API (and set backend implementations) to take the pointer to the opaque set element representation whenever required. This patch deconstifies the .remove() and .activate() set backend API since these modify the opaque set element object. It also constifies nft_set_elem_ext(), which provides access to the nft_set_ext struct without updating the object. According to pahole on x86_64, this patch shrinks struct nft_trans_elem size from 216 to 24 bytes. This patch also reduces stack memory consumption by removing the template struct nft_set_elem object and using the opaque set element object instead, such as from the set iterator API, catchall elements and the get element command. Signed-off-by: Pablo Neira Ayuso <[email protected]>
2023-10-24netfilter: nf_tables: expose opaque set element as struct nft_elem_privPablo Neira Ayuso1-13/+25
Add a placeholder structure and place it at the beginning of each struct nft_*_elem for each existing set backend, instead of exposing elements to the frontend as void type, which defeats compiler type checks. Use a pointer to this new type to replace void *. This patch updates the following set backend API to use this new struct nft_elem_priv placeholder structure:
- update
- deactivate
- flush
- get
as well as the following helper functions:
- nft_set_elem_ext()
- nft_set_elem_init()
- nft_set_elem_destroy()
- nf_tables_set_elem_destroy()
This patch adds nft_elem_priv_cast() to cast struct nft_elem_priv to the native element representation of the corresponding set backend. BUILD_BUG_ON() makes sure this .priv placeholder is always at the top of the opaque set element representation. Suggested-by: Florian Westphal <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
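The underlying pattern, shown as a generic standalone illustration rather than the actual nf_tables code (type and function names below are invented): an empty placeholder struct sits at offset 0 of every backend element, and a checked cast converts back to the concrete type.

    #include <stddef.h>
    #include <assert.h>

    struct elem_priv { };		/* opaque placeholder (empty struct: GNU C, as used in the kernel) */

    struct rbtree_elem {		/* hypothetical backend element */
    	struct elem_priv priv;	/* must stay first */
    	int key;
    };

    static inline struct rbtree_elem *rbtree_elem_cast(struct elem_priv *priv)
    {
    	/* mirrors the BUILD_BUG_ON() mentioned above */
    	static_assert(offsetof(struct rbtree_elem, priv) == 0,
    		      "priv placeholder must be at the top");
    	return (struct rbtree_elem *)priv;
    }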
2023-10-24netfilter: nf_tables: set backend .flush always succeedsPablo Neira Ayuso1-1/+1
.flush is always successful since it just iterates over the set elements to mark them as inactive in the next generation. Signed-off-by: Pablo Neira Ayuso <[email protected]>
2023-10-24netfilter: conntrack: switch connlabels to atomic_tFlorian Westphal2-2/+2
The spinlock dates back to the day when connlabels did not have a fixed size and reallocation had to be supported. Remove it. This change also allows calling the helpers from softirq or timer context without deadlocks. Also add WARN()s to catch refcounting imbalances. Signed-off-by: Florian Westphal <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
2023-10-24pmdomain: Merge branch genpd_dt into nextUlf Hansson1-0/+21
Merge the immutable branch genpd_dt into next, to allow the DT bindings to be tested together with new pmdomain changes that are targeted for v6.7. Signed-off-by: Ulf Hansson <[email protected]>
2023-10-24dt-bindings: power: rpmpd: Add MSM8917, MSM8937 and QM215Otto Pflüger1-0/+21
The MSM8917, MSM8937 and QM215 SoCs have VDDCX and VDDMX power domains controlled in voltage level mode. Define the MSM8937 and QM215 power domains as aliases because these SoCs are similar to MSM8917 and may share some parts of the device tree. Also add the compatibles for these SoCs to the documentation, with qcom,msm8937-rpmpd using qcom,msm8917-rpmpd as a fallback compatible because there are no known differences. QM215 is not compatible with these because it uses different regulators. Signed-off-by: Otto Pflüger <[email protected]> Reviewed-by: Krzysztof Kozlowski <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Ulf Hansson <[email protected]>
2023-10-24gtp: uapi: fix GTPA_MAXPablo Neira Ayuso1-1/+1
Define GTPA_MAX as __GTPA_MAX - 1; otherwise GTPA_MAX is off by 2. Fixes: 459aa660eb1d ("gtp: add initial driver for datapath of GPRS Tunneling Protocol (GTP-U)") Signed-off-by: Pablo Neira Ayuso <[email protected]> Signed-off-by: Paolo Abeni <[email protected]>
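For context, a self-contained illustration of the usual netlink enum convention this fix restores (the attribute names below are invented, not the real gtp.h):

    #include <stdio.h>

    enum {
    	DEMO_A_UNSPEC,
    	DEMO_A_LINK,
    	DEMO_A_VERSION,
    	__DEMO_A_MAX,			/* 3: one past the last valid attribute */
    };
    #define DEMO_A_MAX (__DEMO_A_MAX - 1)	/* 2: the last valid attribute */

    int main(void)
    {
    	/* Defining the max as (__DEMO_A_MAX + 1) would yield 4 instead of 2,
    	 * i.e. off by 2, oversizing attribute tables indexed by it. */
    	printf("DEMO_A_MAX = %d\n", DEMO_A_MAX);
    	return 0;
    }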
2023-10-24xsk: Avoid starving the xsk further down the listAlbert Huang1-0/+7
In the previous implementation, when multiple xsk sockets were associated with a single xsk_buff_pool, a situation could arise where the xsk_tx_list maintained data at the front for one xsk socket while starving the xsk sockets at the back of the list. This could result in issues such as the inability to transmit packets, increased latency, and jitter. To address this problem, we introduce a new variable called tx_budget_spent, which limits each xsk to transmit a maximum of MAX_PER_SOCKET_BUDGET tx descriptors. This allocation ensures equitable opportunities for subsequent xsk sockets to send tx descriptors. The value of MAX_PER_SOCKET_BUDGET is set to 32. Signed-off-by: Albert Huang <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Magnus Karlsson <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
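A generic, self-contained illustration of the budgeting idea (not the kernel code; the helpers are invented): each socket on the shared TX list may consume at most MAX_PER_SOCKET_BUDGET descriptors per pass, so sockets further down the list still get to transmit.

    #include <stdbool.h>

    #define MAX_PER_SOCKET_BUDGET 32

    struct demo_xsk {
    	struct demo_xsk *next;		/* singly linked TX list */
    	unsigned int tx_budget_spent;
    };

    /* Invented stand-ins for real descriptor handling. */
    bool demo_has_tx_desc(struct demo_xsk *xs);
    void demo_send_one(struct demo_xsk *xs);

    static void demo_tx_pass(struct demo_xsk *head)
    {
    	struct demo_xsk *xs;

    	for (xs = head; xs; xs = xs->next)
    		xs->tx_budget_spent = 0;	/* new pass, reset budgets */

    	for (xs = head; xs; xs = xs->next) {
    		/* Stop charging this socket once its budget is spent,
    		 * leaving room for the sockets behind it in the list. */
    		while (xs->tx_budget_spent < MAX_PER_SOCKET_BUDGET &&
    		       demo_has_tx_desc(xs)) {
    			demo_send_one(xs);
    			xs->tx_budget_spent++;
    		}
    	}
    }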
2023-10-24sched/fair: Remove SIS_PROPPeter Zijlstra1-2/+0
SIS_UTIL seems to work well, let's remove the old thing. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Vincent Guittot <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2023-10-24sched: Add cpus_share_resources APIBarry Song2-1/+14
Add cpus_share_resources() API. This is the preparation for the optimization of select_idle_cpu() on platforms with cluster scheduler level. On a machine with clusters, cpus_share_resources() will test whether two cpus are within the same cluster. On a non-cluster machine it behaves the same as cpus_share_cache(). So we use "resources" here for cache resources. Signed-off-by: Barry Song <[email protected]> Signed-off-by: Yicong Yang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Gautham R. Shenoy <[email protected]> Reviewed-by: Tim Chen <[email protected]> Reviewed-by: Vincent Guittot <[email protected]> Tested-and-reviewed-by: Chen Yu <[email protected]> Tested-by: K Prateek Nayak <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
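A hedged usage sketch (not from the patch): when scanning for an idle CPU, prefer one that shares cluster resources with the target before settling for one that merely shares the LLC; cpu_is_idle() is a placeholder predicate.

    /* Kernel-side sketch; assumes the <linux/sched/topology.h> declarations. */
    static int pick_nearby_idle_cpu(int target, const int *candidates, int n)
    {
    	int i, cpu, fallback = -1;

    	for (i = 0; i < n; i++) {
    		cpu = candidates[i];
    		if (!cpu_is_idle(cpu))			/* placeholder */
    			continue;
    		if (cpus_share_resources(cpu, target))	/* same cluster: best */
    			return cpu;
    		if (fallback < 0 && cpus_share_cache(cpu, target))
    			fallback = cpu;			/* same LLC: acceptable */
    	}
    	return fallback;
    }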
2023-10-23bpf: correct loop detection for iterators convergenceEduard Zingerman1-0/+15
It turns out that .branches > 0 in is_state_visited() is not a sufficient condition to identify if two verifier states form a loop when iterators convergence is computed. This commit adds logic to distinguish situations like below:

        (I)  initial                     (II) initial
              |                                |
              V                                V
    .---------> hdr                           ..
    |            |                             |
    |            V                             V
    |    .------...                    .------..
    |    |       |                     |       |
    |    V       V                     V       V
    |   ...     ...                .-> hdr    ..
    |    |       |                 |    |      |
    |    V       V                 |    V      V
    |   succ <- cur                |   succ <- cur
    |    |                         |    |
    |    V                         |    V
    |   ...                        |   ...
    |    |                         |    |
    '----'                         '----'

For both (I) and (II) successor 'succ' of the current state 'cur' was previously explored and has branches count at 0. However, loop entry 'hdr' corresponding to 'succ' might be a part of the current DFS path. If that is the case 'succ' and 'cur' are members of the same loop and have to be compared exactly. Co-developed-by: Andrii Nakryiko <[email protected]> Co-developed-by: Alexei Starovoitov <[email protected]> Reviewed-by: Andrii Nakryiko <[email protected]> Signed-off-by: Eduard Zingerman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-10-23bpf: exact states comparison for iterator convergence checksEduard Zingerman1-0/+1
Convergence for open coded iterators is computed in is_state_visited() by examining states with branches count > 1 and using states_equal(). states_equal() computes sub-state relation using read and precision marks. Read and precision marks are propagated from children states, thus they are not guaranteed to be complete inside a loop when branches count > 1. This could be demonstrated using the following unsafe program:

     1. r7 = -16
     2. r6 = bpf_get_prandom_u32()
     3. while (bpf_iter_num_next(&fp[-8])) {
     4.   if (r6 != 42) {
     5.     r7 = -32
     6.     r6 = bpf_get_prandom_u32()
     7.     continue
     8.   }
     9.   r0 = r10
    10.   r0 += r7
    11.   r8 = *(u64 *)(r0 + 0)
    12.   r6 = bpf_get_prandom_u32()
    13. }

Here the verifier would first visit path 1-3, create a checkpoint at 3 with r7=-16, then continue to 4-7,3 with r7=-32. Because instructions at 9-12 had not been visited yet, the existing checkpoint at 3 does not have a read or precision mark for r7. Thus states_equal() would return true and the verifier would discard the current state, so the unsafe memory access at 11 would not be caught. This commit fixes this loophole by introducing exact state comparisons for iterator convergence logic:
- registers are compared using regs_exact() regardless of read or precision marks;
- stack slots have to have identical type.
Unfortunately, this is too strict even for simple programs like below:

    i = 0;
    while(iter_next(&it))
      i++;

At each iteration step i++ would produce a new distinct state and eventually the instruction processing limit would be reached. To avoid such behavior, speculatively forget (widen) the range for imprecise scalar registers if those registers were not precise at the end of the previous iteration and do not match exactly. This is a conservative heuristic that allows verifying a wide range of programs; however, it precludes verification of programs that conjure an imprecise value on the first loop iteration and use it as precise on the second. Test case iter_task_vma_for_each() presents one of such cases:

    unsigned int seen = 0;
    ...
    bpf_for_each(task_vma, vma, task, 0) {
      if (seen >= 1000)
        break;
      ...
      seen++;
    }

Here clang generates the following code:

    <LBB0_4>:
    24: r8 = r6                          ; stash current value of
    ... body ...                           'seen'
    29: r1 = r10
    30: r1 += -0x8
    31: call bpf_iter_task_vma_next
    32: r6 += 0x1                        ; seen++;
    33: if r0 == 0x0 goto +0x2 <LBB0_6>  ; exit on next() == NULL
    34: r7 += 0x10
    35: if r8 < 0x3e7 goto -0xc <LBB0_4> ; loop on seen < 1000
    <LBB0_6>:
    ... exit ...

Note that the counter in r6 is copied to r8 and then incremented, and the conditional jump is done using r8. Because of this the precision mark for r6 lags one state behind the precision mark on r8 and the widening logic kicks in. Adding barrier_var(seen) after the conditional is sufficient to force clang to use the same register for both counting and the conditional jump. This issue was discussed in the thread [1] which was started by Andrew Werner <[email protected]> demonstrating a similar bug in callback functions handling. The callbacks would be addressed in a followup patch. [1] https://lore.kernel.org/bpf/[email protected]/ Co-developed-by: Andrii Nakryiko <[email protected]> Co-developed-by: Alexei Starovoitov <[email protected]> Signed-off-by: Eduard Zingerman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-10-23page_pool: update document about fragment APIYunsheng Lin1-13/+80
As more drivers begin to use the fragment API, update the document to help driver authors decide which API to use. Signed-off-by: Yunsheng Lin <[email protected]> CC: Lorenzo Bianconi <[email protected]> CC: Alexander Duyck <[email protected]> CC: Liang Chen <[email protected]> CC: Alexander Lobakin <[email protected]> CC: Dima Tisnek <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-10-23page_pool: introduce page_pool_alloc() APIYunsheng Lin1-0/+66
Currently page pool supports the below use cases:

use case 1: allocate a page without page splitting using the page_pool_alloc_pages() API if the driver knows that the memory it needs is always bigger than half of the page allocated from the page pool.

use case 2: allocate a page frag with page splitting using the page_pool_alloc_frag() API if the driver knows that the memory it needs is always smaller than or equal to half of the page allocated from the page pool.

There are emerging use cases [1] & [2] that are a mix of the above two: the driver doesn't know the size of the memory it needs beforehand, so the driver may use something like below to allocate memory with the least memory utilization and performance penalty:

    if (size << 1 > max_size)
        page = page_pool_alloc_pages();
    else
        page = page_pool_alloc_frag();

To avoid the driver doing something like the above, add the page_pool_alloc() API to support this use case, and report the true size of the memory that is actually allocated by updating '*size' back to the driver, in order to avoid exacerbating the truesize underestimate problem. Rename page_pool_free(), which is used in the destroy process, to __page_pool_destroy() to avoid confusion with the newly added API. 1. https://lore.kernel.org/all/d3ae6bd3537fbce379382ac6a42f67e22f27ece2.1683896626.git.lorenzo@kernel.org/ 2. https://lore.kernel.org/all/[email protected]/ Signed-off-by: Yunsheng Lin <[email protected]> CC: Lorenzo Bianconi <[email protected]> CC: Alexander Duyck <[email protected]> CC: Liang Chen <[email protected]> CC: Alexander Lobakin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
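Roughly what such a wrapper can look like; this is a sketch under the assumptions stated in the message (half-page threshold, '*size' reported back), not the exact upstream implementation:

    /* Kernel-side sketch; assumes the page_pool headers under <net/page_pool/>. */
    static inline struct page *demo_page_pool_alloc(struct page_pool *pool,
    						unsigned int *offset,
    						unsigned int *size, gfp_t gfp)
    {
    	unsigned int max_size = PAGE_SIZE << pool->p.order;

    	if ((*size << 1) > max_size) {
    		/* More than half a page requested: splitting buys nothing,
    		 * so hand out the whole page and report the true size. */
    		*size = max_size;
    		*offset = 0;
    		return page_pool_alloc_pages(pool, gfp);
    	}

    	/* Fits in half a page: hand out a fragment. */
    	return page_pool_alloc_frag(pool, offset, *size, gfp);
    }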
2023-10-23page_pool: remove PP_FLAG_PAGE_FRAGYunsheng Lin1-4/+2
PP_FLAG_PAGE_FRAG is not really needed after pp_frag_count handling is unified and page_pool_alloc_frag() is supported in 32-bit arch with 64-bit DMA, so remove it. Signed-off-by: Yunsheng Lin <[email protected]> CC: Lorenzo Bianconi <[email protected]> CC: Alexander Duyck <[email protected]> CC: Liang Chen <[email protected]> CC: Alexander Lobakin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-10-23page_pool: unify frag_count handling in page_pool_is_last_frag()Yunsheng Lin1-13/+34
Currently when page_pool_create() is called with the PP_FLAG_PAGE_FRAG flag, page_pool_alloc_pages() is only allowed to be called under the below constraints:
1. page_pool_fragment_page() needs to be called to set up page->pp_frag_count immediately.
2. page_pool_defrag_page() often needs to be called to drain page->pp_frag_count when no more users will be holding on to that page.

Those constraints exist in order to support a page being split into multiple fragments. And those constraints have some overhead because of the cache line dirtying/bouncing and atomic update. Those constraints are unavoidable when we need a page to be split into more than one fragment, but there are also cases where we want to avoid the above constraints and their overhead when a page can't be split, as it can only hold a single fragment as requested by the user, depending on different use cases:
use case 1: allocate a page without page splitting.
use case 2: allocate a page with page splitting.
use case 3: allocate a page with or without page splitting depending on the fragment size.

Currently page pool only provides the page_pool_alloc_pages() and page_pool_alloc_frag() APIs to enable 1 & 2 separately, so we can not use a combination of 1 & 2 to enable 3; it is not possible yet because of the per-page_pool flag PP_FLAG_PAGE_FRAG. So in order to allow allocating an unsplit page without the overhead of a split page, while still allowing allocation of a split page, we need to remove the per-page_pool flag in page_pool_is_last_frag(). As best as I can think of, there are two methods as below:
1. Add a per-page flag/bit to indicate whether a page is split or not, which means we might need to update that flag/bit every time the page is recycled, dirtying the cache line of 'struct page' for use case 1.
2. Unify the page->pp_frag_count handling for both split and unsplit pages by assuming all pages in the page pool are split into a big fragment initially.

As page pool already supports use case 1 without dirtying the cache line of 'struct page' whenever a page is recycled, we need to support the above use case 3 with minimal overhead, especially without adding any noticeable overhead for use case 1. Since we are already doing an optimization by not updating pp_frag_count in page_pool_defrag_page() for the last fragment user, this patch chooses to unify the pp_frag_count handling to support the above use case 3. There is no noticeable performance degradation, and some justification for unifying the frag_count handling with this patch applied, using a micro-benchmark testing in [1]. 1. https://lore.kernel.org/all/[email protected]/ Signed-off-by: Yunsheng Lin <[email protected]> CC: Lorenzo Bianconi <[email protected]> CC: Alexander Duyck <[email protected]> CC: Liang Chen <[email protected]> CC: Alexander Lobakin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-10-24Merge tag 'topic/vmemdup-user-array-2023-10-24-1' of git://anongit.freedesktop.org/drm/drm into drm-nextDave Airlie1-0/+40
vmemdup-user-array API and changes with it. This is just a process PR to merge the topic branch into drm-next; it contains some core kernel and drm changes. Signed-off-by: Dave Airlie <[email protected]> From: Dave Airlie <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
2023-10-23Merge tag 'for-net-next-2023-10-23' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-nextJakub Kicinski3-8/+37
Luiz Augusto von Dentz says:

====================
bluetooth-next pull request for net-next:

 - Add 0bda:b85b for Fn-Link RTL8852BE
 - ISO: Many fixes for broadcast support
 - Mark bcm4378/bcm4387 as BROKEN_LE_CODED
 - Add support ITTIM PE50-M75C
 - Add RTW8852BE device 13d3:3570
 - Add support for QCA2066
 - Add support for Intel Misty Peak - 8087:0038

* tag 'for-net-next-2023-10-23' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next:
  Bluetooth: hci_sync: Fix Opcode prints in bt_dev_dbg/err
  Bluetooth: Fix double free in hci_conn_cleanup
  Bluetooth: btmtksdio: enable bluetooth wakeup in system suspend
  Bluetooth: btusb: Add 0bda:b85b for Fn-Link RTL8852BE
  Bluetooth: hci_bcm4377: Mark bcm4378/bcm4387 as BROKEN_LE_CODED
  Bluetooth: ISO: Copy BASE if service data matches EIR_BAA_SERVICE_UUID
  Bluetooth: Make handle of hci_conn be unique
  Bluetooth: btusb: Add date->evt_skb is NULL check
  Bluetooth: ISO: Fix bcast listener cleanup
  Bluetooth: msft: __hci_cmd_sync() doesn't return NULL
  Bluetooth: ISO: Match QoS adv handle with BIG handle
  Bluetooth: ISO: Allow binding a bcast listener to 0 bises
  Bluetooth: btusb: Add RTW8852BE device 13d3:3570 to device tables
  Bluetooth: qca: add support for QCA2066
  Bluetooth: ISO: Set CIS bit only for devices with CIS support
  Bluetooth: Add support for Intel Misty Peak - 8087:0038
  Bluetooth: Add support ITTIM PE50-M75C
  Bluetooth: ISO: Pass BIG encryption info through QoS
  Bluetooth: ISO: Fix BIS cleanup
====================

Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-10-23devlink: make devlink_flash_overwrite enum named oneJiri Pirko1-1/+1
Since this enum is going to be used in a generated userspace file, name it properly. Signed-off-by: Jiri Pirko <[email protected]> Reviewed-by: Jacob Keller <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-10-23bpf, tcx: Get rid of tcx_link_constDaniel Borkmann1-6/+1
Small clean up to get rid of the extra tcx_link_const() and only retain the tcx_link(). Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>
2023-10-23ASoC: Merge up v6.6-rc7Mark Brown78-196/+294
Get fixes needed so we can enable build of ams-delta in more configurations.
2023-10-23Bluetooth: Make handle of hci_conn be uniqueZiyang Xuan1-1/+5
The handle of a new hci_conn is always HCI_CONN_HANDLE_MAX + 1 if the handle of the first hci_conn entry in hci_dev->conn_hash->list is not HCI_CONN_HANDLE_MAX + 1. Use an ida to manage the allocation of hci_conn->handle to make it unique. Fixes: 9f78191cc9f1 ("Bluetooth: hci_conn: Always allocate unique handles") Signed-off-by: Ziyang Xuan <[email protected]> Signed-off-by: Luiz Augusto von Dentz <[email protected]>
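A hedged sketch of the IDA-based allocation (the exact handle range and where this is plumbed in are assumptions, not quoted from the patch):

    #include <linux/idr.h>

    static DEFINE_IDA(conn_handle_ida);

    /* Returns a unique handle in [1, HCI_CONN_HANDLE_MAX] or a negative errno. */
    static int demo_alloc_conn_handle(void)
    {
    	return ida_alloc_range(&conn_handle_ida, 1, HCI_CONN_HANDLE_MAX,
    			       GFP_KERNEL);
    }

    static void demo_free_conn_handle(int handle)
    {
    	ida_free(&conn_handle_ida, handle);
    }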
2023-10-23Bluetooth: ISO: Fix bcast listener cleanupIulia Tanasescu1-23/+20
This fixes the cleanup callback for slave bis and pa sync hcons. Closing all bis hcons will trigger BIG Terminate Sync, while closing all bises and the pa sync hcon will also trigger PA Terminate Sync. Signed-off-by: Iulia Tanasescu <[email protected]> Signed-off-by: Luiz Augusto von Dentz <[email protected]>
2023-10-23Bluetooth: ISO: Pass BIG encryption info through QoSIulia Tanasescu2-1/+27
This enables a broadcast sink to be informed if the PA it has synced with is associated with an encrypted BIG, by retrieving the socket QoS and checking the encryption field. After PA sync has been successfully established and the first BIGInfo advertising report is received, a new hcon is added and notified to the ISO layer. The ISO layer sets the encryption field of the socket and hcon QoS according to the encryption parameter of the BIGInfo advertising report event. After that, the userspace is woken up, and the QoS of the new PA sync socket can be read, to inspect the encryption field and follow up accordingly. Signed-off-by: Iulia Tanasescu <[email protected]> Signed-off-by: Luiz Augusto von Dentz <[email protected]>
2023-10-23Bluetooth: ISO: Fix BIS cleanupIulia Tanasescu1-0/+2
This fixes the master BIS cleanup procedure - as opposed to CIS cleanup, no HCI disconnect command should be issued. A master BIS should only be terminated by disabling periodic and extended advertising, and terminating the BIG. In case of a Broadcast Receiver, all BIS and PA connections can be cleaned up by calling hci_conn_failed, since it contains all function calls that are necessary for successful cleanup. Signed-off-by: Iulia Tanasescu <[email protected]> Signed-off-by: Luiz Augusto von Dentz <[email protected]>
2023-10-23EDAC/versal: Add a Xilinx Versal memory controller driverShubhrajyoti Datta1-0/+12
Add an EDAC driver for the RAS capabilities on the Xilinx integrated DDR Memory Controllers (DDRMCs) which support both DDR4 and LPDDR4/4X memory interfaces. It has four programmable Network-on-Chip (NoC) interface ports and is designed to handle multiple streams of traffic. The driver reports correctable and uncorrectable errors, and also creates debugfs entries for testing through error injection. [ bp: - Add a pointer to the documentation about the register unlock code. - Squash in a fix for a Smatch static checker issue as reported by Dan Carpenter: https://lore.kernel.org/r/[email protected] ] Co-developed-by: Sai Krishna Potthuri <[email protected]> Signed-off-by: Sai Krishna Potthuri <[email protected]> Signed-off-by: Shubhrajyoti Datta <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-10-23Merge branches 'rcu/torture', 'rcu/fixes', 'rcu/docs', 'rcu/refscale', 'rcu/tasks' and 'rcu/stall' into rcu/nextFrederic Weisbecker6-9/+51
rcu/torture:  RCU torture, locktorture and generic torture infrastructure
rcu/fixes:    Generic and misc fixes
rcu/docs:     RCU documentation updates
rcu/refscale: RCU reference scalability test updates
rcu/tasks:    RCU tasks updates
rcu/stall:    Stall detection updates
2023-10-23ASoC: simple-card-utils: Make simple_util_remove() return voidUwe Kleine-König1-1/+1
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is (mostly) ignored and this typically results in resource leaks. To improve this, there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new() which already returns void. simple_util_remove() returned zero unconditionally. Make it return void instead and convert all users to struct platform_driver::remove_new(). Signed-off-by: Uwe Kleine-König <[email protected]> Reviewed-by: Herve Codina <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
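A hypothetical driver fragment showing the end state this enables (driver names are invented; only simple_util_remove() and .remove_new are taken from the message and tree):

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <sound/simple_card_utils.h>	/* header path as I recall it */

    static int demo_card_probe(struct platform_device *pdev)
    {
    	return 0;			/* real probe work omitted */
    }

    static void demo_card_remove(struct platform_device *pdev)
    {
    	simple_util_remove(pdev);	/* returns void after this change */
    }

    static struct platform_driver demo_card_driver = {
    	.driver		= { .name = "demo-card" },
    	.probe		= demo_card_probe,
    	.remove_new	= demo_card_remove,	/* void-returning callback */
    };
    module_platform_driver(demo_card_driver);
    MODULE_LICENSE("GPL");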
2023-10-23wifi: mac80211: don't drop all unprotected public action framesAvraham Stern1-0/+29
Not all public action frames have a protected variant. When MFP is enabled drop only public action frames that have a dual protected variant. Fixes: 76a3059cf124 ("wifi: mac80211: drop some unprotected action frames") Signed-off-by: Avraham Stern <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20231016145213.2973e3c8d3bb.I6198b8d3b04cf4a97b06660d346caec3032f232a@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: cfg80211: Allow AP/P2PGO to indicate port authorization to peer STA/P2PClientVinayak Yadawad1-2/+6
In 4-way handshake offload, cfg80211_port_authorized enables the driver to indicate a successful 4-way handshake to the cfg80211 layer. Currently this path of port authorization is restricted to the interface types NL80211_IFTYPE_STATION and NL80211_IFTYPE_P2P_CLIENT. This patch extends the support to the NL80211_IFTYPE_AP and NL80211_IFTYPE_P2P_GO interfaces to authorize a peer STA/P2P_CLIENT whenever authentication is offloaded on the AP/P2P_GO interface. Signed-off-by: Vinayak Yadawad <[email protected]> Link: https://lore.kernel.org/r/dee3b0a2b4f617e932c90bff4504a89389273632.1695721435.git.vinayak.yadawad@broadcom.com Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: rename struct cfg80211_rx_assoc_resp to cfg80211_rx_assoc_resp_dataKalle Valo1-4/+4
make htmldocs warns: Documentation/driver-api/80211/cfg80211:48: ./include/net/cfg80211.h:7290: WARNING: Duplicate C declaration, also defined at cfg80211:7251. Declaration is '.. c:function:: void cfg80211_rx_assoc_resp (struct net_device *dev, struct cfg80211_rx_assoc_resp *data)'. This is because there's a function named cfg80211_rx_assoc_resp() and a struct named cfg80211_rx_assoc_resp, see the previous patch for more info. To work around this, rename the struct to cfg80211_rx_assoc_resp_data. The parameter for the function is named 'data' anyway, so the naming here is consistent. Compile tested only. Signed-off-by: Kalle Valo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: rename ieee80211_tx_status() to ieee80211_tx_status_skb()Kalle Valo1-15/+15
make htmldocs warns: Documentation/driver-api/80211/mac80211:109: ./include/net/mac80211.h:5170: WARNING: Duplicate C declaration, also defined at mac80211:1117. Declaration is '.. c:function:: void ieee80211_tx_status (struct ieee80211_hw *hw, struct sk_buff *skb)'. This is because there's a function named ieee80211_tx_status() and a struct named ieee80211_tx_status. This has been discussed previously but no solution found: https://lore.kernel.org/all/[email protected]/ There's also a bug open for three years with no solution in sight: https://github.com/sphinx-doc/sphinx/pull/8313 So I guess we have no other solution than to work around this in the code, for example to rename the function to ieee80211_tx_status_skb() to avoid the name conflict. I got the idea for the name from ieee80211_tx_status_noskb(), in which the skb is not provided as an argument; with ieee80211_tx_status_skb() the skb is provided. Compile tested only. Signed-off-by: Kalle Valo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: remove unused argument of ieee80211_get_tdls_action()Dmitry Antipov1-2/+1
Remove unused 'hdr_size' argument of 'ieee80211_get_tdls_action()' and adjust 'ieee80211_report_used_skb()' accordingly. Signed-off-by: Dmitry Antipov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: nl80211: fix doc typosRandy Dunlap1-2/+2
Correct some typos. Signed-off-by: Randy Dunlap <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Kalle Valo <[email protected]> Cc: [email protected] Cc: "David S. Miller" <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Jakub Kicinski <[email protected]> Cc: Paolo Abeni <[email protected]> Cc: [email protected] Reviewed-by: Simon Horman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: fix header kernel-doc typosRandy Dunlap1-14/+14
Correct typos and fix run-on sentences. Signed-off-by: Randy Dunlap <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Kalle Valo <[email protected]> Cc: [email protected] Cc: "David S. Miller" <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Jakub Kicinski <[email protected]> Cc: Paolo Abeni <[email protected]> Cc: [email protected] Reviewed-by: Simon Horman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: cfg80211: fix header kernel-doc typosRandy Dunlap1-10/+10
Correct spelling of several words. Signed-off-by: Randy Dunlap <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Kalle Valo <[email protected]> Cc: [email protected] Cc: "David S. Miller" <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Jakub Kicinski <[email protected]> Cc: Paolo Abeni <[email protected]> Cc: [email protected] Reviewed-by: Simon Horman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: add link id to mgd_prepare_tx()Miri Korenblit1-0/+3
As we are moving to MLO and links terms, also the airtime protection will be done for a link rather than for a vif. Thus, some drivers will need to know for which link to protect airtime. Add link id as a parameter to the mgd_prepare_tx() callback. Signed-off-by: Miri Korenblit <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20230928172905.c7fc59a6780b.Ic88a5037d31e184a2dce0b031ece1a0a93a3a9da@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: make mgd_protect_tdls_discover MLO-awareMiri Korenblit1-1/+2
Since userspace can now choose which link to establish the TDLS on, we should know on what channel to do session protection. Add a link id parameter to this callback. Signed-off-by: Miri Korenblit <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20230928172905.ef12ce3eb835.If864f406cfd9e24f36a2b88fd13a37328633fcf9@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: cfg80211: Fix typo in documentationIlan Peer1-1/+1
Fix a small typo in a comment. Signed-off-by: Ilan Peer <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20230928172905.9dce226e393f.I929bfb9371e31c9e8d2bb1c1a96e9b1f3d02f2d0@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: Rename and update IEEE80211_VIF_DISABLE_SMPS_OVERRIDEIlan Peer1-3/+3
EMLSR operation and SMPS operation cannot coexist. Thus, when EMLSR is enabled, all SMPS signaling towards the AP should be stopped (it is expected that the AP will consider SMPS to be off). Rename IEEE80211_VIF_DISABLE_SMPS_OVERRIDE to IEEE80211_VIF_EML_ACTIVE and use the flag as an indication from the driver that EMLSR is enabled. When EMLSR is enabled SMPS flows towards the AP MLD should be stopped. Signed-off-by: Ilan Peer <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20230928172905.fb2c2f9a0645.If6df5357568abd623a081f0f33b07e63fb8bba99@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23wifi: mac80211: add a driver callback to add vif debugfsMiri Korenblit1-0/+6
Add a callback which the driver can use to add the vif debugfs. We used to have this back until commit d260ff12e776 ("mac80211: remove vif debugfs driver callbacks") where we thought that it will be easier to just add them during interface add/remove. However, now with multi-link, we want to have proper debugfs for drivers for multi-link where some files might be in the netdev for non-MLO connections, and in the links for MLO ones, so we need to do some reconstruction when switching the mode. Moving to this new call enables that and MLO drivers will have to use it for proper debugfs operation. Signed-off-by: Miri Korenblit <[email protected]> Signed-off-by: Johannes Berg <[email protected]> Signed-off-by: Gregory Greenman <[email protected]> Link: https://lore.kernel.org/r/20230928172905.ac38913f6ab7.Iee731d746bb08fcc628fa776f337016a12dc62ac@changeid Signed-off-by: Johannes Berg <[email protected]>
2023-10-23Merge tag 'v6.6-rc7' into sched/core, to pick up fixesIngo Molnar39-47/+118
Pick up recent sched/urgent fixes merged upstream. Signed-off-by: Ingo Molnar <[email protected]>
2023-10-23tcp: add TCPI_OPT_USEC_TSEric Dumazet1-0/+1
Add the ability to report in tcp_info.tcpi_options if a flow is using usec resolution in TCP TS val. Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
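From userspace the new bit can be read out of tcpi_options via TCP_INFO; a small sketch (guarded, since TCPI_OPT_USEC_TS only exists in uapi headers that already contain this change):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/tcp.h>		/* struct tcp_info, TCPI_OPT_* */

    /* Returns 1 if the flow negotiated usec TS, 0 if not, -1 on error. */
    static int flow_uses_usec_ts(int sk)
    {
    	struct tcp_info ti;
    	socklen_t len = sizeof(ti);

    	if (getsockopt(sk, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
    		return -1;
    #ifdef TCPI_OPT_USEC_TS
    	return !!(ti.tcpi_options & TCPI_OPT_USEC_TS);
    #else
    	return 0;			/* headers predate this change */
    #endif
    }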
2023-10-23tcp: add support for usec resolution in TCP TS valuesEric Dumazet3-4/+9
Back in 2015, Van Jacobson suggested using usec resolution in TCP TS values. This has been implemented in our private kernels. Goals were:
1) better observability of delays in networking stacks.
2) better disambiguation of events based on TSval/ecr values.
3) building block for congestion control modules needing usec resolution.
Back then we implemented a scheme based on private SYN options to negotiate the feature. For upstream submission, we chose to use a route attribute, because this feature is probably going to be used in private networks [1] [2]:

    ip route add 10/8 ... features tcp_usec_ts

Note that RFC 7323 recommends a "timestamp clock frequency in the range 1 ms to 1 sec per tick.", but also mentions "the maximum acceptable clock frequency is one tick every 59 ns." [1] Unfortunately, RFC 7323 5.5 (Outdated Timestamps) suggests invalidating TS.Recent values after a flow was idle for more than 24 days. This is the part making usec_ts a problem for peers following this recommendation for long-living idle flows. [2] Attempts to standardize usec ts went nowhere: https://www.ietf.org/proceedings/97/slides/slides-97-tcpm-tcp-options-for-low-latency-00.pdf https://datatracker.ietf.org/doc/draft-wang-tcpm-low-latency-opt/ Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>