author	Alexei Starovoitov <[email protected]>	2023-10-20 10:12:55 -0700
committer	Alexei Starovoitov <[email protected]>	2023-10-20 14:15:13 -0700
commit	cf559a416f9bc061f3b96f8afc6ceae5eec9b2b0 (patch)
tree	e00e8bcbaedd9a6bc3783abe6edcb71b3c2ac382 /include/linux
parent	da1055b673f3baac2249571c9882ce767a0aa746 (diff)
parent	d440ba91ca4d3004709876779d40713a99e21f6a (diff)
Merge branch 'bpf-fixes-for-per-cpu-kptr'
Hou Tao says:
====================
bpf: Fixes for per-cpu kptr
From: Hou Tao <[email protected]>
Hi,
The patchset aims to fix the problems found during the review of the
per-cpu kptr patch-set [0]. Patch #1 moves pcpu_lock after the invocation
of pcpu_chunk_addr_search(); it is a micro-optimization for free_percpu(),
and the reason for including it in this patchset is that the same logic is
used in the newly-added API pcpu_alloc_size(). Patch #2 introduces
pcpu_alloc_size() for dynamic per-cpu areas. Patches #2 and #3 use
pcpu_alloc_size() to check whether unit_size matches the size of the
underlying per-cpu area and to select a matching bpf_mem_cache. Patch #4
fixes the freeing of per-cpu kptrs when they are freed by map destruction.
The last patch adds test cases for these problems.
Please see the individual patches for details. Comments are always
welcome.
Change Log:
v3:
* rebased on bpf-next
* patch 2: update API document to note that pcpu_alloc_size() doesn't
support statically allocated per-cpu area. (Dennis)
* patch 1 & 2: add Acked-by from Dennis
v2: https://lore.kernel.org/bpf/[email protected]/
* add a new patch "don't acquire pcpu_lock for pcpu_chunk_addr_search()"
* patch 2: change type of bit_off and end to unsigned long (Andrew)
* patch 2: rename the new API as pcpu_alloc_size and follow 80-column convention (Dennis)
* patch 5: move the common declaration into bpf.h (Stanislav, Alexei)
v1: https://lore.kernel.org/bpf/[email protected]/
[0]: https://lore.kernel.org/bpf/[email protected]
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
Diffstat (limited to 'include/linux')
-rw-r--r--	include/linux/bpf.h		1
-rw-r--r--	include/linux/bpf_mem_alloc.h	1
-rw-r--r--	include/linux/percpu.h		1
3 files changed, 3 insertions, 0 deletions
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b4b40b45962b..b4825d3cdb29 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2058,6 +2058,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec);
 bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *rec_b);
 void bpf_obj_free_timer(const struct btf_record *rec, void *obj);
 void bpf_obj_free_fields(const struct btf_record *rec, void *obj);
+void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu);

 struct bpf_map *bpf_map_get(u32 ufd);
 struct bpf_map *bpf_map_get_with_uref(u32 ufd);
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index d644bbb298af..bb1223b21308 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -11,6 +11,7 @@ struct bpf_mem_caches;
 struct bpf_mem_alloc {
 	struct bpf_mem_caches __percpu *caches;
 	struct bpf_mem_cache __percpu *cache;
+	bool percpu;
 	struct work_struct work;
 };
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 68fac2e7cbe6..8c677f185901 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -132,6 +132,7 @@ extern void __init setup_per_cpu_areas(void);
 extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
 extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
 extern void free_percpu(void __percpu *__pdata);
+extern size_t pcpu_alloc_size(void __percpu *__pdata);

 DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))