author    David S. Miller <[email protected]>  2016-03-08 15:28:33 -0500
committer David S. Miller <[email protected]>  2016-03-08 15:28:33 -0500
commit    f14b488d50b7dc234ddaed53ce4293c9eac47457 (patch)
tree      ff802293cd7a0020211818c8039cd9f117f30a35 /include/uapi/linux
parent    8aba8b83128a04197991518e241aafd3323b705d (diff)
parent    c3f85cffc50d2f259903555979581a632b945ec2 (diff)
Merge branch 'bpf-map-prealloc'
Alexei Starovoitov says:

====================
bpf: map pre-alloc

v1->v2:
. fix few issues spotted by Daniel
. converted stackmap into pre-allocation as well
. added a workaround for lockdep false positive
. added pcpu_freelist_populate to be used by hashmap and stackmap

This patch set switches the bpf hash map to use pre-allocation by
default and introduces the BPF_F_NO_PREALLOC flag to keep the old
behavior for cases where full map pre-allocation is too memory
expensive.

Some time back Daniel Wagner reported crashes when a bpf hash map is
used to compute time intervals between preempt_disable->preempt_enable,
and recently Tom Zanussi reported a deadlock in the
iovisor/bcc/funccount tool when it is used to count the number of
invocations of kernel '*spin*' functions. Both problems are due to the
recursive use of slub and can only be solved by pre-allocating all map
elements.

A lot of different solutions were considered. Many were implemented,
but in the end pre-allocation proved to be the only feasible answer.
Pre-allocation itself was implemented 4 different ways:
- simple free-list with a single lock
- percpu_ida with optimizations
- blk-mq-tag variant customized for the bpf use case
- percpu_freelist
For bpf-style alloc/free patterns percpu_freelist is the best and is
what this patch set implements (a sketch of the idea follows below).
Detailed performance numbers are in patch 3.

Patch 1 fixes simple deadlocks due to missing recursion checks.
Patch 2 introduces percpu_freelist.
Patch 5 converts stackmap to pre-allocation.
Patches 6-9 prepare the test infrastructure.
Patch 10 is a stress test for the hash map infrastructure; it attaches
to spin_lock functions, and bpf_map_update/delete are called from
different contexts.
Patch 11 is a stress test for bpf_get_stackid.
Patch 12 is a map performance test.

Reported-by: Daniel Wagner <[email protected]>
Reported-by: Tom Zanussi <[email protected]>
====================

Signed-off-by: David S. Miller <[email protected]>
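To make the percpu_freelist idea concrete, here is a minimal
userspace-style sketch of the concept: one LIFO free list per CPU, each
with its own lock, populated round-robin the way pcpu_freelist_populate
spreads pre-allocated elements. The struct and function names loosely
mirror kernel/bpf/percpu_freelist.c, but the pthread mutexes, the fixed
NR_CPUS, and the explicit cpu argument are simplifications for
illustration, not the kernel implementation.

#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4	/* fixed CPU count, for the sketch only */

struct freelist_node {
	struct freelist_node *next;
};

struct freelist_head {
	struct freelist_node *first;
	pthread_mutex_t lock;	/* the kernel uses a raw spinlock here */
};

struct pcpu_freelist {
	struct freelist_head heads[NR_CPUS];
};

static void freelist_init(struct pcpu_freelist *fl)
{
	for (int i = 0; i < NR_CPUS; i++) {
		fl->heads[i].first = NULL;
		pthread_mutex_init(&fl->heads[i].lock, NULL);
	}
}

/* LIFO push onto one CPU's list. */
static void freelist_push(struct freelist_head *head,
			  struct freelist_node *node)
{
	pthread_mutex_lock(&head->lock);
	node->next = head->first;
	head->first = node;
	pthread_mutex_unlock(&head->lock);
}

/* Spread all pre-allocated elements round-robin across the CPUs,
 * as pcpu_freelist_populate does for hashmap and stackmap. */
static void freelist_populate(struct pcpu_freelist *fl,
			      struct freelist_node *nodes, int n)
{
	for (int i = 0; i < n; i++)
		freelist_push(&fl->heads[i % NR_CPUS], &nodes[i]);
}

/* Pop: try the local CPU's list first, then steal from the others.
 * NULL means every list is empty, i.e. the map is full. */
static struct freelist_node *freelist_pop(struct pcpu_freelist *fl, int cpu)
{
	for (int i = 0; i < NR_CPUS; i++) {
		struct freelist_head *head = &fl->heads[(cpu + i) % NR_CPUS];
		struct freelist_node *node;

		pthread_mutex_lock(&head->lock);
		node = head->first;
		if (node)
			head->first = node->next;
		pthread_mutex_unlock(&head->lock);
		if (node)
			return node;
	}
	return NULL;
}

int main(void)
{
	struct pcpu_freelist fl;
	struct freelist_node nodes[8];

	freelist_init(&fl);
	freelist_populate(&fl, nodes, 8);

	/* "Allocate" on CPU 2 until the lists run dry. */
	for (int i = 0; i < 10; i++)
		printf("pop %d -> %p\n", i, (void *)freelist_pop(&fl, 2));
	return 0;
}

The point of the per-CPU heads is that, under the common pattern where
an element freed on a CPU is soon reused on that same CPU, push and pop
usually take only the local lock; cross-CPU stealing happens only when
the local list runs dry.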
Diffstat (limited to 'include/uapi/linux')
-rw-r--r--  include/uapi/linux/bpf.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 9221f653fee3..0e30b19012a5 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -101,12 +101,15 @@ enum bpf_prog_type {
 #define BPF_NOEXIST	1 /* create new element if it didn't exist */
 #define BPF_EXIST	2 /* update existing element */
 
+#define BPF_F_NO_PREALLOC	(1U << 0)
+
 union bpf_attr {
 	struct { /* anonymous struct used by BPF_MAP_CREATE command */
 		__u32	map_type;	/* one of enum bpf_map_type */
 		__u32	key_size;	/* size of key in bytes */
 		__u32	value_size;	/* size of value in bytes */
 		__u32	max_entries;	/* max number of entries in a map */
+		__u32	map_flags;	/* prealloc or not */
 	};
 
 	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
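
For completeness, here is how a userspace loader would exercise the new
field: a minimal sketch of a BPF_MAP_CREATE call that fills in
attr.map_flags. It assumes uapi headers that already contain this
change (so BPF_F_NO_PREALLOC and the map_flags member exist); the
bpf_create_map wrapper name is made up for the example, and the syscall
needs root/CAP_SYS_ADMIN at runtime.

#include <linux/bpf.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static int bpf_create_map(enum bpf_map_type type, __u32 key_size,
			  __u32 value_size, __u32 max_entries,
			  __u32 map_flags)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));	/* unused fields must be zero */
	attr.map_type    = type;
	attr.key_size    = key_size;
	attr.value_size  = value_size;
	attr.max_entries = max_entries;
	attr.map_flags   = map_flags;	/* the field added by this merge */

	return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}

int main(void)
{
	/* Default: all max_entries elements are pre-allocated up front. */
	int prealloc_fd = bpf_create_map(BPF_MAP_TYPE_HASH, 4, 8, 1024, 0);

	/* Opt out of pre-allocation when the memory cost is too high. */
	int lazy_fd = bpf_create_map(BPF_MAP_TYPE_HASH, 4, 8, 1024,
				     BPF_F_NO_PREALLOC);

	printf("prealloc fd=%d, no-prealloc fd=%d\n", prealloc_fd, lazy_fd);
	return 0;
}

With map_flags == 0 the hash map pre-allocates all max_entries elements
at create time; passing BPF_F_NO_PREALLOC restores the old
allocate-on-update behavior for maps whose full pre-allocation would be
too memory expensive.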