author     Alexei Starovoitov <ast@kernel.org>   2024-04-04 13:08:01 -0700
committer  Alexei Starovoitov <ast@kernel.org>   2024-04-04 13:08:01 -0700
commit     d82c045f9dfde6b9ea220d7f8310c98210dfc8cb
tree       05376a14c790f914df2197a0431c3608ac025e66
parent     21ab0b6d0cfcb8aa98e33baa83f933f963514027
parent     314a53623cd4e62e1b88126e5ed2bc87073d90ee
Merge branch 'inline-bpf_get_branch_snapshot-bpf-helper'
Andrii Nakryiko says:
====================
Inline bpf_get_branch_snapshot() BPF helper
Implement inlining of the bpf_get_branch_snapshot() BPF helper using the
generic BPF assembly approach. This reduces the number of LBR records
consumed right before the records are captured from inside the BPF program.
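To make the LBR-waste point concrete, here is a minimal sketch of a BPF
program that captures LBR records with the helper. The attach point, program
name, and buffer are illustrative assumptions (the helper and its
(entries, size, flags) signature are real), and getting useful records
further assumes LBR is enabled on the CPU:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch, not part of this series: capture LBR records
 * from a BPF program as early as possible. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define MAX_LBR_ENTRIES 32

/* scratch buffer for raw LBR records, 24 bytes per entry */
struct perf_branch_entry lbrs[MAX_LBR_ENTRIES];
long lbr_bytes;

SEC("fentry/hypothetical_kernel_func") /* hypothetical attach point */
int BPF_PROG(lbr_snapshot)
{
	/* Every instruction executed before this call overwrites one more
	 * LBR slot; inlining the helper shortens that path. */
	lbr_bytes = bpf_get_branch_snapshot(lbrs, sizeof(lbrs), 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";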
See the v1 cover letter ([0]) for some visual examples. I dropped them from v2
because multiple independent changes are landing and being reviewed, each of
which removes a different part of the LBR record waste, so presenting the
final state of LBR "waste" gets complicated until all of the pieces land.
[0] https://lore.kernel.org/bpf/20240321180501.734779-1-andrii@kernel.org/
v2->v3:
- fix BPF_MUL instruction definition;
v1->v2:
- inlining of bpf_get_smp_processor_id() split out into a separate patch set
implementing an internal per-CPU BPF instruction;
- add efficient divide-by-24-through-multiplication logic, and leave
comments to explain the idea behind it (a standalone sketch of the trick
follows this cover letter); this way the inlined version of
bpf_get_branch_snapshot() has no compromises compared to the non-inlined
version of the helper (Alexei).
====================
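The divide-by-24 mentioned in the changelog is worth a quick illustration.
Below is a hedged sketch in plain C of the general multiply-and-shift
reciprocal technique, not the verbatim kernel code: sizeof(struct
perf_branch_entry) is 24 bytes and BPF-level division is relatively costly,
so x / 24 can be computed as (x * 0xaaab) >> 20. The constant 0xaaab is one
valid 2^-20 fixed-point reciprocal of 24; the patch's exact constants and
instruction encoding may differ.

#include <assert.h>

/* 0xaaab = ceil(2^20 / 24), the smallest 2^-20 fixed-point value >= 1/24;
 * the floored product equals x / 24 exactly for all x < 131072, far more
 * than any realistic LBR buffer size. */
static unsigned int div24(unsigned int x)
{
	return (unsigned int)(((unsigned long long)x * 0xaaab) >> 20);
}

int main(void)
{
	/* exhaustively verify over the range a snapshot can return:
	 * at most 32 entries * 24 bytes = 768 bytes */
	for (unsigned int x = 0; x <= 32 * 24; x++)
		assert(div24(x) == x / 24);
	return 0;
}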
Link: https://lore.kernel.org/r/20240404002640.1774210-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>