author		Alexei Starovoitov <[email protected]>	2023-09-16 09:34:23 -0700
committer	Alexei Starovoitov <[email protected]>	2023-09-16 09:36:44 -0700
commit		ec6f1b4db95b7eedb3fe85f4f14e08fa0e9281c3
tree		2094312e53ff2b27e6aa2cb71fd29cc5b41175c4 /tools/testing/selftests/bpf/progs/exceptions.c
parent		c4ab64e6da42030c866f0589d1bfc13a037432dd
parent		d2a93715bfb0655a63bb1687f43f48eb2e61717b
Merge branch 'exceptions-1-2'
Kumar Kartikeya Dwivedi says:
====================
Exceptions - 1/2
This series implements the _first_ part of the runtime and verifier
support needed to enable BPF exceptions. Exceptions thrown from programs
are processed as an immediate exit from the program, which unwinds all
the active stack frames until the main stack frame, and returns to the
BPF program's caller. The ability to perform this unwinding safely
allows the program to test conditions that are always true at runtime
but which the verifier has no visibility into.
Thus, it also reduces verification effort by safely terminating
redundant paths that can be taken within a program.
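For instance (a minimal sketch; the program and the condition are
illustrative, using the same headers as the selftests added below):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

SEC("tc")
int throw_on_impossible(struct __sk_buff *ctx)
{
	/* Suppose the author knows this condition never holds at runtime,
	 * but the verifier cannot prove it. Throwing terminates the path,
	 * so the verifier stops exploring state beyond this point.
	 */
	if (ctx->len > 0xffff)
		bpf_throw(0);
	return 0;
}

char _license[] SEC("license") = "GPL";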
The patches to perform runtime resource cleanup during the
frame-by-frame unwinding will be posted as a follow-up to this set.
It must be noted that exceptions are not an error handling mechanism for
unlikely runtime conditions, but a way to safely terminate the execution
of a program in presence of conditions that should never occur at
runtime. They are meant to serve higher-level primitives such as program
assertions.
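As an example of such a primitive, a condition that must always hold can
be written with an assertion macro rather than an open-coded throw (a
sketch; bpf_assert() is one of the patch 13 macros, the program shown is
illustrative, and the includes are the same as in the sketch above):

SEC("tc")
int assert_example(struct __sk_buff *ctx)
{
	u64 proto = ctx->protocol;

	/* Expands to a conditional bpf_throw(0): the program terminates
	 * if the condition is false, and on the fallthrough path the
	 * verifier may assume proto != 0.
	 */
	bpf_assert(proto != 0);
	return 0;
}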
The following kfuncs and macros are introduced:
Assertion macros are also introduced; please see patch 13 for their
documentation.
/* Description
* Throw a BPF exception from the program, immediately terminating its
* execution and unwinding the stack. The supplied 'cookie' parameter
* will be the return value of the program when an exception is thrown,
* and the default exception callback is used. Otherwise, if an exception
* callback is set using the '__exception_cb(callback)' declaration tag
* on the main program, the 'cookie' parameter will be the callback's only
* input argument.
*
* Thus, in case of the default exception callback, 'cookie' is subject
* to the constraints on the program's return value (as with R0 on exit).
* Otherwise, the return value of the marked exception callback will be
* subject to the same checks.
*
* Note that throwing an exception with lingering resources (locks,
* references, etc.) will lead to a verification error.
*
* Note that callbacks *cannot* call this helper.
* Returns
* Never.
* Throws
* An exception with the specified 'cookie' value.
*/
extern void bpf_throw(u64 cookie) __ksym;
/* This macro must be used to mark the exception callback corresponding to the
* main program. For example:
*
* int exception_cb(u64 cookie) {
* return cookie;
* }
*
* SEC("tc")
* __exception_cb(exception_cb)
* int main_prog(struct __sk_buff *ctx) {
* ...
* return TC_ACT_OK;
* }
*
* Here, the exception callback for the main program will be
* 'exception_cb'. Note that this attribute can only be used once, and
* multiple exception callbacks specified for the main program will lead
* to a verification error.
*/
#define __exception_cb(name) __attribute__((btf_decl_tag("exception_callback:" #name)))
As such, an exception handler can be installed only once for the
lifetime of a BPF program, and cannot be changed at runtime. The purpose
of the handler is simply to interpret the cookie value supplied by the
bpf_throw call and execute user-defined logic corresponding to it. The
primary purpose of allowing a handler is to control the return value of
the program. The default handler returns the cookie value passed to
bpf_throw when an exception is thrown.
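Concretely, with no __exception_cb declared (a minimal sketch; the
program name and cookie value are made up, and the semantics follow the
bpf_throw documentation above):

SEC("tc")
int throw_default_cb(struct __sk_buff *ctx)
{
	/* No exception callback is declared, so the auto-generated
	 * default callback runs and the program's return value is the
	 * thrown cookie, 42.
	 */
	bpf_throw(42);
	return 0;
}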
Fixing the handler for the lifetime of the program eliminates tricky
and expensive handling of runtime changes to the handler callback once
programs begin to nest, where saving and restoring the active handler
at runtime becomes more complex.
This version of offline-unwinding-based BPF exceptions is truly zero
overhead, with the exception of the generation of a default callback,
which contains a few instructions to return the thrown cookie as the
program's return value when no exception callback is supplied by the
user.
Callbacks are disallowed from throwing BPF exceptions for now, since
such exceptions need to cross the callback helper boundary (and
therefore must care about unwinding kernel state); it may be possible
to lift this restriction in a future follow-up.
Exceptions stop propagating at program boundaries, hence both
BPF_PROG_TYPE_EXT and tail call targets return the exception callback's
return value to their caller context when they throw an exception.
Thus, exceptions do not cross extension or tail call boundaries.
However, this is mostly an implementation choice, and can be changed to
suit more user-friendly semantics.
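To illustrate the tail call case (a sketch reusing the jmp_table prog
array from the selftests below, and assuming userspace has populated
slot 0 with the throwing program):

SEC("tc")
int throwing_target(struct __sk_buff *ctx)
{
	/* Throws inside the tail-called program. */
	bpf_throw(16);
	return 0;
}

SEC("tc")
int tail_caller(struct __sk_buff *ctx)
{
	/* A successful tail call never returns here, so when the target
	 * throws, the exception callback's return value (16) goes to the
	 * BPF caller context directly; it does not propagate back across
	 * the tail call boundary into this program.
	 */
	bpf_tail_call_static(ctx, &jmp_table, 0);
	return 0;
}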
Changelog:
----------
v2 -> v3
v2: https://lore.kernel.org/bpf/[email protected]
* Add Dave's Acked-by.
* Address all comments from Alexei.
* Use bpf_is_subprog to check for main prog in bpf_stack_walker.
* Drop accidental leftover hunk in libbpf patch.
* Split libbpf patch's refactoring to aid review.
* Disable fentry/fexit in addition to freplace for exception cb.
* Add selftests for fentry/fexit/freplace on exception cb and main prog.
* Use btf_find_by_name_kind in bpf_find_exception_callback_insn_off (Martin)
* Split KASAN patch into two to aid backporting (Andrey)
* Move exception callback append step to bpf_object__relocate (Andrii)
* Ensure that the exception callback name is unique (Andrii)
* Keep the ASM implementation of the assertion macros instead of C, as
  the C version does not achieve the intended results for
  bpf_assert_range and other cases.
v1 -> v2
v1: https://lore.kernel.org/bpf/[email protected]
* Address all comments from Alexei.
* Fix a few bugs and corner cases in the implementations found during
testing. Also add new selftests for these cases.
* Reinstate patch to consider ksym.end part of the program (but
reworked to cover other corner cases).
* Implement new style of tagging exception callbacks, add libbpf
support for the new declaration tag.
* Limit support to 64-bit integer types for assertion macros. The
  compiler ends up performing shifts or bitwise-and operations when
  finally making use of the value, which defeats the purpose of the
  macro. In noalu32 mode, the shifts may also happen before use,
  hurting reliability.
* Comprehensively test assertion macros and their side effects on the
verifier state, register bounds, etc.
* Fix a KASAN false positive warning.
RFC v1 -> v1
RFC v1: https://lore.kernel.org/bpf/[email protected]
* Completely rework the unwinding infrastructure to use offline
unwinding support.
* Remove the runtime exception state and program rewriting code.
* Make bpf_set_exception_callback idempotent to avoid vexing
synchronization and state clobbering issues in presence of program
nesting.
* Disable bpf_throw within callback functions, for now.
* Allow bpf_throw in tail call programs and extension programs,
removing limitations of rewrite based unwinding.
* Expand selftests.
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
Diffstat (limited to 'tools/testing/selftests/bpf/progs/exceptions.c')
-rw-r--r--	tools/testing/selftests/bpf/progs/exceptions.c	368
1 file changed, 368 insertions(+), 0 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/exceptions.c b/tools/testing/selftests/bpf/progs/exceptions.c
new file mode 100644
index 000000000000..2811ee842b01
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions.c
@@ -0,0 +1,368 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+#ifndef ETH_P_IP
+#define ETH_P_IP 0x0800
+#endif
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 4);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+static __noinline int static_func(u64 i)
+{
+	bpf_throw(32);
+	return i;
+}
+
+__noinline int global2static_simple(u64 i)
+{
+	static_func(i + 2);
+	return i - 1;
+}
+
+__noinline int global2static(u64 i)
+{
+	if (i == ETH_P_IP)
+		bpf_throw(16);
+	return static_func(i);
+}
+
+static __noinline int static2global(u64 i)
+{
+	return global2static(i) + i;
+}
+
+SEC("tc")
+int exception_throw_always_1(struct __sk_buff *ctx)
+{
+	bpf_throw(64);
+	return 0;
+}
+
+/* In this case, the global func will never be seen executing after call to
+ * static subprog, hence verifier will DCE the remaining instructions. Ensure we
+ * are resilient to that.
+ */
+SEC("tc")
+int exception_throw_always_2(struct __sk_buff *ctx)
+{
+	return global2static_simple(ctx->protocol);
+}
+
+SEC("tc")
+int exception_throw_unwind_1(struct __sk_buff *ctx)
+{
+	return static2global(bpf_ntohs(ctx->protocol));
+}
+
+SEC("tc")
+int exception_throw_unwind_2(struct __sk_buff *ctx)
+{
+	return static2global(bpf_ntohs(ctx->protocol) - 1);
+}
+
+SEC("tc")
+int exception_throw_default(struct __sk_buff *ctx)
+{
+	bpf_throw(0);
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_default_value(struct __sk_buff *ctx)
+{
+	bpf_throw(5);
+	return 1;
+}
+
+SEC("tc")
+int exception_tail_call_target(struct __sk_buff *ctx)
+{
+	bpf_throw(16);
+	return 0;
+}
+
+static __noinline
+int exception_tail_call_subprog(struct __sk_buff *ctx)
+{
+	volatile int ret = 10;
+
+	bpf_tail_call_static(ctx, &jmp_table, 0);
+	return ret;
+}
+
+SEC("tc")
+int exception_tail_call(struct __sk_buff *ctx) {
+	volatile int ret = 0;
+
+	ret = exception_tail_call_subprog(ctx);
+	return ret + 8;
+}
+
+__noinline int exception_ext_global(struct __sk_buff *ctx)
+{
+	volatile int ret = 0;
+
+	return ret;
+}
+
+static __noinline int exception_ext_static(struct __sk_buff *ctx)
+{
+	return exception_ext_global(ctx);
+}
+
+SEC("tc")
+int exception_ext(struct __sk_buff *ctx)
+{
+	return exception_ext_static(ctx);
+}
+
+__noinline int exception_cb_mod_global(u64 cookie)
+{
+	volatile int ret = 0;
+
+	return ret;
+}
+
+/* Example of how the exception callback supplied during verification can still
+ * introduce extensions by calling to dummy global functions, and alter runtime
+ * behavior.
+ *
+ * Right now we don't allow freplace attachment to exception callback itself,
+ * but if the need arises this restriction is technically feasible to relax in
+ * the future.
+ */
+__noinline int exception_cb_mod(u64 cookie)
+{
+	return exception_cb_mod_global(cookie) + cookie + 10;
+}
+
+SEC("tc")
+__exception_cb(exception_cb_mod)
+int exception_ext_mod_cb_runtime(struct __sk_buff *ctx)
+{
+	bpf_throw(25);
+	return 0;
+}
+
+__noinline static int subprog(struct __sk_buff *ctx)
+{
+	return bpf_ktime_get_ns();
+}
+
+__noinline static int throwing_subprog(struct __sk_buff *ctx)
+{
+	if (ctx->tstamp)
+		bpf_throw(0);
+	return bpf_ktime_get_ns();
+}
+
+__noinline int global_subprog(struct __sk_buff *ctx)
+{
+	return bpf_ktime_get_ns();
+}
+
+__noinline int throwing_global_subprog(struct __sk_buff *ctx)
+{
+	if (ctx->tstamp)
+		bpf_throw(0);
+	return bpf_ktime_get_ns();
+}
+
+SEC("tc")
+int exception_throw_subprog(struct __sk_buff *ctx)
+{
+	switch (ctx->protocol) {
+	case 1:
+		return subprog(ctx);
+	case 2:
+		return global_subprog(ctx);
+	case 3:
+		return throwing_subprog(ctx);
+	case 4:
+		return throwing_global_subprog(ctx);
+	default:
+		break;
+	}
+	bpf_throw(1);
+	return 0;
+}
+
+__noinline int assert_nz_gfunc(u64 c)
+{
+	volatile u64 cookie = c;
+
+	bpf_assert(cookie != 0);
+	return 0;
+}
+
+__noinline int assert_zero_gfunc(u64 c)
+{
+	volatile u64 cookie = c;
+
+	bpf_assert_eq(cookie, 0);
+	return 0;
+}
+
+__noinline int assert_neg_gfunc(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_lt(cookie, 0);
+	return 0;
+}
+
+__noinline int assert_pos_gfunc(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_gt(cookie, 0);
+	return 0;
+}
+
+__noinline int assert_negeq_gfunc(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_le(cookie, -1);
+	return 0;
+}
+
+__noinline int assert_poseq_gfunc(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_ge(cookie, 1);
+	return 0;
+}
+
+__noinline int assert_nz_gfunc_with(u64 c)
+{
+	volatile u64 cookie = c;
+
+	bpf_assert_with(cookie != 0, cookie + 100);
+	return 0;
+}
+
+__noinline int assert_zero_gfunc_with(u64 c)
+{
+	volatile u64 cookie = c;
+
+	bpf_assert_eq_with(cookie, 0, cookie + 100);
+	return 0;
+}
+
+__noinline int assert_neg_gfunc_with(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_lt_with(cookie, 0, cookie + 100);
+	return 0;
+}
+
+__noinline int assert_pos_gfunc_with(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_gt_with(cookie, 0, cookie + 100);
+	return 0;
+}
+
+__noinline int assert_negeq_gfunc_with(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_le_with(cookie, -1, cookie + 100);
+	return 0;
+}
+
+__noinline int assert_poseq_gfunc_with(s64 c)
+{
+	volatile s64 cookie = c;
+
+	bpf_assert_ge_with(cookie, 1, cookie + 100);
+	return 0;
+}
+
+#define check_assert(name, cookie, tag)				\
+SEC("tc")							\
+int exception##tag##name(struct __sk_buff *ctx)		\
+{								\
+	return name(cookie) + 1;				\
+}
+
+check_assert(assert_nz_gfunc, 5, _);
+check_assert(assert_zero_gfunc, 0, _);
+check_assert(assert_neg_gfunc, -100, _);
+check_assert(assert_pos_gfunc, 100, _);
+check_assert(assert_negeq_gfunc, -1, _);
+check_assert(assert_poseq_gfunc, 1, _);
+
+check_assert(assert_nz_gfunc_with, 5, _);
+check_assert(assert_zero_gfunc_with, 0, _);
+check_assert(assert_neg_gfunc_with, -100, _);
+check_assert(assert_pos_gfunc_with, 100, _);
+check_assert(assert_negeq_gfunc_with, -1, _);
+check_assert(assert_poseq_gfunc_with, 1, _);
+
+check_assert(assert_nz_gfunc, 0, _bad_);
+check_assert(assert_zero_gfunc, 5, _bad_);
+check_assert(assert_neg_gfunc, 100, _bad_);
+check_assert(assert_pos_gfunc, -100, _bad_);
+check_assert(assert_negeq_gfunc, 1, _bad_);
+check_assert(assert_poseq_gfunc, -1, _bad_);
+
+check_assert(assert_nz_gfunc_with, 0, _bad_);
+check_assert(assert_zero_gfunc_with, 5, _bad_);
+check_assert(assert_neg_gfunc_with, 100, _bad_);
+check_assert(assert_pos_gfunc_with, -100, _bad_);
+check_assert(assert_negeq_gfunc_with, 1, _bad_);
+check_assert(assert_poseq_gfunc_with, -1, _bad_);
+
+SEC("tc")
+int exception_assert_range(struct __sk_buff *ctx)
+{
+	u64 time = bpf_ktime_get_ns();
+
+	bpf_assert_range(time, 0, ~0ULL);
+	return 1;
+}
+
+SEC("tc")
+int exception_assert_range_with(struct __sk_buff *ctx)
+{
+	u64 time = bpf_ktime_get_ns();
+
+	bpf_assert_range_with(time, 0, ~0ULL, 10);
+	return 1;
+}
+
+SEC("tc")
+int exception_bad_assert_range(struct __sk_buff *ctx)
+{
+	u64 time = bpf_ktime_get_ns();
+
+	bpf_assert_range(time, -100, 100);
+	return 1;
+}
+
+SEC("tc")
+int exception_bad_assert_range_with(struct __sk_buff *ctx)
+{
+	u64 time = bpf_ktime_get_ns();
+
+	bpf_assert_range_with(time, -1000, 1000, 10);
+	return 1;
+}
+
+char _license[] SEC("license") = "GPL";