| field | value | date |
|---|---|---|
| author | Thomas Gleixner <[email protected]> | 2022-09-15 13:11:18 +0200 |
| committer | Peter Zijlstra <[email protected]> | 2022-10-17 16:41:10 +0200 |
| commit | bea75b33895f7f87f0c40023e36a2d087e87ffa1 | |
| tree | 301d9fc0cc4db3df6746e8163186cab5bf436632 /include/linux | |
| parent | 8f7c0d8b23c3f5f740a48db31ebadef28af17a22 | |
x86/Kconfig: Introduce function padding
Now that all functions are 16 byte aligned, add 16 bytes of NOP
padding in front of each function. This prepares things for software
call stack tracking and kCFI/FineIBT.
This significantly increases kernel .text size, around 5.1% on an
x86_64-defconfig-ish build.
However, per the same random-access argument used to justify the
alignment, these 16 extra bytes are code that would not be executed
anyway. Performance measurements back this up, showing no significant
regressions.
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
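For context, the padding described above is produced by the compiler's patchable_function_entry facility. The sketch below is not part of the patch; the function name `padded_function` and the file name `pad.c` are illustrative only. The attribute form `patchable_function_entry(N, M)` asks for N NOPs in total, M of them placed before the function's entry symbol and N - M after it.

```c
/*
 * Minimal sketch (not from the kernel tree) of the compiler
 * facility this commit relies on. On x86 the emitted NOPs are
 * single-byte instructions, so (16, 16) yields 16 bytes of
 * padding in front of the function symbol and none after it --
 * roughly the layout this series establishes kernel-wide via
 * -fpatchable-function-entry=16,16.
 */
__attribute__((patchable_function_entry(16, 16)))
void padded_function(void)
{
}
```

Compiling this with `gcc -O2 -c pad.c` and disassembling with `objdump -d pad.o` should show the run of NOPs immediately preceding the `padded_function` symbol.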
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/bpf.h | 4 |
1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9e7d46d16032..5296aea9b5b4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -984,7 +984,11 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
 }
 
 #ifdef CONFIG_X86_64
+#ifdef CONFIG_CALL_THUNKS
+#define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5+CONFIG_FUNCTION_PADDING_BYTES,CONFIG_FUNCTION_PADDING_BYTES)))
+#else
 #define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5)))
+#endif
 #else
 #define BPF_DISPATCHER_ATTRIBUTES
 #endif
```
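To make the arithmetic concrete, here is a hedged sketch of what BPF_DISPATCHER_ATTRIBUTES expands to with CONFIG_CALL_THUNKS=y, assuming CONFIG_FUNCTION_PADDING_BYTES is 16 (the value the series uses on x86); the function name `bpf_dispatcher_example` is a hypothetical stand-in. The dispatcher keeps its 5 patchable bytes after the symbol, which the BPF dispatcher machinery rewrites into a 5-byte jump at runtime, and gains the 16 padding bytes before the symbol so it matches the layout of every other padded function.

```c
/*
 * Illustrative expansion only; CONFIG_FUNCTION_PADDING_BYTES=16
 * is an assumption here, not taken from the patch itself.
 *
 * Resulting entry layout with CONFIG_CALL_THUNKS=y:
 *
 *   <16 single-byte NOPs>     <- padding before the symbol,
 *                                same as every other function
 * bpf_dispatcher_example:
 *   <5 single-byte NOPs>      <- patchable area the BPF
 *                                dispatcher rewrites at runtime
 *   ...function body...
 */
__attribute__((patchable_function_entry(5 + 16, 16)))
void bpf_dispatcher_example(void)
{
        /* hypothetical stand-in for a BPF dispatcher function */
}
```

Without CONFIG_CALL_THUNKS there is no pre-symbol padding to match, which is why the #else branch keeps the previous plain patchable_function_entry(5).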