|
Interrupts are enabled in srcu_gp_end(), so this commit switches from
spin_lock_irqsave_rcu_node() and spin_unlock_irqrestore_rcu_node()
to spin_lock_irq_rcu_node() and spin_unlock_irq_rcu_node().
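A minimal sketch of the resulting pattern (the node pointer name is
illustrative, not taken from the patch):

  /* Interrupts are known to be enabled here, so no flags save/restore */
  spin_lock_irq_rcu_node(snp);
  /* ... protected section ... */
  spin_unlock_irq_rcu_node(snp);

This also drops the local "flags" variable that the irqsave/irqrestore
variants required.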
Link: https://lore.kernel.org/all/[email protected]/
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
|
|
Interrupts are enabled in rcu_gp_init(), so this commit switches from
local_irq_save() and local_irq_restore() to local_irq_disable() and
local_irq_enable().
Link: https://lore.kernel.org/all/[email protected]/
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
|
|
In the synchronize_rcu() common case, we will have fewer than
SR_MAX_USERS_WAKE_FROM_GP users per GP. Waking up the kworker
is pointless just to free the last injected wait head since at that point,
all the users have already been awakened.
Introduce a new counter to track this and prevent the wakeup in the
common case.
[ paulmck: Remove atomic_dec_return_release in cannot-happen state. ]
Signed-off-by: Joel Fernandes (Google) <[email protected]>
Reviewed-by: Uladzislau Rezki (Sony) <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
|
|
The bpf arena logic didn't account for the mremap operation. Add a refcnt
for multiple mmap events to prevent use-after-free in arena_vm_close.
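A hedged sketch of the refcount pattern (names are illustrative, not the
actual bpf_arena code): mremap duplicates the VMA, invoking ->open on the
new copy and ->close on the old one, so teardown must wait for the final
close:

  struct vma_list {
          struct list_head head;
          atomic_t mmap_count;
  };

  static void arena_vm_open(struct vm_area_struct *vma)
  {
          struct vma_list *vml = vma->vm_private_data;

          atomic_inc(&vml->mmap_count);
  }

  static void arena_vm_close(struct vm_area_struct *vma)
  {
          struct vma_list *vml = vma->vm_private_data;

          if (atomic_dec_and_test(&vml->mmap_count))
                  arena_remove_vma(vml);  /* hypothetical teardown helper */
  }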
Fixes: 317460317a02 ("bpf: Introduce bpf_arena.")
Reported-by: Pengfei Xu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Reviewed-by: Barret Rhoden <[email protected]>
Tested-by: Pengfei Xu <[email protected]>
Closes: https://lore.kernel.org/bpf/[email protected]
Link: https://lore.kernel.org/bpf/[email protected]
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/lsm
Pull lsm fix from Paul Moore:
"A single LSM/IMA patch to fix a problem caused by sleeping while in a
RCU critical section"
* tag 'lsm-pr-20240617' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/lsm:
ima: Avoid blocking in RCU read-side critical section
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"Mainly MM singleton fixes. And a couple of ocfs2 regression fixes"
* tag 'mm-hotfixes-stable-2024-06-17-11-43' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
kcov: don't lose track of remote references during softirqs
mm: shmem: fix getting incorrect lruvec when replacing a shmem folio
mm/debug_vm_pgtable: drop RANDOM_ORVALUE trick
mm: fix possible OOB in numa_rebuild_large_mapping()
mm/migrate: fix kernel BUG at mm/compaction.c:2761!
selftests: mm: make map_fixed_noreplace test names stable
mm/memfd: add documentation for MFD_NOEXEC_SEAL MFD_EXEC
mm: mmap: allow for the maximum number of bits for randomizing mmap_base by default
gcov: add support for GCC 14
zap_pid_ns_processes: clear TIF_NOTIFY_SIGNAL along with TIF_SIGPENDING
mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
lib/alloc_tag: fix RCU imbalance in pgalloc_tag_get()
lib/alloc_tag: do not register sysctl interface when CONFIG_SYSCTL=n
MAINTAINERS: remove Lorenzo as vmalloc reviewer
Revert "mm: init_mlocked_on_free_v3"
mm/page_table_check: fix crash on ZONE_DEVICE
gcc: disable '-Warray-bounds' for gcc-9
ocfs2: fix NULL pointer dereference in ocfs2_abort_trigger()
ocfs2: fix NULL pointer dereference in ocfs2_journal_dirty()
|
|
In coerce_subreg_to_size_sx(), for the case where the upper
sign extension bits are the same for the smax32 and smin32
values, we missed setting var_off properly. This is especially
problematic if both smax32's and smin32's sign extension
bits are 1.
The following is a simple example illustrating the inconsistent
verifier states due to missed var_off:
0: (85) call bpf_get_prandom_u32#7 ; R0_w=scalar()
1: (bf) r3 = r0 ; R0_w=scalar(id=1) R3_w=scalar(id=1)
2: (57) r3 &= 15 ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=15,var_off=(0x0; 0xf))
3: (47) r3 |= 128 ; R3_w=scalar(smin=umin=smin32=umin32=128,smax=umax=smax32=umax32=143,var_off=(0x80; 0xf))
4: (bc) w7 = (s8)w3
REG INVARIANTS VIOLATION (alu): range bounds violation u64=[0xffffff80, 0x8f] s64=[0xffffff80, 0x8f]
u32=[0xffffff80, 0x8f] s32=[0x80, 0xffffff8f] var_off=(0x80, 0xf)
The var_off=(0x80, 0xf) is not correct, and the correct one should
be var_off=(0xffffff80; 0xf) since from insn 3, we know that at
insn 4, the sign extension bits will be 1. This patch fixes the
issue by setting var_off properly.
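For reference, var_off is a tnum printed as (value; mask): bits set in
mask are unknown, all other bits are fixed to the corresponding bits of
value. A small illustration of why the two encodings above differ (struct
tnum is the real verifier type; the min/max helpers are simplified):

  struct tnum { u64 value; u64 mask; };

  static u64 tnum_umin(struct tnum t) { return t.value; }
  static u64 tnum_umax(struct tnum t) { return t.value | t.mask; }

  /* (0x80; 0xf):       range [0x80, 0x8f], wrongly fixes bits 31..8 to 0 */
  /* (0xffffff80; 0xf): range [0xffffff80, 0xffffff8f], sign bits all 1   */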
Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Zac reported a verification failure and Alexei reproduced the issue
with a simple reproducer ([1]). The verification failure is due to a
missed var_off setting.
The following is the reproducer in [1]:
0: R1=ctx() R10=fp0
0: (71) r3 = *(u8 *)(r10 -387) ;
R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff)) R10=fp0
1: (bc) w7 = (s8)w3 ;
R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff))
R7_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))
2: (36) if w7 >= 0x2533823b goto pc-3
mark_precise: frame0: last_idx 2 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r7 stack= before 1: (bc) w7 = (s8)w3
mark_precise: frame0: regs=r3 stack= before 0: (71) r3 = *(u8 *)(r10 -387)
2: R7_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))
3: (b4) w0 = 0 ; R0_w=0
4: (95) exit
Note that after insn 1, the var_off for R7 is (0x0; 0x7f). This is not correct
since the upper 24 bits of w7 could be all 0s or all 1s. So the correct var_off
should be (0x0; 0xffffffff). The missing var_off setting in set_sext32_default_val()
caused later incorrect analysis in zext_32_to_64(dst_reg) and reg_bounds_sync(dst_reg).
To fix the issue, set var_off correctly in set_sext32_default_val(). The correct
reg state after insn 1 becomes:
1: (bc) w7 = (s8)w3 ;
R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff))
R7_w=scalar(smin=0,smax=umax=0xffffffff,smin32=-128,smax32=127,var_off=(0x0; 0xffffffff))
and at insn 2, the verifier correctly determines either branch is possible.
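A minimal sketch of the kind of fix described (assuming the existing tnum
helpers tnum_unknown and tnum_subreg; illustrative, not the verbatim patch):

  /* The sign bit of the source is unknown, so after the 32-bit sign
   * extension all low 32 bits must be treated as unknown:
   * var_off = (0x0; 0xffffffff).
   */
  reg->var_off = tnum_subreg(tnum_unknown);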
[1] https://lore.kernel.org/bpf/CAADnVQLPU0Shz7dWV4bn2BgtGdxN3uFHPeobGBA72tpg5Xoykw@mail.gmail.com/
Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Reported-by: Zac Ecob <[email protected]>
Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
The ACPI MADT wakeup method doesn't allow a CPU to be offlined after it has
been woken up.
Currently, CPU hotplug is prevented based on the confidential computing
attribute which is set for Intel TDX. But TDX is not the only possible user of
the wakeup method. Any platform that uses the ACPI MADT wakeup method cannot
offline CPUs.
Disable CPU offlining on ACPI MADT wakeup enumeration.
This has no visible effect for users: currently, a TDX guest is the only
platform that uses the ACPI MADT wakeup method.
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Tested-by: Tao Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The ACPI MADT mailbox wakeup method doesn't allow a CPU to be offlined after
it has been woken up.
Currently, offlining is prevented based on the confidential computing attribute
which is set for Intel TDX. But TDX is not the only possible user of the wakeup
method. The MADT wakeup can be implemented outside of a confidential computing
environment. Offline support is a property of the wakeup method, not the CoCo
implementation.
Introduce cpu_hotplug_disable_offlining() that can be called to indicate that
CPU offlining should be disabled.
This function is going to replace CC_ATTR_HOTPLUG_DISABLED for ACPI MADT wakeup
method.
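A hedged sketch of what such a function could look like (assuming a global
flag consulted by the offlining path; names illustrative):

  static bool cpu_hotplug_offline_disabled __ro_after_init;

  void __init cpu_hotplug_disable_offlining(void)
  {
          cpu_maps_update_begin();
          cpu_hotplug_offline_disabled = true;
          cpu_maps_update_done();
  }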
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Tested-by: Tao Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
__irq_domain_add() has been replaced by irq_domain_instantiate() and so
it is no longer used.
Simply remove it.
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Domain creation functions use __irq_domain_add(). With the introduction
of irq_domain_instantiate(), __irq_domain_add() becomes obsolete.
In order to fully remove __irq_domain_add(), convert the domain
creation functions to irq_domain_instantiate().
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Add a devres version of irq_domain_instantiate().
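A hedged usage sketch (assuming the conventional devm_ naming; the exact
signature may differ):

  domain = devm_irq_domain_instantiate(dev, &info);
  if (IS_ERR(domain))
          return PTR_ERR(domain);

The domain is then released automatically when the device is unbound.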
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The current API functions create an irq_domain and also publish this
newly created domain. Once an irq_domain is published, consumers can
request IRQs in order to use them.
Some interrupt controller drivers have to perform some more operations
with the created irq_domain in order to have it ready to be used.
For instance:
- Allocate generic irq chips with irq_alloc_domain_generic_chips()
- Retrieve the generic irq chips with irq_get_domain_generic_chip()
- Initialize retrieved chips: set register base address and offsets,
set several hooks such as irq_mask, irq_unmask, ...
With the newly introduced irq_domain_alloc_generic_chips(), an interrupt
controller driver can use the irq_domain_chip_generic_info structure and
set the init() hook to perform its generic chips initialization.
In order to avoid a window where the domain is published but not yet
ready to be used, handle the generic chip creation (i.e the
irq_domain_alloc_generic_chips() call) before the domain is published.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Most generic chip drivers need to perform additional initialization on
the allocated generic chips before they are fully ready.
These additional initializations need to be performed before the IRQ
domain is published to avoid a race condition between IRQ consumers and
suppliers.
Introduce the init() hook to perform these initializations at the right
place just after the generic chip creation. Also introduce the exit() hook
to allow reverting operations done by the init() hook just before the
generic chip is destroyed.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The existing __irq_alloc_domain_generic_chips() uses a bunch of parameters
to describe the generic chips that need to be allocated.
Adding more parameters and wrappers to hide new parameters in the existing
code leads to more and more code without any added value and without any
flexibility.
Introduce irq_domain_alloc_generic_chips() where the generic chips
description is done using the irq_domain_chip_generic_info structure
instead of the bunch of parameters to allow flexibility and easy evolution.
Also introduce irq_domain_remove_generic_chips() to revert the operations
done by irq_domain_alloc_generic_chips().
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The current API does not allow additional initialization before the
domain is published. This can lead to a race condition between consumers
and supplier as a domain can be available for consumers before being
fully ready.
Introduce the init() hook to allow additional initialization before
publishing the domain. Also introduce the exit() hook to revert
operations done in init() on domain removal.
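A hedged usage sketch (hook names follow the description above; the field
layout is illustrative):

  static int my_domain_init(struct irq_domain *d)
  {
          /* setup that must complete before the domain is published */
          return 0;
  }

  static void my_domain_exit(struct irq_domain *d)
  {
          /* revert what my_domain_init() did */
  }

  struct irq_domain_info info = {
          .fwnode = fwnode,
          .ops    = &my_domain_ops,
          .init   = my_domain_init,
          .exit   = my_domain_exit,
  };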
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
irq_domain_update_bus_token() is the only way to set the domain bus
token. This is sub-optimal as irq_domain_update_bus_token() can be called
only once the domain is created and needs to revert some operations, change
the domain name and redo the operations.
In order to avoid this revert/change/redo sequence, take the domain bus
token into account during domain creation.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
__irq_domain_create() can fail for several reasons. When it fails, it
returns a NULL pointer and so the exact failure reason is lost.
The only user of __irq_domain_create() is irq_domain_instantiate(), which
can return a PTR_ERR value. On __irq_domain_create() failure, it uses an
arbitrary error code.
Rather than using this arbitrary error value, make __irq_domain_create()
return its own error code and use that one.
[ tglx: Remove the pointless ERR_CAST. domain is a valid return pointer ]
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
irq_domain_instantiate() handles everything needed by
irq_domain_create_hierarchy().
Avoid code duplication and use irq_domain_instantiate() directly for
hierarchy domain creation.
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
To use irq_domain_instantiate() from irq_domain_create_hierarchy(),
irq_domain_instantiate() needs to handle the domain hierarchy parent.
Add the required functionality.
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
In order to use irq_domain_instantiate() from several places such as
irq_domain_create_hierarchy(), irq_domain_instantiate() needs to handle
additional domain flags.
Add the required infrastructure.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The existing __irq_domain_create() uses a bunch of parameters to create
an irq domain.
With the introduction of the irq_domain_info structure, these parameters
are available in the information structure itself.
Using this information structure directly allows future flexibility to
add other parameters in a simple way without the need to change the
__irq_domain_create() prototype.
Convert __irq_domain_create() to use the information structure.
[ tglx: Fixup struct initializer ]
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The interrupt domain name computation and setting is directly done in
__irq_domain_create(). This leads to a quite long __irq_domain_create()
function.
In order to simplify __irq_domain_create() and isolate the domain name
computation and setting, move the related operations to a dedicated
function.
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The existing irq_domain_add_*() functions used to instantiate an IRQ
domain are wrappers built on top of __irq_domain_add() and describe the
domain properties using a bunch of parameters.
Adding more parameters and wrappers to hide new parameters in the
existing code leads to more and more code without any added value and
without any flexibility.
Introduce irq_domain_instantiate() where the interrupt domain properties
are given using an irq_domain_info structure instead of the bunch of
parameters to allow flexibility and easy evolution.
irq_domain_instantiate() performs the same operation as the one done by
__irq_domain_add(). For compatibility with existing code, keep
__irq_domain_add() but convert it to irq_domain_instantiate().
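A hedged usage sketch of the new style (field names follow irq_domain_info
as described; exact members may differ):

  struct irq_domain_info info = {
          .fwnode    = of_node_to_fwnode(node),
          .size      = 32,
          .hwirq_max = 32,
          .ops       = &my_domain_ops,
          .host_data = priv,
  };
  struct irq_domain *domain = irq_domain_instantiate(&info);

  if (IS_ERR(domain))
          return PTR_ERR(domain);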
[ tglx: Fixed up struct initializer coding style ]
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
In preparation for the introduction of irq domain instantiation,
introduce irq_domain_free() to avoid code duplication in later
modifications.
This new function is an extraction of the current operations performed
to free the irq domain. No functional change intended.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
fwnode_handle_get(fwnode) is called when a domain is created with fwnode
passed as a function parameter. fwnode_handle_put(domain->fwnode) is called
when the domain is destroyed but during the creation a path exists that
does not set domain->fwnode.
If this path is taken, the fwnode get will never be put.
To avoid the unbalanced get and put, set domain->fwnode unconditionally.
Fixes: d59f6617eef0 ("genirq: Allow fwnode to carry name information only")
Signed-off-by: Herve Codina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
|
|
Whenever CPU hotplug state callbacks are registered, the startup callback
is invoked on CPUs that have already reached the provided state in order of
ascending CPU IDs.
In freeze_secondary_cpus(), the teardown callbacks are invoked in the
same (ascending) order. This is known to make a difference in the
current implementation of these callbacks in arch/x86/events/intel/uncore.c:
- uncore_event_cpu_online() designates the first CPU it is invoked for
on each package as the uncore event collector for that package
- uncore_event_cpu_offline() if the CPU being offlined is the event
collector for its package, transfers that responsibility over to
the next (by ascending CPU id) one in the same package
With the current order of CPU teardowns in freeze_secondary_cpus(), the
latter ends up doing the ownership transfer work on every single CPU. That
work involves a synchronize_rcu() call, ultimately unnecessarily degrading
the performance of CPU offlining.
To address this make freeze_secondary_cpus() iterate through the CPUs in
reverse order, so that the teardown happens in order of descending CPU IDs.
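A hedged sketch of the reverse iteration (not the verbatim patch; the
_cpu_down() call mirrors the existing freeze path):

  int cpu;

  for (cpu = nr_cpu_ids - 1; cpu >= 0; cpu--) {
          if (cpu == primary || !cpu_online(cpu))
                  continue;
          error = _cpu_down(cpu, 1, CPUHP_OFFLINE);
          if (error)
                  break;
  }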
[ tglx: Massage change log ]
Signed-off-by: Stanislav Spassov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Fixes the following two Coccinelle/coccicheck warnings reported by
string_choices.cocci:
opportunity for str_plural(num_cpus)
opportunity for str_plural(num_nodes)
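str_plural(num) simply returns "" when num == 1 and "s" otherwise, so the
fix boils down to replacing open-coded ternaries, e.g. (illustrative):

  pr_info("brought up %d CPU%s\n", num_cpus, str_plural(num_cpus));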
Signed-off-by: Thorsten Blum <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Commit 4205e4786d0b ("cpu/hotplug: Provide dynamic range for prepare
stage") added a dynamic range for the prepare states, but did not handle
the assignment of the dynstate variable in __cpuhp_setup_state_cpuslocked().
This causes the corresponding startup callback not to be invoked when
calling __cpuhp_setup_state_cpuslocked() with the CPUHP_BP_PREPARE_DYN
parameter, even though it should be.
Currently, the users of __cpuhp_setup_state_cpuslocked(), for one reason or
another, have not triggered this bug.
Fixes: 4205e4786d0b ("cpu/hotplug: Provide dynamic range for prepare stage")
Signed-off-by: Yuntao Wang <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
|
|
In kcov_remote_start()/kcov_remote_stop(), we swap the previous KCOV
metadata of the current task into a per-CPU variable. However, the
kcov_mode_enabled(mode) check is not sufficient in the case of remote KCOV
coverage: current->kcov_mode always remains KCOV_MODE_DISABLED for remote
KCOV objects.
If the original task that has invoked the KCOV_REMOTE_ENABLE ioctl happens
to get interrupted and kcov_remote_start() is called, it ultimately leads
to kcov_remote_stop() NOT restoring the original KCOV reference. So when
the task exits, all registered remote KCOV handles remain active forever.
The most uncomfortable effect (at least for syzkaller) is that the bug
prevents the reuse of the same /sys/kernel/debug/kcov descriptor. If
we obtain it in the parent process and then e.g. drop some
capabilities and continuously fork to execute individual programs, at
some point current->kcov of the forked process is lost,
kcov_task_exit() takes no action, and all KCOV_REMOTE_ENABLE ioctl
calls from subsequent forks fail.
And, yes, the efficiency is also affected if we keep on losing remote
kcov objects.
a) kcov_remote_map keeps on growing forever.
b) (If I'm not mistaken), we're also not freeing the memory referenced
by kcov->area.
Fix it by introducing a special kcov_mode that is assigned to the task
that owns a KCOV remote object. It makes kcov_mode_enabled() return true
and yet does not trigger coverage collection in __sanitizer_cov_trace_pc()
and write_comp_data().
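A hedged sketch of the new mode (the added name and value are illustrative;
the first four modes are the existing ones):

  enum kcov_mode {
          KCOV_MODE_DISABLED  = 0,
          KCOV_MODE_INIT      = 1,
          KCOV_MODE_TRACE_PC  = 2,
          KCOV_MODE_TRACE_CMP = 3,
          /* Task owns a remote kcov object but collects no coverage */
          KCOV_MODE_REMOTE    = 4,
  };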
[[email protected]: replace WRITE_ONCE() with an ordinary assignment]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5ff3b30ab57d ("kcov: collect coverage from interrupts")
Signed-off-by: Aleksandr Nogikh <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Tested-by: Andrey Konovalov <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Using gcov on kernels compiled with GCC 14 results in truncated 16-byte
long .gcda files with no usable data. To fix this, update GCOV_COUNTERS
to match the value defined by GCC 14.
Tested with GCC versions 14.1.0 and 13.2.0.
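A hedged sketch of the change in kernel/gcov/gcc_4_7.c (the exact counter
values are version-specific and shown here for illustration):

  #if (__GNUC__ >= 14)
  #define GCOV_COUNTERS   9
  #elif (__GNUC__ >= 10)
  #define GCOV_COUNTERS   8
  #endif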
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Oberparleiter <[email protected]>
Reported-by: Allison Henderson <[email protected]>
Reported-by: Chuck Lever III <[email protected]>
Tested-by: Chuck Lever <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
kernel_wait4() doesn't sleep and returns -EINTR if there is no
eligible child and signal_pending() is true.
That is why zap_pid_ns_processes() clears TIF_SIGPENDING but this is not
enough, it should also clear TIF_NOTIFY_SIGNAL to make signal_pending()
return false and avoid a busy-wait loop.
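A hedged sketch of the resulting wait loop (shaped after the description
above, not the verbatim patch):

  do {
          clear_thread_flag(TIF_SIGPENDING);
          clear_thread_flag(TIF_NOTIFY_SIGNAL);
          rc = kernel_wait4(-1, NULL, __WALL, NULL);
  } while (rc != -ECHILD);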
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 12db8b690010 ("entry: Add support for TIF_NOTIFY_SIGNAL")
Signed-off-by: Oleg Nesterov <[email protected]>
Reported-by: Rachel Menge <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Reviewed-by: Boqun Feng <[email protected]>
Tested-by: Wei Fu <[email protected]>
Reviewed-by: Jens Axboe <[email protected]>
Cc: Allen Pais <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Joel Fernandes (Google) <[email protected]>
Cc: Joel Granados <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Steven Rostedt (Google) <[email protected]>
Cc: Zqiang <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
For tests that need to allocate using vm_mmap() (e.g. usercopy and
execve), provide the interface to have the allocation tracked by KUnit
itself. This requires bringing up a placeholder userspace mm.
This combines my earlier attempt at this with Mark Rutland's version[1].
Normally alloc_mm() and arch_pick_mmap_layout() aren't exported for
modules, so export these only for KUnit testing.
Link: https://lore.kernel.org/lkml/[email protected]/ [1]
Co-developed-by: Mark Rutland <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: David Gow <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:
====================
pull-request: bpf 2024-06-14
We've added 8 non-merge commits during the last 2 day(s) which contain
a total of 9 files changed, 92 insertions(+), 11 deletions(-).
The main changes are:
1) Silence a syzkaller splat under CONFIG_DEBUG_NET=y in pskb_pull_reason()
triggered via __bpf_try_make_writable(), from Florian Westphal.
2) Fix removal of kfuncs during linking phase which then throws a kernel
build warning via resolve_btfids about unresolved symbols,
from Tony Ambardar.
3) Fix a UML x86_64 compilation failure from BPF as pcpu_hot symbol
is not available on User Mode Linux, from Maciej Żenczykowski.
4) Fix a register corruption in reg_set_min_max triggering an invariant
violation in BPF verifier, from Daniel Borkmann.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
bpf: Harden __bpf_kfunc tag against linker kfunc removal
compiler_types.h: Define __retain for __attribute__((__retain__))
bpf: Avoid splat in pskb_pull_reason
bpf: fix UML x86_64 compile failure
selftests/bpf: Add test coverage for reg_set_min_max handling
bpf: Reduce stack consumption in check_stack_write_fixed_off
bpf: Fix reg_set_min_max corruption of fake_reg
MAINTAINERS: mailmap: Update Stanislav's email address
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Compilers can generate the code
r1 = r2
r1 += 0x1
if r2 < 1000 goto ...
use knowledge of r2 range in subsequent r1 operations
So remember the constant delta between r2 and r1 and update r1 after the 'if' condition.
Unfortunately LLVM still uses this pattern for loops with 'can_loop' construct:
for (i = 0; i < 1000 && can_loop; i++)
The "undo" pass was introduced in LLVM
https://reviews.llvm.org/D121937
to prevent this optimization, but it cannot cover all cases.
Instead of fighting the middle-end optimizer in the BPF backend, teach
the verifier about this pattern.
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
In function_graph_enter() there's a loop that looks at fgraph_array[]
elements which are fgraph_ops. It first tests if it is a fgraph_stub op,
and if so skips it, as that's just there as a place holder. Then it checks
the fgraph_ops filters to see if the ops wants to trace the current
function.
But if the compiler reloads the fgraph_array[] after the check against
fgraph_stub, it could race with the fgraph_array[] being updated with the
fgraph_stub. That would cause the stub to be processed. But the stub has a
null "func_hash" field which will cause a NULL pointer dereference.
Add a READ_ONCE() so that the gops that is compared against the
fgraph_stub is also the gops that is processed later.
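A minimal sketch of the fix (loop body abbreviated; the array and stub
names are those referenced above):

  for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
          struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]);

          if (gops == &fgraph_stub)
                  continue;
          /* gops is now guaranteed to be the pointer that was checked */
  }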
Link: https://lore.kernel.org/all/CA+G9fYsSVJQZH=nM=1cjTc94PgSnMF9y65BnOv6XSoCG_b6wmw@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Fixes: cc60ee813b503 ("function_graph: Use static_call and branch to optimize entry function")
Reported-by: Naresh Kamboju <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
Acked-by: Masami Hiramatsu (Google) <[email protected]>
|
|
The addresses of a stack trace event are relative to the kallsyms. As that
can change between boots, when printing the stack trace from a buffer that
was from the last boot, it needs all the addresses to be added to the
"text_delta" that gives the delta between the addresses of the functions
for the current boot compared to the address of the last boot. Then it can
be passed to kallsyms to find the function name, otherwise it just shows a
useless list of addresses.
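A minimal sketch of the adjustment (the field placement on the trace array
is illustrative):

  /* Translate an address saved on the previous boot into this
   * boot's address space before the kallsyms lookup.
   */
  unsigned long addr = saved_addr + tr->text_delta;

  sprint_symbol(name, addr);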
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
For a persistent ring buffer that is saved across boots, if function
tracing was performed in the previous boot, it only saves the address of
the functions and uses "%pS" to print their names. But the current boot,
those functions may be in different locations. The persistent meta-data
saves the text delta between the two boots and can be used to find the
address of the saved function of where it is located in the current boot.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Use the saved text_delta and data_delta of a persistent memory mapped ring
buffer that was saved from a previous boot, and use the delta in the trace
event print output so that strings and functions show up normally.
That is, for an event like trace_kmalloc() that prints the callsite via
"%pS", if it used the address saved in the ring buffer it will not match
the function that was saved in the previous boot if the kernel remaps
itself between boots.
For RCU events that point to saved static strings where only the address
of the string is saved in the ring buffer, it too will be adjusted to
point to where the string is on the current boot.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
If an instance is mapped to memory on boot up, create a new file called
"last_boot_info" that will hold information that can be used to properly
parse the raw data in the ring buffer.
It will export the delta of the text and data addresses from what they
were on the last boot. It does not expose actual addresses (unless you
already knew what the address was on the last boot).
The output will look like:
# cat last_boot_info
text delta: -268435456
data delta: -268435456
The text and data are kept separate in case they are ever made different.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
When a ring buffer is mapped to a specific address, save the address of a
text function and some data. This will be used to determine the delta
between the last boot and the current boot for pointers to functions as
well as to data.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Add an option to the trace_instance kernel command line parameter that
allows it to use the reserved memory from memmap boot parameter.
memmap=12M$0x284500000 trace_instance=boot_mapped@0x284500000:12M
The above reserves 12 megs at the physical address 0x284500000.
The second parameter will create a "boot_mapped" instance and use the
memory reserved as the memory for the ring buffer.
That will create an instance called "boot_mapped":
/sys/kernel/tracing/instances/boot_mapped
Note, because the ring buffer is using a defined memory range, it will
act just like a memory mapped ring buffer. It will not have a snapshot
buffer, as it can't swap out the buffer. The snapshot files as well as any
tracers that use a snapshot will not be present in the boot_mapped
instance.
Link: https://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Make sure all the events in each of the sub-buffers that were mapped in a
memory region are valid. This moves the code that walks the buffers for
time-stamp validation out of the CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS
ifdef block and is used to validate the content. Only the ring buffer
event meta data and time stamps are checked and not the data load.
This also has a second purpose. The buffer_page structure that points to
the data sub-buffers has accounting that keeps track of the number of
events that are on the sub-buffer. This updates that counter as well. That
counter is used in reading the buffer and knowing if the ring buffer is
empty or not.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Add a test against the ring buffer memory range to see if it has valid
data. The ring_buffer_meta structure is given a new field called
"first_buffer" which holds the address of the first sub-buffer. This is
used to both determine if the other fields are valid as well as finding
the offset between the old addresses of the sub-buffer from the previous
boot to the new addresses of the current boot.
Since the values for nr_subbufs and subbuf_size are expected to be the same,
check if the values in the meta page match the values calculated.
Take the range of the first_buffer and the total size of all the buffers
and make sure the saved head_buffer and commit_buffer fall in the range.
Iterate through all the sub-buffers to make sure that the values in the
sub-buffer "commit" field (the field that holds the amount of data on the
sub-buffer) is within the end of the sub-buffer. Also check the index
array to make sure that all the indexes are within nr_subbufs.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Add a buffer_meta per-cpu file for the trace instance that is mapped to
boot memory. This shows the current meta-data and can be used by user
space tools to record off the current mappings to help reconstruct the
ring buffer after a reboot.
It does not expose any virtual addresses, just indexes into the sub-buffer
pages.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Allow for creating a new instance by passing in an address and size to map
the ring buffer for the instance to.
This will allow features like a pstore memory mapped region to be used for
a tracing instance ring buffer that can be retrieved from one boot to the
next.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Populate the ring_buffer_meta array. It holds the pointer to the
head_buffer (next to read), the commit_buffer (next to write), the size of
the sub-buffers, the number of sub-buffers, and an array that keeps track
of the order of the sub-buffers.
This information will be stored in the persistent memory to help on reboot
to reconstruct the ring buffer.
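A hedged sketch of the layout described above (names illustrative):

  struct ring_buffer_meta {
          unsigned long   head_buffer;    /* next sub-buffer to read  */
          unsigned long   commit_buffer;  /* next sub-buffer to write */
          __u32           subbuf_size;
          __u32           nr_subbufs;
          int             buf_array[];    /* order of the sub-buffers */
  };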
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
In preparation to allowing the trace ring buffer to be allocated in a
range of memory that is persistent across reboots, add
ring_buffer_alloc_range(). It takes a contiguous range of memory and will
split it up evenly for the per CPU ring buffers.
If there's not enough memory to handle all CPUs with the minimum size, it
will fail to allocate the ring buffer.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
In preparation for having the ring buffer mapped to a dedicated location,
which will have the same restrictions as user space memory mapped buffers,
allow it to use the "mapped" field of the ring_buffer_per_cpu structure
without having the user space meta page mapping.
When this starts using the mapped field, it will need to handle adding a
user space mapping (and removing it) from a ring buffer that is using a
dedicated memory range.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Cc: Youssef Esmat <[email protected]>
Cc: Beau Belgrave <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: David Howells <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|