| author | Eric Dumazet <[email protected]> | 2022-05-15 21:24:53 -0700 |
|---|---|---|
| committer | David S. Miller <[email protected]> | 2022-05-16 11:33:59 +0100 |
| commit | 97e719a82b43c6c2bb5eebdb3c5d479a332ac2ac (patch) | |
| tree | 7e38c6a88703169d84365cb91c9aaab5aabe0adb /tools/perf/scripts/python/event_analyzing_sample.py | |
| parent | 3daebfbeb4555cb0c113aeb88aa469192ee41d89 (diff) | |
net: fix possible race in skb_attempt_defer_free()
A cpu can observe sd->defer_count reaching 128
and call smp_call_function_single_async().

The problem is that the remote CPU can clear sd->defer_count
before the IPI is run/acknowledged.

Other cpus can then queue more packets and also decide
to call smp_call_function_single_async() while the pending
IPI has not yet been delivered.
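
To make the window concrete, here is a sketch of the caller pattern
described above, loosely modeled on the pre-fix skb_attempt_defer_free().
It is illustrative only (kernel context, <linux/netdevice.h> and
<linux/smp.h>), not the code touched by this patch:

```c
/* Illustrative sketch of the racy pattern (not the actual patch). */
static void defer_free_sketch(struct sk_buff *skb, int cpu)
{
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);
	unsigned long flags;
	bool kick;

	spin_lock_irqsave(&sd->defer_lock, flags);
	skb->next = sd->defer_list;
	WRITE_ONCE(sd->defer_list, skb);
	sd->defer_count++;
	kick = sd->defer_count == 128;	/* threshold from the changelog */
	spin_unlock_irqrestore(&sd->defer_lock, flags);

	/* Race window: after the unlock, the remote CPU can flush its
	 * defer_list and clear defer_count.  Other CPUs can then refill
	 * the list, hit the threshold again, and reach the call below
	 * while the first IPI is still pending on the same csd.
	 */
	if (unlikely(kick))
		smp_call_function_single_async(cpu, &sd->defer_csd);
}
```
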
This is a common issue with smp_call_function_single_async().
Callers must ensure correct synchronization and serialization.
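
One way to provide that serialization (a sketch only, not presented as
this patch's fix) is to allow at most one async call in flight per
target: a flag set with cmpxchg() by the caller and cleared by the IPI
handler itself. The ipi_pending field below is hypothetical:

```c
/* Hypothetical state; ipi_pending is not an existing softnet_data field. */
struct defer_state {
	call_single_data_t	csd;		/* set up with INIT_CSD() */
	int			ipi_pending;
};

static void defer_ipi_handler(void *data)
{
	struct defer_state *st = data;

	/* ... kick the deferred work on this CPU ... */

	/* Only after this handler has run may a new async call be issued. */
	smp_store_release(&st->ipi_pending, 0);
}

static void kick_remote(struct defer_state *st, int cpu)
{
	/* cmpxchg() lets exactly one caller win; everyone else backs off
	 * until defer_ipi_handler() clears the flag, so the csd is never
	 * handed to smp_call_function_single_async() while still queued.
	 */
	if (!cmpxchg(&st->ipi_pending, 0, 1))
		smp_call_function_single_async(cpu, &st->csd);
}
```

With such a scheme the -EBUSY return discussed below cannot trigger,
because a second call is only attempted after the previous IPI has run.
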
I triggered this issue while experimenting with a smaller threshold.
Performing the call to smp_call_function_single_async()
under sd->defer_lock protection did not solve the problem:
the csd remains busy until the remote CPU has processed the IPI,
and that happens outside the scope of the lock.
Commit 5a18ceca6350 ("smp: Allow smp_call_function_single_async()
to insert locked csd") replaced an informative WARN_ON_ONCE()
with a return of -EBUSY, which is often ignored.
Testing for CSD_FLAG_LOCK presence is racy anyway.
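
To illustrate that last point, a hedged sketch: csd_is_pending() is a
hypothetical helper standing in for any peek at the csd's internal lock
bit, which callers cannot inspect reliably anyway.

```c
/* Illustrative only: a check-then-call on csd state is inherently racy. */
static void racy_kick(struct softnet_data *sd, int cpu)
{
	if (csd_is_pending(&sd->defer_csd))	/* time of check */
		return;
	/* Time-of-use gap: the pending IPI may complete here, or another
	 * CPU may pass the same test and issue a concurrent
	 * smp_call_function_single_async() on the very same csd.
	 */
	smp_call_function_single_async(cpu, &sd->defer_csd);
}
```
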
Fixes: 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists")
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>