author     Eric Dumazet <edumazet@google.com>      2024-02-09 15:31:00 +0000
committer  David S. Miller <davem@davemloft.net>   2024-02-12 12:17:03 +0000
commit     78c3253f27e579f7f3a1f5c0cb8266693a7b4f41 (patch)
tree       4476ffa5cfd0610635097325d6abf27966681840 /net/core/net_namespace.c
parent     2cd0c51e3baf7aa49e802c06cb1b2ffa9c82fbe1 (diff)
net: use synchronize_rcu_expedited in cleanup_net()
cleanup_net() calls synchronize_rcu() right before acquiring RTNL.

synchronize_rcu() is much slower than synchronize_rcu_expedited(),
and cleanup_net() is currently single threaded. Many workloads need
cleanup_net() to complete quickly, so that memory and the associated
sysfs and procfs entries are freed as soon as possible.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core/net_namespace.c')
-rw-r--r--  net/core/net_namespace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index 233ec0cdd011..f0540c557515 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -622,7 +622,7 @@ static void cleanup_net(struct work_struct *work)
 	 * the rcu_barrier() below isn't sufficient alone.
 	 * Also the pre_exit() and exit() methods need this barrier.
 	 */
-	synchronize_rcu();
+	synchronize_rcu_expedited();
 
 	rtnl_lock();
 	list_for_each_entry_reverse(ops, &pernet_list, list) {