author	Kuniyuki Iwashima <kuniyu@amazon.com>	2022-06-21 10:19:10 -0700
committer	David S. Miller <davem@davemloft.net>	2022-06-22 12:59:43 +0100
commit	b6e811383062f88212082714db849127fa95142c (patch)
tree	5999695ff674144ba8e4ec4199c30015203fcecf /lib/xz
parent	f302d180c6d430ea99643b9b2b3407aedaa36703 (diff)
af_unix: Define a per-netns hash table.
This commit adds a per-netns hash table for AF_UNIX, whose size is fixed
at UNIX_HASH_SIZE for now.

The first implementation defined the per-netns hash table as a single
array of lock and list:

	struct unix_hashbucket {
		spinlock_t		lock;
		struct hlist_head	head;
	};

	struct netns_unix {
		struct unix_hashbucket	*hash;
		...
	};

But Eric pointed out the memory cost: the structure has holes because of
sizeof(spinlock_t), which is 4 (or more if LOCKDEP is enabled). [0]
That could be expensive on a host with thousands of netns and few
AF_UNIX sockets.

For this reason, the per-netns hash table uses two dense arrays instead:

	struct unix_table {
		spinlock_t		*locks;
		struct hlist_head	*buckets;
	};

	struct netns_unix {
		struct unix_table	table;
		...
	};

Note that the length of the lists matters more than lock contention
here, so sharing locks could be an option.  Still, per-netns locks and
lists perform better than global locks with per-netns lists. [1]

Also, with this patch, struct netns_unix disappears from struct net when
CONFIG_UNIX is disabled.

[0]: https://lore.kernel.org/netdev/CANn89iLVxO5aqx16azNU7p7Z-nz5NrnM5QTqOzueVxEnkVTxyg@mail.gmail.com/
[1]: https://lore.kernel.org/netdev/20220617175215.1769-1-kuniyu@amazon.com/

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
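As a rough illustration of the two-dense-array layout described above, a
minimal per-netns initialization sketch might look like the following.
This is not taken from the patch: the function name and error handling
are illustrative, while struct unix_table, struct netns_unix and
UNIX_HASH_SIZE are as defined in the message.

	#include <linux/errno.h>
	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	/* Sketch only, not the actual patch: allocate locks and buckets as
	 * two separate dense arrays so the per-bucket padding hole of
	 * struct unix_hashbucket never exists.
	 */
	static int unix_table_sketch_init(struct netns_unix *u)
	{
		int i;

		u->table.locks = kmalloc_array(UNIX_HASH_SIZE,
					       sizeof(spinlock_t), GFP_KERNEL);
		if (!u->table.locks)
			return -ENOMEM;

		u->table.buckets = kmalloc_array(UNIX_HASH_SIZE,
						 sizeof(struct hlist_head),
						 GFP_KERNEL);
		if (!u->table.buckets) {
			kfree(u->table.locks);
			return -ENOMEM;
		}

		for (i = 0; i < UNIX_HASH_SIZE; i++) {
			spin_lock_init(&u->table.locks[i]);
			INIT_HLIST_HEAD(&u->table.buckets[i]);
		}

		return 0;
	}

The design trade-off is the one the message describes: packing the
spinlocks and list heads separately keeps both arrays dense, so the
per-netns cost stays small even with thousands of netns.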
Diffstat (limited to 'lib/xz')
0 files changed, 0 insertions, 0 deletions