author     David S. Miller <davem@davemloft.net>  2024-08-02 09:20:29 +0100
committer  David S. Miller <davem@davemloft.net>  2024-08-02 09:20:29 +0100
commit     3361a6eae59664ffae640ff7a838f5bd89c24461 (patch)
tree       bed6f1d43b4e565cda910594fac8835701c715f0 /tools/testing/selftests/drivers/net/lib/py
parent     5fe164fb0e6e31dbcbb4b706fd76bc578e5af4c6 (diff)
parent     18ee44ce97c18ee72f5807140d07ff8cebe3cab5 (diff)
Merge branch 'vsock-virtio' into main
Luigi Leonardi says:
====================
vsock: avoid queuing on intermediate queue if possible
This series introduces an optimization for vsock/virtio to reduce latency
and increase throughput: when the guest sends a packet to the host and the
intermediate queue (send_pkt_queue) is empty, the packet is put directly
on the virtqueue, provided there is enough space.
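As a rough sketch of that decision (send_pkt_queue and
virtio_transport_send_skb_fast_path are named in the changelog below, but
the surrounding context and the return value len are illustrative
assumptions, not the exact code of the applied series):

    /* Bypass the intermediate queue only when it is empty, so the
     * order of packets seen by the host is unchanged.
     */
    if (skb_queue_empty_lockless(&vsock->send_pkt_queue) &&
        virtio_transport_send_skb_fast_path(vsock, skb) == 0)
            return len;     /* packet went straight to the virtqueue */

    /* Fallback: queue the packet and let the TX worker send it. */
    virtio_vsock_skb_queue_tail(&vsock->send_pkt_queue, skb);
    queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
    return len;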
v3->v4
While running fio experiments with a 64B payload, I realized that there
was a mistake in my fio configuration, so I re-ran all the experiments;
the latency numbers are indeed lower with the patch applied. I also
noticed that I was kicking the host without holding the lock.
- Fixed a configuration mistake on fio and re-ran all experiments.
- Fio latency measurement using 64B payload.
- virtio_transport_send_skb_fast_path sends the kick with the tx_lock
  held (see the sketch after this list)
- Addressed all minor style changes requested by maintainer.
- Rebased on latest net-next
- Link to v3: https://lore.kernel.org/r/20240711-pinna-v3-0-697d4164fe80@outlook.com
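A hedged sketch of the kick fix above (the exact body of
virtio_transport_send_skb_fast_path in the series may differ): the host
notification is sent while tx_lock is still held, presumably so it cannot
race with the TX worker touching the same virtqueue:

    mutex_lock(&vsock->tx_lock);    /* trylock on the RCU path, see v1->v2 */
    ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
    if (ret == 0)
            virtqueue_kick(vq);     /* kick before dropping tx_lock */
    mutex_unlock(&vsock->tx_lock);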
v2->v3
- Performed more experiments using iperf3 using multiple streams
- Handling of reply packets removed from virtio_transport_send_skb,
  as it is needed only by the worker.
- Removed atomic_inc/atomic_sub when queuing directly to the vq.
- Introduced virtio_transport_send_skb_fast_path that handles the
steps for sending on the vq.
- Fixed a missing mutex_unlock in error path.
- Changed authorship of the second commit
- Rebased on latest net-next
v1->v2
In this v2 I replaced a mutex_lock with a mutex_trylock because it was
inside an RCU critical section. I also added a check on tx_run, so the
packet is not queued if the module is being removed. I'd like to thank
Stefano for reporting the tx_run issue.
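A hedged sketch of those two fixes (tx_lock and tx_run are named in this
changelog; the error codes and exact control flow are illustrative): the
send path runs inside an RCU read-side critical section, where sleeping
is not allowed, so a blocking mutex_lock() cannot be used:

    /* Called under rcu_read_lock(): mutex_lock() may sleep, which is
     * not allowed here, so only try to take the lock and let the
     * caller fall back to the worker on contention.
     */
    if (!mutex_trylock(&vsock->tx_lock))
            return -EBUSY;

    /* Module is being removed: do not queue the packet. */
    if (unlikely(!vsock->tx_run)) {
            mutex_unlock(&vsock->tx_lock);
            return -ESHUTDOWN;
    }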
Applied all of Stefano's suggestions:
- Minor code style changes
- Minor commit text rewrite
Performed more experiments:
- Checked whether all the packets go directly to the vq (Matias' suggestion)
- Used iperf3 to see if there is any improvement in overall throughput
from guest to host
- Pinned the vhost process to a pCPU.
- Ran fio using a 512B payload
Rebased on latest net-next
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'tools/testing/selftests/drivers/net/lib/py')
0 files changed, 0 insertions, 0 deletions