author    | Israel Rukshin <[email protected]> | 2019-11-24 18:38:32 +0200
committer | Keith Busch <[email protected]> | 2019-11-27 02:14:19 +0900
commit    | 52e6d8ed16fdf9f1d2923a2b036222a5ac834b1d (patch)
tree      | 23b7589ed985efafe34fe252e1e5e96a54197398 /tools/perf/scripts/python/exported-sql-viewer.py
parent    | b1ae1a238900474a9f51431c0f7f169ade1faa19 (diff)
nvmet-loop: Avoid preallocating big SGL for data
nvme_loop_create_io_queues() preallocates a big buffer for the IO SGL based
on SG_CHUNK_SIZE.
Modern DMA engines are often capable of dealing with very big segments, so
SG_CHUNK_SIZE is often larger than necessary; it results in a static 4KB
SGL allocation per command.
If a controller has lots of deep queues, preallocation for the sg list can
consume substantial amounts of memory. For nvmet-loop, nr_hw_queues can be
128 and each queue's depth 128. This means the resulting preallocation
for the data SGL is 128*128*4K = 64MB per controller.
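A quick sanity check of the arithmetic above, as a standalone sketch. The
constants are assumptions rather than values taken from the patch:
SG_CHUNK_SIZE is 128 entries and struct scatterlist is 32 bytes on a 64-bit
kernel without CONFIG_DEBUG_SG, which is what makes the per-command SGL
come out at 4KB.

```c
/* Back-of-the-envelope check of the numbers in the commit message. */
#include <stdio.h>

#define SG_CHUNK_SIZE      128   /* entries preallocated per command (assumed) */
#define SCATTERLIST_BYTES  32    /* sizeof(struct scatterlist), assumed */

int main(void)
{
	unsigned long per_cmd = SG_CHUNK_SIZE * SCATTERLIST_BYTES;   /* 4096 bytes */
	unsigned long nr_hw_queues = 128;
	unsigned long queue_depth = 128;
	unsigned long per_ctrl = nr_hw_queues * queue_depth * per_cmd;

	printf("per-command SGL: %lu bytes (%lu KB)\n", per_cmd, per_cmd >> 10);
	printf("per-controller:  %lu bytes (%lu MB)\n", per_ctrl, per_ctrl >> 20);
	return 0;
}
```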
Switch to runtime allocation of the SGL for lists longer than 2 entries. This
is the approach used by NVMe PCI, so it should be reasonable for NVMeOF as
well. Runtime SGL allocation has always been the case for the legacy I/O
path, so this is nothing new.
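A minimal userspace model of the "small inline SGL, allocate the rest at I/O
time" pattern described above. The names (INLINE_SG_CNT, struct io_cmd,
cmd_sgl_get/put) are illustrative, not taken from the patch; in the kernel
the same idea is expressed by embedding a couple of scatterlist entries in
the per-command structure and letting sg_alloc_table_chained() fall back to
a runtime allocation when the request has more segments than the inline
chunk holds.

```c
#include <stdio.h>
#include <stdlib.h>

#define INLINE_SG_CNT 2                 /* entries kept in the command itself */

struct sg_entry {                       /* stand-in for struct scatterlist */
	void *addr;
	unsigned int len;
};

struct io_cmd {
	struct sg_entry inline_sgl[INLINE_SG_CNT];  /* preallocated, always present */
	struct sg_entry *sgl;                       /* points at inline_sgl or heap */
	unsigned int nents;
};

/* Use the inline array for short lists, heap-allocate longer ones. */
static int cmd_sgl_get(struct io_cmd *cmd, unsigned int nents)
{
	cmd->nents = nents;
	if (nents <= INLINE_SG_CNT) {
		cmd->sgl = cmd->inline_sgl;
		return 0;
	}
	cmd->sgl = calloc(nents, sizeof(*cmd->sgl));
	return cmd->sgl ? 0 : -1;
}

static void cmd_sgl_put(struct io_cmd *cmd)
{
	if (cmd->sgl != cmd->inline_sgl)
		free(cmd->sgl);
	cmd->sgl = NULL;
}

int main(void)
{
	struct io_cmd cmd = { 0 };

	if (cmd_sgl_get(&cmd, 8))           /* 8 segments: runtime allocation */
		return 1;
	printf("8 segments -> %s SGL\n",
	       cmd.sgl == cmd.inline_sgl ? "inline" : "heap");
	cmd_sgl_put(&cmd);

	if (cmd_sgl_get(&cmd, 1))           /* 1 segment: inline, no allocation */
		return 1;
	printf("1 segment  -> %s SGL\n",
	       cmd.sgl == cmd.inline_sgl ? "inline" : "heap");
	cmd_sgl_put(&cmd);
	return 0;
}
```

The trade-off is the same one the commit message makes: commands with one or
two segments pay no allocation cost, while the rare long list pays a small
per-I/O allocation instead of every command carrying a 4KB worst-case SGL.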
Tested-by: Chaitanya Kulkarni <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Chaitanya Kulkarni <[email protected]>
Reviewed-by: Max Gurtovoy <[email protected]>
Signed-off-by: Israel Rukshin <[email protected]>
Signed-off-by: Keith Busch <[email protected]>