author      Michael Kelley <mhklinux@outlook.com>      2024-04-07 21:11:42 -0700
committer   Christoph Hellwig <hch@lst.de>             2024-05-07 13:29:45 +0200
commit      2650073f1b5858008c32712f3d9e1e808ce7e967 (patch)
tree        e17dad5057ec236c21a02c8d01d1d0a1f35eeb31 /drivers/iommu
parent      327e2c97c46a4d971c5450a9d05b4a673f46c4da (diff)
iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices
iommu_dma_map_page() allocates swiotlb memory as a bounce buffer when an
untrusted device wants to map only part of the memory in a granule. The
goal is to disallow the untrusted device from having DMA access to unrelated
kernel data that may be sharing the granule. To meet this goal, the
bounce buffer itself is zeroed, and any additional swiotlb memory up to
alloc_size after the bounce buffer end (i.e., "post-padding") is also
zeroed.
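
As a rough illustration of the granule arithmetic, here is a userspace
sketch (not kernel code; IOVA_GRANULE and the example size are assumptions
standing in for the IOVA domain's granule and iova_align()):

/*
 * Userspace sketch of the post-padding arithmetic; not kernel code.
 * IOVA_GRANULE stands in for the IOVA domain granule size, and the
 * round-up mirrors iova_align() for a power-of-two granule.
 */
#include <stdio.h>
#include <stddef.h>

#define IOVA_GRANULE 4096UL                     /* assumed granule size */

static size_t granule_align_up(size_t len)
{
        return (len + IOVA_GRANULE - 1) & ~(IOVA_GRANULE - 1);
}

int main(void)
{
        size_t size = 1000;                     /* partial-granule mapping */
        size_t aligned_size = granule_align_up(size);

        /*
         * Post-padding: swiotlb memory after the end of the mapped data,
         * still reachable by the device, so it must not leak kernel data.
         */
        printf("mapping %zu bytes, aligned size %zu, post-padding %zu bytes\n",
               size, aligned_size, aligned_size - size);
        return 0;
}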
However, as of commit 901c7280ca0d ("Reinstate some of "swiotlb: rework
"fix info leak with DMA_FROM_DEVICE"""), swiotlb_tbl_map_single() always
initializes the contents of the bounce buffer to the original memory.
Zeroing the bounce buffer is redundant and probably wrong per the
discussion in that commit. Only the post-padding needs to be zeroed.
Also, when the DMA min_align_mask is non-zero, the allocated bounce
buffer space may not start on a granule boundary. The swiotlb memory
from the granule boundary to the start of the allocated bounce buffer
might belong to some unrelated bounce buffer. So as described in the
"second issue" in [1], it can't be zeroed to protect against untrusted
devices. But as of commit af133562d5af ("swiotlb: extend buffer
pre-padding to alloc_align_mask if necessary"), swiotlb_tbl_map_single()
allocates pre-padding slots when necessary to meet min_align_mask
requirements, making it possible to zero the pre-padding area as well.
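
The effect of min_align_mask on bounce buffer placement can be sketched in
userspace as well (the mask, granule, and addresses below are illustrative
assumptions; the round-down mirrors iova_align_down()):

/*
 * Userspace sketch of how a non-zero DMA min_align_mask keeps the bounce
 * buffer off a granule boundary; all constants are illustrative.
 */
#include <stdio.h>
#include <stdint.h>

#define IOVA_GRANULE   4096UL
#define MIN_ALIGN_MASK 0x0fffUL         /* device needs low 12 bits preserved */

int main(void)
{
        /*
         * swiotlb must place the bounce buffer so that the address bits
         * covered by min_align_mask match the original buffer, so the
         * bounce buffer can start in the middle of a granule.
         */
        uintptr_t orig_phys = 0x1000a10UL;                  /* arbitrary */
        uintptr_t bounce    = 0x4000000UL | (orig_phys & MIN_ALIGN_MASK);

        uintptr_t granule_start = bounce & ~(IOVA_GRANULE - 1);
        size_t pre_padding = bounce - granule_start;

        printf("bounce buffer %#lx, granule starts at %#lx, pre-padding %zu bytes\n",
               (unsigned long)bounce, (unsigned long)granule_start, pre_padding);
        return 0;
}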
Finally, iommu_dma_map_page() uses the swiotlb for untrusted devices
and also for certain kmalloc() memory. Current code does the zeroing
for both cases, but it is needed only for the untrusted device case.
Fix all of this by updating iommu_dma_map_page() to zero both the
pre-padding and post-padding areas, but not the actual bounce buffer.
Do this only in the case where the bounce buffer is used because
of an untrusted device.
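
For reference, the fixed zeroing can be modelled in userspace roughly as
follows (a plain array stands in for the swiotlb pool, "untrusted" for
dev_is_untrusted(); the granule size and offsets are arbitrary):

/*
 * Minimal userspace model of the fixed zeroing: only the pre-padding and
 * post-padding inside the granule-aligned window are cleared; the bounce
 * buffer itself keeps the data copied from the original buffer.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define IOVA_GRANULE 64UL       /* tiny granule so the demo stays small */

static unsigned char pool[3 * IOVA_GRANULE];

int main(void)
{
        bool untrusted = true;
        size_t size = 30;                       /* bytes actually mapped */
        size_t buf = IOVA_GRANULE + 24;         /* bounce buffer offset in pool */

        memset(pool, 0xAA, sizeof(pool));       /* stale "kernel" data */
        memset(pool + buf, 0x55, size);         /* data bounced from the caller */

        if (untrusted) {
                size_t start = buf & ~(IOVA_GRANULE - 1);       /* granule start */
                size_t end   = buf + size;
                size_t aligned_end = (end + IOVA_GRANULE - 1) & ~(IOVA_GRANULE - 1);

                memset(pool + start, 0, buf - start);           /* pre-padding  */
                memset(pool + end, 0, aligned_end - end);       /* post-padding */

                printf("zeroed %zu pre-padding and %zu post-padding bytes, "
                       "kept %zu data bytes\n", buf - start, aligned_end - end, size);
        }
        return 0;
}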
[1] https://lore.kernel.org/all/20210929023300.335969-1-stevensd@google.com/
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr@tesarici.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/iommu')
-rw-r--r--   drivers/iommu/dma-iommu.c   29
1 file changed, 16 insertions(+), 13 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a9e814fc6b3..c745196bc150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1154,9 +1154,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
          */
         if (dev_use_swiotlb(dev, size, dir) &&
             iova_offset(iovad, phys | size)) {
-                void *padding_start;
-                size_t padding_size, aligned_size;
-
                 if (!is_swiotlb_active(dev)) {
                         dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
                         return DMA_MAPPING_ERROR;
@@ -1164,24 +1161,30 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 
                 trace_swiotlb_bounced(dev, phys, size);
 
-                aligned_size = iova_align(iovad, size);
                 phys = swiotlb_tbl_map_single(dev, phys, size,
                                               iova_mask(iovad), dir, attrs);
 
                 if (phys == DMA_MAPPING_ERROR)
                         return DMA_MAPPING_ERROR;
 
-                /* Cleanup the padding area. */
-                padding_start = phys_to_virt(phys);
-                padding_size = aligned_size;
+                /*
+                 * Untrusted devices should not see padding areas with random
+                 * leftover kernel data, so zero the pre- and post-padding.
+                 * swiotlb_tbl_map_single() has initialized the bounce buffer
+                 * proper to the contents of the original memory buffer.
+                 */
+                if (dev_is_untrusted(dev)) {
+                        size_t start, virt = (size_t)phys_to_virt(phys);
 
-                if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-                    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
-                        padding_start += size;
-                        padding_size -= size;
-                }
+                        /* Pre-padding */
+                        start = iova_align_down(iovad, virt);
+                        memset((void *)start, 0, virt - start);
 
-                memset(padding_start, 0, padding_size);
+                        /* Post-padding */
+                        start = virt + size;
+                        memset((void *)start, 0,
+                               iova_align(iovad, start) - start);
+                }
         }
 
         if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))