iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
The helper calculate_psi_aligned_address() is used to convert an arbitrary
range into a size-aligned one.
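For illustration, a minimal user-space sketch of that conversion follows. It is reconstructed from the hunk below plus the hunk's function context, so the code around the one added line is an approximation of the kernel helper, not the driver source verbatim; roundup_pow_of_two_ul() and the GCC builtins stand in for the kernel's __roundup_pow_of_two(), ilog2() and __ffs(), and the MAX_AGAW_PFN_WIDTH value is an illustrative stand-in.

#include <stdio.h>

#define VTD_PAGE_SHIFT     12
#define VTD_PAGE_SIZE      (1UL << VTD_PAGE_SHIFT)
#define MAX_AGAW_PFN_WIDTH 52   /* illustrative stand-in for the kernel constant */

static unsigned long roundup_pow_of_two_ul(unsigned long n)
{
	return n == 1 ? 1 : 1UL << (64 - __builtin_clzl(n - 1));
}

static unsigned long psi_aligned_address(unsigned long start, unsigned long end,
					 unsigned long *_pages, unsigned long *_mask)
{
	unsigned long pages = (end >> VTD_PAGE_SHIFT) - (start >> VTD_PAGE_SHIFT) + 1;
	unsigned long aligned_pages = roundup_pow_of_two_ul(pages);
	unsigned long bitmask = aligned_pages - 1;
	unsigned long mask = 63 - __builtin_clzl(aligned_pages);  /* ilog2() */
	unsigned long pfn = start >> VTD_PAGE_SHIFT;

	if (pfn & bitmask) {
		unsigned long end_pfn = pfn + aligned_pages - 1, shared_bits;

		/* Grow the mask until both end points fall inside one
		 * naturally aligned power-of-two region. */
		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
		mask = shared_bits ? __builtin_ctzl(shared_bits) : MAX_AGAW_PFN_WIDTH;
		aligned_pages = 1UL << mask;  /* the line this commit adds */
	}

	*_pages = aligned_pages;
	*_mask = mask;
	return start & ~((VTD_PAGE_SIZE << mask) - 1);  /* ALIGN_DOWN() */
}

int main(void)
{
	unsigned long pages, mask;
	/* Two pages starting at the odd pfn 1: the region must widen to the
	 * naturally aligned pfns 0..3, i.e. addr=0x0, pages=4, mask=2. */
	unsigned long addr = psi_aligned_address(1 * VTD_PAGE_SIZE,
						 3 * VTD_PAGE_SIZE - 1,
						 &pages, &mask);
	printf("addr=%#lx pages=%lu mask=%lu\n", addr, pages, mask);
	return 0;
}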
The aligned_pages variable is calculated from the input start and end, but is
not recalculated when the start pfn is unaligned and the mask is widened,
which results in an incorrect number of pages returned.
The number of pages is used by qi_flush_piotlb() to flush caches for the
first-stage translation. With the wrong number of pages, the cache is not
synchronized, leading to inconsistencies in some cases.
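As an illustrative case (the numbers are mine, not from the commit): flushing
two pages that start at the odd pfn 1 widens mask to 2, so the helper returns
the naturally aligned region covering pfns 0..3; but with aligned_pages left
at 2, qi_flush_piotlb() invalidates only pfns 0..1, and pfn 2, although part
of the original range, keeps a stale entry.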
Fixes: c4d27ffaa8 ("iommu/vt-d: Add cache tag invalidation helpers")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240709152643.28109-3-baolu.lu@linux.intel.com
Signed-off-by: Will Deacon <will@kernel.org>
commit 0a3f6b3463
parent c420a2b4e8

1 file changed, 1 insertion(+)
diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
@@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
 		 */
 		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
 		mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
+		aligned_pages = 1UL << mask;
 	}
 
 	*_pages = aligned_pages;
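With the added assignment, aligned_pages is recomputed as 1UL << mask whenever
the mask is widened, so the page count returned through *_pages always
describes the same naturally aligned region as the returned address and mask.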