author    Alex Sierra <alex.sierra@amd.com>    2021-10-29 13:30:40 -0500
committer Alex Deucher <alexander.deucher@amd.com>    2021-11-05 14:10:58 -0400
commit    a6283010e2907a5576f96b839e1a1c82659f137c (patch)
tree      91350b0345f7beb39849c6e8b3d289237eae1216 /drivers/platform
parent    25a1a08fe79be6ef00e1393b1f5545f6ba62919f (diff)
drm/amdkfd: avoid recursive lock in migrations back to RAM
[Why]:
When we call hmm_range_fault to map memory after a migration, we don't
expect memory to be migrated again as a result of hmm_range_fault. The
driver ensures that all memory is in GPU-accessible locations, so no
migration should be needed. However, there is one corner case where
hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
back to system memory: a write fault on a system memory page in the
same range that was mapped read-only (e.g. COW). Ranges with individual
pages in different locations are usually the result of failed page
migrations (e.g. page lock contention). The unexpected migration back
to system memory causes a deadlock from recursive locking in our
driver.

[How]:
Add a new task-reference member to the svm_range_list struct and set it
to "current" right before hmm_range_fault is called. In the
svm_migrate_to_ram callback, compare this member against "current"; if
they match, the fault was triggered by our own hmm_range_fault call and
the migration is skipped.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
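Below is a minimal sketch, in kernel-style C, of the approach described
in [How]. The member name faulting_task and the helper names
svm_range_map_pages and vmf_to_svms are illustrative assumptions, not
identifiers taken from the patch itself.

	#include <linux/sched.h>	/* current, struct task_struct */
	#include <linux/hmm.h>		/* hmm_range_fault() */
	#include <linux/mm.h>		/* struct vm_fault, vm_fault_t */

	struct svm_range_list {
		/* ... existing members ... */
		struct task_struct *faulting_task; /* task inside hmm_range_fault */
	};

	/* Fault path: record the current task around the hmm_range_fault call. */
	static int svm_range_map_pages(struct svm_range_list *svms,
				       struct hmm_range *range)
	{
		int r;

		svms->faulting_task = current;
		r = hmm_range_fault(range);
		svms->faulting_task = NULL;

		return r;
	}

	/* migrate_to_ram callback: ignore faults triggered by our own
	 * hmm_range_fault call, which would otherwise recurse into driver
	 * locks that are already held.
	 */
	static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
	{
		/* vmf_to_svms() is a hypothetical lookup from the fault to
		 * the process's svm_range_list.
		 */
		struct svm_range_list *svms = vmf_to_svms(vmf);

		if (svms->faulting_task == current)
			return 0; /* skip the migration; see [Why] above */

		/* ... normal migration from DEVICE_PRIVATE back to system RAM ... */
		return 0;
	}

Because the migrate_to_ram callback runs in the context of the task
that took the fault, comparing the stored reference against "current"
identifies the recursive case without any additional locking.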
Diffstat (limited to 'drivers/platform')
0 files changed, 0 insertions, 0 deletions