author     Alex Sierra <[email protected]>    2021-10-29 13:30:40 -0500
committer  Alex Deucher <[email protected]>  2021-11-05 14:10:58 -0400
commit     a6283010e2907a5576f96b839e1a1c82659f137c (patch)
tree       91350b0345f7beb39849c6e8b3d289237eae1216 /scripts/gdb/linux/utils.py
parent     25a1a08fe79be6ef00e1393b1f5545f6ba62919f (diff)
drm/amdkfd: avoid recursive lock in migrations back to RAM
[Why]:
When we call hmm_range_fault to map memory after a migration, we don't
expect memory to be migrated again as a result of hmm_range_fault. The
driver ensures that all memory is in GPU-accessible locations so that
no migration should be needed. However, there is one corner case where
hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
back to system memory due to a write-fault when a system memory page in
the same range was mapped read-only (e.g. COW). Ranges with individual
pages in different locations are usually the result of failed page
migrations (e.g. page lock contention). The unexpected migration back
to system memory causes a deadlock from recursive locking in our
driver.
[How]:
Add a new task-reference member to the svm_range_list struct. Set it
to the "current" task reference right before hmm_range_fault is
called. The svm_migrate_to_ram callback then checks this member
against "current": if they match, the fault was triggered by our own
hmm_range_fault call, and the migration is skipped.
Signed-off-by: Alex Sierra <[email protected]>
Reviewed-by: Felix Kuehling <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>