|
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
And return the fence error code from the wait functions.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Instead of reading the current counter from fpriv.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Add an initial framework for changing the HW priorities of rings. The
framework allows requesting priority changes for the lifetime of an
amdgpu_job. After the job completes the priority will decay to the next
lowest priority for which a request is still valid.
A new ring function set_priority() can now be populated to take care of
the HW specific programming sequence for priority changes.
v2: set priority before emitting IB, and take a ref on amdgpu_job
v3: use AMD_SCHED_PRIORITY_* instead of AMDGPU_CTX_PRIORITY_*
v4: plug amdgpu_ring_restore_priority_cb into amdgpu_job_free_cb
v5: use atomic for tracking job priorities instead of last_job
v6: rename amdgpu_ring_priority_[get/put]() and align parameters
v7: replace spinlocks with mutexes for KIQ compatibility
v8: raise ring priority during cs_ioctl, instead of job_run
v9: priority_get() before push_job()
Reviewed-by: Christian König <[email protected]>
Acked-by: Christian König <[email protected]>
Signed-off-by: Andres Rodriguez <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
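For illustration only, the decay described above could look roughly like this; the set_priority() hook, the atomics and the mutex come from the description, but the field names (num_jobs, priority, priority_mutex) and the exact code are a sketch, not the patch itself:

/* Sketch: raise the ring to at least @prio; on put, decay to the
 * highest priority that still has an outstanding request. */
static void ring_priority_get(struct amdgpu_ring *ring, int prio)
{
    atomic_inc(&ring->num_jobs[prio]);

    mutex_lock(&ring->priority_mutex);
    if (prio > ring->priority && ring->funcs->set_priority) {
        ring->funcs->set_priority(ring, prio);
        ring->priority = prio;
    }
    mutex_unlock(&ring->priority_mutex);
}

static void ring_priority_put(struct amdgpu_ring *ring, int prio)
{
    int i;

    if (!atomic_dec_and_test(&ring->num_jobs[prio]))
        return;             /* other jobs still hold this priority */

    mutex_lock(&ring->priority_mutex);
    /* decay to the next lowest priority with a valid request */
    for (i = prio; i >= 0; i--) {
        if (i == 0 || atomic_read(&ring->num_jobs[i])) {
            if (ring->funcs->set_priority)
                ring->funcs->set_priority(ring, i);
            ring->priority = i;
            break;
        }
    }
    mutex_unlock(&ring->priority_mutex);
}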
|
|
This allows us to queue IBs which need an up-to-date system domain as well.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Acked-by: Felix Kuehling <[email protected]>
|
|
The fence in dep_sync cannot be optimized.
Signed-off-by: Chunming Zhou <[email protected]>
Tested and Reviewed-by: Roger.He <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
If the VM is guilty of a GPU reset, skip all its jobs.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
that way we can know which job caused the hang and
can do a per-scheduler reset/recovery instead of resetting
all schedulers.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
because we don't want to do an SR-IOV GPU reset in certain
cases, so just split those two functions and don't invoke the
SR-IOV one from the bare-metal one.
V2:
remove the debugfs_gpu_reset routine in the SR-IOV case.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
v2: return directly in the 'if' case.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
this is an improvement on the previous patch: sched_sync stores the fences
that could be skipped as scheduled. When the job is executed, we don't need
a pipeline sync if all fences in sched_sync are signalled; otherwise we still
insert the pipeline sync.
v2: handle the error when adding a fence to the sync object fails.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Junwei Zhang <[email protected]> (v1)
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
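As a sketch of the run-time check described above (illustrative only; the real sync container API differs), the decision whether the pipeline sync is still needed boils down to:

/* Sketch: only emit the pipeline sync if at least one fence collected
 * in sched_sync is still unsignaled when the job actually runs.
 * Assumes the sync object has been flattened into a fence array. */
static bool job_needs_pipeline_sync(struct dma_fence **fences, unsigned count)
{
    unsigned i;

    for (i = 0; i < count; i++)
        if (fences[i] && !dma_fence_is_signaled(fences[i]))
            return true;    /* a dependency may still be in flight */

    return false;           /* everything signaled: skip the sync */
}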
|
|
The problem is that executing the jobs in the right order doesn't give you the right result,
because consecutive jobs executed on the same engine are pipelined.
In other words, job B does its buffer read before job A has written its result.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
[ 132.036658] amdgpu 0000:22:00.0: VM IB without ID
[ 132.036709] [drm:amdgpu_job_run [amdgpu]] *ERROR* Error scheduling IBs (-22)
[ 132.036755] [drm:amd_sched_main [amdgpu]] *ERROR* Failed to run job!
root cause: the fence is signaled during the sync transfer.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Signed-off-by: Junwei Zhang <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
I plan to usurp the short name of struct fence for a core kernel struct,
and so I need to rename the specialised fence/timeline for DMA
operations to make room.
A consensus was reached in
https://lists.freedesktop.org/archives/dri-devel/2016-July/113083.html
that making clear this fence applies to DMA operations was a good thing.
Since then the patch has grown a bit as usage increases, so hopefully it
remains a good thing!
(v2...: rebase, rerun spatch)
v3: Compile on msm, spotted a manual fixup that I broke.
v4: Try again for msm, sorry Daniel
coccinelle script:
@@
@@
- struct fence
+ struct dma_fence
@@
@@
- struct fence_ops
+ struct dma_fence_ops
@@
@@
- struct fence_cb
+ struct dma_fence_cb
@@
@@
- struct fence_array
+ struct dma_fence_array
@@
@@
- enum fence_flag_bits
+ enum dma_fence_flag_bits
@@
@@
(
- fence_init
+ dma_fence_init
|
- fence_release
+ dma_fence_release
|
- fence_free
+ dma_fence_free
|
- fence_get
+ dma_fence_get
|
- fence_get_rcu
+ dma_fence_get_rcu
|
- fence_put
+ dma_fence_put
|
- fence_signal
+ dma_fence_signal
|
- fence_signal_locked
+ dma_fence_signal_locked
|
- fence_default_wait
+ dma_fence_default_wait
|
- fence_add_callback
+ dma_fence_add_callback
|
- fence_remove_callback
+ dma_fence_remove_callback
|
- fence_enable_sw_signaling
+ dma_fence_enable_sw_signaling
|
- fence_is_signaled_locked
+ dma_fence_is_signaled_locked
|
- fence_is_signaled
+ dma_fence_is_signaled
|
- fence_is_later
+ dma_fence_is_later
|
- fence_later
+ dma_fence_later
|
- fence_wait_timeout
+ dma_fence_wait_timeout
|
- fence_wait_any_timeout
+ dma_fence_wait_any_timeout
|
- fence_wait
+ dma_fence_wait
|
- fence_context_alloc
+ dma_fence_context_alloc
|
- fence_array_create
+ dma_fence_array_create
|
- to_fence_array
+ to_dma_fence_array
|
- fence_is_array
+ dma_fence_is_array
|
- trace_fence_emit
+ trace_dma_fence_emit
|
- FENCE_TRACE
+ DMA_FENCE_TRACE
|
- FENCE_WARN
+ DMA_FENCE_WARN
|
- FENCE_ERR
+ DMA_FENCE_ERR
)
(
...
)
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Gustavo Padovan <[email protected]>
Acked-by: Sumit Semwal <[email protected]>
Acked-by: Christian König <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Link: http://patchwork.freedesktop.org/patch/msgid/[email protected]
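As a purely illustrative before/after (a hypothetical driver snippet, not part of the patch), the rename is mechanical and touches only the identifiers:

/* Before the rename:
 *     struct fence *f = fence_get(job->fence);
 *     if (!fence_is_signaled(f))
 *         fence_wait(f, true);
 *     fence_put(f);
 *
 * After running the coccinelle script above: */
struct dma_fence *f = dma_fence_get(job->fence);

if (!dma_fence_is_signaled(f))
    dma_fence_wait(f, true);    /* interruptible wait */
dma_fence_put(f);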
|
|
We get a few warnings when building the kernel with W=1:
drivers/gpu/drm/amd/amdgpu/cz_smc.c:51:5: warning: no previous prototype for 'cz_send_msg_to_smc_async' [-Wmissing-prototypes]
drivers/gpu/drm/amd/amdgpu/cz_smc.c:143:5: warning: no previous prototype for 'cz_write_smc_sram_dword' [-Wmissing-prototypes]
drivers/gpu/drm/amd/amdgpu/iceland_smc.c:124:6: warning: no previous prototype for 'iceland_start_smc' [-Wmissing-prototypes]
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c:3926:6: warning: no previous prototype for 'gfx_v8_0_rlc_stop' [-Wmissing-prototypes]
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c:94:6: warning: no previous prototype for 'amdgpu_job_free_cb' [-Wmissing-prototypes]
....
In fact, these functions are only used in the file in which they are
declared and don't need a separate declaration; they can simply be made static.
So this patch marks these functions 'static'.
Reviewed-by: Christian König <[email protected]>
Acked-by: Huang Rui <[email protected]>
Reviewed-by: Edward O'Callaghan <[email protected]>
Signed-off-by: Baoyou Xie <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
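Taking one of the warnings above as an example, the fix is simply internal linkage (the hunk shown here is illustrative, not copied from the patch):

/* Before: external linkage with no prototype in any header,
 * so building with W=1 emits -Wmissing-prototypes:
 *     void gfx_v8_0_rlc_stop(struct amdgpu_device *adev) { ... }
 *
 * After: static limits the symbol to this file and silences the
 * warning without touching any header. */
static void gfx_v8_0_rlc_stop(struct amdgpu_device *adev)
{
    /* body unchanged */
}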
|
|
job->ctx is actually the fence_context of the entity
it belongs to. Naming it ctx is too vague, and
we'll need to add amdgpu_ctx to the job structure
later.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
The reference should be taken when we make the assignment, not anywhere else.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
which avoids job->vm_pd_addr being changed.
V2: pass the job structure to amdgpu_vm_grab_id and amdgpu_vm_flush directly.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
We return the fence as part of the job structure anyway,
so there is no need to do this twice.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Keep the time during which we don't have a fence associated with the resource shorter.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
The fence and the sync object are not hardware resources.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Same problem as with the VM page tables. The user fence address must be
determined before the job is scheduled, not when the IB is executed.
This fixes a security problem where user fences could be used to overwrite
any part of VRAM.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
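A rough sketch of the approach (field and helper names here are illustrative, not the actual patch): resolve the user fence GPU address once during command submission, while the BO is still validated, and carry it in the job:

/* Sketch: compute the user fence address at CS time, instead of
 * dereferencing user-controlled data when the IB executes, and
 * reject offsets outside the BO. */
static int job_set_user_fence(struct amdgpu_job *job,
                              struct amdgpu_bo *uf_bo, uint64_t offset)
{
    if (offset >= amdgpu_bo_size(uf_bo))
        return -EINVAL;

    job->uf_addr = amdgpu_bo_gpu_offset(uf_bo) + offset;
    return 0;
}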
|
|
Just wait for any fence to become available, instead
of waiting for the last entry of the LRU.
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Christian König <[email protected]>
Acked-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Check if the sync object is idle depending on the ring a submission works with.
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Christian König <[email protected]>
Acked-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Stop hiding bugs, instead print a proper error when the scheduler
doesn't handle all dependencies.
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Christian König <[email protected]>
Acked-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Make it two events: one for when the job is scheduled and one for when it is finished.
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Christian König <[email protected]>
Acked-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
so that we could actually reset the GPU when it hangs.
Signed-off-by: Chunming Zhou <[email protected]>
Reviewed-by: Junwei Zhang <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Remove the job reference counting and just properly destroy it from a
work item which blocks on any potential running timeout handler.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Monk.Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
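A minimal sketch of that pattern, assuming the job embeds a regular work item for freeing and a delayed work for the timeout (names are illustrative):

#include <linux/workqueue.h>
#include <linux/slab.h>

struct my_job {
    struct delayed_work timeout_work;   /* armed while the job runs */
    struct work_struct  free_work;      /* final destruction */
};

/* Runs from the workqueue, so it may sleep: wait out any timeout
 * handler that is still executing, then free the job. */
static void my_job_free_work(struct work_struct *work)
{
    struct my_job *job = container_of(work, struct my_job, free_work);

    cancel_delayed_work_sync(&job->timeout_work);
    kfree(job);
}

/* Called when the job completes; no reference counting needed. */
static void my_job_destroy(struct my_job *job)
{
    INIT_WORK(&job->free_work, my_job_free_work);
    schedule_work(&job->free_work);
}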
|
|
The driver shouldn't mess with the scheduler internals.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Monk.Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Remembering the code path in a variable in order to clean up
differently is usually not a good idea at all.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Monk.Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
There should be a new line between code and declarations.
Also use amdgpu_ib_free() instead of releasing the member manually.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Monk.Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Completely pointless and confusing to use a callback
to call into the same code file.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Monk.Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
We leaked the BO in the error path; in addition to that, we only have
one user fence for all IBs in a job.
v2: remove white space changes
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
They are the same for all IBs.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
We only have one context for all IBs.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
ib.vm is a legacy way to get the VM; since the scheduler was
implemented, the VM should be taken from the job, and all IBs
from one job share the same VM, so there is no need to keep ib.vm.
Just move the vm field to the job.
This patch also adds job as a parameter to ib_schedule
so it can get the VM from job->vm.
v2: agd: squash in:
drm/amdgpu: check if ring emit_vm_flush exists in vm flush
No vm flush on engines that don't support VM.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=95195
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
This marks the struct amdgpu_sched_ops const and
adjusts amd_sched_init to take a const pointer
for the ops param. The ops member of
struct amd_gpu_scheduler is also changed to const.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Nils Wallménius <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
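For illustration (the type and field names here are assumed, not quoted from the patch), the pattern is the usual one for kernel ops tables:

/* Declaring the ops table const lets it live in read-only data;
 * amd_sched_init() then only needs a const pointer to it. */
static const struct amd_sched_backend_ops example_sched_ops = {
    .dependency = example_job_dependency,
    .run_job    = example_run_job,
};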
|
|
this is to fix a fatal page fault error that occurred if:
the job is signaled/released after its timeout work has already
been put on the global queue (in this case cancel_delayed_work
will return false), which leads to an NX-protection
page fault during job_timeout_func.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Add two callbacks to the scheduler to maintain jobs, invoked for
job timeout calculations. Now TDR measures the time gap from when
the job is processed by the HW.
v2:
fix typo
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
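One way such a pair of callbacks could look (illustrative names and fields, not the scheduler's actual interface): the first arms a timeout when the job reaches the hardware, the second disarms it when the job's hardware fence signals, so TDR only measures HW execution time:

/* Sketch: start the TDR clock when the job hits the HW ... */
static void example_job_begin(struct example_job *job)
{
    schedule_delayed_work(&job->timeout_work,
                          msecs_to_jiffies(job->timeout_ms));
}

/* ... and stop it again when the HW fence for this job signals. */
static void example_job_finish(struct example_job *job)
{
    cancel_delayed_work(&job->timeout_work);
}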
|
|
for those jobs submitted through the scheduler, do not
free them immediately after they are scheduled; instead free them
from the global workqueue via the sched fence signaling
callback function.
v2:
call uf's bo_undef after job_run()
call job's sync free after job_run()
no static inline __amdgpu_job_free() anymore, just use
kfree(job) to replace it.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Consolidate job initialization in one place rather than
duplicating it in multiple places.
Signed-off-by: Monk Liu <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
when the preemption feature lands, the SA BO should rely on the sched
fence, because the HW fence will be invalid after its job is preempted
or skipped.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
thus amdgpu_ib_free() can hook the sched fence to the SA manager
in later patches.
BTW:
amdgpu_free_job() should only fence_put() the
fence of the last IB once, so fix that as well in this patch.
Signed-off-by: Monk Liu <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Not used any more since we now always use the scheduler.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
|
|
The owner must be per ring as long as we don't
support sharing VMIDs per process. Also move the
assigned VMID and page directory address into the
IB structure.
v3: assign the VMID to all IBs, not just the first one.
v4: use correct pointer for owner
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Chunming Zhou <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Not used any more.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
|
|
Updates from different VMs can be processed independently.
v2: agd: rebase on upstream
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
|