| author | Chris Wilson <[email protected]> | 2018-03-27 22:01:36 +0100 |
|---|---|---|
| committer | Chris Wilson <[email protected]> | 2018-03-28 20:26:03 +0100 |
| commit | c216e90686105e5b9fdbb22f6cfcc38334e432cc (patch) | |
| tree | f09aa8d9c0059fc5b299ddf2a02054f8e62b702c /tools/perf/scripts/python/bin/stackcollapse-report | |
| parent | d775a7b1840ddc96e7f25af20989ff43f2809436 (diff) | |
drm/i915/execlists: Reset ring registers on rebinding contexts
Tvrtko uncovered a fun issue with recovering from a wedged device. In his
tests, he wedged the driver by injecting an unrecoverable hang whilst a
batch was spinning. As we reset the GPU in the middle of the spinner,
when resumed it would continue on from the next instruction in the ring
and write its breadcrumb. However, on wedging we updated our
bookkeeping to indicate that the GPU had completed executing and would
restart from after the breadcrumb; so the emission of the stale
breadcrumb from before the reset came as a bit of a surprise.
A simple fix is, when rebinding the context into the GPU, to update
the ring register state in the context image to match our bookkeeping.
We already have to update the RING_START and RING_TAIL, so updating
RING_HEAD as well is trivial. This works because whenever we unbind the
context, we keep the bookkeeping in check; and on wedging we unbind all
contexts.
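
For illustration only, a minimal standalone sketch of the idea (not the
driver's actual code): on rebinding, the ring registers stored in the
context image are refreshed from the software bookkeeping, so a ring
that was reset or wedged resumes from our notion of HEAD rather than a
stale pre-reset value. All names below (ring_state, ctx_image,
CTX_RING_*) are hypothetical placeholders, not the i915 definitions.

```c
#include <stdint.h>

/* Hypothetical indices of the ring registers within the context image. */
enum { CTX_RING_START = 0, CTX_RING_HEAD = 1, CTX_RING_TAIL = 2 };

/* Software bookkeeping for a ring (placeholder struct). */
struct ring_state {
	uint32_t start;	/* offset of the ring buffer */
	uint32_t head;	/* where execution should resume */
	uint32_t tail;	/* last emitted instruction */
};

/* Refresh the ring registers in the context image from our bookkeeping. */
static void rebind_context_ring_regs(uint32_t *ctx_image,
				     const struct ring_state *ring)
{
	ctx_image[CTX_RING_START] = ring->start;
	/*
	 * Updating HEAD as well is the point of the fix: without it, a
	 * wedged ring would resume from the stale pre-reset HEAD left in
	 * the context image and replay the old breadcrumb.
	 */
	ctx_image[CTX_RING_HEAD] = ring->head;
	ctx_image[CTX_RING_TAIL] = ring->tail;
}
```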
Testcase: igt/gem_eio
Signed-off-by: Chris Wilson <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Cc: Mika Kuoppala <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Tested-by: Tvrtko Ursulin <[email protected]>
Reviewed-by: Mika Kuoppala <[email protected]>
Reviewed-by: Tvrtko Ursulin <[email protected]>