Age | Commit message | Author | Files | Lines |
|
When a plane moves out of bounds (i.e., outside the crtc clip region), the
plane state's "visible" parameter changes to false. When this happens, we
(a) release the hwpipe resources away from it, and
(b) unstage the corresponding hwpipe(s) from the Layer Mixers in the CRTC.
(a) requires us to acquire the global atomic state and assign a new
hwpipe. (b) requires us to re-configure the Layer Mixer, which is done in
the CRTC. We don't want to do these things in the async plane update path,
so return an error if the new state's "visible" isn't the same as the
current state's "visible".
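A minimal sketch of the kind of check described above, assuming a
drm_plane atomic_async_check hook (function name and surroundings are
illustrative, not the exact mdp5 code):

    static int plane_atomic_async_check(struct drm_plane *plane,
                                        struct drm_plane_state *new_state)
    {
            /*
             * A visibility change means (re)assigning hwpipes (global
             * atomic state) and re-programming the Layer Mixer (CRTC),
             * neither of which we want to do in the async path.
             */
            if (new_state->visible != plane->state->visible)
                    return -EINVAL;

            return 0;
    }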
Cc: Gustavo Padovan <[email protected]>
Signed-off-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
MDP5 on newer SoCs supports cursor planes (i.e., cursor SSPPs). They are a
separate entity, unlike the cursors within the LM.
Do not try to restore the MDP5 LM cursor registers, or the corresponding
CTL bits if we are not using LM cursors.
Also, since we've introduced a new variable 'lm_cursor_enabled', we can
now use it to avoid creating different sets of crtc_funcs for CRTCs
with LM cursors and CRTCs with cursor planes.
Fixes: "drm/msm/mdp5: restore cursor state when enabling crtc"
Signed-off-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
We currently call mdp5_pipe_assign() twice to assign the left and right
hwpipes for our drm_plane. When merging 2 hwpipes, there are a few
constraints that we need to keep in mind:
- Only the same types of SSPPs are preferred. I.e., an RGB pipe should
be paired with another RGB pipe, VIG with VIG etc.
- The hwpipe staged on the left should have a higher priority than
the hwpipe staged on the right. The priorities are as follows:
VIG0 > VIG1 > VIG2 > VIG3
RGB0 > RGB1 > RGB2 > RGB3
DMA0 > DMA1
We can't apply these constraints easily if mdp5_pipe_assign() is
called twice. Update mdp5_pipe_assign() to find both hwpipes in
one go, and add the extra constraints needed.
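An illustrative sketch of the pairing rule (pipe types and priority
indices are simplified stand-ins, not the actual mdp5 SSPP definitions):

    enum pipe_type { PIPE_TYPE_VIG, PIPE_TYPE_RGB, PIPE_TYPE_DMA };

    struct hwpipe_sketch {
            enum pipe_type type;
            unsigned int idx;  /* VIG0/RGB0/DMA0 == 0; lower == higher priority */
    };

    /*
     * A left/right merge is only allowed between pipes of the same type,
     * with the higher-priority pipe staged on the left.
     */
    static bool can_merge(const struct hwpipe_sketch *left,
                          const struct hwpipe_sketch *right)
    {
            return left->type == right->type && left->idx < right->idx;
    }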
Signed-off-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
mdp5_pipe_assign currently returns the hwpipe pointer for the drm_plane.
Return it indirectly by setting a pointer passed as an argument. This
is needed because we want the function to find the right hwpipe too.
Signed-off-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
After converting legacy cursor updates to atomic async commits,
mdp5_cursor_plane_funcs now just duplicates mdp5_plane_funcs.
Cc: Rob Clark <[email protected]>
Cc: Archit Taneja <[email protected]>
Signed-off-by: Gustavo Padovan <[email protected]>
Tested-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Add support for async updates of cursors by using the new atomic
interface for that. Basically, this commit does what
mdp5_update_cursor_plane_legacy() did, but through the atomic path.
v5: call drm_atomic_helper_async_check() from the check hook
v4: add missing atomic async commit call to msm_atomic_commit() (Archit Taneja)
v3: move size checks back to drivers (Ville Syrjälä)
v2: move fb setting to core and use new state (Eric Anholt)
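Heavily simplified sketch of how the async path is wired up with the
helpers named above (fragments only; the real msm_atomic code does more
around these calls):

    /* in the ->atomic_check hook: */
    ret = drm_atomic_helper_check(dev, state);
    if (ret)
            return ret;
    if (state->async_update)
            ret = drm_atomic_helper_async_check(dev, state);
    return ret;

    /* in the commit path, async updates skip the full commit machinery: */
    if (state->async_update) {
            drm_atomic_helper_async_commit(dev, state);
            return 0;
    }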
Cc: Rob Clark <[email protected]>
Cc: Archit Taneja <[email protected]>
Signed-off-by: Gustavo Padovan <[email protected]>
Tested-by: Archit Taneja <[email protected]> (v4)
[added comment about not hitting async update path if hwpipes are
re-assigned or global state is touched]
Signed-off-by: Rob Clark <[email protected]>
|
|
Signed-off-by: Rob Clark <[email protected]>
|
|
Since we enabled runtime PM, we cannot count on cursor registers to
retain their values. This can result in situations where we think the
cursor is enabled when we enable the CRTC, but it is trying to scan out
null (and the rest of the cursor position/size state is lost), resulting
in faults and generally angering the hw when coming out of DPMS with a
cursor
enabled.
stable backport note: reverting 774e39ee3572 is also a suitable fix
Fixes: 774e39ee3572 ("drm/msm/mdp5: Set up runtime PM for MDSS")
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Archit Taneja <[email protected]>
|
|
It's only likely to paper over bugs. Unlike the gpu, where we want to
keep things alive a bit longer in expectation of the next frame's
submit, when the display is shut down we can power off immediately.
Signed-off-by: Rob Clark <[email protected]>
Acked-by: Archit Taneja <[email protected]>
|
|
Signed-off-by: Rob Clark <[email protected]>
|
|
Note that we need to move update_fences() to after msm_rd_dump_submit(),
otherwise the BOs referenced by the submit may no longer be valid.
Signed-off-by: Rob Clark <[email protected]>
|
|
We need this if we want to dump the submit after cleanup (i.e. from a hang
or fault). But in the backoff/unpin case we want to clear them. So add
a flag so we can skip clearing the IOVAs at cleanup.
Signed-off-by: Rob Clark <[email protected]>
|
|
For faults or hangs, it is nice to be able to include a bit more
information.
Signed-off-by: Rob Clark <[email protected]>
|
|
Split into two instances, the existing $debugfs/rd which continues to
dump all submits, and $debugfs/hangrd which will be used to dump just
submits that cause gpu hangs (and eventually faults, but that will
require some iommu framework enhancements).
Signed-off-by: Rob Clark <[email protected]>
|
|
Prep work for adding a debugfs file that dumps just submits which
trigger hangs/faults. In this case the bo may already be in the
MADV_DONTNEED state, but will still be on the active list (since
the submit hasn't completed yet). So the normal check that the
bo is in the WILLNEED state does not apply. (But of course the bo
should definitely not be in the PURGED state!)
Signed-off-by: Rob Clark <[email protected]>
|
|
Now that the freedreno gallium driver defaults to using a submit_queue task
(render reordering), just showing task->comm is not so useful (i.e. it is
always "flush_queue:0"), so also dump the cmdline. This should also be
more useful for piglit/shader_runner.
Signed-off-by: Rob Clark <[email protected]>
|
|
Currently the rd dump avoids any buffers marked as WRITE under
the assumption that the contents are not interesting. While it
is true that the contents are uninteresting, we should still print
the iova and size for all buffers so that any listening replay
tools can correctly construct the submission.
Print the header for all buffers but only dump the contents for
buffers marked as READ.
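Rough sketch of the resulting per-buffer logic (rd_write_section() and
the RD_* section types are from msm_rd.c; gpuaddr_hdr, vaddr and size
are illustrative locals):

    /*
     * Always emit the iova/size header so a replay tool can reconstruct
     * the submit layout; only hexdump contents the GPU will read.
     */
    rd_write_section(rd, RD_GPUADDR, &gpuaddr_hdr, sizeof(gpuaddr_hdr));

    if (submit->bos[idx].flags & MSM_SUBMIT_BO_READ)
            rd_write_section(rd, RD_BUFFER_CONTENTS, vaddr, size);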
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Recent changes to locking have rendered struct_mutex_task unused
(unused since commit 0e08270a1f01).
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Implement preemption for A5XX targets - this allows multiple
ringbuffers at different priorities, with automatic preemption
of a lower-priority ringbuffer when a higher-priority one is ready.
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
We use a global ringbuffer size and block size for all targets, and
at least for 5XX preemption we need to know the value of RB_CNTL
in several locations, so it makes sense to calculate it once and use
it everywhere.
The only monkey wrench is that we need to disable the RPTR shadow
for A430 targets, but that only needs to be done once and doesn't
affect A5XX, so we can OR in the value at init time.
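Sketch of the idea, with illustrative macro names standing in for the
generated register definitions (not the exact identifiers):

    /* computed once at init and reused everywhere RB_CNTL is needed */
    u32 rb_cntl = CP_RB_CNTL_BUFSZ(ilog2(RB_SIZE / 8)) |
                  CP_RB_CNTL_BLKSZ(ilog2(RB_BLKSIZE / 8));

    /* A430 only: disable the RPTR shadow; A5XX is unaffected */
    if (adreno_is_a430(adreno_gpu))
            rb_cntl |= CP_RB_CNTL_NO_UPDATE;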
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Add a shadow pointer to track the current command being written into
the ring. Don't commit it as 'cur' until the command is submitted.
Because 'cur' is used to construct the software copy of the wptr, this
ensures that somebody peeking in on the ring doesn't assume that a
command is in flight while it is being written. This isn't a huge deal
with a single ring (though technically the hangcheck could assume
the system is prematurely busy when it isn't) but it will be rather
important for preemption where the decision to preempt is based
on a non-empty ringbuffer. Without a shadow, an aggressive preemption
scheme could assume that the ringbuffer is non-empty and switch to it
before the CPU is done writing the command, and boom.
Even though preemption won't be supported for all targets, because of
the way the code is organized it is simpler to make this generic for
all targets. The extra load for non-preemption targets should be
minimal.
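Simplified sketch of the shadow pointer: commands are written through
'next' and only published to 'cur' when the submit is flushed, so the
wptr (derived from 'cur') never points at a half-written command
(struct and function names here are illustrative):

    struct ring_sketch {
            u32 *start, *cur, *next;
    };

    static inline void ring_emit(struct ring_sketch *ring, u32 data)
    {
            *(ring->next++) = data;        /* not yet visible via 'cur' */
    }

    static void ring_publish(struct ring_sketch *ring)
    {
            ring->cur = ring->next;        /* now wptr/hangcheck can see it */
    }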
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
In order to manage ringbuffer priority to its fullest, userspace
should know how many ringbuffers it has to work with. Add a
parameter to return the number of active rings.
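From userspace the new parameter can be queried roughly like this
(assuming an MSM_PARAM_NR_RINGS define in the updated uapi header):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <drm/msm_drm.h>

    static int get_nr_rings(int fd, uint64_t *nr_rings)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = MSM_PARAM_NR_RINGS,    /* assumed name */
            };
            int ret = drmCommandWriteRead(fd, DRM_MSM_GET_PARAM,
                                          &req, sizeof(req));
            if (ret)
                    return ret;

            *nr_rings = req.value;
            return 0;
    }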
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Add the infrastructure to support the idea of multiple ringbuffers.
Assign each ringbuffer an id and use that as an index for the various
ring specific operations.
The biggest delta is to support legacy fences. Each fence gets its own
sequence number but the legacy functions expect to use a unique integer.
To handle this we return a unique identifier for each submission but
map it to a specific ring/sequence under the covers. Newer users use
a dma_fence pointer anyway so they don't care about the actual sequence
ID or ring.
The actual mechanics for multiple ringbuffers are very target specific
so this code just allows for the possibility but still only defines
one ringbuffer for each target family.
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
When we move to multiple ringbuffers we're going to store the data
in the memptrs on a per-ring basis. In order to prepare for that,
move the current memptrs from the adreno namespace into msm_gpu.
This is way cleaner and immediately lets us kill off some sub-functions,
so there is much less cost later when we do move to
per-ring structs.
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Currently the behavior of a command stream is provided by the user
application during submission, and the application is expected to internally
maintain the settings for each 'context' or 'rendering queue' and specify
the correct ones.
This works okay for simple cases but as applications become more
complex we will want to set context specific flags and do various
permission checks to allow certain contexts to enable additional
privileges.
Add kernel-side submit queues to be analogous to 'contexts' or
'rendering queues' on the application side. Each file descriptor
instance will maintain its own list of queues. Queues cannot be
shared between file descriptors.
For backwards compatibility, context id '0' is defined as a default
context specifying no priority and no special flags. This is
intended to be the usual configuration for 99% of applications so
that a garden variety application can function correctly without
creating a queue. Only those applications requiring the specific
benefit of different queues need to create one.
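Hypothetical sketch of the per-file bookkeeping described above (field
names are illustrative, not the exact kernel structs):

    struct submitqueue_sketch {
            u32 id;                 /* 0 == implicit default queue */
            u32 prio;               /* maps to a ringbuffer priority */
            u32 flags;              /* context-specific flags */
            struct kref ref;
            struct list_head node;  /* on the drm_file's own queue list */
    };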
Signed-off-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
Signed-off-by: Rob Clark <[email protected]>
|
|
Signed-off-by: Rob Clark <[email protected]>
|
|
We already have, as a result of upstreaming the gpu bindings,
msm_clk_get() which will try to get the clock both without and with a
"_clk" suffix. Use this in HDMI code so we can drop the "_clk" suffix
in bindings while maintaining backwards compatibility.
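Rough sketch of the fallback msm_clk_get() performs (simplified from the
real helper):

    struct clk *msm_clk_get_sketch(struct platform_device *pdev,
                                   const char *name)
    {
            struct clk *clk = devm_clk_get(&pdev->dev, name);
            char name2[32];

            if (!IS_ERR(clk))
                    return clk;

            /* fall back to the legacy "_clk"-suffixed binding */
            snprintf(name2, sizeof(name2), "%s_clk", name);
            return devm_clk_get(&pdev->dev, name2);
    }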
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Sean Paul <[email protected]>
|
|
We already have, as a result of upstreaming the gpu bindings,
msm_clk_get() which will try to get the clock both without and with a
"_clk" suffix. Use this in eDP code so we can drop the "_clk" suffix
in bindings while maintaining backwards compatibility.
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Sean Paul <[email protected]>
|
|
We already have, as a result of upstreaming the gpu bindings,
msm_clk_get() which will try to get the clock both without and with a
"_clk" suffix. Use this in DSI code so we can drop the "_clk" suffix
in bindings while maintaining backwards compatibility.
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Sean Paul <[email protected]>
|
|
This is useful to see in the log, without requiring drm.debug.
Signed-off-by: Rob Clark <[email protected]>
|
|
When firmware was added to linux-firmware, it was put in a qcom sub-
directory, unlike what we'd been using before. For a300_pfp.fw and
a300_pm4.fw, symlinks were created, but we'd prefer not to have to do
this in the future. So add support to look in both places when
loading firmware.
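Sketch of the two-path lookup (the wrapper name here is hypothetical):

    static int request_adreno_fw(struct device *dev, const char *fwname,
                                 const struct firmware **fw)
    {
            char newname[64];
            int ret;

            /* new location, as shipped in linux-firmware */
            snprintf(newname, sizeof(newname), "qcom/%s", fwname);
            ret = request_firmware(fw, newname, dev);
            if (!ret)
                    return 0;

            /* legacy location, without the qcom/ prefix */
            return request_firmware(fw, fwname, dev);
    }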
Signed-off-by: Rob Clark <[email protected]>
|
|
Prep work for the next patch.
Signed-off-by: Rob Clark <[email protected]>
|
|
Previously, in an effort to defer initializing the gpu until firmware
was available (i.e. rootfs mounted), the gpu was not loaded when the
subdevice was bound. This resulted in clks/etc being requested in a
place where devm couldn't really help unwind if something failed.
Instead move request_firmware() to gpu->hw_init() and construct the gpu
earlier in adreno_bind(). To avoid the rest of the driver needing to
be aware of a gpu that hasn't managed to load firmware and hw_init()
yet, stash the gpu ptr in the adreno device's drvdata, and don't set
priv->gpu until hw_init() succeeds.
Signed-off-by: Rob Clark <[email protected]>
|
|
This was used as a placeholder. It was never really an input to the
MDSS/HDMI clocks.
Signed-off-by: Archit Taneja <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
We need to call reservation_object_reserve_shared() in both cases, but
this wasn't happening in the _NO_IMPLICIT submit case.
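Rough sketch of the fix, from the per-BO fence sync loop (field names
approximate):

    /* always reserve a slot for our own shared fence: */
    ret = reservation_object_reserve_shared(msm_obj->resv);
    if (ret)
            return ret;

    /* only the implicit-sync case additionally waits on existing fences */
    if (no_implicit)
            continue;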
Fixes: f0a42bb ("drm/msm: submit support for in-fences")
Reported-by: Jordan Crouse <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
|
In systems under heavy load the IH work may experience significant
scheduling delays.
Under load + system workqueue:
Max Latency: 7.023695 ms
Avg Latency: 0.263994 ms
Under load + high priority workqueue:
Max Latency: 1.162568 ms
Avg Latency: 0.163213 ms
Further work is required to measure the impact of per-cpu settings on IH
performance.
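Sketch of the change: a dedicated high-priority workqueue for the IH
bottom half instead of the shared system workqueue (error handling
trimmed):

    kfd->ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI, 1);

    /* from the interrupt handler, instead of schedule_work(): */
    queue_work(kfd->ih_wq, &kfd->interrupt_work);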
Signed-off-by: Andres Rodriguez <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
We don't need to wait for all work to complete in the IH exit function.
We only need to make sure the interrupt_work has finished executing to
guarantee that ih_kfifo is no longer in use.
Signed-off-by: Andres Rodriguez <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
A larger buffer will let us accommodate applications with a large number
of semi-simultaneous event signals.
Signed-off-by: Andres Rodriguez <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
Replace our implementation of a lockless ring buffer with the standard
linux kernel kfifo.
We shouldn't maintain our own version of a standard data structure.
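Minimal sketch of the kfifo-backed ring (sizes and names illustrative):

    ret = kfifo_alloc(&kfd->ih_fifo,
                      KFD_IH_NUM_ENTRIES * ih_ring_entry_size, GFP_KERNEL);

    /* producer (top half): */
    count = kfifo_in(&kfd->ih_fifo, ih_ring_entry, ih_ring_entry_size);

    /* consumer (worker): */
    count = kfifo_out(&kfd->ih_fifo, ih_ring_entry, ih_ring_entry_size);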
Signed-off-by: Andres Rodriguez <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
This allows increasing the KFD_SIGNAL_EVENT_LIMIT in kfd_ioctl.h
without breaking processes built with older kfd_ioctl.h versions.
Signed-off-by: Felix Kuehling <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
This speeds up signal lookup when the IH ring entry includes a
valid context ID or partial context ID. Only if the context ID is
found to be invalid do we fall back to an exhaustive search of all
signaled events.
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
Signal slots are identical to event IDs.
Replace the used_slot_bitmap and events hash table with an IDR to
allocate and lookup event IDs and signal slots more efficiently.
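Sketch of allocating and looking up an event ID (== signal slot) with an
IDR (ranges simplified):

    /* allocation: */
    id = idr_alloc(&p->event_idr, ev, 0, KFD_SIGNAL_EVENT_LIMIT, GFP_KERNEL);
    if (id < 0)
            return id;
    ev->event_id = id;

    /* lookup on signal delivery: */
    ev = idr_find(&p->event_idr, id);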
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
The first event page is always big enough to handle all events.
Handling of multiple events pages is not supported by user mode, and
not necessary.
Signed-off-by: Yong Zhao <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
Use standard wait queues for waiting and waking up waiting threads
instead of inventing our own. We still have our own wait loop
because the HSA event semantics require the ability to have one
thread waiting on multiple wait queues (events) at the same time.
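Simplified sketch of that wait loop (all_signaled() and the waiter/event
fields are stand-ins for the real structures):

    for (i = 0; i < num_events; i++)
            add_wait_queue(&events[i]->wq, &waiters[i].wait);

    while (true) {
            set_current_state(TASK_INTERRUPTIBLE);
            if (all_signaled(waiters, num_events))
                    break;
            if (signal_pending(current))
                    break;
            timeout = schedule_timeout(timeout);
            if (!timeout)
                    break;
    }
    __set_current_state(TASK_RUNNING);

    for (i = 0; i < num_events; i++)
            remove_wait_queue(&events[i]->wq, &waiters[i].wait);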
Signed-off-by: Kent Russell <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
This is always identical to the index of the event_waiter in the array.
No need to store it in the waiter record.
Signed-off-by: Felix Kuehling <[email protected]>
Reviewed-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
When an event with pending waiters is destroyed, those waiters may
end up sleeping forever unless they are notified and woken up.
Implement the notification by clearing the waiter->event pointer,
which becomes invalid anyway when the event is freed, and by waking
up the waiting tasks.
Waiters on an event that's destroyed return failure.
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
Cleaned up the code, resolving some potential bugs and
inconsistencies in the process.
Clean-ups:
* Remove enum kfd_event_wait_result, which duplicates
KFD_IOC_EVENT_RESULT definitions
* alloc_event_waiters can be called without holding p->event_mutex
* Return an error code from copy_signaled_event_data instead of bool
* Clean up error handling code paths to minimize duplication in
kfd_wait_on_events
Fixes:
* Consistently return an error code from kfd_wait_on_events and set
wait_result to KFD_IOC_WAIT_RESULT_FAIL in all failure cases.
* Always call free_waiters while holding p->event_mutex
* copy_signaled_event_data might sleep. Don't call it while the task state
is TASK_INTERRUPTIBLE.
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
Signed-off-by: Sean Keely <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|
|
If kfd_wait_on_events can return immediately, we don't need to populate
the wait list and don't need to enter the sleep-loop.
Signed-off-by: Sean Keely <[email protected]>
Signed-off-by: Felix Kuehling <[email protected]>
Acked-by: Oded Gabbay <[email protected]>
Signed-off-by: Oded Gabbay <[email protected]>
|