Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
---
If no reservation ticket is given to the execbuf reservation utilities,
try reservation with non-blocking semantics.
This is intended for eviction paths that use the execbuf reservation
utilities for convenience rather than for deadlock avoidance.
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jakob Bornecrantz <[email protected]>
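A minimal sketch of these semantics in ww_mutex terms (the helper and its
name are hypothetical, not the actual TTM code; ww_mutex_trylock() with a
context argument is the modern kernel API):

    #include <linux/ww_mutex.h>

    static int reserve_one(struct ww_mutex *lock,
                           struct ww_acquire_ctx *ticket)
    {
            if (ticket)     /* may block; -EDEADLK drives backoff */
                    return ww_mutex_lock(lock, ticket);

            /* no ticket: never block, let eviction skip busy buffers */
            return ww_mutex_trylock(lock, NULL) ? 0 : -EBUSY;
    }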
---
Makes lockdep a lot more useful.
Signed-off-by: Maarten Lankhorst <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
---
Now that the code is compatible in semantics, flip the switch: use ww_mutex
instead of the homegrown implementation. ww_mutex uses -EDEADLK to signal
that the caller has to back off, and -EALREADY to indicate that this buffer
is already held by the caller. TTM used -EAGAIN and -EDEADLK for those,
respectively, so some changes were needed to handle this correctly.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
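A condensed sketch of the new error contract (the caller structure is
invented; only the error values come from the commit):

    #include <linux/ww_mutex.h>

    static int reserve(struct ww_mutex *lock, struct ww_acquire_ctx *ticket)
    {
            int ret = ww_mutex_lock(lock, ticket);

            if (ret == -EDEADLK) {
                    /* old TTM returned -EAGAIN here: we lost the
                     * ordering, drop all reservations and back off */
            } else if (ret == -EALREADY) {
                    /* old TTM returned -EDEADLK here: this caller
                     * already holds the buffer */
            }
            return ret;
    }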
---
This commit converts the source of the val_seq counter to
the ww_mutex API. The reservation objects are converted later,
because there is still a lockdep splat in nouveau that has to
be resolved first.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
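A sketch of what converting the counter's source means in practice (the
class name is a local stand-in; ww_acquire_init() really does hand out a
globally ordered stamp):

    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(reservation_class);  /* stand-in name */

    static void begin_validation(struct ww_acquire_ctx *ticket)
    {
            /* old: val_seq = atomic_inc_return(&bdev->val_seq); */
            ww_acquire_init(ticket, &reservation_class);
            /* ticket->stamp now plays the role of val_seq */
    }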
---
This requires reuse of the seqno. Instead of spinning with a new
seqno every time, we keep the current one but still drop all other
reservations we hold. Only when we succeed do we try to take back
our other reservations. This should increase fairness slightly.
Changes since v1:
- Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and
retrying to take all entries to prevent a race.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
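This is the pattern the ww_mutex conversion above later formalizes; sketched
here in those terms (lock_all() and unlock_all() are hypothetical stand-ins
for the list walk; ww_mutex_lock_slow() is the real slowpath primitive):

    #include <linux/ww_mutex.h>

    static int reserve_list(struct list_head *bos,
                            struct ww_acquire_ctx *ticket)
    {
            struct ww_mutex *contended = NULL;
            int ret;

    retry:
            ret = lock_all(bos, ticket, &contended);  /* hypothetical */
            if (ret == -EDEADLK) {
                    unlock_all(bos, ticket);  /* drop what we hold */
                    /* sleep on the contended lock with our old stamp:
                     * this is what makes the retry fair */
                    ww_mutex_lock_slow(contended, ticket);
                    goto retry;  /* lock_all() skips the held one */
            }
            return ret;
    }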
---
With the lru lock no longer required for protecting reservations, we
can just do a ttm_bo_reserve_nolru on -EBUSY and handle all errors
in a single path.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
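A condensed sketch of the simplified control flow (reserve_nolru() is a
stand-in for ttm_bo_reserve_nolru(), with its exact argument list omitted):

    static int reserve_entry(struct ttm_buffer_object *bo, u32 seq)
    {
            int ret = reserve_nolru(bo, seq, NO_WAIT);  /* stand-in */

            if (ret == -EBUSY)  /* contended: just block; the lru
                                 * lock is not needed for this */
                    ret = reserve_nolru(bo, seq, WAIT);

            return ret;  /* every failure exits through one path */
    }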
---
There should no longer be assumptions that reserve will always succeed
with the lru lock held, so we can safely break the whole atomic
reserve/lru thing. As a bonus this fixes most lockdep annotations for
reservations.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
---
This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race-free way to
take the lru lock before the fence_lock.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
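The resulting lock order, sketched (glob->lru_lock and bdev->fence_lock are
real TTM members of this era; the reserve step and locals are condensed):

    static void cleanup_refs_or_queue(struct ttm_buffer_object *bo)
    {
            /* 1. reserve first: otherwise lru_lock -> fence_lock
             * cannot be taken race-free */
            if (!reserve_trylock(bo))  /* hypothetical stand-in */
                    return;            /* (queueing path omitted) */

            spin_lock(&glob->lru_lock);    /* 2. */
            spin_lock(&bdev->fence_lock);  /* 3. */
            /* ... test the sync object, drop lru references ... */
            spin_unlock(&bdev->fence_lock);
            spin_unlock(&glob->lru_lock);
    }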
---
This is similar to other platforms that don't allow command submission
to buffers locked on the CPU.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
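TTM of this period marks buffers mapped for CPU access with an atomic
cpu_writers count; the check this commit adds to the execbuf reserve path
looks roughly like this (surrounding code condensed):

    if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
            ret = -EBUSY;  /* locked on the cpu: refuse submission */
            goto out_err;  /* back off reservations taken so far */
    }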
---
vmwgfx was its only user and always set it to the same value.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
---
Convert #include "..." to #include <path/...> in drivers/gpu/.
Signed-off-by: David Howells <[email protected]>
Acked-by: Dave Airlie <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Acked-by: Dave Jones <[email protected]>
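For this file the conversion looks like the following (illustrative; header
names follow TTM's layout of the time):

    /* before */
    #include "ttm/ttm_execbuf_util.h"
    /* after */
    #include <drm/ttm/ttm_execbuf_util.h>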
---
Rather than having the driver supply the validation sequence, leave that
responsibility to TTM. This saves some confusion and a function argument.
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
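A sketch of the interface change (prototypes paraphrased; my_driver_next_seq()
is hypothetical; check the tree for the exact signatures):

    /* before: every driver rolled its own counter */
    ret = ttm_eu_reserve_buffers(&val_list, my_driver_next_seq());

    /* after: TTM owns the counter */
    ret = ttm_eu_reserve_buffers(&val_list);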
---
Drastically reduce the number of spin lock / unlock operations by performing
unreserving and fencing under global locks.
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
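A condensed sketch of the batched tail of validation, modelled on
ttm_eu_fence_buffer_objects() of this era (locals and the driver sync-object
handling are simplified):

    spin_lock(&glob->lru_lock);
    spin_lock(&bdev->fence_lock);
    list_for_each_entry(entry, list, head) {
            struct ttm_buffer_object *bo = entry->bo;

            entry->old_sync_obj = bo->sync_obj;  /* unref outside locks */
            bo->sync_obj = driver->sync_obj_ref(sync_obj);
            ttm_bo_unreserve_locked(bo);  /* unreserve under lru_lock */
    }
    spin_unlock(&bdev->fence_lock);
    spin_unlock(&glob->lru_lock);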
---
The bo lock was used only to protect the bo sync object members, and since
it is a per-bo lock, fencing a buffer list meant a lot of locks and unlocks.
Replace it with a per-device lock that protects the sync object members on
*all* bos. Reading and setting these members will always be very quick, so
the risk of heavy lock contention is microscopic. Note that waiting for
sync objects will always take place outside of this lock.
The bo device fence lock will eventually be replaced with a seqlock /
rcu mechanism so we can determine that a bo is idle under an
rcu read lock / read seqlock.
However, this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
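Structurally the change amounts to moving one lock (member names simplified
from TTM of this era):

    struct ttm_buffer_object {
            /* spinlock_t lock;     removed: only guarded sync_obj */
            void *sync_obj;         /* now under bdev->fence_lock */
            /* ... */
    };

    struct ttm_bo_device {
            spinlock_t fence_lock;  /* guards sync_obj of ALL bos */
            /* ... */
    };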
---
Avoid the ttm_bo_unreserve() spinlocks by calling
ttm_eu_backoff_reservation_locked under the lru spinlock.
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
---
Makes it possible to reserve a list of buffer objects with a single
spin lock / unlock if there is no contention.
Should improve CPU usage on SMP kernels.
v2: Initialize private list members on reserve and don't call
ttm_bo_list_ref_sub() with zero put_count.
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
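The uncontended fast path then looks roughly like this (trylock_reserve()
and backoff_locked() are hypothetical stand-ins):

    static int reserve_list_fastpath(struct list_head *list)
    {
            struct ttm_validate_buffer *entry;

            spin_lock(&glob->lru_lock);
            list_for_each_entry(entry, list, head) {
                    if (!trylock_reserve(entry->bo)) {
                            backoff_locked(list, entry);
                            spin_unlock(&glob->lru_lock);
                            return -EBUSY;  /* caller waits, retries */
                    }
            }
            spin_unlock(&glob->lru_lock);  /* one lock/unlock pair */
            return 0;
    }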
---
Utilities to reserve, unreserve and fence a list of TTM
buffer objects in a deadlock-safe manner.
Used by the vmwgfx driver.
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
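A sketch of how a driver consumes these helpers (the three ttm_eu_* calls
are the entry points this commit adds; emit_commands() is hypothetical, the
fence type is simplified, and the validation-sequence argument reflects the
original interface, which a later commit above moves into TTM):

    static int submit(struct list_head *val_list, void *fence, u32 val_seq)
    {
            int ret;

            ret = ttm_eu_reserve_buffers(val_list, val_seq);
            if (ret)
                    return ret;  /* deadlock-safe: nothing held */

            ret = emit_commands();  /* hypothetical */
            if (ret) {
                    ttm_eu_backoff_reservation(val_list);  /* undo */
                    return ret;
            }

            /* hand the sync object to every bo and unreserve them */
            ttm_eu_fence_buffer_objects(val_list, fence);
            return 0;
    }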