This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race-free way to
take the lru lock before the fence_lock.
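
A minimal standalone sketch of the resulting ordering, with pthread
mutexes standing in for the bo reservation and the lru spinlock (the
names and bodies here are illustrative, not the driver's actual code):

    #include <pthread.h>
    #include <stdbool.h>

    struct bo {
    	pthread_mutex_t reserve;        /* stands in for the bo reservation */
    };

    static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Reservation first, lru lock second; the reverse order is what
     * this change removes, since there is no race-free way to take the
     * lru lock first and then deal with the fence under it. */
    static bool cleanup_refs_or_queue(struct bo *bo)
    {
    	if (pthread_mutex_trylock(&bo->reserve) != 0)
    		return false;           /* contended: queue for later cleanup */

    	pthread_mutex_lock(&lru_lock);
    	/* ... take the bo off the lru and drop its references ... */
    	pthread_mutex_unlock(&lru_lock);
    	pthread_mutex_unlock(&bo->reserve);
    	return true;
    }
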
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
This is similar to other platforms that don't allow command submission
to buffers locked on the cpu.
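
A sketch of the check this implies, assuming a per-bo count of CPU
lockers along the lines of TTM's cpu_writers; the field and function
names here are assumptions, not the commit's actual code:

    #include <errno.h>

    struct bo {
    	int cpu_writers;                /* > 0 while the bo is locked on the cpu */
    };

    /* Command-submission validation step: refuse buffers that user
     * space currently has locked for cpu access. */
    static int validate_for_submission(struct bo *bo)
    {
    	if (bo->cpu_writers > 0)
    		return -EBUSY;
    	return 0;
    }
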
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
vmwgfx was its only user and always set it to the same value.
Signed-off-by: Maarten Lankhorst <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Convert #include "..." to #include <path/...> in drivers/gpu/.
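
For example, with drm/drmP.h as a representative header:

    /* before */
    #include "drm/drmP.h"

    /* after */
    #include <drm/drmP.h>
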
Signed-off-by: David Howells <[email protected]>
Acked-by: Dave Airlie <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Acked-by: Dave Jones <[email protected]>
Rather than having the driver supply the validation sequence, leave that
responsibility to TTM. This saves some confusion and a function argument.
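
A sketch of the interface change; the name mirrors the ttm_execbuf_util
helper, but treat the exact signatures as assumptions:

    struct list_head;

    /* before: the driver picked the sequence and passed it in:
     *   int ttm_eu_reserve_buffers(struct list_head *list, uint32_t val_seq);
     * after: TTM owns the counter and advances it internally: */
    int ttm_eu_reserve_buffers(struct list_head *list);
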
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Drastically reduce the number of spin lock / unlock operations by performing
unreserving and fencing under global locks.
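
The shape of the change, sketched standalone with pthread mutexes
standing in for the global lru and fence spinlocks (illustrative only):

    #include <pthread.h>

    struct bo {
    	struct bo *next;
    	void *sync_obj;                 /* fence, protected by fence_lock */
    };

    static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t fence_lock = PTHREAD_MUTEX_INITIALIZER;

    /* One lock/unlock pair per list instead of one per buffer: fence
     * and unreserve every bo while the global locks are already held. */
    static void fence_and_unreserve_list(struct bo *head, void *fence)
    {
    	pthread_mutex_lock(&lru_lock);
    	pthread_mutex_lock(&fence_lock);
    	for (struct bo *bo = head; bo; bo = bo->next) {
    		bo->sync_obj = fence;
    		/* ... unreserve bo and put it back on the lru ... */
    	}
    	pthread_mutex_unlock(&fence_lock);
    	pthread_mutex_unlock(&lru_lock);
    }
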
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
The bo lock was used only to protect the bo sync object members, and since
it is a per-bo lock, fencing a buffer list incurs a lot of lock and unlock
operations. Replace it with a per-device lock that protects the sync object
members on *all* bos. Reading and setting these members will always be very
quick, so the risk of heavy lock contention is microscopic. Note that waiting
for sync objects will always take place outside of this lock.
The bo device fence lock will eventually be replaced with a seqlock / rcu
mechanism so we can determine that a bo is idle under an rcu read seqlock.
However, this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.
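
The move this describes, sketched standalone (pthread mutexes in place
of spinlocks; struct and field names are illustrative):

    #include <pthread.h>

    /* before: every bo carried its own lock for its sync object */
    struct bo_before {
    	pthread_mutex_t lock;
    	void *sync_obj;                 /* protected by the per-bo lock */
    };

    /* after: one per-device lock covers the sync object members of all bos */
    struct bo_device {
    	pthread_mutex_t fence_lock;
    };

    struct bo_after {
    	void *sync_obj;                 /* protected by the device fence_lock */
    };
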
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Avoid the ttm_bo_unreserve() spinlocks by calling
ttm_eu_backoff_reservation_locked under the lru spinlock.
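
An illustrative kernel-style fragment of the call pattern (not
standalone; ttm_eu_backoff_reservation_locked and the lru spinlock are
named in the commit, the other identifiers are assumptions):

    /* before: each ttm_bo_unreserve() takes and drops the lru lock itself */
    list_for_each_entry(entry, list, head)
    	ttm_bo_unreserve(entry->bo);

    /* after: one lru lock round trip for the whole list */
    spin_lock(&glob->lru_lock);
    ttm_eu_backoff_reservation_locked(list);
    spin_unlock(&glob->lru_lock);
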
Signed-off-by: Thomas Hellstrom <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Makes it possible to reserve a list of buffer objects with a single
spin lock / unlock if there is no contention.
Should improve cpu usage on SMP kernels.
v2: Initialize private list members on reserve and don't call
ttm_bo_list_ref_sub() with zero put_count.
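
A standalone sketch of the fast path and the contended backoff, with
pthread mutexes standing in for the reservations and the lru spinlock
(structure and names are illustrative, not the patch's code):

    #include <pthread.h>

    struct bo {
    	struct bo *next;
    	pthread_mutex_t reserve;
    };

    static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Fast path: every reservation is a trylock under a single lru_lock
     * acquisition. Slow path: on contention, release everything, wait
     * for the contended buffer outside the lock, then retry from
     * scratch, which is what keeps the scheme deadlock-free. */
    static void reserve_list(struct bo *head)
    {
    retry:
    	pthread_mutex_lock(&lru_lock);
    	for (struct bo *bo = head; bo; bo = bo->next) {
    		if (pthread_mutex_trylock(&bo->reserve) == 0)
    			continue;

    		/* back off: drop everything reserved so far */
    		for (struct bo *b = head; b != bo; b = b->next)
    			pthread_mutex_unlock(&b->reserve);
    		pthread_mutex_unlock(&lru_lock);

    		/* block until the holder releases, then start over */
    		pthread_mutex_lock(&bo->reserve);
    		pthread_mutex_unlock(&bo->reserve);
    		goto retry;
    	}
    	pthread_mutex_unlock(&lru_lock);
    }
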
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Utilities to reserve, unreserve and fence a list of TTM
buffer objects in a deadlock-safe manner.
Used by the vmwgfx driver.
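
A usage sketch of the flow these utilities enable; the entry-point
names mirror ttm_execbuf_util, but treat the exact signatures and the
driver-side submit hook as assumptions:

    struct list_head;                   /* kernel list type */

    int ttm_eu_reserve_buffers(struct list_head *list);
    void ttm_eu_backoff_reservation(struct list_head *list);
    void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj);

    int submit_commands(void);          /* hypothetical driver hook */

    /* Reserve the whole validation list, submit, then fence and
     * unreserve; on any failure every reservation is backed off, so
     * nothing is leaked and no deadlock is possible. */
    static int execbuf(struct list_head *validate_list, void *fence)
    {
    	int ret = ttm_eu_reserve_buffers(validate_list);
    	if (ret)
    		return ret;

    	ret = submit_commands();
    	if (ret) {
    		ttm_eu_backoff_reservation(validate_list);
    		return ret;
    	}

    	ttm_eu_fence_buffer_objects(validate_list, fence);
    	return 0;
    }
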
Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>