author    | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-02 15:04:15 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-02 15:04:15 -0700
commit    | faa392181a0bd42c5478175cef601adeecdc91b6
tree      | e020e1142e34786676d0cd40f539bccdbb66099e /drivers/gpu/drm/i915/gem/selftests
parent    | cfa3b8068b09f25037146bfd5eed041b78878bee
parent    | 9ca1f474cea0edc14a1d7ec933e5472c0ff115d3
Merge tag 'drm-next-2020-06-02' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"Highlights:
- Core DRM had a lot of refactoring around managed drm resources to
make drivers simpler.
- Intel Tigerlake support is on by default
- amdgpu now supports p2p PCI buffer sharing and encrypted GPU memory
Details:
core:
- uapi: error out with EBUSY when a master already exists
- uapi: rework SET/DROP MASTER permission handling
- remove drm_pci.h
- drm_pci* are now legacy
- introduced managed DRM resources
- subclassing support for drm_framebuffer
- simple encoder helper
- edid improvements
- vblank + writeback documentation improved
- drm/mm - optimise tree searches
- port drivers to use devm_drm_dev_alloc (a usage sketch follows this list)
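
The managed-resources item above changes the canonical driver setup pattern. A minimal sketch, assuming a PCI-backed driver; my_device, my_driver and my_probe are illustrative names rather than code from this pull:

#include <linux/pci.h>
#include <drm/drm_drv.h>

struct my_device {
	struct drm_device drm;	/* the embedded member named in the alloc below */
	/* driver-private state would follow */
};

static struct drm_driver my_driver;	/* fops/ops elided for brevity */

static int my_probe(struct pci_dev *pdev)
{
	struct my_device *priv;

	/*
	 * Allocate my_device with the embedded drm_device already
	 * initialised; the memory is released on the final drm_dev_put(),
	 * so error and remove paths need no explicit free.
	 */
	priv = devm_drm_dev_alloc(&pdev->dev, &my_driver,
				  struct my_device, drm);
	if (IS_ERR(priv))
		return PTR_ERR(priv);

	return drm_dev_register(&priv->drm, 0);
}
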
dma-buf:
- add flag for p2p buffer support (a usage sketch follows below)
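
For the flag above: a dynamic importer advertises that it can cope with peer-to-peer PCI addresses by setting allow_peer2peer in its attach ops. A hedged sketch; my_attach_ops and my_move_notify are illustrative names:

#include <linux/dma-buf.h>

static void my_move_notify(struct dma_buf_attachment *attach)
{
	/* drop cached mappings; the exporter may relocate the buffer */
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,	/* sg_table may carry bus addresses of the exporting device */
	.move_notify = my_move_notify,
};

/* usage: attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, importer_priv); */
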
mst:
- ACT timeout improvements
- remove drm_dp_mst_has_audio
- don't use 2nd TX slot - spec recommends against it
bridge:
- dw-hdmi various improvements
- chrontel ch7033 support
- fix stack issues with old gcc
hdmi:
- add unpack function for drm infoframe
fbdev:
- misc fbdev driver fixes
i915:
- uapi: global sseu pinning
- uapi: OA buffer polling
- uapi: remove generated perf code
- uapi: per-engine default property values in sysfs
- Tigerlake GEN12 enabled.
- Lots of gem refactoring
- Tigerlake enablement patches
- move to drm_device logging
- Icelake gamma HW readout
- push MST link retrain to hotplug work
- bandwidth atomic helpers
- ICL fixes
- RPS/GT refactoring
- Cherryview full-ppgtt support
- i915 locking guidelines documented
- require linear fb stride to be a multiple of 512 on gen9
- Tigerlake SAGV support
amdgpu:
- uapi: encrypted GPU memory handling (a usage sketch follows this list)
- uapi: add MEM_SYNC IB flag
- p2p dma-buf support
- export VRAM dma-bufs
- FRU chip access support
- RAS/SR-IOV updates
- Powerplay locking fixes
- VCN DPG (powergating) enablement
- GFX10 clockgating fixes
- DC fixes
- GPU reset fixes
- navi SDMA fix
- expose FP16 for modesetting
- DP 1.4 compliance fixes
- gfx10 soft recovery
- Improved Critical Thermal Faults handling
- resizable BAR on gmc10
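
For the encrypted-memory uapi noted above, userspace requests a TMZ-protected buffer by passing the new AMDGPU_GEM_CREATE_ENCRYPTED flag at GEM creation. A hedged sketch using libdrm's amdgpu wrapper; alloc_encrypted_bo is an illustrative helper and error handling is trimmed:

#include <amdgpu.h>
#include <amdgpu_drm.h>

static int alloc_encrypted_bo(amdgpu_device_handle dev, amdgpu_bo_handle *bo)
{
	struct amdgpu_bo_alloc_request req = {
		.alloc_size = 4096,
		.phys_alignment = 4096,
		.preferred_heap = AMDGPU_GEM_DOMAIN_VRAM,
		.flags = AMDGPU_GEM_CREATE_ENCRYPTED,	/* contents protected by TMZ */
	};

	return amdgpu_bo_alloc(dev, &req, bo);
}
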
amdkfd:
- uapi: GWS resource management
- track GPU memory per process
- report PCI domain in topology
radeon:
- safe reg list generator fixes
nouveau:
- HD audio fixes on recent systems
- vGPU detection (fail probe if we're on one, for now)
- Interlaced mode fixes (mostly avoidance on Turing, which doesn't support it)
- SVM improvements/fixes
- NVIDIA format modifier support
- Misc other fixes.
adv7511:
- HDMI SPDIF support
ast:
- allocate crtc state size
- fix double assignment
- fix suspend
bochs:
- drop connector register
cirrus:
- move to tiny drivers.
exynos:
- fix imported dma-buf mapping
- enable runtime PM
- fixes and cleanups
mediatek:
- DPI pin mode swap
- config mipi_tx current/impedance
lima:
- devfreq + cooling device support
- task handling improvements
- runtime PM support
pl111:
- vexpress init improvements
- fix module auto-load
rcar-du:
- DT bindings conversion to YAML
- Planes zpos sanity check and fix
- MAINTAINERS entry for LVDS panel driver
mcde:
- fix return value
mgag200:
- use managed config init
stm:
- read endpoints from DT
vboxvideo:
- use PCI managed functions
- drop WC mtrr
vkms:
- enable cursor by default
rockchip:
- afbc support
virtio:
- various cleanups
qxl:
- fix cursor notify port
hisilicon:
- 128-byte stride alignment fix
sun4i:
- improved format handling"
* tag 'drm-next-2020-06-02' of git://anongit.freedesktop.org/drm/drm: (1401 commits)
drm/amd/display: Fix potential integer wraparound resulting in a hang
drm/amd/display: drop cursor position check in atomic test
drm/amdgpu: fix device attribute node create failed with multi gpu
drm/nouveau: use correct conflicting framebuffer API
drm/vblank: Fix -Wformat compile warnings on some arches
drm/amdgpu: Sync with VM root BO when switching VM to CPU update mode
drm/amd/display: Handle GPU reset for DC block
drm/amdgpu: add apu flags (v2)
drm/amd/powerpay: Disable gfxoff when setting manual mode on picasso and raven
drm/amdgpu: fix pm sysfs node handling (v2)
drm/amdgpu: move gpu_info parsing after common early init
drm/amdgpu: move discovery gfx config fetching
drm/nouveau/dispnv50: fix runtime pm imbalance on error
drm/nouveau: fix runtime pm imbalance on error
drm/nouveau: fix runtime pm imbalance on error
drm/nouveau/debugfs: fix runtime pm imbalance on error
drm/nouveau/nouveau/hmm: fix migrate zero page to GPU
drm/nouveau/nouveau/hmm: fix nouveau_dmem_chunk allocations
drm/nouveau/kms/nv50-: Share DP SST mode_valid() handling with MST
drm/nouveau/kms/nv50-: Move 8BPC limit for MST into nv50_mstc_get_modes()
...
Diffstat (limited to 'drivers/gpu/drm/i915/gem/selftests')
9 files changed, 929 insertions(+), 104 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index fa16f2c3f3ac..2b46c6530da9 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -88,8 +88,7 @@ static void huge_put_pages(struct drm_i915_gem_object *obj,
 }
 
 static const struct drm_i915_gem_object_ops huge_ops = {
-	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
-		 I915_GEM_OBJECT_IS_SHRINKABLE,
+	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
 	.get_pages = huge_get_pages,
 	.put_pages = huge_put_pages,
 };
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index d4f94ca9ae0d..c9988b6d5c88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -421,7 +421,7 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
 
 		err = i915_vma_pin(vma, 0, 0, PIN_USER);
 		if (err)
-			goto out_close;
+			goto out_put;
 
 		err = igt_check_page_sizes(vma);
 
@@ -432,8 +432,6 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
 		}
 
 		i915_vma_unpin(vma);
-		i915_vma_close(vma);
-
 		i915_gem_object_put(obj);
 
 		if (err)
@@ -443,8 +441,6 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
 
 	goto out_device;
 
-out_close:
-	i915_vma_close(vma);
out_put:
 	i915_gem_object_put(obj);
 out_device:
@@ -492,7 +488,7 @@ static int igt_mock_memory_region_huge_pages(void *arg)
 
 		err = i915_vma_pin(vma, 0, 0, PIN_USER);
 		if (err)
-			goto out_close;
+			goto out_put;
 
 		err = igt_check_page_sizes(vma);
 		if (err)
@@ -515,8 +511,6 @@ static int igt_mock_memory_region_huge_pages(void *arg)
 		}
 
 		i915_vma_unpin(vma);
-		i915_vma_close(vma);
-
 		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 	}
@@ -526,8 +520,6 @@ static int igt_mock_memory_region_huge_pages(void *arg)
 
 out_unpin:
 	i915_vma_unpin(vma);
-out_close:
-	i915_vma_close(vma);
 out_put:
 	i915_gem_object_put(obj);
 out_region:
@@ -587,10 +579,8 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 		}
 
 		err = i915_vma_pin(vma, 0, 0, flags);
-		if (err) {
-			i915_vma_close(vma);
+		if (err)
 			goto out_unpin;
-		}
 
 		err = igt_check_page_sizes(vma);
 
@@ -603,10 +593,8 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 
 		i915_vma_unpin(vma);
-		if (err) {
-			i915_vma_close(vma);
+		if (err)
 			goto out_unpin;
-		}
 
 		/*
 		 * Try all the other valid offsets until the next
@@ -615,16 +603,12 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 		 */
 		for (offset = 4096; offset < page_size; offset += 4096) {
 			err = i915_vma_unbind(vma);
-			if (err) {
-				i915_vma_close(vma);
+			if (err)
 				goto out_unpin;
-			}
 
 			err = i915_vma_pin(vma, 0, 0, flags | offset);
-			if (err) {
-				i915_vma_close(vma);
+			if (err)
 				goto out_unpin;
-			}
 
 			err = igt_check_page_sizes(vma);
 
@@ -636,10 +620,8 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 
 			i915_vma_unpin(vma);
-			if (err) {
-				i915_vma_close(vma);
+			if (err)
 				goto out_unpin;
-			}
 
 			if (igt_timeout(end_time,
 					"%s timed out at offset %x with page-size %x\n",
@@ -647,8 +629,6 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 				break;
 		}
 
-		i915_vma_close(vma);
-
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
@@ -670,12 +650,6 @@ static void close_object_list(struct list_head *objects,
 	struct drm_i915_gem_object *obj, *on;
 
 	list_for_each_entry_safe(obj, on, objects, st_link) {
-		struct i915_vma *vma;
-
-		vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
-		if (!IS_ERR(vma))
-			i915_vma_close(vma);
-
 		list_del(&obj->st_link);
 		i915_gem_object_unpin_pages(obj);
 		__i915_gem_object_put_pages(obj);
@@ -912,7 +886,7 @@ static int igt_mock_ppgtt_64K(void *arg)
 
 			err = i915_vma_pin(vma, 0, 0, flags);
 			if (err)
-				goto out_vma_close;
+				goto out_object_unpin;
 
 			err = igt_check_page_sizes(vma);
 			if (err)
@@ -945,8 +919,6 @@ static int igt_mock_ppgtt_64K(void *arg)
 			}
 
 			i915_vma_unpin(vma);
-			i915_vma_close(vma);
-
 			i915_gem_object_unpin_pages(obj);
 			__i915_gem_object_put_pages(obj);
 			i915_gem_object_put(obj);
@@ -957,8 +929,6 @@ static int igt_mock_ppgtt_64K(void *arg)
 
 out_vma_unpin:
 	i915_vma_unpin(vma);
-out_vma_close:
-	i915_vma_close(vma);
 out_object_unpin:
 	i915_gem_object_unpin_pages(obj);
 out_object_put:
@@ -1070,7 +1040,7 @@ static int __igt_write_huge(struct intel_context *ce,
 
 	err = i915_vma_unbind(vma);
 	if (err)
-		goto out_vma_close;
+		return err;
 
 	err = i915_vma_pin(vma, size, 0, flags | offset);
 	if (err) {
@@ -1081,7 +1051,7 @@ static int __igt_write_huge(struct intel_context *ce,
 		if (err == -ENOSPC && i915_is_ggtt(ce->vm))
 			err = 0;
 
-		goto out_vma_close;
+		return err;
 	}
 
 	err = igt_check_page_sizes(vma);
@@ -1102,8 +1072,6 @@ static int __igt_write_huge(struct intel_context *ce,
 
 out_vma_unpin:
 	i915_vma_unpin(vma);
-out_vma_close:
-	__i915_vma_put(vma);
 
 	return err;
 }
@@ -1490,7 +1458,7 @@ static int igt_ppgtt_pin_update(void *arg)
 
 		err = i915_vma_pin(vma, SZ_2M, 0, flags);
 		if (err)
-			goto out_close;
+			goto out_put;
 
 		if (vma->page_sizes.sg < page_size) {
 			pr_info("Unable to allocate page-size %x, finishing test early\n",
@@ -1527,8 +1495,6 @@ static int igt_ppgtt_pin_update(void *arg)
 			goto out_unpin;
 
 		i915_vma_unpin(vma);
-		i915_vma_close(vma);
-
 		i915_gem_object_put(obj);
 	}
 
@@ -1546,7 +1512,7 @@ static int igt_ppgtt_pin_update(void *arg)
 
 	err = i915_vma_pin(vma, 0, 0, flags);
 	if (err)
-		goto out_close;
+		goto out_put;
 
 	/*
 	 * Make sure we don't end up with something like where the pde is still
@@ -1576,8 +1542,6 @@ static int igt_ppgtt_pin_update(void *arg)
 
 out_unpin:
 	i915_vma_unpin(vma);
-out_close:
-	i915_vma_close(vma);
 out_put:
 	i915_gem_object_put(obj);
 out_vm:
@@ -1629,13 +1593,11 @@ static int igt_tmpfs_fallback(void *arg)
 
 	err = i915_vma_pin(vma, 0, 0, PIN_USER);
 	if (err)
-		goto out_close;
+		goto out_put;
 
 	err = igt_check_page_sizes(vma);
 
 	i915_vma_unpin(vma);
-out_close:
-	i915_vma_close(vma);
 out_put:
 	i915_gem_object_put(obj);
 out_restore:
@@ -1682,7 +1644,7 @@ static int igt_shrink_thp(void *arg)
 
 	err = i915_vma_pin(vma, 0, 0, flags);
 	if (err)
-		goto out_close;
+		goto out_put;
 
 	if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_2M) {
 		pr_info("failed to allocate THP, finishing test early\n");
@@ -1706,7 +1668,7 @@ static int igt_shrink_thp(void *arg)
 	i915_gem_context_unlock_engines(ctx);
 	i915_vma_unpin(vma);
 	if (err)
-		goto out_close;
+		goto out_put;
 
 	/*
 	 * Now that the pages are *unpinned* shrink-all should invoke
@@ -1716,18 +1678,18 @@ static int igt_shrink_thp(void *arg)
 	if (i915_gem_object_has_pages(obj)) {
 		pr_err("shrink-all didn't truncate the pages\n");
 		err = -EINVAL;
-		goto out_close;
+		goto out_put;
 	}
 
 	if (obj->mm.page_sizes.sg || obj->mm.page_sizes.phys) {
 		pr_err("residual page-size bits left\n");
 		err = -EINVAL;
-		goto out_close;
+		goto out_put;
 	}
 
 	err = i915_vma_pin(vma, 0, 0, flags);
 	if (err)
-		goto out_close;
+		goto out_put;
 
 	while (n--) {
 		err = cpu_check(obj, n, 0xdeadbeaf);
@@ -1737,8 +1699,6 @@ static int igt_shrink_thp(void *arg)
 
 out_unpin:
 	i915_vma_unpin(vma);
-out_close:
-	i915_vma_close(vma);
 out_put:
 	i915_gem_object_put(obj);
 out_vm:
@@ -1777,21 +1737,20 @@ int i915_gem_huge_page_mock_selftests(void)
 	if (!i915_vm_is_4lvl(&ppgtt->vm)) {
 		pr_err("failed to create 48b PPGTT\n");
 		err = -EINVAL;
-		goto out_close;
+		goto out_put;
 	}
 
 	/* If we were ever hit this then it's time to mock the 64K scratch */
 	if (!i915_vm_has_scratch_64K(&ppgtt->vm)) {
 		pr_err("PPGTT missing 64K scratch page\n");
 		err = -EINVAL;
-		goto out_close;
+		goto out_put;
 	}
 
 	err = i915_subtests(tests, ppgtt);
 
-out_close:
+out_put:
 	i915_vm_put(&ppgtt->vm);
-
 out_unlock:
 	drm_dev_put(&dev_priv->drm);
 	return err;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index b972be165e85..8fe3ad2ee34e 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -7,9 +7,12 @@
 
 #include "gt/intel_engine_user.h"
 #include "gt/intel_gt.h"
+#include "gt/intel_gpu_commands.h"
+#include "gem/i915_gem_lmem.h"
 
 #include "selftests/igt_flush_test.h"
 #include "selftests/mock_drm.h"
+#include "selftests/i915_random.h"
 #include "huge_gem_object.h"
 #include "mock_context.h"
@@ -127,10 +130,573 @@ static int igt_client_fill(void *arg)
 	} while (1);
 }
 
+#define WIDTH 512
+#define HEIGHT 32
+
+struct blit_buffer {
+	struct i915_vma *vma;
+	u32 start_val;
+	u32 tiling;
+};
+
+struct tiled_blits {
+	struct intel_context *ce;
+	struct blit_buffer buffers[3];
+	struct blit_buffer scratch;
+	struct i915_vma *batch;
+	u64 hole;
+	u32 width;
+	u32 height;
+};
+
+static int prepare_blit(const struct tiled_blits *t,
+			struct blit_buffer *dst,
+			struct blit_buffer *src,
+			struct drm_i915_gem_object *batch)
+{
+	const int gen = INTEL_GEN(to_i915(batch->base.dev));
+	bool use_64b_reloc = gen >= 8;
+	u32 src_pitch, dst_pitch;
+	u32 cmd, *cs;
+
+	cs = i915_gem_object_pin_map(batch, I915_MAP_WC);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(1);
+	*cs++ = i915_mmio_reg_offset(BCS_SWCTRL);
+	cmd = (BCS_SRC_Y | BCS_DST_Y) << 16;
+	if (src->tiling == I915_TILING_Y)
+		cmd |= BCS_SRC_Y;
+	if (dst->tiling == I915_TILING_Y)
+		cmd |= BCS_DST_Y;
+	*cs++ = cmd;
+
+	cmd = MI_FLUSH_DW;
+	if (gen >= 8)
+		cmd++;
+	*cs++ = cmd;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+
+	cmd = XY_SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (8 - 2);
+	if (gen >= 8)
+		cmd += 2;
+
+	src_pitch = t->width * 4;
+	if (src->tiling) {
+		cmd |= XY_SRC_COPY_BLT_SRC_TILED;
+		src_pitch /= 4;
+	}
+
+	dst_pitch = t->width * 4;
+	if (dst->tiling) {
+		cmd |= XY_SRC_COPY_BLT_DST_TILED;
+		dst_pitch /= 4;
+	}
+
+	*cs++ = cmd;
+	*cs++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | dst_pitch;
+	*cs++ = 0;
+	*cs++ = t->height << 16 | t->width;
+	*cs++ = lower_32_bits(dst->vma->node.start);
+	if (use_64b_reloc)
+		*cs++ = upper_32_bits(dst->vma->node.start);
+	*cs++ = 0;
+	*cs++ = src_pitch;
+	*cs++ = lower_32_bits(src->vma->node.start);
+	if (use_64b_reloc)
+		*cs++ = upper_32_bits(src->vma->node.start);
+
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	i915_gem_object_flush_map(batch);
+	i915_gem_object_unpin_map(batch);
+
+	return 0;
+}
+
+static void tiled_blits_destroy_buffers(struct tiled_blits *t)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(t->buffers); i++)
+		i915_vma_put(t->buffers[i].vma);
+
+	i915_vma_put(t->scratch.vma);
+	i915_vma_put(t->batch);
+}
+
+static struct i915_vma *
+__create_vma(struct tiled_blits *t, size_t size, bool lmem)
+{
+	struct drm_i915_private *i915 = t->ce->vm->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+
+	if (lmem)
+		obj = i915_gem_object_create_lmem(i915, size, 0);
+	else
+		obj = i915_gem_object_create_shmem(i915, size);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
+
+	vma = i915_vma_instance(obj, t->ce->vm, NULL);
+	if (IS_ERR(vma))
+		i915_gem_object_put(obj);
+
+	return vma;
+}
+
+static struct i915_vma *create_vma(struct tiled_blits *t, bool lmem)
+{
+	return __create_vma(t, PAGE_ALIGN(t->width * t->height * 4), lmem);
+}
+
+static int tiled_blits_create_buffers(struct tiled_blits *t,
+				      int width, int height,
+				      struct rnd_state *prng)
+{
+	struct drm_i915_private *i915 = t->ce->engine->i915;
+	int i;
+
+	t->width = width;
+	t->height = height;
+
+	t->batch = __create_vma(t, PAGE_SIZE, false);
+	if (IS_ERR(t->batch))
+		return PTR_ERR(t->batch);
+
+	t->scratch.vma = create_vma(t, false);
+	if (IS_ERR(t->scratch.vma)) {
+		i915_vma_put(t->batch);
+		return PTR_ERR(t->scratch.vma);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(t->buffers); i++) {
+		struct i915_vma *vma;
+
+		vma = create_vma(t, HAS_LMEM(i915) && i % 2);
+		if (IS_ERR(vma)) {
+			tiled_blits_destroy_buffers(t);
+			return PTR_ERR(vma);
+		}
+
+		t->buffers[i].vma = vma;
+		t->buffers[i].tiling =
+			i915_prandom_u32_max_state(I915_TILING_Y + 1, prng);
+	}
+
+	return 0;
+}
+
+static void fill_scratch(struct tiled_blits *t, u32 *vaddr, u32 val)
+{
+	int i;
+
+	t->scratch.start_val = val;
+	for (i = 0; i < t->width * t->height; i++)
+		vaddr[i] = val++;
+
+	i915_gem_object_flush_map(t->scratch.vma->obj);
+}
+
+static u64 swizzle_bit(unsigned int bit, u64 offset)
+{
+	return (offset & BIT_ULL(bit)) >> (bit - 6);
+}
+
+static u64 tiled_offset(const struct intel_gt *gt,
+			u64 v,
+			unsigned int stride,
+			unsigned int tiling)
+{
+	unsigned int swizzle;
+	u64 x, y;
+
+	if (tiling == I915_TILING_NONE)
+		return v;
+
+	y = div64_u64_rem(v, stride, &x);
+
+	if (tiling == I915_TILING_X) {
+		v = div64_u64_rem(y, 8, &y) * stride * 8;
+		v += y * 512;
+		v += div64_u64_rem(x, 512, &x) << 12;
+		v += x;
+
+		swizzle = gt->ggtt->bit_6_swizzle_x;
+	} else {
+		const unsigned int ytile_span = 16;
+		const unsigned int ytile_height = 512;
+
+		v = div64_u64_rem(y, 32, &y) * stride * 32;
+		v += y * ytile_span;
+		v += div64_u64_rem(x, ytile_span, &x) * ytile_height;
+		v += x;
+
+		swizzle = gt->ggtt->bit_6_swizzle_y;
+	}
+
+	switch (swizzle) {
+	case I915_BIT_6_SWIZZLE_9:
+		v ^= swizzle_bit(9, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(11, v);
+		break;
+	case I915_BIT_6_SWIZZLE_9_10_11:
+		v ^= swizzle_bit(9, v) ^ swizzle_bit(10, v) ^ swizzle_bit(11, v);
+		break;
+	}
+
+	return v;
+}
+
+static const char *repr_tiling(int tiling)
+{
+	switch (tiling) {
+	case I915_TILING_NONE: return "linear";
+	case I915_TILING_X: return "X";
+	case I915_TILING_Y: return "Y";
+	default: return "unknown";
+	}
+}
+
+static int verify_buffer(const struct tiled_blits *t,
+			 struct blit_buffer *buf,
+			 struct rnd_state *prng)
+{
+	const u32 *vaddr;
+	int ret = 0;
+	int x, y, p;
+
+	x = i915_prandom_u32_max_state(t->width, prng);
+	y = i915_prandom_u32_max_state(t->height, prng);
+	p = y * t->width + x;
+
+	vaddr = i915_gem_object_pin_map(buf->vma->obj, I915_MAP_WC);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+
+	if (vaddr[0] != buf->start_val) {
+		ret = -EINVAL;
+	} else {
+		u64 v = tiled_offset(buf->vma->vm->gt,
+				     p * 4, t->width * 4,
+				     buf->tiling);
+
+		if (vaddr[v / sizeof(*vaddr)] != buf->start_val + p)
+			ret = -EINVAL;
+	}
+	if (ret) {
+		pr_err("Invalid %s tiling detected at (%d, %d), start_val %x\n",
+		       repr_tiling(buf->tiling),
+		       x, y, buf->start_val);
+		igt_hexdump(vaddr, 4096);
+	}
+
+	i915_gem_object_unpin_map(buf->vma->obj);
+	return ret;
+}
+
+static int move_to_active(struct i915_vma *vma,
+			  struct i915_request *rq,
+			  unsigned int flags)
+{
+	int err;
+
+	i915_vma_lock(vma);
+	err = i915_request_await_object(rq, vma->obj, false);
+	if (err == 0)
+		err = i915_vma_move_to_active(vma, rq, flags);
+	i915_vma_unlock(vma);
+
+	return err;
+}
+
+static int pin_buffer(struct i915_vma *vma, u64 addr)
+{
+	int err;
+
+	if (drm_mm_node_allocated(&vma->node) && vma->node.start != addr) {
+		err = i915_vma_unbind(vma);
+		if (err)
+			return err;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_OFFSET_FIXED | addr);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int
+tiled_blit(struct tiled_blits *t,
+	   struct blit_buffer *dst, u64 dst_addr,
+	   struct blit_buffer *src, u64 src_addr)
+{
+	struct i915_request *rq;
+	int err;
+
+	err = pin_buffer(src->vma, src_addr);
+	if (err) {
+		pr_err("Cannot pin src @ %llx\n", src_addr);
+		return err;
+	}
+
+	err = pin_buffer(dst->vma, dst_addr);
+	if (err) {
+		pr_err("Cannot pin dst @ %llx\n", dst_addr);
+		goto err_src;
+	}
+
+	err = i915_vma_pin(t->batch, 0, 0, PIN_USER | PIN_HIGH);
+	if (err) {
+		pr_err("cannot pin batch\n");
+		goto err_dst;
+	}
+
+	err = prepare_blit(t, dst, src, t->batch->obj);
+	if (err)
+		goto err_bb;
+
+	rq = intel_context_create_request(t->ce);
+	if (IS_ERR(rq)) {
+		err = PTR_ERR(rq);
+		goto err_bb;
+	}
+
+	err = move_to_active(t->batch, rq, 0);
+	if (!err)
+		err = move_to_active(src->vma, rq, 0);
+	if (!err)
+		err = move_to_active(dst->vma, rq, 0);
+	if (!err)
+		err = rq->engine->emit_bb_start(rq,
+						t->batch->node.start,
+						t->batch->node.size,
+						0);
+	i915_request_get(rq);
+	i915_request_add(rq);
+	if (i915_request_wait(rq, 0, HZ / 2) < 0)
+		err = -ETIME;
+	i915_request_put(rq);
+
+	dst->start_val = src->start_val;
+err_bb:
+	i915_vma_unpin(t->batch);
+err_dst:
+	i915_vma_unpin(dst->vma);
+err_src:
+	i915_vma_unpin(src->vma);
+	return err;
+}
+
+static struct tiled_blits *
+tiled_blits_create(struct intel_engine_cs *engine, struct rnd_state *prng)
+{
+	struct drm_mm_node hole;
+	struct tiled_blits *t;
+	u64 hole_size;
+	int err;
+
+	t = kzalloc(sizeof(*t), GFP_KERNEL);
+	if (!t)
+		return ERR_PTR(-ENOMEM);
+
+	t->ce = intel_context_create(engine);
+	if (IS_ERR(t->ce)) {
+		err = PTR_ERR(t->ce);
+		goto err_free;
+	}
+
+	hole_size = 2 * PAGE_ALIGN(WIDTH * HEIGHT * 4);
+	hole_size *= 2; /* room to maneuver */
+	hole_size += 2 * I915_GTT_MIN_ALIGNMENT;
+
+	mutex_lock(&t->ce->vm->mutex);
+	memset(&hole, 0, sizeof(hole));
+	err = drm_mm_insert_node_in_range(&t->ce->vm->mm, &hole,
+					  hole_size, 0, I915_COLOR_UNEVICTABLE,
+					  0, U64_MAX,
+					  DRM_MM_INSERT_BEST);
+	if (!err)
+		drm_mm_remove_node(&hole);
+	mutex_unlock(&t->ce->vm->mutex);
+	if (err) {
+		err = -ENODEV;
+		goto err_put;
+	}
+
+	t->hole = hole.start + I915_GTT_MIN_ALIGNMENT;
+	pr_info("Using hole at %llx\n", t->hole);
+
+	err = tiled_blits_create_buffers(t, WIDTH, HEIGHT, prng);
+	if (err)
+		goto err_put;
+
+	return t;
+
+err_put:
+	intel_context_put(t->ce);
+err_free:
+	kfree(t);
+	return ERR_PTR(err);
+}
+
+static void tiled_blits_destroy(struct tiled_blits *t)
+{
+	tiled_blits_destroy_buffers(t);
+
+	intel_context_put(t->ce);
+	kfree(t);
+}
+
+static int tiled_blits_prepare(struct tiled_blits *t,
+			       struct rnd_state *prng)
+{
+	u64 offset = PAGE_ALIGN(t->width * t->height * 4);
+	u32 *map;
+	int err;
+	int i;
+
+	map = i915_gem_object_pin_map(t->scratch.vma->obj, I915_MAP_WC);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	/* Use scratch to fill objects */
+	for (i = 0; i < ARRAY_SIZE(t->buffers); i++) {
+		fill_scratch(t, map, prandom_u32_state(prng));
+		GEM_BUG_ON(verify_buffer(t, &t->scratch, prng));
+
+		err = tiled_blit(t,
+				 &t->buffers[i], t->hole + offset,
+				 &t->scratch, t->hole);
+		if (err == 0)
+			err = verify_buffer(t, &t->buffers[i], prng);
+		if (err) {
+			pr_err("Failed to create buffer %d\n", i);
+			break;
+		}
+	}
+
+	i915_gem_object_unpin_map(t->scratch.vma->obj);
+	return err;
+}
+
+static int tiled_blits_bounce(struct tiled_blits *t, struct rnd_state *prng)
+{
+	u64 offset =
+		round_up(t->width * t->height * 4, 2 * I915_GTT_MIN_ALIGNMENT);
+	int err;
+
+	/* We want to check position invariant tiling across GTT eviction */
+
+	err = tiled_blit(t,
+			 &t->buffers[1], t->hole + offset / 2,
+			 &t->buffers[0], t->hole + 2 * offset);
+	if (err)
+		return err;
+
+	/* Reposition so that we overlap the old addresses, and slightly off */
+	err = tiled_blit(t,
+			 &t->buffers[2], t->hole + I915_GTT_MIN_ALIGNMENT,
+			 &t->buffers[1], t->hole + 3 * offset / 2);
+	if (err)
+		return err;
+
+	err = verify_buffer(t, &t->buffers[2], prng);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int __igt_client_tiled_blits(struct intel_engine_cs *engine,
+				    struct rnd_state *prng)
+{
+	struct tiled_blits *t;
+	int err;
+
+	t = tiled_blits_create(engine, prng);
+	if (IS_ERR(t))
+		return PTR_ERR(t);
+
+	err = tiled_blits_prepare(t, prng);
+	if (err)
+		goto out;
+
+	err = tiled_blits_bounce(t, prng);
+	if (err)
+		goto out;
+
+out:
+	tiled_blits_destroy(t);
+	return err;
+}
+
+static bool has_bit17_swizzle(int sw)
+{
+	return (sw == I915_BIT_6_SWIZZLE_9_10_17 ||
+		sw == I915_BIT_6_SWIZZLE_9_17);
+}
+
+static bool bad_swizzling(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = &i915->ggtt;
+
+	if (i915->quirks & QUIRK_PIN_SWIZZLED_PAGES)
+		return true;
+
+	if (has_bit17_swizzle(ggtt->bit_6_swizzle_x) ||
+	    has_bit17_swizzle(ggtt->bit_6_swizzle_y))
+		return true;
+
+	return false;
+}
+
+static int igt_client_tiled_blits(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	I915_RND_STATE(prng);
+	int inst = 0;
+
+	/* Test requires explicit BLT tiling controls */
+	if (INTEL_GEN(i915) < 4)
+		return 0;
+
+	if (bad_swizzling(i915)) /* Requires sane (sub-page) swizzling */
+		return 0;
+
+	do {
+		struct intel_engine_cs *engine;
+		int err;
+
+		engine = intel_engine_lookup_user(i915,
+						  I915_ENGINE_CLASS_COPY,
+						  inst++);
+		if (!engine)
+			return 0;
+
+		err = __igt_client_tiled_blits(engine, &prng);
+		if (err == -ENODEV)
+			err = 0;
+		if (err)
+			return err;
+	} while (1);
+}
+
 int i915_gem_client_blt_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_client_fill),
+		SUBTEST(igt_client_tiled_blits),
 	};
 
 	if (intel_gt_is_wedged(&i915->gt))
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 3f6079e1dfb6..87d7d8aa080f 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -158,6 +158,8 @@ static int wc_set(struct context *ctx, unsigned long offset, u32 v)
 		return PTR_ERR(map);
 
 	map[offset / sizeof(*map)] = v;
+
+	__i915_gem_object_flush_map(ctx->obj, offset, sizeof(*map));
 	i915_gem_object_unpin_map(ctx->obj);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 54b86cf7f5d2..b81978890641 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -972,12 +972,6 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,
 		goto err_batch;
 	}
 
-	err = rq->engine->emit_bb_start(rq,
-					batch->node.start, batch->node.size,
-					0);
-	if (err)
-		goto err_request;
-
 	i915_vma_lock(batch);
 	err = i915_request_await_object(rq, batch->obj, false);
 	if (err == 0)
@@ -994,6 +988,18 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,
 	if (err)
 		goto skip_request;
 
+	if (rq->engine->emit_init_breadcrumb) {
+		err = rq->engine->emit_init_breadcrumb(rq);
+		if (err)
+			goto skip_request;
+	}
+
+	err = rq->engine->emit_bb_start(rq,
+					batch->node.start, batch->node.size,
+					0);
+	if (err)
+		goto skip_request;
+
 	i915_vma_unpin_and_release(&batch, 0);
 	i915_vma_unpin(vma);
 
@@ -1005,7 +1011,6 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,
 
 skip_request:
 	i915_request_set_error_once(rq, err);
-err_request:
 	i915_request_add(rq);
 err_batch:
 	i915_vma_unpin_and_release(&batch, 0);
@@ -1541,10 +1546,6 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 		goto err_unpin;
 	}
 
-	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, 0);
-	if (err)
-		goto err_request;
-
 	i915_vma_lock(vma);
 	err = i915_request_await_object(rq, vma->obj, false);
 	if (err == 0)
@@ -1553,6 +1554,16 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto skip_request;
 
+	if (rq->engine->emit_init_breadcrumb) {
+		err = rq->engine->emit_init_breadcrumb(rq);
+		if (err)
+			goto skip_request;
+	}
+
+	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, 0);
+	if (err)
+		goto skip_request;
+
 	i915_vma_unpin(vma);
 
 	i915_request_add(rq);
@@ -1560,7 +1571,6 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	goto out_vm;
 skip_request:
 	i915_request_set_error_once(rq, err);
-err_request:
 	i915_request_add(rq);
 err_unpin:
 	i915_vma_unpin(vma);
@@ -1674,10 +1684,6 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 		goto err_unpin;
 	}
 
-	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, flags);
-	if (err)
-		goto err_request;
-
 	i915_vma_lock(vma);
 	err = i915_request_await_object(rq, vma->obj, true);
 	if (err == 0)
@@ -1686,8 +1692,17 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto skip_request;
 
+	if (rq->engine->emit_init_breadcrumb) {
+		err = rq->engine->emit_init_breadcrumb(rq);
+		if (err)
+			goto skip_request;
+	}
+
+	err = engine->emit_bb_start(rq, vma->node.start, vma->node.size, flags);
+	if (err)
+		goto skip_request;
+
 	i915_vma_unpin(vma);
-	i915_vma_close(vma);
 
 	i915_request_add(rq);
 
@@ -1709,7 +1724,6 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	goto out_vm;
 skip_request:
 	i915_request_set_error_once(rq, err);
-err_request:
 	i915_request_add(rq);
 err_unpin:
 	i915_vma_unpin(vma);
@@ -1925,7 +1939,7 @@ static int mock_context_barrier(void *arg)
 		goto out;
 	}
 
-	rq = igt_request_alloc(ctx, i915->engine[RCS0]);
+	rq = igt_request_alloc(ctx, i915->gt.engine[RCS0]);
 	if (IS_ERR(rq)) {
 		pr_err("Request allocation failed!\n");
 		goto out;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
new file mode 100644
index 000000000000..a49016f8ee0d
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include "i915_selftest.h"
+
+#include "gt/intel_engine_pm.h"
+#include "selftests/igt_flush_test.h"
+
+static u64 read_reloc(const u32 *map, int x, const u64 mask)
+{
+	u64 reloc;
+
+	memcpy(&reloc, &map[x], sizeof(reloc));
+	return reloc & mask;
+}
+
+static int __igt_gpu_reloc(struct i915_execbuffer *eb,
+			   struct drm_i915_gem_object *obj)
+{
+	const unsigned int offsets[] = { 8, 3, 0 };
+	const u64 mask =
+		GENMASK_ULL(eb->reloc_cache.use_64bit_reloc ? 63 : 31, 0);
+	const u32 *map = page_mask_bits(obj->mm.mapping);
+	struct i915_request *rq;
+	struct i915_vma *vma;
+	int err;
+	int i;
+
+	vma = i915_vma_instance(obj, eb->context->vm, NULL);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+	if (err)
+		return err;
+
+	/* 8-Byte aligned */
+	if (!__reloc_entry_gpu(eb, vma,
+			       offsets[0] * sizeof(u32),
+			       0)) {
+		err = -EIO;
+		goto unpin_vma;
+	}
+
+	/* !8-Byte aligned */
+	if (!__reloc_entry_gpu(eb, vma,
+			       offsets[1] * sizeof(u32),
+			       1)) {
+		err = -EIO;
+		goto unpin_vma;
+	}
+
+	/* Skip to the end of the cmd page */
+	i = PAGE_SIZE / sizeof(u32) - RELOC_TAIL - 1;
+	i -= eb->reloc_cache.rq_size;
+	memset32(eb->reloc_cache.rq_cmd + eb->reloc_cache.rq_size,
+		 MI_NOOP, i);
+	eb->reloc_cache.rq_size += i;
+
+	/* Force batch chaining */
+	if (!__reloc_entry_gpu(eb, vma,
+			       offsets[2] * sizeof(u32),
+			       2)) {
+		err = -EIO;
+		goto unpin_vma;
+	}
+
+	GEM_BUG_ON(!eb->reloc_cache.rq);
+	rq = i915_request_get(eb->reloc_cache.rq);
+	err = reloc_gpu_flush(&eb->reloc_cache);
+	if (err)
+		goto put_rq;
+	GEM_BUG_ON(eb->reloc_cache.rq);
+
+	err = i915_gem_object_wait(obj, I915_WAIT_INTERRUPTIBLE, HZ / 2);
+	if (err) {
+		intel_gt_set_wedged(eb->engine->gt);
+		goto put_rq;
+	}
+
+	if (!i915_request_completed(rq)) {
+		pr_err("%s: did not wait for relocations!\n", eb->engine->name);
+		err = -EINVAL;
+		goto put_rq;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(offsets); i++) {
+		u64 reloc = read_reloc(map, offsets[i], mask);
+
+		if (reloc != i) {
+			pr_err("%s[%d]: map[%d] %llx != %x\n",
+			       eb->engine->name, i, offsets[i], reloc, i);
+			err = -EINVAL;
+		}
+	}
+	if (err)
+		igt_hexdump(map, 4096);
+
+put_rq:
+	i915_request_put(rq);
+unpin_vma:
+	i915_vma_unpin(vma);
+	return err;
+}
+
+static int igt_gpu_reloc(void *arg)
+{
+	struct i915_execbuffer eb;
+	struct drm_i915_gem_object *scratch;
+	int err = 0;
+	u32 *map;
+
+	eb.i915 = arg;
+
+	scratch = i915_gem_object_create_internal(eb.i915, 4096);
+	if (IS_ERR(scratch))
+		return PTR_ERR(scratch);
+
+	map = i915_gem_object_pin_map(scratch, I915_MAP_WC);
+	if (IS_ERR(map)) {
+		err = PTR_ERR(map);
+		goto err_scratch;
+	}
+
+	for_each_uabi_engine(eb.engine, eb.i915) {
+		reloc_cache_init(&eb.reloc_cache, eb.i915);
+		memset(map, POISON_INUSE, 4096);
+
+		intel_engine_pm_get(eb.engine);
+		eb.context = intel_context_create(eb.engine);
+		if (IS_ERR(eb.context)) {
+			err = PTR_ERR(eb.context);
+			goto err_pm;
+		}
+
+		err = intel_context_pin(eb.context);
+		if (err)
+			goto err_put;
+
+		err = __igt_gpu_reloc(&eb, scratch);
+
+		intel_context_unpin(eb.context);
+err_put:
+		intel_context_put(eb.context);
+err_pm:
+		intel_engine_pm_put(eb.engine);
+		if (err)
+			break;
+	}
+
+	if (igt_flush_test(eb.i915))
+		err = -EIO;
+
+err_scratch:
+	i915_gem_object_put(scratch);
+	return err;
+}
+
+int i915_gem_execbuffer_live_selftests(struct drm_i915_private *i915)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_gpu_reloc),
+	};
+
+	if (intel_gt_is_wedged(&i915->gt))
+		return 0;
+
+	return i915_live_subtests(tests, i915);
+}
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 43912e9b683d..9c7402ce5bf9 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -952,6 +952,129 @@ static int igt_mmap(void *arg)
 	return 0;
 }
 
+static const char *repr_mmap_type(enum i915_mmap_type type)
+{
+	switch (type) {
+	case I915_MMAP_TYPE_GTT: return "gtt";
+	case I915_MMAP_TYPE_WB: return "wb";
+	case I915_MMAP_TYPE_WC: return "wc";
+	case I915_MMAP_TYPE_UC: return "uc";
+	default: return "unknown";
+	}
+}
+
+static bool can_access(const struct drm_i915_gem_object *obj)
+{
+	unsigned int flags =
+		I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
+
+	return i915_gem_object_type_has(obj, flags);
+}
+
+static int __igt_mmap_access(struct drm_i915_private *i915,
+			     struct drm_i915_gem_object *obj,
+			     enum i915_mmap_type type)
+{
+	struct i915_mmap_offset *mmo;
+	unsigned long __user *ptr;
+	unsigned long A, B;
+	unsigned long x, y;
+	unsigned long addr;
+	int err;
+
+	memset(&A, 0xAA, sizeof(A));
+	memset(&B, 0xBB, sizeof(B));
+
+	if (!can_mmap(obj, type) || !can_access(obj))
+		return 0;
+
+	mmo = mmap_offset_attach(obj, type, NULL);
+	if (IS_ERR(mmo))
+		return PTR_ERR(mmo);
+
+	addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+	ptr = (unsigned long __user *)addr;
+
+	err = __put_user(A, ptr);
+	if (err) {
+		pr_err("%s(%s): failed to write into user mmap\n",
+		       obj->mm.region->name, repr_mmap_type(type));
+		goto out_unmap;
+	}
+
+	intel_gt_flush_ggtt_writes(&i915->gt);
+
+	err = access_process_vm(current, addr, &x, sizeof(x), 0);
+	if (err != sizeof(x)) {
+		pr_err("%s(%s): access_process_vm() read failed\n",
+		       obj->mm.region->name, repr_mmap_type(type));
+		goto out_unmap;
+	}
+
+	err = access_process_vm(current, addr, &B, sizeof(B), FOLL_WRITE);
+	if (err != sizeof(B)) {
+		pr_err("%s(%s): access_process_vm() write failed\n",
+		       obj->mm.region->name, repr_mmap_type(type));
+		goto out_unmap;
+	}
+
+	intel_gt_flush_ggtt_writes(&i915->gt);
+
+	err = __get_user(y, ptr);
+	if (err) {
+		pr_err("%s(%s): failed to read from user mmap\n",
+		       obj->mm.region->name, repr_mmap_type(type));
+		goto out_unmap;
+	}
+
+	if (x != A || y != B) {
+		pr_err("%s(%s): failed to read/write values, found (%lx, %lx)\n",
+		       obj->mm.region->name, repr_mmap_type(type),
+		       x, y);
+		err = -EINVAL;
+		goto out_unmap;
+	}
+
+out_unmap:
+	vm_munmap(addr, obj->base.size);
+	return err;
+}
+
+static int igt_mmap_access(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct intel_memory_region *mr;
+	enum intel_region_id id;
+
+	for_each_memory_region(mr, i915, id) {
+		struct drm_i915_gem_object *obj;
+		int err;
+
+		obj = i915_gem_object_create_region(mr, PAGE_SIZE, 0);
+		if (obj == ERR_PTR(-ENODEV))
+			continue;
+
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+
+		err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_GTT);
+		if (err == 0)
+			err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_WB);
+		if (err == 0)
+			err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_WC);
+		if (err == 0)
+			err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_UC);
+
+		i915_gem_object_put(obj);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int __igt_mmap_gpu(struct drm_i915_private *i915,
 			  struct drm_i915_gem_object *obj,
 			  enum i915_mmap_type type)
@@ -1156,9 +1279,6 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
 	if (err)
 		goto out_unmap;
 
-	GEM_BUG_ON(mmo->mmap_type == I915_MMAP_TYPE_GTT &&
-		   !atomic_read(&obj->bind_count));
-
 	err = check_present(addr, obj->base.size);
 	if (err) {
 		pr_err("%s: was not present\n", obj->mm.region->name);
@@ -1175,7 +1295,6 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
 		pr_err("Failed to unbind object!\n");
 		goto out_unmap;
 	}
-	GEM_BUG_ON(atomic_read(&obj->bind_count));
 
 	if (type != I915_MMAP_TYPE_GTT) {
 		__i915_gem_object_put_pages(obj);
@@ -1233,6 +1352,7 @@ int i915_gem_mman_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(igt_smoke_tiling),
 		SUBTEST(igt_mmap_offset_exhaustion),
 		SUBTEST(igt_mmap),
+		SUBTEST(igt_mmap_access),
 		SUBTEST(igt_mmap_revoke),
 		SUBTEST(igt_mmap_gpu),
 	};
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
index 2b6db6f799de..faa5b6d91795 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
@@ -14,7 +14,7 @@ static int igt_gem_object(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
 	struct drm_i915_gem_object *obj;
-	int err = -ENOMEM;
+	int err;
 
 	/* Basic test to ensure we can create an object */
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
index 772d8cba7da9..e21b5023ca7d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
+++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
@@ -83,6 +83,8 @@ igt_emit_store_dw(struct i915_vma *vma,
 		offset += PAGE_SIZE;
 	}
 	*cmd = MI_BATCH_BUFFER_END;
+
+	i915_gem_object_flush_map(obj);
 	i915_gem_object_unpin_map(obj);
 
 	intel_gt_chipset_flush(vma->vm->gt);
@@ -126,16 +128,6 @@ int igt_gpu_fill_dw(struct intel_context *ce,
 		goto err_batch;
 	}
 
-	flags = 0;
-	if (INTEL_GEN(ce->vm->i915) <= 5)
-		flags |= I915_DISPATCH_SECURE;
-
-	err = rq->engine->emit_bb_start(rq,
-					batch->node.start, batch->node.size,
-					flags);
-	if (err)
-		goto err_request;
-
 	i915_vma_lock(batch);
 	err = i915_request_await_object(rq, batch->obj, false);
 	if (err == 0)
@@ -152,15 +144,17 @@ int igt_gpu_fill_dw(struct intel_context *ce,
 	if (err)
 		goto skip_request;
 
-	i915_request_add(rq);
-
-	i915_vma_unpin_and_release(&batch, 0);
+	flags = 0;
+	if (INTEL_GEN(ce->vm->i915) <= 5)
+		flags |= I915_DISPATCH_SECURE;
 
-	return 0;
+	err = rq->engine->emit_bb_start(rq,
+					batch->node.start, batch->node.size,
+					flags);
 
 skip_request:
-	i915_request_set_error_once(rq, err);
-err_request:
+	if (err)
+		i915_request_set_error_once(rq, err);
 	i915_request_add(rq);
 err_batch:
 	i915_vma_unpin_and_release(&batch, 0);