On 03/07/2019 18:19, Chris Wilson wrote:
If we hit an error while allocating the page tables, we have to unwind
the incomplete updates, and wish to free the unused pd. However, we are
not allowed to be hoding the spinlock at that point, and so must use the
later free to defer it until after we drop the lock.

s/hoding/holding/
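
For context, the fix applies the usual defer-a-sleeping-free-until-unlock
pattern. A minimal sketch with hypothetical names (unlink_entry(),
sleeping_free()), not the actual i915 code:

        struct entry *to_free = NULL;

        spin_lock(&parent->lock);
        if (unwinding) {
                unlink_entry(parent, entry);    /* safe under the spinlock */
                to_free = entry;                /* defer the sleeping free */
        }
        spin_unlock(&parent->lock);

        if (to_free)
                sleeping_free(to_free);         /* may sleep: lock is dropped */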

<3> [414.363795] BUG: sleeping function called from invalid context at drivers/gpu/drm/i915/i915_gem_gtt.c:472
<3> [414.364167] in_atomic(): 1, irqs_disabled(): 0, pid: 3905, name: i915_selftest
<4> [414.364406] 3 locks held by i915_selftest/3905:
<4> [414.364408]  #0: 0000000034fe8aa8 (&dev->mutex){....}, at: device_driver_attach+0x18/0x50
<4> [414.364415]  #1: 000000006bd8a560 (&dev->struct_mutex){+.+.}, at: igt_ctx_exec+0xb7/0x410 [i915]
<4> [414.364476]  #2: 000000003dfdc766 (&(&pd->lock)->rlock){+.+.}, at: gen8_ppgtt_alloc_pdp+0x448/0x540 [i915]
<3> [414.364529] Preemption disabled at:
<4> [414.364530] [<0000000000000000>] 0x0
<4> [414.364696] CPU: 0 PID: 3905 Comm: i915_selftest Tainted: G     U            5.2.0-rc7-CI-CI_DRM_6403+ #1
<4> [414.364698] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.1-0-g8891697-prebuilt.qemu-project.org 04/01/2014
<4> [414.364699] Call Trace:
<4> [414.364704]  dump_stack+0x67/0x9b
<4> [414.364708]  ___might_sleep+0x167/0x250
<4> [414.364777]  vm_free_page+0x24/0xc0 [i915]
<4> [414.364852]  free_pd+0xf/0x20 [i915]
<4> [414.364897]  gen8_ppgtt_alloc_pdp+0x489/0x540 [i915]
<4> [414.364946]  gen8_ppgtt_alloc_4lvl+0x8e/0x2e0 [i915]
<4> [414.364992]  ppgtt_bind_vma+0x2e/0x60 [i915]
<4> [414.365039]  i915_vma_bind+0xe8/0x2c0 [i915]
<4> [414.365088]  __i915_vma_do_pin+0xa1/0xd20 [i915]

Fixes: 1d1b5490b91c ("drm/i915/gtt: Replace struct_mutex serialisation for allocation")
Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.a...@intel.com>
Cc: Mika Kuoppala <mika.kuopp...@intel.com>
---
  drivers/gpu/drm/i915/i915_gem_gtt.c | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 9e76347e039e..1065753e86fb 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1489,7 +1489,8 @@ static int gen8_ppgtt_alloc_pdp(struct i915_address_space *vm,
                gen8_ppgtt_set_pdpe(pdp, vm->scratch_pd, pdpe);
                GEM_BUG_ON(!atomic_read(&pdp->used));
                atomic_dec(&pdp->used);
-               free_pd(vm, pd);
+               GEM_BUG_ON(alloc);

Pretty sure it's possible for alloc != NULL at this point... two threads,
one is late to the party, we are unlucky and hit the unwind_pd path. No?
(Rough sketch of what I mean below, after the diff.)

+               alloc = pd; /* defer the free to after the lock */
        }
        spin_unlock(&pdp->lock);
  unwind:
@@ -1558,7 +1559,8 @@ static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
        spin_lock(&pml4->lock);
        if (atomic_dec_and_test(&pdp->used)) {
                gen8_ppgtt_set_pml4e(pml4, vm->scratch_pdp, pml4e);
-               free_pd(vm, pdp);
+               GEM_BUG_ON(alloc);
+               alloc = pdp; /* defer the free until after the lock */
        }
        spin_unlock(&pml4->lock);
  unwind:
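
To spell out the scenario, a rough sketch of the flow as I read
gen8_ppgtt_alloc_pdp() (simplified, and the allocation loop is not quoted
above, so treat the names as approximate rather than the actual code):

        /* Thread B allocates a pd but loses the install race to thread A */
        spin_lock(&pdp->lock);
        if (pdpe_already_populated) {   /* thread A got there first */
                alloc = pd;             /* stash the surplus pd to free later */
                pd = pdp->entry[pdpe];  /* use thread A's pd instead */
        }
        spin_unlock(&pdp->lock);

        /* ... a later allocation fails, goto unwind_pd ... */

        spin_lock(&pdp->lock);
        gen8_ppgtt_set_pdpe(pdp, vm->scratch_pd, pdpe);
        atomic_dec(&pdp->used);
        GEM_BUG_ON(alloc);              /* fires: alloc still holds the surplus pd */
        alloc = pd;
        spin_unlock(&pdp->lock);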
