On 2016-06-15 19:44, Christian König wrote:
> From: Christian König <christian.koenig at amd.com>
>
> When we pipeline evictions the page directory could already be
> moving somewhere else when grab_id is called.

Isn't the PD BO protected by the job fence? I think the PD BO is safe
until the job fence is signalled, so there shouldn't be a chance to
evict it before then.
Regards,
David Zhou

> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 6 ++----
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index a3d7d13..850c4dd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -661,6 +661,8 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
>  		}
>  	}
>
> +	p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->page_directory);
> +
>  	r = amdgpu_bo_vm_update_pte(p, vm);
>  	if (!r)
>  		amdgpu_cs_sync_rings(p);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index d3e0576..82efb40 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -177,7 +177,6 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>  		      struct amdgpu_sync *sync, struct fence *fence,
>  		      unsigned *vm_id, uint64_t *vm_pd_addr)
>  {
> -	uint64_t pd_addr = amdgpu_bo_gpu_offset(vm->page_directory);
>  	struct amdgpu_device *adev = ring->adev;
>  	struct fence *updates = sync->last_vm_update;
>  	struct amdgpu_vm_id *id, *idle;
> @@ -250,7 +249,7 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>  		if (atomic64_read(&id->owner) != vm->client_id)
>  			continue;
>
> -		if (pd_addr != id->pd_gpu_addr)
> +		if (*vm_pd_addr != id->pd_gpu_addr)
>  			continue;
>
>  		if (!same_ring &&
> @@ -298,14 +297,13 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>  	fence_put(id->flushed_updates);
>  	id->flushed_updates = fence_get(updates);
>
> -	id->pd_gpu_addr = pd_addr;
> +	id->pd_gpu_addr = *vm_pd_addr;
>
>  	list_move_tail(&id->list, &adev->vm_manager.ids_lru);
>  	atomic64_set(&id->owner, vm->client_id);
>  	vm->ids[ring->idx] = id;
>
>  	*vm_id = id - adev->vm_manager.ids;
> -	*vm_pd_addr = pd_addr;
>  	trace_amdgpu_vm_grab_id(vm, ring->idx, *vm_id, *vm_pd_addr);
>
> error: