Re: [PATCH umr] Add family text for family 141

2017-03-26 Thread Edward O'Callaghan
Reviewed-by: Edward O'Callaghan 

On 03/25/2017 01:08 AM, Tom St Denis wrote:
> Signed-off-by: Tom St Denis 
> ---
>  src/app/print_config.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/src/app/print_config.c b/src/app/print_config.c
> index 6dbe0d42b8dc..e295302ab7a3 100644
> --- a/src/app/print_config.c
> +++ b/src/app/print_config.c
> @@ -91,6 +91,7 @@ static const struct {
>   { "Kaveri", 125 },
>   { "Volcanic Islands", 130 },
>   { "Carrizo", 135 },
> + { "Arctic Islands", 141 },
>   { NULL, 0 },
>  };
>  
> 





[PATCH] drm/radeon: Override fpfn for all VRAM placements in radeon_evict_flags

2017-03-26 Thread Michel Dänzer
From: Michel Dänzer 

We were accidentally only overriding the first VRAM placement. For BOs
with the RADEON_GEM_NO_CPU_ACCESS flag set,
radeon_ttm_placement_from_domain creates a second VRAM placment with
fpfn == 0. If VRAM is almost full, the first VRAM placement with
fpfn > 0 may not work, but the second one with fpfn == 0 always will
(the BO's current location trivially satisfies it). Because "moving"
the BO to its current location puts it back on the LRU list, this
results in an infinite loop.

Fixes: 2a85aedd117c ("drm/radeon: Try evicting from CPU accessible to inaccessible VRAM first")
Reported-by: Zachary Michaels 
Reported-and-Tested-by: Julien Isorce 
Signed-off-by: Michel Dänzer 
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 5c7cf644ba1d..37d68cd1f272 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -213,8 +213,8 @@ static void radeon_evict_flags(struct ttm_buffer_object *bo,
rbo->placement.num_busy_placement = 0;
for (i = 0; i < rbo->placement.num_placement; i++) {
		if (rbo->placements[i].flags & TTM_PL_FLAG_VRAM) {
-   if (rbo->placements[0].fpfn < fpfn)
-   rbo->placements[0].fpfn = fpfn;
+   if (rbo->placements[i].fpfn < fpfn)
+   rbo->placements[i].fpfn = fpfn;
} else {
rbo->placement.busy_placement =
&rbo->placements[i];
-- 
2.11.0
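
For readers skimming the archive: a minimal sketch of the feedback loop
the commit message describes. This is my pseudocode, not driver code;
fits()/move()/lru_*() are hypothetical helpers. With only placements[0]
overridden, the second VRAM placement still accepts fpfn == 0, so TTM
can satisfy the eviction without moving anything:

	/* simplified model of the buggy path, names from the diff above */
	while (vram_is_nearly_full) {
		bo = lru_first();                 /* pick eviction candidate */
		if (fits(bo, &placements[0]))     /* fpfn forced > 0 */
			move(bo, &placements[0]);
		else                              /* bug: fpfn still == 0 */
			move(bo, &placements[1]); /* BO "moves" onto itself */
		lru_add_tail(bo);                 /* candidate again next pass */
	}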



Re: [PATCH] Revert "drm/radeon: Try evicting from CPU accessible to inaccessible VRAM first"

2017-03-26 Thread Michel Dänzer
On 25/03/17 03:59 AM, Julien Isorce wrote:
> Hi Michel,
> 
> I double checked and you are right, the change 0 -> i works. 

Thanks for testing, fix submitted for review.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: [PATCH 09/13] drm/amdgpu:fix gmc_v9 vm fault process for SRIOV

2017-03-26 Thread Zhang, Jerry (Junwei)

On 03/24/2017 06:38 PM, Monk Liu wrote:

For SRIOV we cannot access registers in the IRQ routine via the regular KIQ method.

Change-Id: Ifae3164cf12311b851ae131f58175f6ec3174f82
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 24 
  1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 51a1919..88221bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -138,20 +138,28 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
addr = (u64)entry->src_data[0] << 12;
addr |= ((u64)entry->src_data[1] & 0xf) << 44;

-   if (entry->vm_id_src) {
-   status = RREG32(mmhub->vm_l2_pro_fault_status);
-   WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
-   } else {
-   status = RREG32(gfxhub->vm_l2_pro_fault_status);
-   WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
-   }
+   if (!amdgpu_sriov_vf(adev)) {
+   if (entry->vm_id_src) {
+   status = RREG32(mmhub->vm_l2_pro_fault_status);
+   WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
+   } else {
+   status = RREG32(gfxhub->vm_l2_pro_fault_status);
+   WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
+   }


Even though SRIOV doesn't use the status info, does it still need to clear the VM L2 fault cntl regs?


If not,
Reviewed-by: Junwei Zhang 



-   DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
+   DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) 
"
  "at page 0x%016llx from %d\n"
  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
  entry->vm_id_src ? "mmhub" : "gfxhub",
  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
  addr, entry->client_id, status);
+   } else {
+   DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) 
"
+ "at page 0x%016llx from %d\n",
+ entry->vm_id_src ? "mmhub" : "gfxhub",
+ entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
+ addr, entry->client_id);
+   }

return 0;
  }




[PATCH 00/15] *** Multiple level VMPT enablement ***

2017-03-26 Thread Chunming Zhou
Starting with Vega, ASICs support multi-level VMPTs; this series implements that support.

Tested successfully with 2/3/4 levels. 

V2: address Christian's comments.

Max VM size of 256TB tested OK.
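
For orientation, the per-level sizing the series ends up with. This is
my arithmetic, using the formulas introduced in patch 05 below, with
amdgpu_vm_block_size = 9 and num_level = 3 for the 256TB case:

	max_pfn      = 2^48 bytes / 4KB page  = 2^36 pages
	root entries = max_pfn >> (9 * 3)     = 512  (level 0)
	PD entries   = 1 << 9                 = 512  (levels 1..2)
	PT entries   = AMDGPU_VM_PTE_COUNT    = 512  (level 3, leaves)
	BO size      = entries * 8 bytes      = 4KB at every level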


Christian König (10):
  drm/amdgpu: rename page_directory_fence to last_dir_update
  drm/amdgpu: add the VM pointer to the amdgpu_pte_update_params as well
  drm/amdgpu: add num_level to the VM manager
  drm/amdgpu: generalize page table level
  drm/amdgpu: handle multi level PD size calculation
  drm/amdgpu: handle multi level PD during validation
  drm/amdgpu: handle multi level PD in the LRU
  drm/amdgpu: handle multi level PD updates V2
  drm/amdgpu: handle multi level PD during PT updates
  drm/amdgpu: add alloc/free for multi level PDs V2

Chunming Zhou (5):
  drm/amdgpu: abstract block size to one function
  drm/amdgpu: limit block size to one page
  drm/amdgpu: adapt vm size for multi vmpt
  drm/amdgpu: set page table depth by num_level
  drm/amdgpu: enable four level VMPT for gmc9

 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  67 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c|   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 474 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  16 +-
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c   |   3 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c  |   1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c  |   1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c  |   1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  |   7 +
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c|   2 +-
 11 files changed, 380 insertions(+), 200 deletions(-)

-- 
1.9.1



[PATCH 01/15] drm/amdgpu: rename page_directory_fence to last_dir_update

2017-03-26 Thread Chunming Zhou
From: Christian König 

Describes better what this is used for.

Change-Id: I1bd496522fbdd6531d2c1d17434822b53bec06d0
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index f225d63..0e5d851 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -791,7 +791,7 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
if (r)
return r;
 
-   r = amdgpu_sync_fence(adev, &p->job->sync, vm->page_directory_fence);
+   r = amdgpu_sync_fence(adev, &p->job->sync, vm->last_dir_update);
if (r)
return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 01418c8..66f5b91 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -705,8 +705,8 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
goto error_free;
 
amdgpu_bo_fence(vm->page_directory, fence, true);
-   fence_put(vm->page_directory_fence);
-   vm->page_directory_fence = fence_get(fence);
+   fence_put(vm->last_dir_update);
+   vm->last_dir_update = fence_get(fence);
fence_put(fence);
 
return 0;
@@ -1596,7 +1596,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
if (r)
goto err;
 
-   vm->page_directory_fence = NULL;
+   vm->last_dir_update = NULL;
 
r = amdgpu_bo_create(adev, pd_size, align, true,
 AMDGPU_GEM_DOMAIN_VRAM,
@@ -1673,7 +1673,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
amdgpu_bo_unref(&vm->page_directory->shadow);
amdgpu_bo_unref(&vm->page_directory);
-   fence_put(vm->page_directory_fence);
+   fence_put(vm->last_dir_update);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1a7922b..6be6c71 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -97,7 +97,7 @@ struct amdgpu_vm {
/* contains the page directory */
struct amdgpu_bo*page_directory;
unsignedmax_pde_used;
-   struct fence*page_directory_fence;
+   struct fence*last_dir_update;
uint64_tlast_eviction_counter;
 
/* array of page tables, one for each page directory entry */
-- 
1.9.1



[PATCH 02/15] drm/amdgpu: add the VM pointer to the amdgpu_pte_update_params as well

2017-03-26 Thread Chunming Zhou
From: Christian König 

This way we save passing it through the different functions.

Change-Id: Id94564a70d106b0ef36c7f45de2b25ca176db2d2
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 21 +++--
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 66f5b91..1f27300 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -61,6 +61,8 @@
 struct amdgpu_pte_update_params {
/* amdgpu device we do this update for */
struct amdgpu_device *adev;
+   /* optional amdgpu_vm we do this update for */
+   struct amdgpu_vm *vm;
/* address where to copy page table entries from */
uint64_t src;
/* indirect buffer to fill with commands */
@@ -729,7 +731,6 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
  * Update the page tables in the range @start - @end.
  */
 static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
- struct amdgpu_vm *vm,
  uint64_t start, uint64_t end,
  uint64_t dst, uint64_t flags)
 {
@@ -745,7 +746,7 @@ static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
/* initialize the variables */
addr = start;
pt_idx = addr >> amdgpu_vm_block_size;
-   pt = vm->page_tables[pt_idx].bo;
+   pt = params->vm->page_tables[pt_idx].bo;
if (params->shadow) {
if (!pt->shadow)
return;
@@ -768,7 +769,7 @@ static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
/* walk over the address space and update the page tables */
while (addr < end) {
pt_idx = addr >> amdgpu_vm_block_size;
-   pt = vm->page_tables[pt_idx].bo;
+   pt = params->vm->page_tables[pt_idx].bo;
if (params->shadow) {
if (!pt->shadow)
return;
@@ -819,7 +820,6 @@ static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
  * @flags: hw mapping flags
  */
static void amdgpu_vm_frag_ptes(struct amdgpu_pte_update_params *params,
-   struct amdgpu_vm *vm,
uint64_t start, uint64_t end,
uint64_t dst, uint64_t flags)
 {
@@ -853,25 +853,25 @@ static void amdgpu_vm_frag_ptes(struct amdgpu_pte_update_params *params,
if (params->src || !(flags & AMDGPU_PTE_VALID) ||
(frag_start >= frag_end)) {
 
-   amdgpu_vm_update_ptes(params, vm, start, end, dst, flags);
+   amdgpu_vm_update_ptes(params, start, end, dst, flags);
return;
}
 
/* handle the 4K area at the beginning */
if (start != frag_start) {
-   amdgpu_vm_update_ptes(params, vm, start, frag_start,
+   amdgpu_vm_update_ptes(params, start, frag_start,
  dst, flags);
dst += (frag_start - start) * AMDGPU_GPU_PAGE_SIZE;
}
 
/* handle the area in the middle */
-   amdgpu_vm_update_ptes(params, vm, frag_start, frag_end, dst,
+   amdgpu_vm_update_ptes(params, frag_start, frag_end, dst,
  flags | frag_flags);
 
/* handle the 4K area at the end */
if (frag_end != end) {
dst += (frag_end - frag_start) * AMDGPU_GPU_PAGE_SIZE;
-   amdgpu_vm_update_ptes(params, vm, frag_end, end, dst, flags);
+   amdgpu_vm_update_ptes(params, frag_end, end, dst, flags);
}
 }
 
@@ -911,6 +911,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
 
memset(&params, 0, sizeof(params));
params.adev = adev;
+   params.vm = vm;
params.src = src;
 
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
@@ -992,9 +993,9 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
goto error_free;
 
params.shadow = true;
-   amdgpu_vm_frag_ptes(&params, vm, start, last + 1, addr, flags);
+   amdgpu_vm_frag_ptes(&params, start, last + 1, addr, flags);
params.shadow = false;
-   amdgpu_vm_frag_ptes(&params, vm, start, last + 1, addr, flags);
+   amdgpu_vm_frag_ptes(&params, start, last + 1, addr, flags);
 
amdgpu_ring_pad_ib(ring, params.ib);
WARN_ON(params.ib->length_dw > ndw);
-- 
1.9.1



[PATCH 04/15] drm/amdgpu: generalize page table level

2017-03-26 Thread Chunming Zhou
From: Christian König 

No functional change, but the base for multi level page tables.

Change-Id: If5729be07e15cc8618ae7bce15c6b27aa4f24393
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 87 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  9 ++--
 3 files changed, 50 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 0e5d851..d9308cf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -873,7 +873,7 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
}
 
if (p->job->vm) {
-   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->page_directory);
+   p->job->vm_pd_addr = amdgpu_bo_gpu_offset(vm->root.bo);
 
r = amdgpu_bo_vm_update_pte(p, vm);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 1f27300..9172954 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -113,9 +113,9 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 struct list_head *validated,
 struct amdgpu_bo_list_entry *entry)
 {
-   entry->robj = vm->page_directory;
+   entry->robj = vm->root.bo;
entry->priority = 0;
-   entry->tv.bo = &vm->page_directory->tbo;
+   entry->tv.bo = &entry->robj->tbo;
entry->tv.shared = true;
entry->user_pages = NULL;
list_add(&entry->tv.head, validated);
@@ -147,8 +147,8 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
return 0;
 
/* add the vm page table to the list */
-   for (i = 0; i <= vm->max_pde_used; ++i) {
-   struct amdgpu_bo *bo = vm->page_tables[i].bo;
+   for (i = 0; i <= vm->root.last_entry_used; ++i) {
+   struct amdgpu_bo *bo = vm->root.entries[i].bo;
 
if (!bo)
continue;
@@ -176,8 +176,8 @@ void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
unsigned i;
 
spin_lock(&glob->lru_lock);
-   for (i = 0; i <= vm->max_pde_used; ++i) {
-   struct amdgpu_bo *bo = vm->page_tables[i].bo;
+   for (i = 0; i <= vm->root.last_entry_used; ++i) {
+   struct amdgpu_bo *bo = vm->root.entries[i].bo;
 
if (!bo)
continue;
@@ -597,15 +597,15 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
int r;
 
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
-   shadow = vm->page_directory->shadow;
+   shadow = vm->root.bo->shadow;
 
/* padding, etc. */
ndw = 64;
 
/* assume the worst case */
-   ndw += vm->max_pde_used * 6;
+   ndw += vm->root.last_entry_used * 6;
 
-   pd_addr = amdgpu_bo_gpu_offset(vm->page_directory);
+   pd_addr = amdgpu_bo_gpu_offset(vm->root.bo);
if (shadow) {
r = amdgpu_ttm_bind(&shadow->tbo, &shadow->tbo.mem);
if (r)
@@ -625,8 +625,8 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
params.ib = &job->ibs[0];
 
/* walk over the address space and update the page directory */
-   for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
-   struct amdgpu_bo *bo = vm->page_tables[pt_idx].bo;
+   for (pt_idx = 0; pt_idx <= vm->root.last_entry_used; ++pt_idx) {
+   struct amdgpu_bo *bo = vm->root.entries[pt_idx].bo;
uint64_t pde, pt;
 
if (bo == NULL)
@@ -642,10 +642,10 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
}
 
pt = amdgpu_bo_gpu_offset(bo);
-   if (vm->page_tables[pt_idx].addr == pt)
+   if (vm->root.entries[pt_idx].addr == pt)
continue;
 
-   vm->page_tables[pt_idx].addr = pt;
+   vm->root.entries[pt_idx].addr = pt;
 
pde = pd_addr + pt_idx * 8;
if (((last_pde + 8 * count) != pde) ||
@@ -680,7 +680,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
if (count) {
uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
 
-   if (vm->page_directory->shadow)
+   if (vm->root.bo->shadow)
amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
  count, incr, AMDGPU_PTE_VALID);
 
@@ -694,7 +694,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
}
 
amdgpu_ring_pad_ib(ring, params.ib);
-   amdgpu_sync_resv(adev, &job->sync, vm->page_directory->tbo.resv,
+   amdgpu_sync_resv(adev, &job->sync, vm->

[PATCH 05/15] drm/amdgpu: handle multi level PD size calculation

2017-03-26 Thread Chunming Zhou
From: Christian König 

Allows us to get the size for all levels as well.

Change-Id: Iaf2f9b2bf19c3623018a2215f8cf01a61bdbe8ea
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 ++
 1 file changed, 22 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9172954..90494ce 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -76,27 +76,37 @@ struct amdgpu_pte_update_params {
 };
 
 /**
- * amdgpu_vm_num_pde - return the number of page directory entries
+ * amdgpu_vm_num_entries - return the number of entries in a PD/PT
  *
  * @adev: amdgpu_device pointer
  *
- * Calculate the number of page directory entries.
+ * Calculate the number of entries in a page directory or page table.
  */
-static unsigned amdgpu_vm_num_pdes(struct amdgpu_device *adev)
+static unsigned amdgpu_vm_num_entries(struct amdgpu_device *adev,
+ unsigned level)
 {
-   return adev->vm_manager.max_pfn >> amdgpu_vm_block_size;
+   if (level == 0)
+   /* For the root directory */
+   return adev->vm_manager.max_pfn >>
+   (amdgpu_vm_block_size * adev->vm_manager.num_level);
+   else if (level == adev->vm_manager.num_level)
+   /* For the page tables on the leaves */
+   return AMDGPU_VM_PTE_COUNT;
+   else
+   /* Everything in between */
+   return 1 << amdgpu_vm_block_size;
 }
 
 /**
- * amdgpu_vm_directory_size - returns the size of the page directory in bytes
+ * amdgpu_vm_bo_size - returns the size of the BOs in bytes
  *
  * @adev: amdgpu_device pointer
  *
- * Calculate the size of the page directory in bytes.
+ * Calculate the size of the BO for a page directory or page table in bytes.
  */
-static unsigned amdgpu_vm_directory_size(struct amdgpu_device *adev)
+static unsigned amdgpu_vm_bo_size(struct amdgpu_device *adev, unsigned level)
 {
-   return AMDGPU_GPU_PAGE_ALIGN(amdgpu_vm_num_pdes(adev) * 8);
+   return AMDGPU_GPU_PAGE_ALIGN(amdgpu_vm_num_entries(adev, level) * 8);
 }
 
 /**
@@ -1393,7 +1403,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
saddr >>= amdgpu_vm_block_size;
eaddr >>= amdgpu_vm_block_size;
 
-   BUG_ON(eaddr >= amdgpu_vm_num_pdes(adev));
+   BUG_ON(eaddr >= amdgpu_vm_num_entries(adev, 0));
 
if (eaddr > vm->root.last_entry_used)
vm->root.last_entry_used = eaddr;
@@ -1576,8 +1586,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
INIT_LIST_HEAD(&vm->cleared);
INIT_LIST_HEAD(&vm->freed);
 
-   pd_size = amdgpu_vm_directory_size(adev);
-   pd_entries = amdgpu_vm_num_pdes(adev);
+   pd_size = amdgpu_vm_bo_size(adev, 0);
+   pd_entries = amdgpu_vm_num_entries(adev, 0);
 
/* allocate page table array */
vm->root.entries = drm_calloc_large(pd_entries,
@@ -1662,7 +1672,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
kfree(mapping);
}
 
-   for (i = 0; i < amdgpu_vm_num_pdes(adev); i++) {
+   for (i = 0; i < amdgpu_vm_num_entries(adev, 0); i++) {
struct amdgpu_bo *pt = vm->root.entries[i].bo;
 
if (!pt)
-- 
1.9.1



[PATCH 08/15] drm/amdgpu: handle multi level PD updates V2

2017-03-26 Thread Chunming Zhou
From: Christian König 

Update all levels of the page directory.

V2:
a. sub-level PDEs were always written to the wrong place.
b. sub-levels need to be updated regardless of parent updates.

Change-Id: I0ce3fc1fd88397aedf693b0b6e2efb2db704e615
Signed-off-by: Christian König  (V1)
Reviewed-by: Alex Deucher  (V1)
Signed-off-by: Chunming Zhou  (V2)
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 97 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  4 +-
 4 files changed, 68 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d9308cf..de1c4c3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -787,7 +787,7 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
struct amdgpu_bo *bo;
int i, r;
 
-   r = amdgpu_vm_update_page_directory(adev, vm);
+   r = amdgpu_vm_update_directories(adev, vm);
if (r)
return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 48ab967..008b8ab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -691,7 +691,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
if (r)
goto error;
 
-   r = amdgpu_vm_update_page_directory(adev, bo_va->vm);
+   r = amdgpu_vm_update_directories(adev, bo_va->vm);
if (r)
goto error;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index fe3db17..5a62a53 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -625,24 +625,24 @@ static uint64_t amdgpu_vm_map_gart(const dma_addr_t *pages_addr, uint64_t addr)
 }
 
 /*
- * amdgpu_vm_update_pdes - make sure that page directory is valid
+ * amdgpu_vm_update_level - update a single level in the hierarchy
  *
  * @adev: amdgpu_device pointer
  * @vm: requested vm
- * @start: start of GPU address range
- * @end: end of GPU address range
+ * @parent: parent directory
  *
- * Allocates new page tables if necessary
- * and updates the page directory.
+ * Makes sure all entries in @parent are up to date.
  * Returns 0 for success, error for failure.
  */
-int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
-   struct amdgpu_vm *vm)
+static int amdgpu_vm_update_level(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm,
+ struct amdgpu_vm_pt *parent,
+ unsigned level)
 {
struct amdgpu_bo *shadow;
struct amdgpu_ring *ring;
uint64_t pd_addr, shadow_addr;
-   uint32_t incr = AMDGPU_VM_PTE_COUNT * 8;
+   uint32_t incr = amdgpu_vm_bo_size(adev, level + 1);
uint64_t last_pde = ~0, last_pt = ~0, last_shadow = ~0;
unsigned count = 0, pt_idx, ndw;
struct amdgpu_job *job;
@@ -651,16 +651,19 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 
int r;
 
+   if (!parent->entries)
+   return 0;
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
-   shadow = vm->root.bo->shadow;
 
/* padding, etc. */
ndw = 64;
 
/* assume the worst case */
-   ndw += vm->root.last_entry_used * 6;
+   ndw += parent->last_entry_used * 6;
+
+   pd_addr = amdgpu_bo_gpu_offset(parent->bo);
 
-   pd_addr = amdgpu_bo_gpu_offset(vm->root.bo);
+   shadow = parent->bo->shadow;
if (shadow) {
r = amdgpu_ttm_bind(&shadow->tbo, &shadow->tbo.mem);
if (r)
@@ -679,9 +682,9 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
params.adev = adev;
params.ib = &job->ibs[0];
 
-   /* walk over the address space and update the page directory */
-   for (pt_idx = 0; pt_idx <= vm->root.last_entry_used; ++pt_idx) {
-   struct amdgpu_bo *bo = vm->root.entries[pt_idx].bo;
+   /* walk over the address space and update the directory */
+   for (pt_idx = 0; pt_idx <= parent->last_entry_used; ++pt_idx) {
+   struct amdgpu_bo *bo = parent->entries[pt_idx].bo;
uint64_t pde, pt;
 
if (bo == NULL)
@@ -697,10 +700,10 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
}
 
pt = amdgpu_bo_gpu_offset(bo);
-   if (vm->root.entries[pt_idx].addr == pt)
+   if (parent->entries[pt_idx].addr == pt)
continue;
 
-   vm->root.entries[pt_idx].addr = pt;
+   parent->entries[pt_idx].addr = pt;
 
pde = pd_addr + pt_idx * 8;
if (((last_pde + 8 * co
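
The diff is cut off above. For orientation only, a hedged sketch (mine,
not the verbatim hunks) of the recursive driver this patch introduces;
names follow the series, the actual hunks may differ:

	/* Sketch: update one directory level, then recurse into each
	 * allocated sub-level regardless of whether the parent itself
	 * changed (V2 note b above). */
	int amdgpu_vm_update_directories(struct amdgpu_device *adev,
					 struct amdgpu_vm *vm)
	{
		return amdgpu_vm_update_level(adev, vm, &vm->root, 0);
	}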

[PATCH 07/15] drm/amdgpu: handle multi level PD in the LRU

2017-03-26 Thread Chunming Zhou
From: Christian König 

Move all levels to the end after command submission.

Change-Id: I6d41aac90be29476780b897cf5943a2261580a78
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 36 +-
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 23674ed..fe3db17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -199,28 +199,46 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 }
 
 /**
- * amdgpu_vm_move_pt_bos_in_lru - move the PT BOs to the LRU tail
+ * amdgpu_vm_move_level_in_lru - move one level of PT BOs to the LRU tail
  *
  * @adev: amdgpu device instance
  * @vm: vm providing the BOs
  *
  * Move the PT BOs to the tail of the LRU.
  */
-void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
- struct amdgpu_vm *vm)
+static void amdgpu_vm_move_level_in_lru(struct amdgpu_vm_pt *parent)
 {
-   struct ttm_bo_global *glob = adev->mman.bdev.glob;
unsigned i;
 
-   spin_lock(&glob->lru_lock);
-   for (i = 0; i <= vm->root.last_entry_used; ++i) {
-   struct amdgpu_bo *bo = vm->root.entries[i].bo;
+   if (!parent->entries)
+   return;
 
-   if (!bo)
+   for (i = 0; i <= parent->last_entry_used; ++i) {
+   struct amdgpu_vm_pt *entry = &parent->entries[i];
+
+   if (!entry->bo)
continue;
 
-   ttm_bo_move_to_lru_tail(&bo->tbo);
+   ttm_bo_move_to_lru_tail(&entry->bo->tbo);
+   amdgpu_vm_move_level_in_lru(entry);
}
+}
+
+/**
+ * amdgpu_vm_move_pt_bos_in_lru - move the PT BOs to the LRU tail
+ *
+ * @adev: amdgpu device instance
+ * @vm: vm providing the BOs
+ *
+ * Move the PT BOs to the tail of the LRU.
+ */
+void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm)
+{
+   struct ttm_bo_global *glob = adev->mman.bdev.glob;
+
+   spin_lock(&glob->lru_lock);
+   amdgpu_vm_move_level_in_lru(&vm->root);
spin_unlock(&glob->lru_lock);
 }
 
-- 
1.9.1



[PATCH 09/15] drm/amdgpu: handle multi level PD during PT updates

2017-03-26 Thread Chunming Zhou
From: Christian König 

Not the best solution, but good enough for now.

Change-Id: I45ac1a9d8513ebe51bce9a276da39ddf3524b058
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 39 +-
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 5a62a53..280fa19 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -805,6 +805,32 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
 }
 
 /**
+ * amdgpu_vm_find_pt - find the page table for an address
+ *
+ * @p: see amdgpu_pte_update_params definition
+ * @addr: virtual address in question
+ *
+ * Find the page table BO for a virtual address, return NULL when none found.
+ */
+static struct amdgpu_bo *amdgpu_vm_get_pt(struct amdgpu_pte_update_params *p,
+ uint64_t addr)
+{
+   struct amdgpu_vm_pt *entry = &p->vm->root;
+   unsigned idx, level = p->adev->vm_manager.num_level;
+
+   while (entry->entries) {
+   idx = addr >> (amdgpu_vm_block_size * level--);
+   idx %= amdgpu_bo_size(entry->bo) / 8;
+   entry = &entry->entries[idx];
+   }
+
+   if (level)
+   return NULL;
+
+   return entry->bo;
+}
+
+/**
  * amdgpu_vm_update_ptes - make sure that page tables are valid
  *
  * @params: see amdgpu_pte_update_params definition
@@ -824,15 +850,16 @@ static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
 
uint64_t cur_pe_start, cur_nptes, cur_dst;
uint64_t addr; /* next GPU address to be updated */
-   uint64_t pt_idx;
struct amdgpu_bo *pt;
unsigned nptes; /* next number of ptes to be updated */
uint64_t next_pe_start;
 
/* initialize the variables */
addr = start;
-   pt_idx = addr >> amdgpu_vm_block_size;
-   pt = params->vm->root.entries[pt_idx].bo;
+   pt = amdgpu_vm_get_pt(params, addr);
+   if (!pt)
+   return;
+
if (params->shadow) {
if (!pt->shadow)
return;
@@ -854,8 +881,10 @@ static void amdgpu_vm_update_ptes(struct amdgpu_pte_update_params *params,
 
/* walk over the address space and update the page tables */
while (addr < end) {
-   pt_idx = addr >> amdgpu_vm_block_size;
-   pt = params->vm->root.entries[pt_idx].bo;
+   pt = amdgpu_vm_get_pt(params, addr);
+   if (!pt)
+   return;
+
if (params->shadow) {
if (!pt->shadow)
return;
-- 
1.9.1
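
A worked walk through amdgpu_vm_get_pt() (my example; with block_size =
9 and num_level = 3, `level` starts at 3 and the shifts are 27/18/9):

	/* addr is a GPU page number */
	root:    idx = addr >> 27;            /* selects a level-1 PD */
	level 1: idx = (addr >> 18) % 512;    /* selects a level-2 PD */
	level 2: idx = (addr >> 9)  % 512;    /* selects a leaf PT    */
	/* the leaf PT has entry->entries == NULL, ending the loop with
	 * level == 0, so entry->bo (the PT to update) is returned */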



[PATCH 06/15] drm/amdgpu: handle multi level PD during validation

2017-03-26 Thread Chunming Zhou
From: Christian König 

All page directory levels should be in place after this.

Change-Id: Ied101d6e14676acc07fe2d46ecba4563007b5045
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 57 +-
 1 file changed, 42 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 90494ce..23674ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -132,6 +132,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 }
 
 /**
+ * amdgpu_vm_validate_level - validate a single page table level
+ *
+ * @parent: parent page table level
+ * @validate: callback to do the validation
+ * @param: parameter for the validation callback
+ *
+ * Validate the page table BOs on command submission if necessary.
+ */
+static int amdgpu_vm_validate_level(struct amdgpu_vm_pt *parent,
+   int (*validate)(void *, struct amdgpu_bo *),
+   void *param)
+{
+   unsigned i;
+   int r;
+
+   if (!parent->entries)
+   return 0;
+
+   for (i = 0; i <= parent->last_entry_used; ++i) {
+   struct amdgpu_vm_pt *entry = &parent->entries[i];
+
+   if (!entry->bo)
+   continue;
+
+   r = validate(param, entry->bo);
+   if (r)
+   return r;
+
+   /*
+* Recurse into the sub directory. This is harmless because we
+* have only a maximum of 5 layers.
+*/
+   r = amdgpu_vm_validate_level(entry, validate, param);
+   if (r)
+   return r;
+   }
+
+   return r;
+}
+
+/**
  * amdgpu_vm_validate_pt_bos - validate the page table BOs
  *
  * @adev: amdgpu device pointer
@@ -146,8 +187,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
  void *param)
 {
uint64_t num_evictions;
-   unsigned i;
-   int r;
 
/* We only need to validate the page tables
 * if they aren't already valid.
@@ -156,19 +195,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (num_evictions == vm->last_eviction_counter)
return 0;
 
-   /* add the vm page table to the list */
-   for (i = 0; i <= vm->root.last_entry_used; ++i) {
-   struct amdgpu_bo *bo = vm->root.entries[i].bo;
-
-   if (!bo)
-   continue;
-
-   r = validate(param, bo);
-   if (r)
-   return r;
-   }
-
-   return 0;
+   return amdgpu_vm_validate_level(&vm->root, validate, param);
 }
 
 /**
-- 
1.9.1



[PATCH 12/15] drm/amdgpu: limit block size to one page

2017-03-26 Thread Chunming Zhou
Change-Id: I00ff5d2c7ff29563394cb8af4c57780b91876743
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7bad6b6..4041d72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1079,6 +1079,14 @@ static bool amdgpu_check_pot_argument(int arg)
 
 static void amdgpu_get_block_size(struct amdgpu_device *adev)
 {
+   /* from AI, asic starts to support multiple level VMPT */
+   if (adev->family >= AMDGPU_FAMILY_AI) {
+   if (amdgpu_vm_block_size != 9)
+   dev_warn(adev->dev, "Multi-VMPT limits block size to"
+"one page!\n");
+   amdgpu_vm_block_size = 9;
+   return;
+   }
/* defines number of bits in page table versus page directory,
 * a page is 4KB so we have 12 bits offset, minimum 9 bits in the
 * page table and the remaining bits are in the page directory */
-- 
1.9.1
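
Why 9 bits equals "one page" (my arithmetic, not from the patch): a
block size of 9 means 2^9 = 512 PTEs per page table, and 512 entries *
8 bytes/entry = 4096 bytes, exactly one 4KB page per PD/PT BO in the
multi-level layout.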



[PATCH 10/15] drm/amdgpu: add alloc/free for multi level PDs V2

2017-03-26 Thread Chunming Zhou
From: Christian König 

Allocate and free page directories on demand.

V2:
a. clear the entries on allocation
b. fix the entry index calculation
c. allocate sub-levels even when the parent BO was already allocated

Change-Id: I341b72b911377033257af888dd1a96ca54f586e9
Signed-off-by: Christian König  (v1)
Reviewed-by: Alex Deucher  (v1)
Signed-off-by: Chunming Zhou  (v2)
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 179 -
 1 file changed, 108 insertions(+), 71 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 280fa19..7f54502 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1431,6 +1431,84 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
return bo_va;
 }
 
+ /**
+ * amdgpu_vm_alloc_levels - allocate the PD/PT levels
+ *
+ * @adev: amdgpu_device pointer
+ * @vm: requested vm
+ * @saddr: start of the address range
+ * @eaddr: end of the address range
+ *
+ * Make sure the page directories and page tables are allocated
+ */
+static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm,
+ struct amdgpu_vm_pt *parent,
+ uint64_t saddr, uint64_t eaddr,
+ unsigned level)
+{
+   unsigned shift = (adev->vm_manager.num_level - level) *
+   amdgpu_vm_block_size;
+   unsigned pt_idx, from, to;
+   int r;
+
+   if (!parent->entries) {
+   unsigned num_entries = amdgpu_vm_num_entries(adev, level);
+
+   parent->entries = drm_calloc_large(num_entries,
+  sizeof(struct amdgpu_vm_pt));
+   if (!parent->entries)
+   return -ENOMEM;
+   memset(parent->entries, 0 , sizeof(struct amdgpu_vm_pt));
+   }
+
+   from = (saddr >> shift) % amdgpu_vm_num_entries(adev, level);
+   to = (eaddr >> shift) % amdgpu_vm_num_entries(adev, level);
+
+   if (to > parent->last_entry_used)
+   parent->last_entry_used = to;
+
+   ++level;
+
+   /* walk over the address space and allocate the page tables */
+   for (pt_idx = from; pt_idx <= to; ++pt_idx) {
+   struct reservation_object *resv = vm->root.bo->tbo.resv;
+   struct amdgpu_vm_pt *entry = &parent->entries[pt_idx];
+   struct amdgpu_bo *pt;
+
+   if (!entry->bo) {
+   r = amdgpu_bo_create(adev,
+amdgpu_vm_bo_size(adev, level),
+AMDGPU_GPU_PAGE_SIZE, true,
+AMDGPU_GEM_DOMAIN_VRAM,
+AMDGPU_GEM_CREATE_NO_CPU_ACCESS |
+AMDGPU_GEM_CREATE_SHADOW |
+AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+AMDGPU_GEM_CREATE_VRAM_CLEARED,
+NULL, resv, &pt);
+   if (r)
+   return r;
+
+   /* Keep a reference to the root directory to avoid
+   * freeing them up in the wrong order.
+   */
+   pt->parent = amdgpu_bo_ref(vm->root.bo);
+
+   entry->bo = pt;
+   entry->addr = 0;
+   }
+
+   if (level < adev->vm_manager.num_level) {
+   r = amdgpu_vm_alloc_levels(adev, vm, entry, saddr,
+  eaddr, level);
+   if (r)
+   return r;
+   }
+   }
+
+   return 0;
+}
+
 /**
  * amdgpu_vm_bo_map - map bo inside a vm
  *
@@ -1453,7 +1531,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
struct amdgpu_bo_va_mapping *mapping;
struct amdgpu_vm *vm = bo_va->vm;
struct interval_tree_node *it;
-   unsigned last_pfn, pt_idx;
+   unsigned last_pfn;
uint64_t eaddr;
int r;
 
@@ -1504,46 +1582,10 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
list_add(&mapping->list, &bo_va->invalids);
interval_tree_insert(&mapping->it, &vm->va);
 
-   /* Make sure the page tables are allocated */
-   saddr >>= amdgpu_vm_block_size;
-   eaddr >>= amdgpu_vm_block_size;
-
-   BUG_ON(eaddr >= amdgpu_vm_num_entries(adev, 0));
-
-   if (eaddr > vm->root.last_entry_used)
-   vm->root.last_entry_used = eaddr;
-
-   /* walk over the address space and allocate the page tables */
-   for (pt_idx = saddr; pt_idx <= eaddr; ++pt_idx) {
-   struct reservation_object *resv = vm->root.bo->tbo.resv;
-   struct amdgpu_bo *pt;
-
-   
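
The hunk is truncated above. For orientation only, a hedged sketch
(mine, not the verbatim patch) of what replaces the open-coded loop in
amdgpu_vm_bo_map(): a single call into the new recursive allocator,
starting at the root:

	r = amdgpu_vm_alloc_levels(adev, vm, &vm->root, saddr, eaddr, 0);
	if (r)
		return r;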

[PATCH 11/15] drm/amdgpu: abstract block size to one function

2017-03-26 Thread Chunming Zhou
Change-Id: I7709a0f7af1365a147659aa0a02b1d41f53af40a
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 59 --
 1 file changed, 32 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index b0ac610..7bad6b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1077,6 +1077,37 @@ static bool amdgpu_check_pot_argument(int arg)
return (arg & (arg - 1)) == 0;
 }
 
+static void amdgpu_get_block_size(struct amdgpu_device *adev)
+{
+   /* defines number of bits in page table versus page directory,
+* a page is 4KB so we have 12 bits offset, minimum 9 bits in the
+* page table and the remaining bits are in the page directory */
+   if (amdgpu_vm_block_size == -1) {
+
+   /* Total bits covered by PD + PTs */
+   unsigned bits = ilog2(amdgpu_vm_size) + 18;
+
+   /* Make sure the PD is 4K in size up to 8GB address space.
+  Above that split equal between PD and PTs */
+   if (amdgpu_vm_size <= 8)
+   amdgpu_vm_block_size = bits - 9;
+   else
+   amdgpu_vm_block_size = (bits + 3) / 2;
+
+   } else if (amdgpu_vm_block_size < 9) {
+   dev_warn(adev->dev, "VM page table size (%d) too small\n",
+amdgpu_vm_block_size);
+   amdgpu_vm_block_size = 9;
+   }
+
+   if (amdgpu_vm_block_size > 24 ||
+   (amdgpu_vm_size * 1024) < (1ull << amdgpu_vm_block_size)) {
+   dev_warn(adev->dev, "VM page table size (%d) too large\n",
+amdgpu_vm_block_size);
+   amdgpu_vm_block_size = 9;
+   }
+}
+
 /**
  * amdgpu_check_arguments - validate module params
  *
@@ -1127,33 +1158,7 @@ static void amdgpu_check_arguments(struct amdgpu_device *adev)
amdgpu_vm_size = 8;
}
 
-   /* defines number of bits in page table versus page directory,
-* a page is 4KB so we have 12 bits offset, minimum 9 bits in the
-* page table and the remaining bits are in the page directory */
-   if (amdgpu_vm_block_size == -1) {
-
-   /* Total bits covered by PD + PTs */
-   unsigned bits = ilog2(amdgpu_vm_size) + 18;
-
-   /* Make sure the PD is 4K in size up to 8GB address space.
-  Above that split equal between PD and PTs */
-   if (amdgpu_vm_size <= 8)
-   amdgpu_vm_block_size = bits - 9;
-   else
-   amdgpu_vm_block_size = (bits + 3) / 2;
-
-   } else if (amdgpu_vm_block_size < 9) {
-   dev_warn(adev->dev, "VM page table size (%d) too small\n",
-amdgpu_vm_block_size);
-   amdgpu_vm_block_size = 9;
-   }
-
-   if (amdgpu_vm_block_size > 24 ||
-   (amdgpu_vm_size * 1024) < (1ull << amdgpu_vm_block_size)) {
-   dev_warn(adev->dev, "VM page table size (%d) too large\n",
-amdgpu_vm_block_size);
-   amdgpu_vm_block_size = 9;
-   }
+   amdgpu_get_block_size(adev);
 
if (amdgpu_vram_page_split != -1 && (amdgpu_vram_page_split < 16 ||
!amdgpu_check_pot_argument(amdgpu_vram_page_split))) {
-- 
1.9.1
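
A worked pass through the heuristic being moved here (my numbers): with
amdgpu_vm_size = 8 (GB), the space holds 8GB / 4KB = 2^21 pages, so
bits = ilog2(8) + 18 = 21. Since vm_size <= 8, block_size = 21 - 9 = 12,
leaving 21 - 12 = 9 bits for the PD: 512 entries * 8 bytes, the 4KB
directory the comment promises.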



[PATCH 15/15] drm/amdgpu: enable four level VMPT for gmc9

2017-03-26 Thread Chunming Zhou
Change-Id: I3bb5f77f0d1b715247bb2bbaf6bce3087883b5ce
Signed-off-by: Chunming Zhou 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 613c8f6..1da16ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -514,7 +514,7 @@ static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
DRM_WARN("vm size at least is 256GB!\n");
amdgpu_vm_size = 256;
}
-   adev->vm_manager.num_level = 1;
+   adev->vm_manager.num_level = 3;
amdgpu_vm_manager_init(adev);
 
return 0;
-- 
1.9.1
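
Depth check (my arithmetic): num_level = 3 counts the levels below the
root, each resolving amdgpu_vm_block_size = 9 bits. For the 256TB case
the root resolves 9 bits as well, so 9 (root) + 3*9 (sub-levels) + 12
(page offset) = 48 address bits = 256TB, matching the "Max VM size of
256TB tested OK" note in the cover letter.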



[PATCH 03/15] drm/amdgpu: add num_level to the VM manager

2017-03-26 Thread Chunming Zhou
From: Christian König 

Needs to be filled with handling.

Change-Id: I04881a2b304a020c259ce85e94b12900a77f1c02
Signed-off-by: Christian König 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  | 1 +
 5 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 6be6c71..e208186f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -151,6 +151,7 @@ struct amdgpu_vm_manager {
unsignedseqno[AMDGPU_MAX_RINGS];
 
uint32_tmax_pfn;
+   uint32_tnum_level;
/* vram base address for page table entry  */
u64 vram_base_offset;
/* is vm enabled? */
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index 7155ae5..0ce0d0a 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -607,6 +607,7 @@ static int gmc_v6_0_vm_init(struct amdgpu_device *adev)
 * amdkfd will use VMIDs 8-15
 */
adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+   adev->vm_manager.num_level = 1;
amdgpu_vm_manager_init(adev);
 
/* base offset of vram pages */
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
index ff4cc63..f90dba5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -734,6 +734,7 @@ static int gmc_v7_0_vm_init(struct amdgpu_device *adev)
 * amdkfd will use VMIDs 8-15
 */
adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+   adev->vm_manager.num_level = 1;
adev->vm_manager.shared_aperture_start = 0x2000ULL;
adev->vm_manager.shared_aperture_end =
adev->vm_manager.shared_aperture_start + (4ULL << 30) - 1;
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index d7d025a..fe79328 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -865,6 +865,7 @@ static int gmc_v8_0_vm_init(struct amdgpu_device *adev)
 * amdkfd will use VMIDs 8-15
 */
adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+   adev->vm_manager.num_level = 1;
adev->vm_manager.shared_aperture_start = 0x2000ULL;
adev->vm_manager.shared_aperture_end =
adev->vm_manager.shared_aperture_start + (4ULL << 30) - 1;
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 58557add8..6625a2f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -508,6 +508,7 @@ static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
 * amdkfd will use VMIDs 8-15
 */
adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+   adev->vm_manager.num_level = 1;
amdgpu_vm_manager_init(adev);
 
return 0;
-- 
1.9.1



[PATCH 13/15] drm/amdgpu: adapt vm size for multi vmpt

2017-03-26 Thread Chunming Zhou
Change-Id: I17b40aec68404e46961a9fda22dfadd1ae9d6f2c
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 6625a2f..613c8f6 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -508,6 +508,12 @@ static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
 * amdkfd will use VMIDs 8-15
 */
adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+   /* Because of four level VMPTs, vm size at least is 256GB.
+  256TB is OK as well */
+   if (amdgpu_vm_size < 256) {
+   DRM_WARN("vm size at least is 256GB!\n");
+   amdgpu_vm_size = 256;
+   }
adev->vm_manager.num_level = 1;
amdgpu_vm_manager_init(adev);
 
-- 
1.9.1



[PATCH 14/15] drm/amdgpu: set page table depth by num_level

2017-03-26 Thread Chunming Zhou
Change-Id: I6180bedb8948398429fb32b36faa35960b3b85e6
Signed-off-by: Chunming Zhou 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 3 ++-
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
index a47f9dc..3a6f50a 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
@@ -200,7 +200,8 @@ int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
for (i = 0; i <= 14; i++) {
tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
-   tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
+   tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
+   adev->vm_manager.num_level);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
index 01f3aa5..07af98c 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
@@ -218,7 +218,7 @@ int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
-   PAGE_TABLE_DEPTH, 1);
+   PAGE_TABLE_DEPTH, adev->vm_manager.num_level);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
-- 
1.9.1



[PATCH] iommu/amd: flush IOTLB for specific domains only

2017-03-26 Thread arindam . nath
From: Arindam Nath 

The idea behind flush queues is to defer the IOTLB flushing
for domains for which the mappings are no longer valid. We
add such domains in queue_add(), and when the queue size
reaches FLUSH_QUEUE_SIZE, we perform __queue_flush().

Since we have already taken lock before __queue_flush()
is called, we need to make sure the IOTLB flushing is
performed as quickly as possible.

In the current implementation, we perform IOTLB flushing
for all domains irrespective of which ones were actually
added in the flush queue initially. This can be quite
expensive especially for domains for which unmapping is
not required at this point of time.

This patch makes use of domain information in
'struct flush_queue_entry' to make sure we only flush
IOTLBs for domains who need it, skipping others.

Signed-off-by: Arindam Nath 
---
 drivers/iommu/amd_iommu.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 98940d1..6a9a048 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2227,15 +2227,16 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
 
 static void __queue_flush(struct flush_queue *queue)
 {
-   struct protection_domain *domain;
-   unsigned long flags;
int idx;
 
-   /* First flush TLB of all known domains */
-   spin_lock_irqsave(&amd_iommu_pd_lock, flags);
-   list_for_each_entry(domain, &amd_iommu_pd_list, list)
-   domain_flush_tlb(domain);
-   spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
+   /* First flush TLB of all domains which were added to flush queue */
+   for (idx = 0; idx < queue->next; ++idx) {
+   struct flush_queue_entry *entry;
+
+   entry = queue->entries + idx;
+
+   domain_flush_tlb(&entry->dma_dom->domain);
+   }
 
/* Wait until flushes have completed */
domain_flush_complete(NULL);
-- 
1.9.1
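
For context, a hedged sketch (not the actual driver code; the field
names and the queue parameter are my assumptions) of the producer side
that records which domain each stale range belongs to, the information
__queue_flush() above now consumes:

	static void queue_add(struct flush_queue *queue,
			      struct dma_ops_domain *dma_dom,
			      unsigned long iova_pfn, unsigned long pages)
	{
		struct flush_queue_entry *entry = &queue->entries[queue->next++];

		entry->iova_pfn = iova_pfn;
		entry->pages    = pages;
		entry->dma_dom  = dma_dom;	/* read back in __queue_flush() */

		if (queue->next == FLUSH_QUEUE_SIZE)
			__queue_flush(queue);
	}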



Re: [PATCH] drm/radeon: Override fpfn for all VRAM placements in radeon_evict_flags

2017-03-26 Thread Christian König

Am 27.03.2017 um 02:58 schrieb Michel Dänzer:

From: Michel Dänzer 

We were accidentally only overriding the first VRAM placement. For BOs
with the RADEON_GEM_NO_CPU_ACCESS flag set,
radeon_ttm_placement_from_domain creates a second VRAM placment with
fpfn == 0. If VRAM is almost full, the first VRAM placement with
fpfn > 0 may not work, but the second one with fpfn == 0 always will
(the BO's current location trivially satisfies it). Because "moving"
the BO to its current location puts it back on the LRU list, this
results in an infinite loop.

Fixes: 2a85aedd117c ("drm/radeon: Try evicting from CPU accessible to inaccessible VRAM first")
Reported-by: Zachary Michaels 
Reported-and-Tested-by: Julien Isorce 
Signed-off-by: Michel Dänzer 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/radeon/radeon_ttm.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 5c7cf644ba1d..37d68cd1f272 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -213,8 +213,8 @@ static void radeon_evict_flags(struct ttm_buffer_object *bo,
rbo->placement.num_busy_placement = 0;
for (i = 0; i < rbo->placement.num_placement; i++) {
		if (rbo->placements[i].flags & TTM_PL_FLAG_VRAM) {
-   if (rbo->placements[0].fpfn < fpfn)
-   rbo->placements[0].fpfn = fpfn;
+   if (rbo->placements[i].fpfn < fpfn)
+   rbo->placements[i].fpfn = fpfn;
} else {
rbo->placement.busy_placement =
&rbo->placements[i];





Re: [PATCH 09/13] drm/amdgpu:fix gmc_v9 vm fault process for SRIOV

2017-03-26 Thread Christian König

Am 27.03.2017 um 03:51 schrieb Zhang, Jerry (Junwei):

On 03/24/2017 06:38 PM, Monk Liu wrote:

For SRIOV we cannot access registers in the IRQ routine via the regular KIQ method.

Change-Id: Ifae3164cf12311b851ae131f58175f6ec3174f82
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 24 
  1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c

index 51a1919..88221bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -138,20 +138,28 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,

  addr = (u64)entry->src_data[0] << 12;
  addr |= ((u64)entry->src_data[1] & 0xf) << 44;

-if (entry->vm_id_src) {
-status = RREG32(mmhub->vm_l2_pro_fault_status);
-WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
-} else {
-status = RREG32(gfxhub->vm_l2_pro_fault_status);
-WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
-}
+if (!amdgpu_sriov_vf(adev)) {
+if (entry->vm_id_src) {
+status = RREG32(mmhub->vm_l2_pro_fault_status);
+WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
+} else {
+status = RREG32(gfxhub->vm_l2_pro_fault_status);
+WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
+}


Even though SRIOV doesn't use the status info, does it still need to clear the VM L2 fault cntl regs?


Actually it is forbidden to clear that register under SRIOV, so the answer is no, we shouldn't clear it.




If not,
Reviewed-by: Junwei Zhang 


Reviewed-by: Christian König  as well.

Regards,
Christian.





-DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u 
pas_id:%u) "
+DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u 
pas_id:%u) "

"at page 0x%016llx from %d\n"
"VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
entry->vm_id_src ? "mmhub" : "gfxhub",
entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
addr, entry->client_id, status);
+} else {
+DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u 
pas_id:%u) "

+  "at page 0x%016llx from %d\n",
+  entry->vm_id_src ? "mmhub" : "gfxhub",
+  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
+  addr, entry->client_id);
+}

  return 0;
  }




