Re: [PATCH 1/5] drm/ttm: use a static ttm_mem_global instance

2018-10-22 Thread Thomas Zimmermann
Hi Christian,

On 19.10.18 at 18:41, Christian König wrote:
> As the name says we only need one global instance of ttm_mem_global.
> 
> Drop all the driver initialization and just use a single exported
> instance which is initialized during BO global initialization.
> 
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 44 
> -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  1 -
>  drivers/gpu/drm/ast/ast_drv.h   |  1 -
>  drivers/gpu/drm/ast/ast_ttm.c   | 32 ++
>  drivers/gpu/drm/bochs/bochs.h   |  1 -
>  drivers/gpu/drm/bochs/bochs_mm.c| 30 ++---
>  drivers/gpu/drm/cirrus/cirrus_drv.h |  1 -
>  drivers/gpu/drm/cirrus/cirrus_ttm.c | 32 ++
>  drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h |  1 -
>  drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c | 31 +++--
>  drivers/gpu/drm/mgag200/mgag200_drv.h   |  1 -
>  drivers/gpu/drm/mgag200/mgag200_ttm.c   | 32 ++
>  drivers/gpu/drm/nouveau/nouveau_drv.h   |  1 -
>  drivers/gpu/drm/nouveau/nouveau_ttm.c   | 34 ++-
>  drivers/gpu/drm/qxl/qxl_drv.h   |  1 -
>  drivers/gpu/drm/qxl/qxl_ttm.c   | 28 
>  drivers/gpu/drm/radeon/radeon.h |  1 -
>  drivers/gpu/drm/radeon/radeon_ttm.c | 26 ---
>  drivers/gpu/drm/ttm/ttm_bo.c| 10 --
>  drivers/gpu/drm/ttm/ttm_memory.c|  5 +--
>  drivers/gpu/drm/virtio/virtgpu_drv.h|  1 -
>  drivers/gpu/drm/virtio/virtgpu_ttm.c| 27 ---
>  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c |  4 +--
>  drivers/gpu/drm/vmwgfx/vmwgfx_drv.h |  3 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c| 27 ---
>  drivers/staging/vboxvideo/vbox_drv.h|  1 -
>  drivers/staging/vboxvideo/vbox_ttm.c| 24 --
>  include/drm/ttm/ttm_bo_driver.h |  8 ++---
>  include/drm/ttm/ttm_memory.h|  4 +--
>  29 files changed, 32 insertions(+), 380 deletions(-)

Great that you removed all the global TTM state from all the drivers.
This removes a lot of duplication and simplifies driver development a bit.


> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 9edece6510d3..3006050b1720 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -1526,18 +1526,22 @@ void ttm_bo_global_release(struct ttm_bo_global *glob)
>  {
>   kobject_del(&glob->kobj);
>   kobject_put(&glob->kobj);
> + ttm_mem_global_release(&ttm_mem_glob);
>  }
>  EXPORT_SYMBOL(ttm_bo_global_release);
>  
> -int ttm_bo_global_init(struct ttm_bo_global *glob,
> -struct ttm_mem_global *mem_glob)
> +int ttm_bo_global_init(struct ttm_bo_global *glob)
>  {
>   int ret;
>   unsigned i;
>  
> + ret = ttm_mem_global_init(&ttm_mem_glob);
> + if (ret)
> + return ret;
> +

What I really dislike about this patch set is that it mixes state and
implementation into the same functions. The original code had a fairly
good separation of the two. Now the mechanisms and the policies live in
the same places.[^1]

This looks like a simplification, but from my experience, such code is a
setup for long-term maintenance problems. For example, I can imagine
that someone at some point wants multiple global buffers (e.g., on a
NUMA-like architecture).

I understand that I'm new here, have no say, and probably don't get the
big picture, but from my point of view, this is not a forward-thinking
change.

Best regards
Thomas

[^1] More philosophically speaking, program state can be global, but
data structures can only be shareable. ttm_mem_global and ttm_bo_global
should be renamed to something like ttm_shared_mem and ttm_shared_bo,
respectively.


>   mutex_init(&glob->device_list_mutex);
>   spin_lock_init(&glob->lru_lock);
> - glob->mem_glob = mem_glob;
> + glob->mem_glob = &ttm_mem_glob;
>   glob->mem_glob->bo_glob = glob;
>   glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32);
>  
> diff --git a/drivers/gpu/drm/ttm/ttm_memory.c 
> b/drivers/gpu/drm/ttm/ttm_memory.c
> index 450387c92b63..7704e17c402f 100644
> --- a/drivers/gpu/drm/ttm/ttm_memory.c
> +++ b/drivers/gpu/drm/ttm/ttm_memory.c
> @@ -41,6 +41,9 @@
>  
>  #define TTM_MEMORY_ALLOC_RETRIES 4
>  
> +struct ttm_mem_global ttm_mem_glob;
> +EXPORT_SYMBOL(ttm_mem_glob);
> +
>  struct ttm_mem_zone {
>   struct kobject kobj;
>   struct ttm_mem_global *glob;
> @@ -464,7 +467,6 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
>   ttm_mem_global_release(glob);
>   return ret;
>  }
> -EXPORT_SYMBOL(ttm_mem_global_init);
>  
>  void ttm_mem_global_release(struct ttm_mem_global *glob)
>  {
> @@ -486,7 +

Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhu, Rex
No, if the VM size is small, there may be only one root PD entry.

We need to make sure the mask is >= 0.


Maybe this change reverts Christian's commit:


commit 72af632549b97ead9251bb155f08fefd1fb6f5c3
Author: Christian König 
Date:   Sat Sep 15 10:02:13 2018 +0200

drm/amdgpu: add amdgpu_vm_entries_mask v2

We can't get the mask for the root directory from the number of entries.

So add a new function to avoid that problem.


Best Regards

Rex



From: amd-gfx  on behalf of Zhang, 
Jerry(Junwei) 
Sent: Tuesday, October 23, 2018 1:12 PM
To: Zhu, Rex; amd-gfx@lists.freedesktop.org; Deucher, Alexander; Koenig, 
Christian
Subject: Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

On 10/23/2018 11:29 AM, Rex Zhu wrote:
> when the VA address is located in the last PD entries,
> the alloc_pts will fail.
>
> Use the right PD mask instead of hardcoding it, suggested
> by jerry.zhang.
>
> Signed-off-by: Rex Zhu 

Thanks for verifying that.
Feel free to add
Reviewed-by: Junwei Zhang 

I'd also like to get some background on these two functions from
Christian.
Perhaps we could make them simpler, e.g. by merging them together.

Regards,
Jerry

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 054633b..3939013 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -202,8 +202,11 @@ static unsigned amdgpu_vm_num_entries(struct 
> amdgpu_device *adev,
>   static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
>   unsigned int level)
>   {
> + unsigned shift = amdgpu_vm_level_shift(adev,
> +adev->vm_manager.root_level);
> +
>if (level <= adev->vm_manager.root_level)
> - return 0x;
> + return (round_up(adev->vm_manager.max_pfn, 1 << shift) >> 
> shift) - 1;
>else if (level != AMDGPU_VM_PTB)
>return 0x1ff;
>else

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)

On 10/23/2018 01:12 PM, Zhang, Jerry(Junwei) wrote:

On 10/23/2018 11:29 AM, Rex Zhu wrote:

when the VA address is located in the last PD entries,
the alloc_pts will fail.

Use the right PD mask instead of hardcoding it, suggested
by jerry.zhang.

Signed-off-by: Rex Zhu 


Thanks for verifying that.
Feel free to add
Reviewed-by: Junwei Zhang 

I'd also like to get some background on these two functions
from Christian.

Perhaps we could make them simpler, e.g. by merging them together.


If we really need them both, we could simplify that like:
```
static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
				       unsigned int level)
{
	return amdgpu_vm_num_entries(adev, level) - 1;
}
```

Jerry


Regards,
Jerry


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

index 054633b..3939013 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -202,8 +202,11 @@ static unsigned amdgpu_vm_num_entries(struct 
amdgpu_device *adev,

  static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
 unsigned int level)
  {
+    unsigned shift = amdgpu_vm_level_shift(adev,
+   adev->vm_manager.root_level);
+
  if (level <= adev->vm_manager.root_level)
-    return 0x;
+    return (round_up(adev->vm_manager.max_pfn, 1 << shift) >> 
shift) - 1;

  else if (level != AMDGPU_VM_PTB)
  return 0x1ff;
  else


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)

On 10/23/2018 11:29 AM, Rex Zhu wrote:

when the VA address is located in the last PD entries,
the alloc_pts will fail.

Use the right PD mask instead of hardcoding it, suggested
by jerry.zhang.

Signed-off-by: Rex Zhu 


Thanks for verifying that.
Feel free to add
Reviewed-by: Junwei Zhang 

I'd also like to get some background on these two functions from
Christian.

Perhaps we could make them simpler, e.g. by merging them together.

Regards,
Jerry


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 054633b..3939013 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -202,8 +202,11 @@ static unsigned amdgpu_vm_num_entries(struct amdgpu_device 
*adev,
  static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
   unsigned int level)
  {
+   unsigned shift = amdgpu_vm_level_shift(adev,
+  adev->vm_manager.root_level);
+
if (level <= adev->vm_manager.root_level)
-   return 0x;
+   return (round_up(adev->vm_manager.max_pfn, 1 << shift) >> 
shift) - 1;
else if (level != AMDGPU_VM_PTB)
return 0x1ff;
else


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhu, Rex
Thanks Jerry.

Good suggestion.

Use the right mask for the PD instead of hardcoding it.

So we don't need to revert the whole patch.


Best Regards

Rex



From: Zhang, Jerry
Sent: Tuesday, October 23, 2018 10:02 AM
To: Zhu, Rex; amd-gfx@lists.freedesktop.org; Koenig, Christian
Subject: Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

On 10/23/2018 12:09 AM, Rex Zhu wrote:
> When the VA address is located in the last PD entry,

Do you mean the root PD?
Maybe we need to round up the root PD in amdgpu_vm_entries_mask() like
amdgpu_vm_num_entries() does.

BTW, it looks like amdgpu_vm_entries_mask() is going to replace
amdgpu_vm_num_entries().

Jerry
> the alloc_pts will fail.
> This was caused by
> "drm/amdgpu: add amdgpu_vm_entries_mask v2"
> commit 72af632549b97ead9251bb155f08fefd1fb6f5c3.
>
> Signed-off-by: Rex Zhu 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 
> +++---
>   1 file changed, 7 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 054633b..1a3af72 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -191,26 +191,6 @@ static unsigned amdgpu_vm_num_entries(struct 
> amdgpu_device *adev,
>   }
>
>   /**
> - * amdgpu_vm_entries_mask - the mask to get the entry number of a PD/PT
> - *
> - * @adev: amdgpu_device pointer
> - * @level: VMPT level
> - *
> - * Returns:
> - * The mask to extract the entry number of a PD/PT from an address.
> - */
> -static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
> -unsigned int level)
> -{
> - if (level <= adev->vm_manager.root_level)
> - return 0x;
> - else if (level != AMDGPU_VM_PTB)
> - return 0x1ff;
> - else
> - return AMDGPU_VM_PTE_COUNT(adev) - 1;
> -}
> -
> -/**
>* amdgpu_vm_bo_size - returns the size of the BOs in bytes
>*
>* @adev: amdgpu_device pointer
> @@ -419,17 +399,17 @@ static void amdgpu_vm_pt_start(struct amdgpu_device 
> *adev,
>   static bool amdgpu_vm_pt_descendant(struct amdgpu_device *adev,
>struct amdgpu_vm_pt_cursor *cursor)
>   {
> - unsigned mask, shift, idx;
> + unsigned num_entries, shift, idx;
>
>if (!cursor->entry->entries)
>return false;
>
>BUG_ON(!cursor->entry->base.bo);
> - mask = amdgpu_vm_entries_mask(adev, cursor->level);
> + num_entries = amdgpu_vm_num_entries(adev, cursor->level);
>shift = amdgpu_vm_level_shift(adev, cursor->level);
>
>++cursor->level;
> - idx = (cursor->pfn >> shift) & mask;
> + idx = (cursor->pfn >> shift) % num_entries;
>cursor->parent = cursor->entry;
>cursor->entry = &cursor->entry->entries[idx];
>return true;
> @@ -1618,7 +1598,7 @@ static int amdgpu_vm_update_ptes(struct 
> amdgpu_pte_update_params *params,
>amdgpu_vm_pt_start(adev, params->vm, start, &cursor);
>while (cursor.pfn < end) {
>struct amdgpu_bo *pt = cursor.entry->base.bo;
> - unsigned shift, parent_shift, mask;
> + unsigned shift, parent_shift, num_entries;
>uint64_t incr, entry_end, pe_start;
>
>if (!pt)
> @@ -1673,9 +1653,9 @@ static int amdgpu_vm_update_ptes(struct 
> amdgpu_pte_update_params *params,
>
>/* Looks good so far, calculate parameters for the update */
>incr = AMDGPU_GPU_PAGE_SIZE << shift;
> - mask = amdgpu_vm_entries_mask(adev, cursor.level);
> - pe_start = ((cursor.pfn >> shift) & mask) * 8;
> - entry_end = (mask + 1) << shift;
> + num_entries = amdgpu_vm_num_entries(adev, cursor.level);
> + pe_start = ((cursor.pfn >> shift) & (num_entries - 1)) * 8;
> + entry_end = num_entries << shift;
>entry_end += cursor.pfn & ~(entry_end - 1);
>entry_end = min(entry_end, end);
>

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Rex Zhu
when the VA address is located in the last PD entries,
the alloc_pts will fail.

Use the right PD mask instead of hardcoding it, suggested
by jerry.zhang.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 054633b..3939013 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -202,8 +202,11 @@ static unsigned amdgpu_vm_num_entries(struct amdgpu_device 
*adev,
 static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
   unsigned int level)
 {
+   unsigned shift = amdgpu_vm_level_shift(adev,
+  adev->vm_manager.root_level);
+
if (level <= adev->vm_manager.root_level)
-   return 0x;
+   return (round_up(adev->vm_manager.max_pfn, 1 << shift) >> 
shift) - 1;
else if (level != AMDGPU_VM_PTB)
return 0x1ff;
else
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Zhang, Jerry(Junwei)

On 10/22/2018 05:47 PM, Rex Zhu wrote:

The CSA buffer is created per ctx; on ctx fini,
the CSA buffer and VA are released, so we need to
do ctx_mgr_fini before vm_fini.

Signed-off-by: Rex Zhu 

Reviewed-by: Junwei Zhang 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 27de848..f2ef9a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1054,8 +1054,8 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
pasid = fpriv->vm.pasid;
pd = amdgpu_bo_ref(fpriv->vm.root.base.bo);
  
-	amdgpu_vm_fini(adev, &fpriv->vm);

amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr);
+   amdgpu_vm_fini(adev, &fpriv->vm);
  
  	if (pasid)

amdgpu_pasid_free_delayed(pd->tbo.resv, pasid);


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Zhang, Jerry(Junwei)

On 10/23/2018 12:09 AM, Rex Zhu wrote:

When the VA address is located in the last PD entry,


Do you mean the root PD?
Maybe we need to round up the root PD in amdgpu_vm_entries_mask() like 
amdgpu_vm_num_entries() does.


BTW, it looks like amdgpu_vm_entries_mask() is going to replace 
amdgpu_vm_num_entries().


Jerry

the alloc_pts will fail.
This was caused by
"drm/amdgpu: add amdgpu_vm_entries_mask v2"
commit 72af632549b97ead9251bb155f08fefd1fb6f5c3.

Signed-off-by: Rex Zhu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 +++---
  1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 054633b..1a3af72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -191,26 +191,6 @@ static unsigned amdgpu_vm_num_entries(struct amdgpu_device 
*adev,
  }
  
  /**

- * amdgpu_vm_entries_mask - the mask to get the entry number of a PD/PT
- *
- * @adev: amdgpu_device pointer
- * @level: VMPT level
- *
- * Returns:
- * The mask to extract the entry number of a PD/PT from an address.
- */
-static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
-  unsigned int level)
-{
-   if (level <= adev->vm_manager.root_level)
-   return 0x;
-   else if (level != AMDGPU_VM_PTB)
-   return 0x1ff;
-   else
-   return AMDGPU_VM_PTE_COUNT(adev) - 1;
-}
-
-/**
   * amdgpu_vm_bo_size - returns the size of the BOs in bytes
   *
   * @adev: amdgpu_device pointer
@@ -419,17 +399,17 @@ static void amdgpu_vm_pt_start(struct amdgpu_device *adev,
  static bool amdgpu_vm_pt_descendant(struct amdgpu_device *adev,
struct amdgpu_vm_pt_cursor *cursor)
  {
-   unsigned mask, shift, idx;
+   unsigned num_entries, shift, idx;
  
  	if (!cursor->entry->entries)

return false;
  
  	BUG_ON(!cursor->entry->base.bo);

-   mask = amdgpu_vm_entries_mask(adev, cursor->level);
+   num_entries = amdgpu_vm_num_entries(adev, cursor->level);
shift = amdgpu_vm_level_shift(adev, cursor->level);
  
  	++cursor->level;

-   idx = (cursor->pfn >> shift) & mask;
+   idx = (cursor->pfn >> shift) % num_entries;
cursor->parent = cursor->entry;
cursor->entry = &cursor->entry->entries[idx];
return true;
@@ -1618,7 +1598,7 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,
amdgpu_vm_pt_start(adev, params->vm, start, &cursor);
while (cursor.pfn < end) {
struct amdgpu_bo *pt = cursor.entry->base.bo;
-   unsigned shift, parent_shift, mask;
+   unsigned shift, parent_shift, num_entries;
uint64_t incr, entry_end, pe_start;
  
  		if (!pt)

@@ -1673,9 +1653,9 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,
  
  		/* Looks good so far, calculate parameters for the update */

incr = AMDGPU_GPU_PAGE_SIZE << shift;
-   mask = amdgpu_vm_entries_mask(adev, cursor.level);
-   pe_start = ((cursor.pfn >> shift) & mask) * 8;
-   entry_end = (mask + 1) << shift;
+   num_entries = amdgpu_vm_num_entries(adev, cursor.level);
+   pe_start = ((cursor.pfn >> shift) & (num_entries - 1)) * 8;
+   entry_end = num_entries << shift;
entry_end += cursor.pfn & ~(entry_end - 1);
entry_end = min(entry_end, end);
  


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 4/5] drm/ttm: initialize globals during device init

2018-10-22 Thread Zhang, Jerry(Junwei)

On 10/22/2018 08:35 PM, Christian König wrote:

On 22.10.18 at 08:45, Zhang, Jerry(Junwei) wrote:

A question in ttm_bo.c
[SNIP]

    int ttm_bo_device_release(struct ttm_bo_device *bdev)
  {
@@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct 
ttm_bo_device *bdev)

drm_vma_offset_manager_destroy(&bdev->vma_manager);
  +    if (!ret)
+    ttm_bo_global_release();


If ttm_bo_clean_mm() fails, ttm_bo_global_release() will be skipped.
When will it be called then?


Never.



Shall we add it to a delayed work? Or maybe we could release it directly?


No, when ttm_bo_device_release() fails, somebody is trying to unload a 
driver while the driver still has memory allocated.


In this case the BO accounting should not be released, because we should 
make sure that all the leaked memory is still accounted for.


In that case, it's rather a bug to fix.
Thanks for explaining it.

Looks fine to me, feel free to add
Reviewed-by: Junwei Zhang 

Jerry



Christian.



Regards,
Jerry




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Enable default GPU reset for dGPU on gfx8/9.

2018-10-22 Thread Alex Deucher
On Mon, Oct 22, 2018 at 5:20 PM Andrey Grodzovsky
 wrote:
>
> After testing, it looks like this subset of ASICs has GPU reset
> working for the most part. Enable reset due to job timeout.
>
> Signed-off-by: Andrey Grodzovsky 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++-
>  1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index d11489e..75308d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3292,18 +3292,32 @@ static int amdgpu_device_reset_sriov(struct 
> amdgpu_device *adev,
>   */
>  bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
>  {
> +   struct amdgpu_ip_block *ip_block;
> +
> if (!amdgpu_device_ip_check_soft_reset(adev)) {
> DRM_INFO("Timeout, but no hardware hang detected.\n");
> return false;
> }
>
> -   if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
> -!amdgpu_sriov_vf(adev))) {
> -   DRM_INFO("GPU recovery disabled.\n");
> -   return false;
> -   }
> +   if (amdgpu_gpu_recovery == 0)
> +   goto disabled;
> +
> +   if (amdgpu_sriov_vf(adev))
> +   return true;
> +
> +   ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_GFX);
> +
> +   if (amdgpu_gpu_recovery == -1  &&
> +   ((adev->flags & AMD_IS_APU) ||
> +   ip_block->version->major < 8 ||
> +   ip_block->version->major > 9))
> +   goto disabled;

I would prefer to tie this to asic_type rather than gfx version.  E.g.,

switch (adev->asic_type) {
case CHIP_TOPAZ:
case CHIP_TONGA:
case CHIP_FIJI:
case CHIP_POLARIS10:
case CHIP_POLARIS11:
case CHIP_POLARIS12:
case CHIP_VEGAM:
case CHIP_VEGA20:
case CHIP_VEGA10:
case CHIP_VEGA12:
break;
default:
goto disabled;
}

Alex

>
> return true;
> +
> +disabled:
> +   DRM_INFO("GPU recovery disabled.\n");
> +   return false;
>  }
>
>  /**
> --
> 2.7.4
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu/amdkfd: clean up mmhub and gfxhub includes

2018-10-22 Thread Alex Deucher
Use the appropriate mmhub and gfxhub headers rather than adding
them to the gmc9 header.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 3 ++-
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h  | 2 ++
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h | 6 --
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h   | 2 ++
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
index 54c369091f6c..02d1d363931b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
@@ -46,7 +46,8 @@
 #include "v9_structs.h"
 #include "soc15.h"
 #include "soc15d.h"
-#include "gmc_v9_0.h"
+#include "mmhub_v1_0.h"
+#include "gfxhub_v1_0.h"
 
 /* HACK: MMHUB and GC both have VM-related register with the same
  * names but different offsets. Define the MMHUB register we need here
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
index 206e29cad753..92d3a70cd9b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
@@ -30,5 +30,7 @@ void gfxhub_v1_0_set_fault_enable_default(struct 
amdgpu_device *adev,
  bool value);
 void gfxhub_v1_0_init(struct amdgpu_device *adev);
 u64 gfxhub_v1_0_get_mc_fb_offset(struct amdgpu_device *adev);
+void gfxhub_v1_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
+   uint64_t page_table_base);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
index 1fd178a65e66..b030ca5ea107 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
@@ -27,10 +27,4 @@
 extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
 extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
 
-/* amdgpu_amdkfd*.c */
-void gfxhub_v1_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
-   uint64_t page_table_base);
-void mmhub_v1_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
-   uint64_t page_table_base);
-
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h 
b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
index bef3d0c0c117..0de0fdf98c00 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
@@ -34,5 +34,7 @@ int mmhub_v1_0_set_clockgating(struct amdgpu_device *adev,
 void mmhub_v1_0_get_clockgating(struct amdgpu_device *adev, u32 *flags);
 void mmhub_v1_0_update_power_gating(struct amdgpu_device *adev,
 bool enable);
+void mmhub_v1_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
+   uint64_t page_table_base);
 
 #endif
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: Enable default GPU reset for dGPU on gfx8/9.

2018-10-22 Thread Andrey Grodzovsky
After testing, it looks like this subset of ASICs has GPU reset
working for the most part. Enable reset due to job timeout.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 24 +++-
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d11489e..75308d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3292,18 +3292,32 @@ static int amdgpu_device_reset_sriov(struct 
amdgpu_device *adev,
  */
 bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
 {
+   struct amdgpu_ip_block *ip_block;
+
if (!amdgpu_device_ip_check_soft_reset(adev)) {
DRM_INFO("Timeout, but no hardware hang detected.\n");
return false;
}
 
-   if (amdgpu_gpu_recovery == 0 || (amdgpu_gpu_recovery == -1  &&
-!amdgpu_sriov_vf(adev))) {
-   DRM_INFO("GPU recovery disabled.\n");
-   return false;
-   }
+   if (amdgpu_gpu_recovery == 0)
+   goto disabled;
+
+   if (amdgpu_sriov_vf(adev))
+   return true;
+
+   ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_GFX);
+
+   if (amdgpu_gpu_recovery == -1  &&
+   ((adev->flags & AMD_IS_APU) ||
+   ip_block->version->major < 8 ||
+   ip_block->version->major > 9))
+   goto disabled;
 
return true;
+
+disabled:
+   DRM_INFO("GPU recovery disabled.\n");
+   return false;
 }
 
 /**
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v3 2/2] drm/amdgpu: Retire amdgpu_ring.ready flag v3

2018-10-22 Thread Andrey Grodzovsky
Start using drm_gpu_scheduler.ready instead.

v3:
Add a helper function to run the ring test and set the
sched.ready flag accordingly; remove the explicit
sched.ready assignments from the IP-specific files.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c|  6 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 18 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 12 -
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 16 
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 16 
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 29 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 30 +--
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c| 12 -
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c| 12 -
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c| 18 ++
 drivers/gpu/drm/amd/amdgpu/si_dma.c   | 10 +++-
 drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c |  9 +++
 drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c |  9 +++
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c | 16 
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 16 
 drivers/gpu/drm/amd/amdgpu/vce_v2_0.c |  6 +
 drivers/gpu/drm/amd/amdgpu/vce_v3_0.c |  7 +-
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c |  9 ++-
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 24 ++
 26 files changed, 118 insertions(+), 183 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index c31a884..eaa58bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -144,7 +144,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
  KGD_MAX_QUEUES);
 
/* remove the KIQ bit as well */
-   if (adev->gfx.kiq.ring.ready)
+   if (adev->gfx.kiq.ring.sched.ready)
clear_bit(amdgpu_gfx_queue_to_bit(adev,
  adev->gfx.kiq.ring.me 
- 1,
  
adev->gfx.kiq.ring.pipe,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
index 42cb4c4..f7819a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
@@ -876,7 +876,7 @@ static int invalidate_tlbs(struct kgd_dev *kgd, uint16_t 
pasid)
if (adev->in_gpu_reset)
return -EIO;
 
-   if (ring->ready)
+   if (ring->sched.ready)
return invalidate_tlbs_with_kiq(adev, pasid);
 
for (vmid = 0; vmid < 16; vmid++) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index b8963b7..fc74f40a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -146,7 +146,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
fence_ctx = 0;
}
 
-   if (!ring->ready) {
+   if (!ring->sched.ready) {
dev_err(adev->dev, "couldn't schedule ib on ring <%s>\n", 
ring->name);
return -EINVAL;
}
@@ -351,7 +351,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
struct amdgpu_ring *ring = adev->rings[i];
long tmo;
 
-   if (!ring || !ring->ready)
+   if (!ring || !ring->sched.ready)
continue;
 
/* skip IB tests for KIQ in general for the below reasons:
@@ -375,7 +375,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
 
r = amdgpu_ring_test_ib(ring, tmo);
if (r) {
-   ring->ready = false;
+   ring->sched.ready = false;
 
if (ring == &adev->gfx.gfx_ring[0]) {
/* oh, oh, that's really bad */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 50ece76..25307a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -336,7 +336,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
case AMDGPU_HW_IP_GFX:
type = AMD_IP_BLOCK_TYPE_GFX;
for (i = 0; i < ad

[PATCH v3 1/2] drm/sched: Add boolean to mark if sched is ready to work v2

2018-10-22 Thread Andrey Grodzovsky
Problem:
A particular scheduler may become unusable (underlying HW) after
some event (e.g. GPU reset). If it's later chosen by
the get-free-sched policy, a command will fail to be
submitted.

Fix:
Add a driver-specific callback to report the scheduler status, so
that an rq with a bad scheduler can be avoided in favor of a
working one, or none at all, in which case job init will fail.

v2: Switch from driver callback to flag in scheduler.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  2 +-
 drivers/gpu/drm/scheduler/sched_entity.c  |  9 -
 drivers/gpu/drm/scheduler/sched_main.c| 10 +-
 drivers/gpu/drm/v3d/v3d_sched.c   |  4 ++--
 include/drm/gpu_scheduler.h   |  5 -
 6 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5448cf2..bf845b0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -450,7 +450,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
 
r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
   num_hw_submission, amdgpu_job_hang_limit,
-  timeout, ring->name);
+  timeout, ring->name, false);
if (r) {
DRM_ERROR("Failed to create scheduler on ring %s.\n",
  ring->name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c 
b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index f8c5f1e..9dca347 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -178,7 +178,7 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
 
ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
 etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
-msecs_to_jiffies(500), dev_name(gpu->dev));
+msecs_to_jiffies(500), dev_name(gpu->dev), true);
if (ret)
return ret;
 
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c 
b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54..ba54c30 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -130,7 +130,14 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity 
*entity)
int i;
 
for (i = 0; i < entity->num_rq_list; ++i) {
-   num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+   struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
+
+   if (!entity->rq_list[i]->sched->ready) {
+   DRM_WARN("sched%s is not ready, skipping", sched->name);
+   continue;
+   }
+
+   num_jobs = atomic_read(&sched->num_jobs);
if (num_jobs < min_jobs) {
min_jobs = num_jobs;
rq = entity->rq_list[i];
diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index 63b997d..772adec 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -420,6 +420,9 @@ int drm_sched_job_init(struct drm_sched_job *job,
struct drm_gpu_scheduler *sched;
 
drm_sched_entity_select_rq(entity);
+   if (!entity->rq)
+   return -ENOENT;
+
sched = entity->rq->sched;
 
job->sched = sched;
@@ -598,6 +601,7 @@ static int drm_sched_main(void *param)
  * @hang_limit: number of times to allow a job to hang before dropping it
  * @timeout: timeout value in jiffies for the scheduler
  * @name: name used for debugging
+ * @ready: marks if the underlying HW is ready to work
  *
  * Return 0 on success, otherwise error code.
  */
@@ -606,7 +610,8 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
   unsigned hw_submission,
   unsigned hang_limit,
   long timeout,
-  const char *name)
+  const char *name,
+  bool ready)
 {
int i;
sched->ops = ops;
@@ -633,6 +638,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
return PTR_ERR(sched->thread);
}
 
+   sched->ready = ready;
return 0;
 }
 EXPORT_SYMBOL(drm_sched_init);
@@ -648,5 +654,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
if (sched->thread)
kthread_stop(sched->thread);
+
+   sched->ready = false;
 }
 EXPORT_SYMBOL(drm_sched_fini);
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 80b641f..7cedb5f 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -212,7 +212,7 @@ v3d_sched_init(struct v3d_dev *v3d)
 &v3d_sched_ops,
  

[PATCH 3/3] drm/amdkfd: Use functions from amdgpu to invalidate vmid in kfd

2018-10-22 Thread Zhao, Yong
Change-Id: I306305e43d4b4032316909b3f4e3f9f5ca4520ae
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 32 +--
 1 file changed, 1 insertion(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
index 3ade5d5..18f161b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
@@ -48,17 +48,6 @@
 #include "soc15d.h"
 #include "gmc_v9_0.h"
 
-/* HACK: MMHUB and GC both have VM-related register with the same
- * names but different offsets. Define the MMHUB register we need here
- * with a prefix. A proper solution would be to move the functions
- * programming these registers into gfx_v9_0.c and mmhub_v1_0.c
- * respectively.
- */
-#define mmMMHUB_VM_INVALIDATE_ENG16_REQ0x06f3
-#define mmMMHUB_VM_INVALIDATE_ENG16_REQ_BASE_IDX   0
-
-#define mmMMHUB_VM_INVALIDATE_ENG16_ACK0x0705
-#define mmMMHUB_VM_INVALIDATE_ENG16_ACK_BASE_IDX   0
 
 #define V9_PIPE_PER_MEC(4)
 #define V9_QUEUES_PER_PIPE_MEC (8)
@@ -742,13 +731,6 @@ static uint16_t get_atc_vmid_pasid_mapping_pasid(struct 
kgd_dev *kgd,
 static void write_vmid_invalidate_request(struct kgd_dev *kgd, uint8_t vmid)
 {
struct amdgpu_device *adev = (struct amdgpu_device *) kgd;
-   uint32_t req = (1 << vmid) |
-   (0 << VM_INVALIDATE_ENG16_REQ__FLUSH_TYPE__SHIFT) | /* legacy */
-   VM_INVALIDATE_ENG16_REQ__INVALIDATE_L2_PTES_MASK |
-   VM_INVALIDATE_ENG16_REQ__INVALIDATE_L2_PDE0_MASK |
-   VM_INVALIDATE_ENG16_REQ__INVALIDATE_L2_PDE1_MASK |
-   VM_INVALIDATE_ENG16_REQ__INVALIDATE_L2_PDE2_MASK |
-   VM_INVALIDATE_ENG16_REQ__INVALIDATE_L1_PTES_MASK;
 
mutex_lock(&adev->srbm_mutex);
 
@@ -767,19 +749,7 @@ static void write_vmid_invalidate_request(struct kgd_dev 
*kgd, uint8_t vmid)
 * TODO 2: support range-based invalidation, requires kfg2kgd
 * interface change
 */
-   WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG16_REQ), req);
-
-   WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMMHUB_VM_INVALIDATE_ENG16_REQ),
-   req);
-
-   while (!(RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG16_ACK)) &
-   (1 << vmid)))
-   cpu_relax();
-
-   while (!(RREG32(SOC15_REG_OFFSET(MMHUB, 0,
-   mmMMHUB_VM_INVALIDATE_ENG16_ACK)) &
-   (1 << vmid)))
-   cpu_relax();
+   gmc_v9_0_flush_gpu_tlb_helper(adev, vmid, 0, 16, false);
 
mutex_unlock(&adev->srbm_mutex);
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/3] drm/amdgpu: Expose gmc_v9_0_flush_gpu_tlb_helper() for kfd to use

2018-10-22 Thread Zhao, Yong
Change-Id: I3dcd71955297c53b181f82e7078981230c642c01
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 64 ---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h |  3 ++
 2 files changed, 40 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index f35d7a5..6f96545 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -293,14 +293,15 @@ static void gmc_v9_0_set_irq_funcs(struct amdgpu_device 
*adev)
adev->gmc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
 }
 
-static uint32_t gmc_v9_0_get_invalidate_req(unsigned int vmid)
+static uint32_t gmc_v9_0_get_invalidate_req(unsigned int vmid,
+   uint32_t flush_type)
 {
u32 req = 0;
 
/* invalidate using legacy mode on vmid*/
req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
PER_VMID_INVALIDATE_REQ, 1 << vmid);
-   req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
+   req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 
flush_type);
req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
@@ -354,32 +355,15 @@ static signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
return r;
 }
 
-/*
- * GART
- * VMID 0 is the physical GPU addresses as used by the kernel.
- * VMIDs 1-15 are used for userspace clients and are handled
- * by the amdgpu vm/hsa code.
- */
-
-/**
- * gmc_v9_0_flush_gpu_tlb - gart tlb flush callback
- *
- * @adev: amdgpu_device pointer
- * @vmid: vm instance to flush
- *
- * Flush the TLB for the requested page table.
- */
-static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
-   uint32_t vmid)
+void gmc_v9_0_flush_gpu_tlb_helper(struct amdgpu_device *adev, uint32_t vmid,
+   uint32_t flush_type, uint32_t eng, bool lock)
 {
-   /* Use register 17 for GART */
-   const unsigned eng = 17;
unsigned i, j;
int r;
 
for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
struct amdgpu_vmhub *hub = &adev->vmhub[i];
-   u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
+   u32 tmp = gmc_v9_0_get_invalidate_req(vmid, flush_type);
 
if (adev->gfx.kiq.ring.ready &&
(amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev)) &&
@@ -390,7 +374,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
continue;
}
 
-   spin_lock(&adev->gmc.invalidate_lock);
+   if (lock)
+   spin_lock(&adev->gmc.invalidate_lock);
 
WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
 
@@ -403,7 +388,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
cpu_relax();
}
if (j < 100) {
-   spin_unlock(&adev->gmc.invalidate_lock);
+   if (lock)
+   spin_unlock(&adev->gmc.invalidate_lock);
continue;
}
 
@@ -416,20 +402,44 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
udelay(1);
}
if (j < adev->usec_timeout) {
-   spin_unlock(&adev->gmc.invalidate_lock);
+   if (lock)
+   spin_unlock(&adev->gmc.invalidate_lock);
continue;
}
-   spin_unlock(&adev->gmc.invalidate_lock);
+   if (lock)
+   spin_unlock(&adev->gmc.invalidate_lock);
DRM_ERROR("Timeout waiting for VM flush ACK!\n");
}
 }
 
+/*
+ * GART
+ * VMID 0 is the physical GPU addresses as used by the kernel.
+ * VMIDs 1-15 are used for userspace clients and are handled
+ * by the amdgpu vm/hsa code.
+ */
+
+/**
+ * gmc_v9_0_flush_gpu_tlb - gart tlb flush callback
+ *
+ * @adev: amdgpu_device pointer
+ * @vmid: vm instance to flush
+ *
+ * Flush the TLB for the requested page table.
+ */
+static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
+   uint32_t vmid)
+{
+   /* Use engine 17 for amdgpu */
+   gmc_v9_0_flush_gpu_tlb_helper(adev, vmid, 0, 17, true);
+}
+
 static uint64_t gmc_v9_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
unsigned vmid, uint64_t pd_addr)
 {
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vmhub *hub = &adev->vmhub[ring->funcs->vmhub];
-   uint32_t req = gmc_v9_0_get_invalidate_req(vmid);
+   uint32_t req = gmc_v9_0_get

[PATCH 1/3] drm/amdkfd: Remove unnecessary register setting when invalidating tlb in kfd

2018-10-22 Thread Zhao, Yong
These register settings are already done in gfxhub_v1_0_program_invalidation()
and mmhub_v1_0_program_invalidation().

Change-Id: I9b9b44f17ac2a6ff0c9c78f91885665da75543d0
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 17 -
 1 file changed, 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
index 60b5f56c..3ade5d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
@@ -60,11 +60,6 @@
 #define mmMMHUB_VM_INVALIDATE_ENG16_ACK0x0705
 #define mmMMHUB_VM_INVALIDATE_ENG16_ACK_BASE_IDX   0
 
-#define mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_LO320x0727
-#define mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_LO32_BASE_IDX   0
-#define mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_HI320x0728
-#define mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_HI32_BASE_IDX   0
-
 #define V9_PIPE_PER_MEC(4)
 #define V9_QUEUES_PER_PIPE_MEC (8)
 
@@ -772,18 +767,6 @@ static void write_vmid_invalidate_request(struct kgd_dev 
*kgd, uint8_t vmid)
 * TODO 2: support range-based invalidation, requires kfg2kgd
 * interface change
 */
-   WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG16_ADDR_RANGE_LO32),
-   0x);
-   WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG16_ADDR_RANGE_HI32),
-   0x001f);
-
-   WREG32(SOC15_REG_OFFSET(MMHUB, 0,
-   mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_LO32),
-   0x);
-   WREG32(SOC15_REG_OFFSET(MMHUB, 0,
-   mmMMHUB_VM_INVALIDATE_ENG16_ADDR_RANGE_HI32),
-   0x001f);
-
WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG16_REQ), req);
 
WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMMHUB_VM_INVALIDATE_ENG16_REQ),
-- 
2.7.4



[PATCH 1/2] drm/amdkfd: Delete a duplicate statement in set_pasid_vmid_mapping()

2018-10-22 Thread Zhao, Yong
The same statement is already executed later in kgd_set_pasid_vmid_mapping(),
so there is no need to do it in set_pasid_vmid_mapping() again.

Change-Id: Iaf64b90c7dcb59944fb2012a58473dd063e73c60
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdkfd/cik_regs.h | 2 --
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c | 9 +
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/cik_regs.h 
b/drivers/gpu/drm/amd/amdkfd/cik_regs.h
index 37ce6dd..8e2a166 100644
--- a/drivers/gpu/drm/amd/amdkfd/cik_regs.h
+++ b/drivers/gpu/drm/amd/amdkfd/cik_regs.h
@@ -68,6 +68,4 @@
 
 #define GRBM_GFX_INDEX 0x30800
 
-#defineATC_VMID_PASID_MAPPING_VALID(1U << 31)
-
 #endif
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index dfd8f9e5..fb9d66e 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -846,15 +846,8 @@ static int
 set_pasid_vmid_mapping(struct device_queue_manager *dqm, unsigned int pasid,
unsigned int vmid)
 {
-   uint32_t pasid_mapping;
-
-   pasid_mapping = (pasid == 0) ? 0 :
-   (uint32_t)pasid |
-   ATC_VMID_PASID_MAPPING_VALID;
-
return dqm->dev->kfd2kgd->set_pasid_vmid_mapping(
-   dqm->dev->kgd, pasid_mapping,
-   vmid);
+   dqm->dev->kgd, pasid, vmid);
 }
 
 static void init_interrupts(struct device_queue_manager *dqm)
-- 
2.7.4



[PATCH 2/2] drm/amdkfd: page_table_base already have the flags needed

2018-10-22 Thread Zhao, Yong
The flags are added when calling amdgpu_gmc_pd_addr().

Change-Id: Idd85b1ac35d3d100154df8229ea20721d9a7045c
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 5 ++---
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
index 54c3690..60b5f56c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
@@ -978,7 +978,6 @@ static void set_vm_context_page_table_base(struct kgd_dev 
*kgd, uint32_t vmid,
uint64_t page_table_base)
 {
struct amdgpu_device *adev = get_amdgpu_device(kgd);
-   uint64_t base = page_table_base | AMDGPU_PTE_VALID;
 
if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
pr_err("trying to set page table base for wrong VMID %u\n",
@@ -990,7 +989,7 @@ static void set_vm_context_page_table_base(struct kgd_dev 
*kgd, uint32_t vmid,
 * now, all processes share the same address space size, like
 * on GFX8 and older.
 */
-   mmhub_v1_0_setup_vm_pt_regs(adev, vmid, base);
+   mmhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
 
-   gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, base);
+   gfxhub_v1_0_setup_vm_pt_regs(adev, vmid, page_table_base);
 }
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 53ff86d..dec8e64 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -507,6 +507,7 @@ struct qcm_process_device {
 * All the memory management data should be here too
 */
uint64_t gds_context_area;
+   /* Contains page table flags such as AMDGPU_PTE_VALID since gfx9 */
uint64_t page_table_base;
uint32_t sh_mem_config;
uint32_t sh_mem_bases;
-- 
2.7.4



[PATCH] mm: convert totalram_pages, totalhigh_pages and managed_pages to atomic.

2018-10-22 Thread Arun KS
Remove managed_page_count_lock spinlock and instead use atomic
variables.

Suggested-by: Michal Hocko 
Suggested-by: Vlastimil Babka 
Signed-off-by: Arun KS 

---
As discussed here,
https://patchwork.kernel.org/patch/10627521/#22261253
---
---
 arch/csky/mm/init.c   |  4 +-
 arch/powerpc/platforms/pseries/cmm.c  | 11 ++--
 arch/s390/mm/init.c   |  2 +-
 arch/um/kernel/mem.c  |  4 +-
 arch/x86/kernel/cpu/microcode/core.c  |  5 +-
 drivers/char/agp/backend.c|  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c |  2 +-
 drivers/gpu/drm/i915/i915_gem.c   |  2 +-
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c |  4 +-
 drivers/hv/hv_balloon.c   | 19 +++
 drivers/md/dm-bufio.c |  5 +-
 drivers/md/dm-crypt.c |  4 +-
 drivers/md/dm-integrity.c |  4 +-
 drivers/md/dm-stats.c |  3 +-
 drivers/media/platform/mtk-vpu/mtk_vpu.c  |  3 +-
 drivers/misc/vmw_balloon.c|  2 +-
 drivers/parisc/ccio-dma.c |  5 +-
 drivers/parisc/sba_iommu.c|  5 +-
 drivers/staging/android/ion/ion_system_heap.c |  2 +-
 drivers/xen/xen-selfballoon.c |  7 +--
 fs/ceph/super.h   |  3 +-
 fs/file_table.c   |  9 ++--
 fs/fuse/inode.c   |  4 +-
 fs/nfs/write.c|  3 +-
 fs/nfsd/nfscache.c|  3 +-
 fs/ntfs/malloc.h  |  2 +-
 fs/proc/base.c|  3 +-
 include/linux/highmem.h   |  2 +-
 include/linux/mm.h|  2 +-
 include/linux/mmzone.h| 10 +---
 include/linux/swap.h  |  2 +-
 kernel/fork.c |  6 +--
 kernel/kexec_core.c   |  5 +-
 kernel/power/snapshot.c   |  2 +-
 lib/show_mem.c|  3 +-
 mm/highmem.c  |  2 +-
 mm/huge_memory.c  |  2 +-
 mm/kasan/quarantine.c |  4 +-
 mm/memblock.c |  6 +--
 mm/memory_hotplug.c   |  4 +-
 mm/mm_init.c  |  3 +-
 mm/oom_kill.c |  2 +-
 mm/page_alloc.c   | 75 ++-
 mm/shmem.c| 12 +++--
 mm/slab.c |  3 +-
 mm/swap.c |  3 +-
 mm/util.c |  2 +-
 mm/vmalloc.c  |  4 +-
 mm/vmstat.c   |  4 +-
 mm/workingset.c   |  2 +-
 mm/zswap.c|  2 +-
 net/dccp/proto.c  |  6 +--
 net/decnet/dn_route.c |  2 +-
 net/ipv4/tcp_metrics.c|  2 +-
 net/netfilter/nf_conntrack_core.c |  6 +--
 net/netfilter/xt_hashlimit.c  |  4 +-
 net/sctp/protocol.c   |  6 +--
 security/integrity/ima/ima_kexec.c|  2 +-
 58 files changed, 171 insertions(+), 143 deletions(-)

diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index dc07c07..3f4d35e 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -71,7 +71,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
-   totalram_pages++;
+   atomic_long_inc(&totalram_pages);
}
 }
 #endif
@@ -88,7 +88,7 @@ void free_initmem(void)
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
-   totalram_pages++;
+   atomic_long_inc(&totalram_pages);
addr += PAGE_SIZE;
}
 
diff --git a/arch/powerpc/platforms/pseries/cmm.c 
b/arch/powerpc/platforms/pseries/cmm.c
index 25427a4..85fe503 100644
--- a/arch/powerpc/platforms/pseries/cmm.c
+++ b/arch/powerpc/platforms/pseries/cmm.c
@@ -208,7 +208,7 @@ static long cmm_alloc_pages(long nr)
 
pa->page[pa->index++] = addr;
loaned_pages++;
-   totalram_pages--;
+   atomic_long_dec(&totalram_pages);
spin_unlock(&cmm_lock);
nr--;
}
@@ -247,7 +247,7 @@ static long cmm_free_pages(long nr)
free_page(addr);
loaned_pages--;
nr--;
-   totalram_pages++;
+

Re: [PATCH] mm: convert totalram_pages, totalhigh_pages and managed_pages to atomic.

2018-10-22 Thread Michal Hocko
On Mon 22-10-18 22:53:22, Arun KS wrote:
> Remove managed_page_count_lock spinlock and instead use atomic
> variables.

I assume this has been auto-generated. If yes, it would be better to
mention the script so that people can review it and regenerate it for
comparison. Such a large change is hard to review manually.
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] drm/amdgpu: fix a missing-check bug

2018-10-22 Thread Kuehling, Felix
The BIOS signature check does not guarantee integrity of the BIOS image
either way. As I understand it, the signature is just a magic number.
It's not a cryptographic signature. The check is just a sanity check.
Therefore this change doesn't add any meaningful protection against the
scenario you described.

Regards,
  Felix

On 2018-10-20 4:57 p.m., Wenwen Wang wrote:
> In amdgpu_read_bios_from_rom(), the header of the VBIOS is firstly copied
> to 'header' from an IO memory region through
> amdgpu_asic_read_bios_from_rom(). Then the header is checked to see whether
> it is a valid header. If yes, the whole VBIOS, including the header, is
> then copied to 'adev->bios'. The problem here is that no check is enforced
> on the header after the second copy. Given that the device also has the
> permission to access the IO memory region, it is possible for a malicious
> device controlled by an attacker to modify the header between these two
> copies. By doing so, the attacker can supply compromised VBIOS, which can
> cause undefined behavior of the kernel and introduce potential security
> issues.
>
> This patch rewrites the header in 'adev->bios' using the header acquired in
> the first copy.
>
> Signed-off-by: Wenwen Wang 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
> index a5df80d..ac701f4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
> @@ -181,6 +181,8 @@ static bool amdgpu_read_bios_from_rom(struct 
> amdgpu_device *adev)
>   /* read complete BIOS */
>   amdgpu_asic_read_bios_from_rom(adev, adev->bios, len);
>  
> + memcpy(adev->bios, header, AMD_VBIOS_SIGNATURE_END);
> +
>   if (!check_atom_bios(adev->bios, len)) {
>   kfree(adev->bios);
>   return false;


Re: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Deucher, Alexander
This re-introduces a 64-bit division that is not handled correctly with the %
operator.


Alex


From: amd-gfx  on behalf of Rex Zhu 

Sent: Monday, October 22, 2018 12:09:21 PM
To: amd-gfx@lists.freedesktop.org; Koenig, Christian
Cc: Zhu, Rex
Subject: [PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

When the VA address is located in the last PD entry,
alloc_pts will fail. Caused by
"drm/amdgpu: add amdgpu_vm_entries_mask v2",
commit 72af632549b97ead9251bb155f08fefd1fb6f5c3.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 +++---
 1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 054633b..1a3af72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -191,26 +191,6 @@ static unsigned amdgpu_vm_num_entries(struct amdgpu_device 
*adev,
 }

 /**
- * amdgpu_vm_entries_mask - the mask to get the entry number of a PD/PT
- *
- * @adev: amdgpu_device pointer
- * @level: VMPT level
- *
- * Returns:
- * The mask to extract the entry number of a PD/PT from an address.
- */
-static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
-  unsigned int level)
-{
-   if (level <= adev->vm_manager.root_level)
-   return 0x;
-   else if (level != AMDGPU_VM_PTB)
-   return 0x1ff;
-   else
-   return AMDGPU_VM_PTE_COUNT(adev) - 1;
-}
-
-/**
  * amdgpu_vm_bo_size - returns the size of the BOs in bytes
  *
  * @adev: amdgpu_device pointer
@@ -419,17 +399,17 @@ static void amdgpu_vm_pt_start(struct amdgpu_device *adev,
 static bool amdgpu_vm_pt_descendant(struct amdgpu_device *adev,
 struct amdgpu_vm_pt_cursor *cursor)
 {
-   unsigned mask, shift, idx;
+   unsigned num_entries, shift, idx;

 if (!cursor->entry->entries)
 return false;

 BUG_ON(!cursor->entry->base.bo);
-   mask = amdgpu_vm_entries_mask(adev, cursor->level);
+   num_entries = amdgpu_vm_num_entries(adev, cursor->level);
 shift = amdgpu_vm_level_shift(adev, cursor->level);

 ++cursor->level;
-   idx = (cursor->pfn >> shift) & mask;
+   idx = (cursor->pfn >> shift) % num_entries;
 cursor->parent = cursor->entry;
 cursor->entry = &cursor->entry->entries[idx];
 return true;
@@ -1618,7 +1598,7 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,
 amdgpu_vm_pt_start(adev, params->vm, start, &cursor);
 while (cursor.pfn < end) {
 struct amdgpu_bo *pt = cursor.entry->base.bo;
-   unsigned shift, parent_shift, mask;
+   unsigned shift, parent_shift, num_entries;
 uint64_t incr, entry_end, pe_start;

 if (!pt)
@@ -1673,9 +1653,9 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,

 /* Looks good so far, calculate parameters for the update */
 incr = AMDGPU_GPU_PAGE_SIZE << shift;
-   mask = amdgpu_vm_entries_mask(adev, cursor.level);
-   pe_start = ((cursor.pfn >> shift) & mask) * 8;
-   entry_end = (mask + 1) << shift;
+   num_entries = amdgpu_vm_num_entries(adev, cursor.level);
+   pe_start = ((cursor.pfn >> shift) & (num_entries - 1)) * 8;
+   entry_end = num_entries << shift;
 entry_end += cursor.pfn & ~(entry_end - 1);
 entry_end = min(entry_end, end);

--
1.9.1



[PATCH] drm/amdgpu: Fix amdgpu_vm_alloc_pts failed

2018-10-22 Thread Rex Zhu
When the VA address is located in the last PD entry,
alloc_pts will fail. Caused by
"drm/amdgpu: add amdgpu_vm_entries_mask v2",
commit 72af632549b97ead9251bb155f08fefd1fb6f5c3.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 34 +++---
 1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 054633b..1a3af72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -191,26 +191,6 @@ static unsigned amdgpu_vm_num_entries(struct amdgpu_device 
*adev,
 }
 
 /**
- * amdgpu_vm_entries_mask - the mask to get the entry number of a PD/PT
- *
- * @adev: amdgpu_device pointer
- * @level: VMPT level
- *
- * Returns:
- * The mask to extract the entry number of a PD/PT from an address.
- */
-static uint32_t amdgpu_vm_entries_mask(struct amdgpu_device *adev,
-  unsigned int level)
-{
-   if (level <= adev->vm_manager.root_level)
-   return 0x;
-   else if (level != AMDGPU_VM_PTB)
-   return 0x1ff;
-   else
-   return AMDGPU_VM_PTE_COUNT(adev) - 1;
-}
-
-/**
  * amdgpu_vm_bo_size - returns the size of the BOs in bytes
  *
  * @adev: amdgpu_device pointer
@@ -419,17 +399,17 @@ static void amdgpu_vm_pt_start(struct amdgpu_device *adev,
 static bool amdgpu_vm_pt_descendant(struct amdgpu_device *adev,
struct amdgpu_vm_pt_cursor *cursor)
 {
-   unsigned mask, shift, idx;
+   unsigned num_entries, shift, idx;
 
if (!cursor->entry->entries)
return false;
 
BUG_ON(!cursor->entry->base.bo);
-   mask = amdgpu_vm_entries_mask(adev, cursor->level);
+   num_entries = amdgpu_vm_num_entries(adev, cursor->level);
shift = amdgpu_vm_level_shift(adev, cursor->level);
 
++cursor->level;
-   idx = (cursor->pfn >> shift) & mask;
+   idx = (cursor->pfn >> shift) % num_entries;
cursor->parent = cursor->entry;
cursor->entry = &cursor->entry->entries[idx];
return true;
@@ -1618,7 +1598,7 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,
amdgpu_vm_pt_start(adev, params->vm, start, &cursor);
while (cursor.pfn < end) {
struct amdgpu_bo *pt = cursor.entry->base.bo;
-   unsigned shift, parent_shift, mask;
+   unsigned shift, parent_shift, num_entries;
uint64_t incr, entry_end, pe_start;
 
if (!pt)
@@ -1673,9 +1653,9 @@ static int amdgpu_vm_update_ptes(struct 
amdgpu_pte_update_params *params,
 
/* Looks good so far, calculate parameters for the update */
incr = AMDGPU_GPU_PAGE_SIZE << shift;
-   mask = amdgpu_vm_entries_mask(adev, cursor.level);
-   pe_start = ((cursor.pfn >> shift) & mask) * 8;
-   entry_end = (mask + 1) << shift;
+   num_entries = amdgpu_vm_num_entries(adev, cursor.level);
+   pe_start = ((cursor.pfn >> shift) & (num_entries - 1)) * 8;
+   entry_end = num_entries << shift;
entry_end += cursor.pfn & ~(entry_end - 1);
entry_end = min(entry_end, end);
 
-- 
1.9.1



Re: [Linux-v4.18-rc6] modpost-errors when compiling with clang-7 and CONFIG_DRM_AMDGPU=m

2018-10-22 Thread Sedat Dilek
On Wed, Sep 19, 2018 at 11:47 AM Sedat Dilek  wrote:
>
> On Sun, Jul 29, 2018 at 4:39 PM, Christian König
>  wrote:
> >> Do you need further informations?
> >
> > No, that is a known issue.
> >
> > Regards,
> > Christian.
> >
>
> Hi Christian,
>
> is/was this issue fixed?
>
> Regards,
> - Sedat -
>
> >
> > Am 29.07.2018 um 15:52 schrieb Sedat Dilek:
> >>
> >> Hi,
> >>
> >> when compiling with clang-7 and CONFIG_DRM_AMDGPU=m I see the following...
> >>
> >>if [ "" = "-pg" ]; then if [ arch/x86/boot/compressed/misc.o !=
> >> "scripts/mod/empty.o" ]; then ./scripts/recordmcount
> >> "arch/x86/boot/compressed/misc.o"; fi; fi;
> >> ERROR: "__addsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__subdf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__gedf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__fixunssfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__floatunsisf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__unordsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__gesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__mulsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__truncdfsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__ltsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__muldf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__divdf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__eqsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__floatsisf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__ledf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__gtsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__fixdfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__floatunsidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__nesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__adddf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__extendsfdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__fixunsdfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__lesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__ltdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__floatsidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__subsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__gtdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__fixsfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__divsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> ERROR: "__floatdidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
> >> make[4]: *** [scripts/Makefile.modpost:92: __modpost] Error 1
> >> make[3]: *** [Makefile:1208: modules] Error 2
> >> make[3]: *** Waiting for unfinished jobs
> >>
> >> For now I have disabled CONFIG_DRM_AMDGPU=n.
> >>
> >> Do you need further information?
> >>

Hi,

any news on this issue?
With Linux v4.19 final I am still building with CONFIG_DRM_AMDGPU=n.
Thanks in advance.

Regards,
- Sedat -
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 4/5] drm/ttm: initialize globals during device init

2018-10-22 Thread Christian König

Am 22.10.18 um 08:45 schrieb Zhang, Jerry(Junwei):

A question in ttm_bo.c
[SNIP]

int ttm_bo_device_release(struct ttm_bo_device *bdev)
{
@@ -1623,18 +1620,25 @@ int ttm_bo_device_release(struct ttm_bo_device *bdev)

	drm_vma_offset_manager_destroy(&bdev->vma_manager);
+	if (!ret)
+		ttm_bo_global_release();


If ttm_bo_clean_mm() fails, it will skip ttm_bo_global_release().
When will it be called?


Never.



Shall we add it to delayed work? Or maybe we could release it directly?


No, when ttm_bo_device_release() fails, somebody is trying to unload a 
driver while this driver still has memory allocated.


In this case, BO accounting should not be released, because we should make 
sure that all the leaked memory is still accounted for.


Christian.



Regards,
Jerry




Re: [PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Christian König

Am 22.10.18 um 11:47 schrieb Rex Zhu:

The CSA buffer will be created per ctx; when the ctx is finalized,
the CSA buffer and VA will be released, so we need to
do ctx_mgr fini before vm fini.

Signed-off-by: Rex Zhu 


Reviewed-by: Christian König 



---
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 27de848..f2ef9a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1054,8 +1054,8 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
pasid = fpriv->vm.pasid;
pd = amdgpu_bo_ref(fpriv->vm.root.base.bo);
  
-	amdgpu_vm_fini(adev, &fpriv->vm);
 	amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr);
+	amdgpu_vm_fini(adev, &fpriv->vm);
 
 	if (pasid)
 		amdgpu_pasid_free_delayed(pd->tbo.resv, pasid);




[PATCH] drm/amdgpu: Reverse the sequence of ctx_mgr_fini and vm_fini in amdgpu_driver_postclose_kms

2018-10-22 Thread Rex Zhu
The CSA buffer will be created per ctx; when the ctx is finalized,
the CSA buffer and VA will be released, so we need to
do ctx_mgr fini before vm fini.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 27de848..f2ef9a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1054,8 +1054,8 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
pasid = fpriv->vm.pasid;
pd = amdgpu_bo_ref(fpriv->vm.root.base.bo);
 
-   amdgpu_vm_fini(adev, &fpriv->vm);
amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr);
+   amdgpu_vm_fini(adev, &fpriv->vm);
 
if (pasid)
amdgpu_pasid_free_delayed(pd->tbo.resv, pasid);
-- 
1.9.1



Re: [PATCH v2 2/2] drm/amdgpu: Retire amdgpu_ring.ready flag.

2018-10-22 Thread Koenig, Christian
Am 19.10.18 um 22:52 schrieb Andrey Grodzovsky:
> Start using drm_gpu_scheduler.ready instead.

Please drop all occurrences of setting sched.ready manually around the 
ring tests.

Instead add a helper function into amdgpu_ring.c which does the ring 
tests and sets ready depending on the result.

Regards,
Christian.

>
> Signed-off-by: Andrey Grodzovsky 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c|  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c |  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c|  6 +++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 18 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  1 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  2 +-
>   drivers/gpu/drm/amd/amdgpu/cik_sdma.c |  8 
>   drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 14 ++---
>   drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 12 ++--
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 24 
> ---
>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 18 -
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c |  2 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c|  8 
>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c|  8 
>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c| 14 ++---
>   drivers/gpu/drm/amd/amdgpu/si_dma.c   |  6 +++---
>   drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c |  6 +++---
>   drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c |  6 +++---
>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c | 10 +-
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 10 +-
>   drivers/gpu/drm/amd/amdgpu/vce_v2_0.c |  4 ++--
>   drivers/gpu/drm/amd/amdgpu/vce_v3_0.c |  4 ++--
>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c |  6 +++---
>   drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 14 ++---
>   26 files changed, 105 insertions(+), 104 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> index c31a884..eaa58bb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> @@ -144,7 +144,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
> KGD_MAX_QUEUES);
>   
>   /* remove the KIQ bit as well */
> - if (adev->gfx.kiq.ring.ready)
> + if (adev->gfx.kiq.ring.sched.ready)
>   clear_bit(amdgpu_gfx_queue_to_bit(adev,
> adev->gfx.kiq.ring.me 
> - 1,
> 
> adev->gfx.kiq.ring.pipe,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> index 42cb4c4..f7819a5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> @@ -876,7 +876,7 @@ static int invalidate_tlbs(struct kgd_dev *kgd, uint16_t 
> pasid)
>   if (adev->in_gpu_reset)
>   return -EIO;
>   
> - if (ring->ready)
> + if (ring->sched.ready)
>   return invalidate_tlbs_with_kiq(adev, pasid);
>   
>   for (vmid = 0; vmid < 16; vmid++) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index b8963b7..fc74f40a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -146,7 +146,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
> num_ibs,
>   fence_ctx = 0;
>   }
>   
> - if (!ring->ready) {
> + if (!ring->sched.ready) {
>   dev_err(adev->dev, "couldn't schedule ib on ring <%s>\n", 
> ring->name);
>   return -EINVAL;
>   }
> @@ -351,7 +351,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
>   struct amdgpu_ring *ring = adev->rings[i];
>   long tmo;
>   
> - if (!ring || !ring->ready)
> + if (!ring || !ring->sched.ready)
>   continue;
>   
>   /* skip IB tests for KIQ in general for the below reasons:
> @@ -375,7 +375,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
>   
>   r = amdgpu_ring_test_ib(ring, tmo);
>   if (r) {
> - ring->ready = false;
> + ring->sched.ready = false;
>   
>   if (ring == &adev->gfx.gfx_ring[0]) {
>   /* oh, oh, that's really bad */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index 50ece76..25307a4 100644
> --- a/driver

Re: [PATCH v2 1/2] drm/sched: Add boolean to mark if sched is ready to work v2

2018-10-22 Thread Koenig, Christian
Am 19.10.18 um 22:52 schrieb Andrey Grodzovsky:
> Problem:
> A particular scheduler may become unusable (due to the underlying HW) after
> some event (e.g. GPU reset). If it's later chosen by
> the get-free-sched policy, a command will fail to be
> submitted.
>
> Fix:
> Add a driver-specific callback to report the sched status, so
> an rq with a bad sched can be avoided in favor of a working one,
> or none at all, in which case job init will fail.
>
> v2: Switch from driver callback to flag in scheduler.
>
> Signed-off-by: Andrey Grodzovsky 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
>   drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  2 +-
>   drivers/gpu/drm/scheduler/sched_entity.c  |  9 -
>   drivers/gpu/drm/scheduler/sched_main.c| 10 +-
>   drivers/gpu/drm/v3d/v3d_sched.c   |  4 ++--
>   include/drm/gpu_scheduler.h   |  5 -
>   6 files changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 5448cf2..bf845b0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -450,7 +450,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring 
> *ring,
>   
>   r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
>  num_hw_submission, amdgpu_job_hang_limit,
> -timeout, ring->name);
> +timeout, ring->name, false);
>   if (r) {
>   DRM_ERROR("Failed to create scheduler on ring %s.\n",
> ring->name);
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c 
> b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> index f8c5f1e..9dca347 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> @@ -178,7 +178,7 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
>   
>   ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
>etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
> -  msecs_to_jiffies(500), dev_name(gpu->dev));
> +  msecs_to_jiffies(500), dev_name(gpu->dev), true);
>   if (ret)
>   return ret;
>   
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c 
> b/drivers/gpu/drm/scheduler/sched_entity.c
> index 3e22a54..ba54c30 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -130,7 +130,14 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity 
> *entity)
>   int i;
>   
>   for (i = 0; i < entity->num_rq_list; ++i) {
> - num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
> + struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
> +
> + if (!entity->rq_list[i]->sched->ready) {
> + DRM_WARN("sched%s is not ready, skipping", sched->name);
> + continue;
> + }
> +
> + num_jobs = atomic_read(&sched->num_jobs);
>   if (num_jobs < min_jobs) {
>   min_jobs = num_jobs;
>   rq = entity->rq_list[i];
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> b/drivers/gpu/drm/scheduler/sched_main.c
> index 63b997d..772adec 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -420,6 +420,9 @@ int drm_sched_job_init(struct drm_sched_job *job,
>   struct drm_gpu_scheduler *sched;
>   
>   drm_sched_entity_select_rq(entity);
> + if (!entity->rq)
> + return -ENOENT;
> +
>   sched = entity->rq->sched;
>   
>   job->sched = sched;
> @@ -598,6 +601,7 @@ static int drm_sched_main(void *param)
>* @hang_limit: number of times to allow a job to hang before dropping it
>* @timeout: timeout value in jiffies for the scheduler
>* @name: name used for debugging
> + * @ready: marks if the underlying HW is ready to work
>*
>* Return 0 on success, otherwise error code.
>*/
> @@ -606,7 +610,8 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>  unsigned hw_submission,
>  unsigned hang_limit,
>  long timeout,
> -const char *name)
> +const char *name,
> +bool ready)

Please drop the ready flag here. We should consider a scheduler ready as 
soon as it is initialized.

Apart from that looks good to me,
Christian.

>   {
>   int i;
>   sched->ops = ops;
> @@ -633,6 +638,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>   return PTR_ERR(sched->thread);
>   }
>   
> + sched->ready = ready;
>   return 0;
>   }
>   EXPORT_SYMBOL(drm_sched_init);
> @@ -648,5 +654,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
>   {
>   if (sched->thread)
>   kthread_stop(sched->thread);
>

Re: [PATCH] drm/amdgpu: fix sdma v4 ring is disabled accidently

2018-10-22 Thread Koenig, Christian
Mhm, good catch.

And yes, using the paging queue when it is available sounds like a good 
idea to me as well.

So far I've only used it for VM updates to actually test if it works as 
expected.

Regards,
Christian.

Am 19.10.18 um 21:53 schrieb Kuehling, Felix:
> [+Christian]
>
> Should the buffer funcs also use the paging ring? I think that would be
> important for being able to clear page tables or migrating a BO while
> handling a page fault.
>
> Regards,
>    Felix
>
> On 2018-10-19 3:13 p.m., Yang, Philip wrote:
>> For sdma v4, there is bug caused by
>> commit d4e869b6b5d6 ("drm/amdgpu: add ring test for page queue")'
>>
>> local variable ring is reused and changed, so 
>> amdgpu_ttm_set_buffer_funcs_status(adev, true)
>> is skipped accidentally. As a result, amdgpu_fill_buffer() will fail, kernel 
>> message:
>>
>> [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear memory with ring 
>> turned off.
>> [   25.260444] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear 
>> memory with ring turned off.
>> [   25.260627] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear 
>> memory with ring turned off.
>> [   25.290119] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear 
>> memory with ring turned off.
>> [   25.290370] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear 
>> memory with ring turned off.
>> [   25.319971] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear 
>> memory with ring turned off.
>> [   25.320486] amdgpu :19:00.0: [mmhub] VMC page fault (src_id:0 
>> ring:154 vmid:8 pasid:32768, for process  pid 0 thread  pid 0)
>> [   25.320533] amdgpu :19:00.0:   in page starting at address 
>> 0x from 18
>> [   25.320563] amdgpu :19:00.0: VM_L2_PROTECTION_FAULT_STATUS:0x00800134
>>
>> Change-Id: Idacdf8e60557edb0a4a499aa4051b75d87ce4091
>> Signed-off-by: Philip Yang 
>> ---
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 7 ---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
>> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>> index ede149a..cd368ac 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>> @@ -1151,10 +1151,11 @@ static int sdma_v4_0_start(struct amdgpu_device 
>> *adev)
>>  }
>>   
>>  if (adev->sdma.has_page_queue) {
>> -ring = &adev->sdma.instance[i].page;
>> -r = amdgpu_ring_test_ring(ring);
>> +struct amdgpu_ring *page = &adev->sdma.instance[i].page;
>> +
>> +r = amdgpu_ring_test_ring(page);
>>  if (r) {
>> -ring->ready = false;
>> +page->ready = false;
>>  return r;
>>  }
>>  }



Re: [Linux-v4.18-rc6] modpost-errors when compiling with clang-7 and CONFIG_DRM_AMDGPU=m

2018-10-22 Thread Koenig, Christian
Am 22.10.18 um 10:40 schrieb Sedat Dilek:
> [SNIP]
>>> Am 29.07.2018 um 15:52 schrieb Sedat Dilek:
 Hi,

 when compiling with clang-7 and CONFIG_DRM_AMDGPU=m I see the following...

 if [ "" = "-pg" ]; then if [ arch/x86/boot/compressed/misc.o !=
 "scripts/mod/empty.o" ]; then ./scripts/recordmcount
 "arch/x86/boot/compressed/misc.o"; fi; fi;
 ERROR: "__addsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__subdf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__gedf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__fixunssfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__floatunsisf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__unordsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__gesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__mulsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__truncdfsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__ltsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__muldf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__divdf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__eqsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__floatsisf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__ledf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__gtsf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__fixdfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__floatunsidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__nesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__adddf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__extendsfdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__fixunsdfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__lesf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__ltdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__floatsidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__subsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__gtdf2" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__fixsfsi" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__divsf3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 ERROR: "__floatdidf" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
 make[4]: *** [scripts/Makefile.modpost:92: __modpost] Error 1
 make[3]: *** [Makefile:1208: modules] Error 2
 make[3]: *** Waiting for unfinished jobs

 For now I have disabled CONFIG_DRM_AMDGPU=n.

 Do you need further information?

> Hi,
>
> any news on this issue?

Unfortunately not, as far as I know. Harry, any timeline on when we could 
fix that?

Thanks,
Christian.

> With Linux v4.19 final I am still building with CONFIG_DRM_AMDGPU=n.
> Thanks in advance.
>
> Regards,
> - Sedat -



RE: [PATCH] drm/amd/powerplay: commit get_performance_level API as DAL needed

2018-10-22 Thread Xu, Feifei
Reviewed-by: Feifei Xu 

-Original Message-
From: amd-gfx  On Behalf Of Evan Quan
Sent: Monday, October 22, 2018 3:20 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Xu, Feifei 
; Quan, Evan 
Subject: [PATCH] drm/amd/powerplay: commit get_performance_level API as DAL 
needed

This can suppress the error reported on driver loading. Also these are empty
APIs, as Vega12/Vega20 have no performance levels.

Change-Id: Ifa322a0e57fe3be4bfd9503f26e8deb7daab096d
Signed-off-by: Evan Quan 



[PATCH] drm/amd/powerplay: commit get_performance_level API as DAL needed

2018-10-22 Thread Evan Quan
This can suppress the error reported on driver loading. Also these
are empty APIs, as Vega12/Vega20 have no performance levels.

Change-Id: Ifa322a0e57fe3be4bfd9503f26e8deb7daab096d
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 8 
 drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 9 +
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index 9600e2f226e9..74bc37308dc0 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -2356,6 +2356,13 @@ static int vega12_gfx_off_control(struct pp_hwmgr 
*hwmgr, bool enable)
return vega12_disable_gfx_off(hwmgr);
 }
 
+static int vega12_get_performance_level(struct pp_hwmgr *hwmgr, const struct 
pp_hw_power_state *state,
+   PHM_PerformanceLevelDesignation designation, 
uint32_t index,
+   PHM_PerformanceLevel *level)
+{
+   return 0;
+}
+
 static const struct pp_hwmgr_func vega12_hwmgr_funcs = {
.backend_init = vega12_hwmgr_backend_init,
.backend_fini = vega12_hwmgr_backend_fini,
@@ -2406,6 +2413,7 @@ static const struct pp_hwmgr_func vega12_hwmgr_funcs = {
.register_irq_handlers = smu9_register_irq_handlers,
.start_thermal_controller = vega12_start_thermal_controller,
.powergate_gfx = vega12_gfx_off_control,
+   .get_performance_level = vega12_get_performance_level,
 };
 
 int vega12_hwmgr_init(struct pp_hwmgr *hwmgr)
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
index b4dbbb7c334c..894eae4b9d21 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
@@ -2041,6 +2041,13 @@ int vega20_display_clock_voltage_request(struct pp_hwmgr 
*hwmgr,
return result;
 }
 
+static int vega20_get_performance_level(struct pp_hwmgr *hwmgr, const struct 
pp_hw_power_state *state,
+   PHM_PerformanceLevelDesignation designation, 
uint32_t index,
+   PHM_PerformanceLevel *level)
+{
+   return 0;
+}
+
 static int vega20_notify_smc_display_config_after_ps_adjustment(
struct pp_hwmgr *hwmgr)
 {
@@ -3476,6 +3483,8 @@ static const struct pp_hwmgr_func vega20_hwmgr_funcs = {
vega20_set_watermarks_for_clocks_ranges,
.display_clock_voltage_request =
vega20_display_clock_voltage_request,
+   .get_performance_level =
+   vega20_get_performance_level,
/* UMD pstate, profile related */
.force_dpm_level =
vega20_dpm_force_dpm_level,
-- 
2.19.1
