Re: [PATCH] drm/amdgpu: re-validate per VM BOs if required

2018-03-21 Thread zhoucm1



On 2018年03月21日 20:00, Christian König wrote:

Am 21.03.2018 um 11:31 schrieb zhoucm1:



On 2018年03月20日 17:13, zhoucm1 wrote:



On 2018年03月20日 15:49, zhoucm1 wrote:



On 2018年03月19日 18:50, Christian König wrote:
If a per VM BO ends up in an allowed domain it never moves back into the
preferred domain.

Signed-off-by: Christian König 
Yeah, it's better than mine, Reviewed-by: Chunming Zhou 



The remaining problem is the BO validation order.
With the old bo list usage the BOs have a fixed order in the list,
but with the per-VM-BO feature the order isn't fixed, which makes the
performance fluctuate.
E.g. the Steam game F1 generally runs at a stable 40fps with the old
bo list, but with the per-VM-BO feature enabled the fps varies
between 37 and 40.

Even worse, sometimes the fps can drop to 18.
The root cause is that some *KEY* BOs end up in an allowed domain at
random, because there is no fixed validation order.
In the old bo list case the later BOs are the evictable ones, so the
front BOs get validated into their preferred domain first; that is also
why the performance stays at a stable 40fps with the old bo list.


Some more thinking:
Could user space pass a validation order for per-VM BOs, or set a BO
index for every per-VM BO?

Ping...
If there is no objection, I will try to make a bo list for the per-VM
case to determine the validation order.


I've already tried to give the list a certain order in 
amdgpu_vm_bo_invalidate(), e.g. we add kernel BOs (page tables) to the 
front and normal BOs to the back of the list.


What you could do is to splice the evicted list to a local copy in 
amdgpu_vm_validate_pt_bos(), then use list_sort() from linux/list_sort.h.


As sort criteria we could use the BO's in-kernel priority plus some new
user-definable priority; this way page tables are still validated first.
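
Something like this rough sketch, perhaps (the "priority" field, the compare
callback name and the exact splice point are assumptions for illustration,
not existing amdgpu code):

#include <linux/list.h>
#include <linux/list_sort.h>

/* Hypothetical compare callback: "priority" is an assumed field combining
 * the in-kernel priority (page tables highest) with the new user definable
 * priority.
 */
static int amdgpu_vm_validate_cmp(void *priv, struct list_head *a,
				  struct list_head *b)
{
	struct amdgpu_vm_bo_base *bo_a =
		list_entry(a, struct amdgpu_vm_bo_base, vm_status);
	struct amdgpu_vm_bo_base *bo_b =
		list_entry(b, struct amdgpu_vm_bo_base, vm_status);

	/* higher priority BOs are validated first */
	return bo_b->priority - bo_a->priority;
}

	/* in amdgpu_vm_validate_pt_bos(): splice to a local list and sort */
	LIST_HEAD(validate_list);

	list_splice_init(&vm->evicted, &validate_list);
	list_sort(NULL, &validate_list, amdgpu_vm_validate_cmp);
	/* ... then validate the BOs in validate_list order ... */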

Great, this idea is also what I discussed with the UMD guys.
OK, let's go with this one.

Thanks for the feedback.
David Zhou


Regards,
Christian.



Regards,
David Zhou


Any comment?


Regards,
David Zhou



Any thought?

Regards,
David Zhou


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 15 +++++++++++++--
  1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 24474294c92a..e8b515dd032c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1770,14 +1770,16 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
 
 	spin_lock(&vm->status_lock);
 	while (!list_empty(&vm->moved)) {
-		struct amdgpu_bo_va *bo_va;
 		struct reservation_object *resv;
+		struct amdgpu_bo_va *bo_va;
+		struct amdgpu_bo *bo;
 
 		bo_va = list_first_entry(&vm->moved,
 			struct amdgpu_bo_va, base.vm_status);
 		spin_unlock(&vm->status_lock);
 
-		resv = bo_va->base.bo->tbo.resv;
+		bo = bo_va->base.bo;
+		resv = bo->tbo.resv;
 
 		/* Per VM BOs never need to bo cleared in the page tables */
 		if (resv == vm->root.base.bo->tbo.resv)
@@ -1797,6 +1799,15 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
 		reservation_object_unlock(resv);
 
 		spin_lock(&vm->status_lock);
+
+		/* If the BO prefers to be in VRAM, but currently isn't add it
+		 * back to the evicted list so that it gets validated again on
+		 * the next command submission.
+		 */
+		if (resv == vm->root.base.bo->tbo.resv &&
+		    bo->preferred_domains == AMDGPU_GEM_DOMAIN_VRAM &&
+		    bo->tbo.mem.mem_type != TTM_PL_VRAM)
+			list_add_tail(&bo_va->base.vm_status, &vm->evicted);
 	}
 	spin_unlock(&vm->status_lock);












RE: [PATCH] drm/amdgpu: give more chance for tlb flush if failed

2018-03-21 Thread Deng, Emily
Hi Christian,
 I agree that the patch hides the real problem and is just a workaround;
I will change the patch as you suggest.
As SR-IOV has lots of issues on the staging branch, maybe we could first
submit the two workarounds, and later I will spend some time to find out
the root cause.
 I think the issue reproduces reliably.

Best Wishes,
Emily Deng


> -Original Message-
> From: Christian König [mailto:ckoenig.leichtzumer...@gmail.com]
> Sent: Tuesday, March 20, 2018 6:24 PM
> To: Deng, Emily ; amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk 
> Subject: Re: [PATCH] drm/amdgpu: give more chance for tlb flush if failed
> 
> Am 20.03.2018 um 07:29 schrieb Emily Deng:
> > Under SR-IOV the CPU based tlb flush would sometimes timeout within the
> > given 100ms period. Instead of letting it fail and continue, we can give
> > it more chances to repeat the tlb flush on the failed VMHUB.
> >
> > this could fix the massive "Timeout waiting for VM flush ACK"
> > error during vk_encoder test.
> 
> Well that one is a big NAK since it once more just hides the real problem that
> we sometimes drop register writes.
> 
> What we did during debugging to avoid the problem is the following:
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > index a70cbc45c4c1..3536d50375fa 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > @@ -338,6 +338,10 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
> >  		u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
> >  
> >  		WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
> > +		while (RREG32_NO_KIQ(hub->vm_inv_eng0_req + eng) != tmp) {
> > +			DRM_ERROR("Need one more try to write the VMHUB flush request!");
> > +			WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
> > +		}
> >  
> >  		/* Busy wait for ACK.*/
> >  		for (j = 0; j < 100; j++) {
> 
> But that can only be a temporary workaround as well.
> 
> The question is rather: can you reliably reproduce this issue with the
> vk_encoder test?
> 
> Thanks,
> Christian.
> 
> >
> > Signed-off-by: Monk Liu 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 24 +++++++++++++++++++-----
> >   1 file changed, 19 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > index a70cbc4..517712b 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > @@ -329,13 +329,18 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
> >  {
> >  	/* Use register 17 for GART */
> >  	const unsigned eng = 17;
> > -	unsigned i, j;
> > +	unsigned i, j, loop = 0;
> > +	unsigned flush_done = 0;
> > +
> > +retry:
> >  
> >  	spin_lock(&adev->gmc.invalidate_lock);
> >  
> >  	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
> >  		struct amdgpu_vmhub *hub = &adev->vmhub[i];
> >  		u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
> > +		if (flush_done & (1 << i)) /* this vmhub flushed */
> > +			continue;
> >  
> >  		WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
> >  
> > @@ -347,8 +352,10 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
> >  				break;
> >  			cpu_relax();
> >  		}
> > -		if (j < 100)
> > +		if (j < 100) {
> > +			flush_done |= (1 << i);
> >  			continue;
> > +		}
> >  
> >  		/* Wait for ACK with a delay.*/
> >  		for (j = 0; j < adev->usec_timeout; j++) {
> > @@ -358,15 +365,22 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev,
> >  				break;
> >  			udelay(1);
> >  		}
> > -		if (j < adev->usec_timeout)
> > +		if (j < adev->usec_timeout) {
> > +			flush_done |= (1 << i);
> >  			continue;
> > -
> > -		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
> > +		}
> >  	}
> >  
> >  	spin_unlock(&adev->gmc.invalidate_lock);
> > +	if (flush_done != 3) {
> > +		if (loop++ < 3)
> > +			goto retry;
> > +		else
> > +			DRM_ERROR("Timeout waiting for VM flush ACK!\n");
> > +	}
> >  }
> >  
> > +
> >  static uint64_t gmc_v9_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
> >  					    unsigned vmid, uint64_t pd_addr)
> >  {



RE: [PATCH] drm/amdgpu: Fix NULL ptr on driver unload due to init failure.

2018-03-21 Thread Zhu, Rex
 	kfree(adev->irq.client[i].sources);
+	adev->irq.client[i].sources = NULL;

Setting adev->irq.client[i].sources to NULL in amdgpu_irq_fini can also fix
the NULL ptr in amdgpu_irq_disable_all.
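
For illustration, that alternative would look roughly like this inside
amdgpu_irq_fini() (the loop bound macro and the surrounding code are
assumptions; only the kfree/NULL pair above is from the actual suggestion):

	for (i = 0; i < AMDGPU_IH_CLIENTID_MAX; ++i) {
		if (!adev->irq.client[i].sources)
			continue;
		/* ... free the individual irq sources here ... */
		kfree(adev->irq.client[i].sources);
		/* clear the pointer so a second fini/disable pass sees NULL */
		adev->irq.client[i].sources = NULL;
	}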

But I didn't check why amdgpu_device_fini was called twice.
 
This patch looks good.


Best Regards
Rex

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
Andrey Grodzovsky
Sent: Thursday, March 22, 2018 2:23 AM
To: amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey
Subject: [PATCH] drm/amdgpu: Fix NULL ptr on driver unload due to init failure.

Problem:
When unloading due to failure amdgpu_device_fini was called twice which was 
leading to NULL ptr in amdgpu_irq_disable_all.

Fix:
Call amdgpu_device_fini only once from amdgpu_driver_unload_kms.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 60e577c..c51be05 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2023,7 +2023,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
}
dev_err(adev->dev, "amdgpu_device_ip_init failed\n");
amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL, 0, 
0);
-   amdgpu_device_ip_fini(adev);
goto failed;
}
 
--
2.7.4



Re: [trivial PATCH V2] treewide: Align function definition open/close braces

2018-03-21 Thread Nicolin Chen
On Wed, Mar 21, 2018 at 03:09:32PM -0700, Joe Perches wrote:
> Some functions definitions have either the initial open brace and/or
> the closing brace outside of column 1.
> 
> Move those braces to column 1.
> 
> This allows various function analyzers like gnu complexity to work
> properly for these modified functions.
> 
> Signed-off-by: Joe Perches 
> Acked-by: Andy Shevchenko 
> Acked-by: Paul Moore 
> Acked-by: Alex Deucher 
> Acked-by: Dave Chinner 
> Reviewed-by: Darrick J. Wong 
> Acked-by: Alexandre Belloni 
> Acked-by: Martin K. Petersen 
> Acked-by: Takashi Iwai 
> Acked-by: Mauro Carvalho Chehab 
> ---
> 
> git diff -w still shows no difference.
> 
> This patch was sent back in December and not applied.
> 
> As the trivial maintainer seems not active, it'd be nice if
> Andrew Morton picks this up.
> 
> V2: Remove fs/xfs/libxfs/xfs_alloc.c as it's updated and remerge the rest
> 
>  arch/x86/include/asm/atomic64_32.h   |  2 +-
>  drivers/acpi/custom_method.c |  2 +-
>  drivers/acpi/fan.c   |  2 +-
>  drivers/gpu/drm/amd/display/dc/core/dc.c |  2 +-
>  drivers/media/i2c/msp3400-kthreads.c |  2 +-
>  drivers/message/fusion/mptsas.c  |  2 +-
>  drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c |  2 +-
>  drivers/net/wireless/ath/ath9k/xmit.c|  2 +-
>  drivers/platform/x86/eeepc-laptop.c  |  2 +-
>  drivers/rtc/rtc-ab-b5ze-s3.c |  2 +-
>  drivers/scsi/dpt_i2o.c   |  2 +-
>  drivers/scsi/sym53c8xx_2/sym_glue.c  |  2 +-
>  fs/locks.c   |  2 +-
>  fs/ocfs2/stack_user.c|  2 +-
>  fs/xfs/xfs_export.c  |  2 +-
>  kernel/audit.c   |  6 +++---
>  kernel/trace/trace_printk.c  |  4 ++--
>  lib/raid6/sse2.c | 14 +++---

For fsl_dma.c:
>  sound/soc/fsl/fsl_dma.c  |  2 +-

Acked-by: Nicolin Chen 

Thanks

>  19 files changed, 28 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/x86/include/asm/atomic64_32.h 
> b/arch/x86/include/asm/atomic64_32.h
> index 46e1ef17d92d..92212bf0484f 100644
> --- a/arch/x86/include/asm/atomic64_32.h
> +++ b/arch/x86/include/asm/atomic64_32.h
> @@ -123,7 +123,7 @@ static inline long long arch_atomic64_read(const 
> atomic64_t *v)
>   long long r;
>   alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
>   return r;
> - }
> +}
>  
>  /**
>   * arch_atomic64_add_return - add and return
> diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
> index b33fba70ec51..a07fbe999eb6 100644
> --- a/drivers/acpi/custom_method.c
> +++ b/drivers/acpi/custom_method.c
> @@ -97,7 +97,7 @@ static void __exit acpi_custom_method_exit(void)
>  {
>   if (cm_dentry)
>   debugfs_remove(cm_dentry);
> - }
> +}
>  
>  module_init(acpi_custom_method_init);
>  module_exit(acpi_custom_method_exit);
> diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
> index 6cf4988206f2..3563103590c6 100644
> --- a/drivers/acpi/fan.c
> +++ b/drivers/acpi/fan.c
> @@ -219,7 +219,7 @@ fan_set_cur_state(struct thermal_cooling_device *cdev, 
> unsigned long state)
>   return fan_set_state_acpi4(device, state);
>   else
>   return fan_set_state(device, state);
> - }
> +}
>  
>  static const struct thermal_cooling_device_ops fan_cooling_ops = {
>   .get_max_state = fan_get_max_state,
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc.c
> index 8394d69b963f..e934326a95d3 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
> @@ -588,7 +588,7 @@ static void disable_dangling_plane(struct dc *dc, struct 
> dc_state *context)
>   
> **/
>  
>  struct dc *dc_create(const struct dc_init_data *init_params)
> - {
> +{
>   struct dc *dc = kzalloc(sizeof(*dc), GFP_KERNEL);
>   unsigned int full_pipe_count;
>  
> diff --git a/drivers/media/i2c/msp3400-kthreads.c 
> b/drivers/media/i2c/msp3400-kthreads.c
> index 4dd01e9f553b..dc6cb8d475b3 100644
> --- a/drivers/media/i2c/msp3400-kthreads.c
> +++ b/drivers/media/i2c/msp3400-kthreads.c
> @@ -885,7 +885,7 @@ static int msp34xxg_modus(struct i2c_client *client)
>  }
>  
>  static void msp34xxg_set_source(struct i2c_client *client, u16 reg, int in)
> - {
> +{
>   struct msp_state *state = 

[trivial PATCH V2] treewide: Align function definition open/close braces

2018-03-21 Thread Joe Perches
Some functions definitions have either the initial open brace and/or
the closing brace outside of column 1.

Move those braces to column 1.

This allows various function analyzers like gnu complexity to work
properly for these modified functions.

Signed-off-by: Joe Perches 
Acked-by: Andy Shevchenko 
Acked-by: Paul Moore 
Acked-by: Alex Deucher 
Acked-by: Dave Chinner 
Reviewed-by: Darrick J. Wong 
Acked-by: Alexandre Belloni 
Acked-by: Martin K. Petersen 
Acked-by: Takashi Iwai 
Acked-by: Mauro Carvalho Chehab 
---

git diff -w still shows no difference.

This patch was sent back in December and not applied.

As the trivial maintainer seems not active, it'd be nice if
Andrew Morton picks this up.

V2: Remove fs/xfs/libxfs/xfs_alloc.c as it's updated and remerge the rest

 arch/x86/include/asm/atomic64_32.h   |  2 +-
 drivers/acpi/custom_method.c |  2 +-
 drivers/acpi/fan.c   |  2 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c |  2 +-
 drivers/media/i2c/msp3400-kthreads.c |  2 +-
 drivers/message/fusion/mptsas.c  |  2 +-
 drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c |  2 +-
 drivers/net/wireless/ath/ath9k/xmit.c|  2 +-
 drivers/platform/x86/eeepc-laptop.c  |  2 +-
 drivers/rtc/rtc-ab-b5ze-s3.c |  2 +-
 drivers/scsi/dpt_i2o.c   |  2 +-
 drivers/scsi/sym53c8xx_2/sym_glue.c  |  2 +-
 fs/locks.c   |  2 +-
 fs/ocfs2/stack_user.c|  2 +-
 fs/xfs/xfs_export.c  |  2 +-
 kernel/audit.c   |  6 +++---
 kernel/trace/trace_printk.c  |  4 ++--
 lib/raid6/sse2.c | 14 +++---
 sound/soc/fsl/fsl_dma.c  |  2 +-
 19 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_32.h 
b/arch/x86/include/asm/atomic64_32.h
index 46e1ef17d92d..92212bf0484f 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -123,7 +123,7 @@ static inline long long arch_atomic64_read(const atomic64_t 
*v)
long long r;
	alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
return r;
- }
+}
 
 /**
  * arch_atomic64_add_return - add and return
diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c
index b33fba70ec51..a07fbe999eb6 100644
--- a/drivers/acpi/custom_method.c
+++ b/drivers/acpi/custom_method.c
@@ -97,7 +97,7 @@ static void __exit acpi_custom_method_exit(void)
 {
if (cm_dentry)
debugfs_remove(cm_dentry);
- }
+}
 
 module_init(acpi_custom_method_init);
 module_exit(acpi_custom_method_exit);
diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
index 6cf4988206f2..3563103590c6 100644
--- a/drivers/acpi/fan.c
+++ b/drivers/acpi/fan.c
@@ -219,7 +219,7 @@ fan_set_cur_state(struct thermal_cooling_device *cdev, 
unsigned long state)
return fan_set_state_acpi4(device, state);
else
return fan_set_state(device, state);
- }
+}
 
 static const struct thermal_cooling_device_ops fan_cooling_ops = {
.get_max_state = fan_get_max_state,
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 8394d69b963f..e934326a95d3 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -588,7 +588,7 @@ static void disable_dangling_plane(struct dc *dc, struct 
dc_state *context)
  
**/
 
 struct dc *dc_create(const struct dc_init_data *init_params)
- {
+{
struct dc *dc = kzalloc(sizeof(*dc), GFP_KERNEL);
unsigned int full_pipe_count;
 
diff --git a/drivers/media/i2c/msp3400-kthreads.c 
b/drivers/media/i2c/msp3400-kthreads.c
index 4dd01e9f553b..dc6cb8d475b3 100644
--- a/drivers/media/i2c/msp3400-kthreads.c
+++ b/drivers/media/i2c/msp3400-kthreads.c
@@ -885,7 +885,7 @@ static int msp34xxg_modus(struct i2c_client *client)
 }
 
 static void msp34xxg_set_source(struct i2c_client *client, u16 reg, int in)
- {
+{
struct msp_state *state = to_state(i2c_get_clientdata(client));
int source, matrix;
 
diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
index 439ee9c5f535..231f3a1e27bf 100644
--- a/drivers/message/fusion/mptsas.c
+++ b/drivers/message/fusion/mptsas.c
@@ -2967,7 +2967,7 @@ mptsas_exp_repmanufacture_info(MPT_ADAPTER *ioc,
	mutex_unlock(&ioc->sas_mgmt.mutex);
 out:
  

Re: [trivial PATCH V2] treewide: Align function definition open/close braces

2018-03-21 Thread Martin K. Petersen

Joe,

> Some functions definitions have either the initial open brace and/or
> the closing brace outside of column 1.
>
> Move those braces to column 1.

drivers/scsi and drivers/message/fusion parts look fine.

Acked-by: Martin K. Petersen 

-- 
Martin K. Petersen  Oracle Linux Engineering


[pull] radeon and amdgpu drm-fixes-4.16

2018-03-21 Thread Alex Deucher
Hi Dave,

A few more fixes for 4.16.  Mostly for displays:
- A fix for DP handling on radeon
- Fix banding on eDP panels
- Fix HBR audio
- Fix for disabling VGA mode on Raven that leads to a corrupt or
  blank display on some platforms


The following changes since commit 67f1976665900c86989cfe99b884dc51bddfb0e9:

  Merge tag 'drm-intel-fixes-2018-03-14' of 
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes (2018-03-15 09:26:11 
+1000)

are available in the git repository at:

  git://people.freedesktop.org/~agd5f/linux drm-fixes-4.16

for you to fetch changes up to 731a373698c9675d5aed8a30d8c9861bea9c41a2:

  drm/amd/display: Add one to EDID's audio channel count when passing to DC 
(2018-03-21 00:24:47 -0500)


Clark Zheng (1):
  drm/amd/display: Refine disable VGA

Harry Wentland (2):
  drm/amd/display: We shouldn't set format_default on plane as atomic driver
  drm/amd/display: Add one to EDID's audio channel count when passing to DC

Michel Dänzer (1):
  drm/radeon: Don't turn off DP sink when disconnected

Mikita Lipski (3):
  drm/amdgpu: Use atomic function to disable crtcs with dc enabled
  drm/amd/display: Allow truncation to 10 bits
  drm/amd/display: Fix FMT truncation programming

Shirish S (1):
  drm/amd/display: fix dereferencing possible ERR_PTR()

 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  9 ---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  5 ++--
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c  |  2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h |  8 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_opp.c   |  9 +++
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c  | 20 ++
 drivers/gpu/drm/radeon/radeon_connectors.c | 31 +-
 7 files changed, 48 insertions(+), 36 deletions(-)


Re: [trivial PATCH V2] treewide: Align function definition open/close braces

2018-03-21 Thread Rafael J. Wysocki
On Wed, Mar 21, 2018 at 11:09 PM, Joe Perches  wrote:
> Some functions definitions have either the initial open brace and/or
> the closing brace outside of column 1.
>
> Move those braces to column 1.
>
> This allows various function analyzers like gnu complexity to work
> properly for these modified functions.
>
> Signed-off-by: Joe Perches 
> Acked-by: Andy Shevchenko 
> Acked-by: Paul Moore 
> Acked-by: Alex Deucher 
> Acked-by: Dave Chinner 
> Reviewed-by: Darrick J. Wong 
> Acked-by: Alexandre Belloni 
> Acked-by: Martin K. Petersen 
> Acked-by: Takashi Iwai 
> Acked-by: Mauro Carvalho Chehab 
> ---
>
> git diff -w still shows no difference.
>
> This patch was sent back in December and not applied.
>
> As the trivial maintainer seems not active, it'd be nice if
> Andrew Morton picks this up.
>
> V2: Remove fs/xfs/libxfs/xfs_alloc.c as it's updated and remerge the rest
>
>  arch/x86/include/asm/atomic64_32.h   |  2 +-
>  drivers/acpi/custom_method.c |  2 +-
>  drivers/acpi/fan.c   |  2 +-

For the ACPI changes:

Acked-by: Rafael J. Wysocki 


Re: [PATCH] drm/amd/pp: use mlck_table.count for array loop index limit

2018-03-21 Thread Joe Perches
On Wed, 2018-03-21 at 18:26 +, Colin King wrote:
> From: Colin Ian King 
> 
> The for-loops process data in the mclk_table but use sclk_table.count
> as the loop index limit.  I believe these are cut-n-paste errors from
> the previous almost identical loops as indicated by static analysis.
> Fix these.

Nice tool.

> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
[]
> @@ -855,7 +855,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
>  
>  	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
>  					data->golden_dpm_table.mclk_table.count;
> -	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
> +	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
>  				data->golden_dpm_table.mclk_table.dpm_levels[i].value;
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;

Probably more sensible to use temporaries too.
Maybe something like the below (also trivially reduces object size)
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index df2a312ca6c9..339b897146af 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -834,6 +834,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table;
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table;
+	struct phm_odn_performance_level *entries;
 
 	if (table_info == NULL)
 		return -EINVAL;
@@ -843,11 +844,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_core_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.sclk_table.count;
+	entries = odn_table->odn_core_clock_dpm_levels.entries;
 	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_core_clock_dpm_levels.entries[i].clock =
-					data->golden_dpm_table.sclk_table.dpm_levels[i].value;
-		odn_table->odn_core_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_core_clock_dpm_levels.entries[i].vddc = dep_sclk_table->entries[i].vddc;
+		entries[i].clock = data->golden_dpm_table.sclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_sclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_sclk_table,
@@ -855,11 +856,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.mclk_table.count;
-	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
-					data->golden_dpm_table.mclk_table.dpm_levels[i].value;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].vddc = dep_mclk_table->entries[i].vddc;
+	entries = odn_table->odn_memory_clock_dpm_levels.entries;
+	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
+		entries[i].clock = data->golden_dpm_table.mclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_mclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_mclk_table,


[PATCH] drm/amd/pp: use mlck_table.count for array loop index limit

2018-03-21 Thread Colin King
From: Colin Ian King 

The for-loops process data in the mclk_table but use sclk_table.count
as the loop index limit.  I believe these are cut-n-paste errors from
the previous almost identical loops as indicated by static analysis.
Fix these.

Detected by CoverityScan, CID#1466001 ("Copy-paste error")

Fixes: 5d97cf39ff24 ("drm/amd/pp: Add and initialize OD_dpm_table for CI/VI.")
Fixes: 5e4d4fbea557 ("drm/amd/pp: Implement edit_dpm_table on smu7")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index df2a312ca6c9..d1983273ec7c 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -855,7 +855,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.mclk_table.count;
-	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
+	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
 		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
 					data->golden_dpm_table.mclk_table.dpm_levels[i].value;
 		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;
@@ -4735,7 +4735,7 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr *hwmgr)
 		}
 	}
 
-	for (i=0; i<data->dpm_table.sclk_table.count; i++) {
+	for (i=0; i<data->dpm_table.mclk_table.count; i++) {
 		if (odn_table->odn_memory_clock_dpm_levels.entries[i].clock !=
 				data->dpm_table.mclk_table.dpm_levels[i].value) {
 			data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK;
-- 
2.15.1



Re: [PATCH] drm/amdgpu: Fix NULL ptr on driver unload due to init failure.

2018-03-21 Thread Alex Deucher
On Wed, Mar 21, 2018 at 2:22 PM, Andrey Grodzovsky
 wrote:
> Problem:
> When unloading due to failure amdgpu_device_fini was called twice
> which was leading to NULL ptr in amdgpu_irq_disable_all.
>
> Fix:
> Call amdgpu_device_fini only once from amdgpu_driver_unload_kms.
>
> Signed-off-by: Andrey Grodzovsky 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 60e577c..c51be05 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2023,7 +2023,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
> }
> dev_err(adev->dev, "amdgpu_device_ip_init failed\n");
> amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL, 
> 0, 0);
> -   amdgpu_device_ip_fini(adev);
> goto failed;
> }
>
> --
> 2.7.4
>


Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Christian König

Am 21.03.2018 um 19:23 schrieb Marek Olšák:
On Wed, Mar 21, 2018 at 2:15 PM, Christian König 
> wrote:


Am 21.03.2018 um 19:04 schrieb Marek Olšák:

On Wed, Mar 21, 2018 at 10:07 AM, Christian König
> wrote:

Am 21.03.2018 um 14:57 schrieb Marek Olšák:

On Wed, Mar 21, 2018 at 4:13 AM, Christian König
> wrote:

Am 21.03.2018 um 06:08 schrieb Marek Olšák:

On Tue, Mar 20, 2018 at 4:16 PM, Christian König
> wrote:

That's what I meant with use up the otherwise
unused VRAM. I don't see any disadvantage of always
setting GTT as second domain on APUs.

My assumption was that we dropped this in userspace
for displayable surfaces, but Marek proved that wrong.

So what we should do is actually to add GTT as
fallback to all BOs on APUs in Mesa and only make
sure that the kernel is capable of handling GTT
with optimal performance (e.g. have user huge pages
etc..).


VRAM|GTT is practically as good as GTT. VRAM with BO
priorities and eviction throttling is the true "VRAM|GTT".

I don't know how else to make use of VRAM intelligently.


Well why not set VRAM|GTT as default on APUs? That
should still save quite a bunch of moves even with
throttling.


I explained why: VRAM|GTT is practically as good as GTT.


I mean there really shouldn't be any advantage to use
VRAM any more except that we want to use it up as long
as it is available.


Why are you suggesting to use VRAM|GTT then? Let's just only
use GTT on all APUs.


Then we don't use the memory stolen for VRAM.

See we want to get to a point where we have only ~16MB of
stolen VRAM on APUs and everything else in GTT.

But we still have to support cases where we have 1GB stolen
VRAM, and wasting those 1GB would suck a bit.


BO priorities and BO move throttling should take care of optimal
VRAM usage regardless of the VRAM size. We can adjust the
throttling for small VRAM, but that's about all we can do.


Well at least on APUs move throttling is complete nonsense. VRAM
should expose the same performance as GTT.

So the only usage we have for VRAM is for special cases like page
tables and to allow to actually use the stolen memory.


VRAM|GTT doesn't guarantee that VRAM will be used usefully. In
fact, it doesn't guarantee anything about VRAM.


Why not? VRAM|GTT means that we should use VRAM as long as it is
available and if it is used up fallback to GTT.

When BOs are evicted from VRAM they are never moved back into it.
As far as I can see that is exactly what we need on APUs.


I see. You don't want to use VRAM usefully. You just want to fill it 
up with something (anything) so that it's not unused.


Yes, exactly. The point is we really don't have any special use case for 
it on APUs any more on newer kernels/hardware.


We need a bit for firmware, but that is fixed and allocated at driver
load time.


Page tables are still in VRAM, but at least for Raven that is just
because I didn't have the free time to implement it otherwise.


If I could I would give the unused parts back to the OS for general 
purpose usage, but you need to reconfigure the northbridge to do that 
and well that is easier said than done.


Christian.



Marek




Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Marek Olšák
On Wed, Mar 21, 2018 at 2:15 PM, Christian König 
wrote:

> Am 21.03.2018 um 19:04 schrieb Marek Olšák:
>
> On Wed, Mar 21, 2018 at 10:07 AM, Christian König <
> christian.koe...@amd.com> wrote:
>
>> Am 21.03.2018 um 14:57 schrieb Marek Olšák:
>>
>> On Wed, Mar 21, 2018 at 4:13 AM, Christian König <
>> ckoenig.leichtzumer...@gmail.com> wrote:
>>
>>> Am 21.03.2018 um 06:08 schrieb Marek Olšák:
>>>
>>> On Tue, Mar 20, 2018 at 4:16 PM, Christian König <
>>> christian.koe...@amd.com> wrote:
>>>
 That's what I meant with use up the otherwise unused VRAM. I don't see
 any disadvantage of always setting GTT as second domain on APUs.

 My assumption was that we dropped this in userspace for displayable
 surfaces, but Marek proved that wrong.

 So what we should do is actually to add GTT as fallback to all BOs on
 APUs in Mesa and only make sure that the kernel is capable of handling GTT
 with optimal performance (e.g. have user huge pages etc..).

>>>
>>> VRAM|GTT is practically as good as GTT. VRAM with BO priorities and
>>> eviction throttling is the true "VRAM|GTT".
>>>
>>> I don't know how else to make use of VRAM intelligently.
>>>
>>>
>>> Well why not set VRAM|GTT as default on APUs? That should still save
>>> quite a bunch of moves even with throttling.
>>>
>>
>> I explained why: VRAM|GTT is practically as good as GTT.
>>
>>
>>>
>>> I mean there really shouldn't be any advantage to use VRAM any more
>>> except that we want to use it up as long as it is available.
>>>
>>
>> Why are you suggesting to use VRAM|GTT then? Let's just only use GTT on
>> all APUs.
>>
>>
>> Then we don't use the memory stolen for VRAM.
>>
>> See we want to get to a point where we have only ~16MB of stolen VRAM on
>> APUs and everything else in GTT.
>>
>> But we still have to support cases where we have 1GB stolen VRAM, and
>> wasting those 1GB would suck a bit.
>>
>
> BO priorities and BO move throttling should take care of optimal VRAM
> usage regardless of the VRAM size. We can adjust the throttling for small
> VRAM, but that's about all we can do.
>
>
> Well at least on APUs move throttling is complete nonsense. VRAM should
> expose the same performance as GTT.
>
> So the only usage we have for VRAM is for special cases like page tables
> and to allow to actually use the stolen memory.
>
> VRAM|GTT doesn't guarantee that VRAM will be used usefully. In fact, it
> doesn't guarantee anything about VRAM.
>
>
> Why not? VRAM|GTT means that we should use VRAM as long as it is available
> and if it is used up fallback to GTT.
>
> When BOs are evicted from VRAM they are never moved back into it. As far
> as I can see that is exactly what we need on APUs.
>

I see. You don't want to use VRAM usefully. You just want to fill it up
with something (anything) so that it's not unused.

Marek


[PATCH] drm/amdgpu: Fix NULL ptr on driver unload due to init failure.

2018-03-21 Thread Andrey Grodzovsky
Problem:
When unloading due to failure amdgpu_device_fini was called twice
which was leading to NULL ptr in amdgpu_irq_disable_all.

Fix:
Call amdgpu_device_fini only once from amdgpu_driver_unload_kms.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 60e577c..c51be05 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2023,7 +2023,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
}
dev_err(adev->dev, "amdgpu_device_ip_init failed\n");
amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_AMDGPU_INIT_FAIL, 0, 
0);
-   amdgpu_device_ip_fini(adev);
goto failed;
}
 
-- 
2.7.4



Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Christian König

Am 21.03.2018 um 19:04 schrieb Marek Olšák:
On Wed, Mar 21, 2018 at 10:07 AM, Christian König 
> wrote:


Am 21.03.2018 um 14:57 schrieb Marek Olšák:

On Wed, Mar 21, 2018 at 4:13 AM, Christian König
> wrote:

Am 21.03.2018 um 06:08 schrieb Marek Olšák:

On Tue, Mar 20, 2018 at 4:16 PM, Christian König
>
wrote:

That's what I meant with use up the otherwise unused
VRAM. I don't see any disadvantage of always setting GTT
as second domain on APUs.

My assumption was that we dropped this in userspace for
displayable surfaces, but Marek proved that wrong.

So what we should do is actually to add GTT as fallback
to all BOs on APUs in Mesa and only make sure that the
kernel is capable of handling GTT with optimal
performance (e.g. have user huge pages etc..).


VRAM|GTT is practically as good as GTT. VRAM with BO
priorities and eviction throttling is the true "VRAM|GTT".

I don't know how else to make use of VRAM intelligently.


Well why not set VRAM|GTT as default on APUs? That should
still save quite a bunch of moves even with throttling.


I explained why: VRAM|GTT is practically as good as GTT.


I mean there really shouldn't be any advantage to use VRAM
any more except that we want to use it up as long as it is
available.


Why are you suggesting to use VRAM|GTT then? Let's just only use
GTT on all APUs.


Then we don't use the memory stolen for VRAM.

See we want to get to a point where we have only ~16MB of stolen
VRAM on APUs and everything else in GTT.

But we still have to support cases where we have 1GB stolen VRAM,
and wasting those 1GB would suck a bit.


BO priorities and BO move throttling should take care of optimal VRAM 
usage regardless of the VRAM size. We can adjust the throttling for 
small VRAM, but that's about all we can do.


Well at least on APUs move throttling is complete nonsense. VRAM should 
expose the same performance as GTT.


So the only usage we have for VRAM is for special cases like page tables 
and to allow to actually use the stolen memory.


VRAM|GTT doesn't guarantee that VRAM will be used usefully. In fact, 
it doesn't guarantee anything about VRAM.


Why not? VRAM|GTT means that we should use VRAM as long as it is 
available and if it is used up fallback to GTT.


When BOs are evicted from VRAM they are never moved back into it. As far 
as I can see that is exactly what we need on APUs.
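
(For illustration, that default would boil down to something like the
following when picking a BO's initial domain; the domain flags are the
existing amdgpu GEM ones, but the exact placement of the check is an
assumption:)

	/* prefer VRAM, but let the BO spill over to and stay in GTT
	 * once the stolen VRAM is used up
	 */
	uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;

	if (adev->flags & AMD_IS_APU)
		domain |= AMDGPU_GEM_DOMAIN_GTT;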


Christian.



Marek




Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Marek Olšák
On Wed, Mar 21, 2018 at 10:07 AM, Christian König 
wrote:

> Am 21.03.2018 um 14:57 schrieb Marek Olšák:
>
> On Wed, Mar 21, 2018 at 4:13 AM, Christian König <
> ckoenig.leichtzumer...@gmail.com> wrote:
>
>> Am 21.03.2018 um 06:08 schrieb Marek Olšák:
>>
>> On Tue, Mar 20, 2018 at 4:16 PM, Christian König <
>> christian.koe...@amd.com> wrote:
>>
>>> That's what I meant with use up the otherwise unused VRAM. I don't see
>>> any disadvantage of always setting GTT as second domain on APUs.
>>>
>>> My assumption was that we dropped this in userspace for displayable
>>> surfaces, but Marek proved that wrong.
>>>
>>> So what we should do is actually to add GTT as fallback to all BOs on
>>> APUs in Mesa and only make sure that the kernel is capable of handling GTT
>>> with optimal performance (e.g. have user huge pages etc..).
>>>
>>
>> VRAM|GTT is practically as good as GTT. VRAM with BO priorities and
>> eviction throttling is the true "VRAM|GTT".
>>
>> I don't know how else to make use of VRAM intelligently.
>>
>>
>> Well why not set VRAM|GTT as default on APUs? That should still save
>> quite a bunch of moves even with throttling.
>>
>
> I explained why: VRAM|GTT is practically as good as GTT.
>
>
>>
>> I mean there really shouldn't be any advantage to use VRAM any more
>> except that we want to use it up as long as it is available.
>>
>
> Why are you suggesting to use VRAM|GTT then? Let's just only use GTT on
> all APUs.
>
>
> Then we don't use the memory stolen for VRAM.
>
> See we want to get to a point where we have only ~16MB of stolen VRAM on
> APUs and everything else in GTT.
>
> But we still have to support cases where we have 1GB stolen VRAM, and
> wasting those 1GB would suck a bit.
>

BO priorities and BO move throttling should take care of optimal VRAM usage
regardless of the VRAM size. We can adjust the throttling for small VRAM,
but that's about all we can do.

VRAM|GTT doesn't guarantee that VRAM will be used usefully. In fact, it
doesn't guarantee anything about VRAM.

Marek


Re: [PATCH 00/20] Add KFD GPUVM support for dGPUs v4

2018-03-21 Thread Felix Kuehling
On 2018-03-21 03:52 AM, Oded Gabbay wrote:
>
> Hi Felix,
> I did a quick pass on the patch-set and didn't see anything scary.
> Patches 1-14 are already applied to my -next tree. If I send it now
> to Dave I believe we would be OK from a schedule POV.
> I suggest we delay userptr support for the next kernel release because
> you need to address what Christian said and I also want to take a
> closer look at it.
> What do you think ?

OK. I have changes to address Christian's comments that I'm testing
internally. I'll send out an update this week.

Thanks,
  Felix

>
> Thanks,
> Oded



RE: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Li, Samuel
> But we still have to support cases where we have 1GB stolen VRAM, and
> wasting those 1GB would suck a bit.
Not really, since we only move display buffers here.

Regards,
Samuel Li

From: Koenig, Christian
Sent: Wednesday, March 21, 2018 10:07 AM
To: Marek Olšák 
Cc: Deucher, Alexander ; Alex Deucher 
; Michel Dänzer ; amd-gfx list 
; Li, Samuel 
Subject: Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

Am 21.03.2018 um 14:57 schrieb Marek Olšák:
On Wed, Mar 21, 2018 at 4:13 AM, Christian König 
> 
wrote:
Am 21.03.2018 um 06:08 schrieb Marek Olšák:
On Tue, Mar 20, 2018 at 4:16 PM, Christian König 
> wrote:
That's what I meant with use up the otherwise unused VRAM. I don't see any 
disadvantage of always setting GTT as second domain on APUs.

My assumption was that we dropped this in userspace for displayable surfaces, 
but Marek proved that wrong.

So what we should do is actually to add GTT as fallback to all BOs on APUs in 
Mesa and only make sure that the kernel is capable of handling GTT with optimal 
performance (e.g. have user huge pages etc..).

VRAM|GTT is practically as good as GTT. VRAM with BO priorities and eviction 
throttling is the true "VRAM|GTT".
I don't know how else to make use of VRAM intelligently.

Well why not set VRAM|GTT as default on APUs? That should still save quite a 
bunch of moves even with throttling.

I explained why: VRAM|GTT is practically as good as GTT.


I mean there really shouldn't be any advantage to use VRAM any more except that 
we want to use it up as long as it is available.

Why are you suggesting to use VRAM|GTT then? Let's just only use GTT on all 
APUs.

Then we don't use the memory stolen for VRAM.

See we want to get to a point where we have only ~16MB of stolen VRAM on APUs 
and everything else in GTT.

But we still have to support cases where we have 1GB stolen VRAM, and wasting 
those 1GB would suck a bit.

Christian.



Marek



Re: [PATCH] drm/amdgpu: fix "mitigate workaround for i915"

2018-03-21 Thread Mike Lothian
On 21 March 2018 at 13:08, Christian König
 wrote:
> Mixed up exporter and importer here. E.g. while mapping the BO we need
> to check the importer not the exporter.
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> index 1c9991738477..4b584cb75bf4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> @@ -132,6 +132,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
>  {
> struct drm_gem_object *obj = dma_buf->priv;
> struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> +   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> long r;
>
> r = drm_gem_map_attach(dma_buf, target_dev, attach);
> @@ -143,7 +144,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
> goto error_detach;
>
>
> -   if (dma_buf->ops != &amdgpu_dmabuf_ops) {
> +   if (attach->dev->driver != adev->dev->driver) {
> /*
>  * Wait for all shared fences to complete before we switch to 
> future
>  * use of exclusive fence on this prime shared bo.
> @@ -162,7 +163,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
> if (r)
> goto error_unreserve;
>
> -   if (dma_buf->ops != &amdgpu_dmabuf_ops)
> +   if (attach->dev->driver != adev->dev->driver)
> bo->prime_shared_count++;
>
>  error_unreserve:
> @@ -179,6 +180,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
>  {
> struct drm_gem_object *obj = dma_buf->priv;
> struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> +   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> int ret = 0;
>
> ret = amdgpu_bo_reserve(bo, true);
> @@ -186,7 +188,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
> goto error;
>
> amdgpu_bo_unpin(bo);
> -   if (dma_buf->ops != &amdgpu_dmabuf_ops && bo->prime_shared_count)
> +   if (attach->dev->driver != adev->dev->driver && bo->prime_shared_count)
> bo->prime_shared_count--;
> amdgpu_bo_unreserve(bo);
>
> --
> 2.14.1
>

As per the bug report

Tested-by: Mike Lothian 


Re: [PATCH 16/42] drm/amdgpu/gmc9: fix vega12's athub golden setting.

2018-03-21 Thread Alex Deucher
On Wed, Mar 21, 2018 at 10:19 AM, Christian König
 wrote:
> Am 21.03.2018 um 14:46 schrieb Alex Deucher:
>>
>> From: Feifei Xu 
>>
>> The athub's golden setting is for vega10 only now.
>> Remove it from vega12, where it was introduced by a branch merge.
>>
>> Signed-off-by: Feifei Xu 
>> Reviewed-by: Ken Wang 
>> Signed-off-by: Alex Deucher 
>
>
> Shouldn't that one be squashed into the predecessor?

Yes, I'll squash it in.  thanks!

Alex

>
> Christian.
>
>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> index c4467742badd..e687363900bb 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -960,7 +960,6 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>>  	switch (adev->asic_type) {
>>  	case CHIP_VEGA10:
>> -	case CHIP_VEGA12:
>>  		soc15_program_register_sequence(adev,
>>  						golden_settings_mmhub_1_0_0,
>>  						ARRAY_SIZE(golden_settings_mmhub_1_0_0));
>> @@ -968,6 +967,8 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>>  						golden_settings_athub_1_0_0,
>>  						ARRAY_SIZE(golden_settings_athub_1_0_0));
>>  		break;
>> +	case CHIP_VEGA12:
>> +		break;
>>  	case CHIP_RAVEN:
>>  		soc15_program_register_sequence(adev,
>>  						golden_settings_athub_1_0_0,
>
>


Re: [PATCH 00/42] Add vega12 support

2018-03-21 Thread Christian König
Apart from patch #16 Acked-by: Christian König 
 for the series.


Christian.

Am 21.03.2018 um 14:45 schrieb Alex Deucher:

Vega12 is a new GPU from AMD.  This adds support for it.

Patch 1 just adds new register headers and is pretty big,
so I haven't sent it to the mailing list.  The entire
series can be viewed here:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next-vega12

Alex Deucher (20):
   drm/amdgpu: add gpu_info firmware for vega12
   drm/amdgpu: set asic family and ip blocks for vega12
   drm/amdgpu/psp: initial vega12 support
   drm/amdgpu: specify vega12 uvd firmware
   drm/amdgpu: specify vega12 vce firmware
   drm/amdgpu/virtual_dce: add vega12 support
   drm/amd/display/dm: add vega12 support
   drm/amdgpu: add vega12 to dc support check
   drm/amdgpu/gmc9: add vega12 support
   drm/amdgpu/mmhub: add clockgating support for vega12
   drm/amdgpu/sdma4: specify vega12 firmware
   drm/amdgpu/sdma4: Add placeholder for vega12 golden settings
   drm/amdgpu/sdma4: add clockgating support for vega12
   drm/amdgpu/gfx9: add support for vega12 firmware
   drm/amdgpu/gfx9: Add placeholder for vega12 golden settings
   drm/amdgpu/gfx9: add gfx config for vega12
   drm/amdgpu/gfx9: add support for vega12
   drm/amdgpu/gfx9: add clockgating support for vega12
   drm/amdgpu/soc15: add support for vega12
   drm/amdgpu: add vega12 pci ids (v2)

Evan Quan (11):
   drm/amdgpu: initilize vega12 psp firmwares
   drm/amdgpu/soc15: update vega12 cg_flags
   drm/amd/powerplay: add vega12_inc.h
   drm/amd/powerplay: update atomfirmware.h (v2)
   drm/amd/powerplay: add new smu9_driver_if.h for vega12 (v2)
   drm/amd/powerplay: add vega12_ppsmc.h
   drm/amd/powerplay: add vega12_pptable.h
   drm/amd/powerplay: update ppatomfwctl (v2)
   drm/amd/powerplay: add new pp_psm infrastructure for vega12 (v2)
   drm/amd/powerplay: add the smu manager for vega12 (v4)
   drm/amd/powerplay: add the hw manager for vega12 (v4)

Feifei Xu (6):
   drm/amd/include: Add ip header files for vega12.
   drm/amdgpu: add vega12 to asic_type enum
   drm/amdgpu: add vega12 ucode loading method
   drm/amdgpu/gmc9: fix vega12's athub golden setting.
   drm/amdgpu/sdma4: Update vega12 sdma golden setting.
   drm/amd/soc15: Add external_rev_id for vega12.

Hawking Zhang (4):
   drm/amdgpu: vega12 to smu firmware
   drm/amdgpu/sdma4: add sdma4_0_1 support for vega12 (v3)
   drm/amdgpu/gfx9: add golden setting for vega12 (v3)
   drm/amdgpu/soc15: initialize reg base for vega12

Jerry (Fangzhi) Zuo (1):
   drm/amd/display: Add bios firmware info version for VG12

  drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c| 3 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |11 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c| 6 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c| 1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c  | 1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c| 9 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c| 9 +-
  drivers/gpu/drm/amd/amdgpu/dce_virtual.c   | 1 +
  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  |65 +
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  | 4 +
  drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c| 1 +
  drivers/gpu/drm/amd/amdgpu/psp_v3_1.c  | 5 +
  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c |25 +-
  drivers/gpu/drm/amd/amdgpu/soc15.c |25 +
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 4 +
  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 1 +
  .../drm/amd/include/asic_reg/gc/gc_9_2_1_offset.h  |  7497 +
  .../drm/amd/include/asic_reg/gc/gc_9_2_1_sh_mask.h | 31160 +++
  .../include/asic_reg/mmhub/mmhub_9_3_0_offset.h|  1991 ++
  .../include/asic_reg/mmhub/mmhub_9_3_0_sh_mask.h   | 10265 ++
  .../amd/include/asic_reg/oss/osssys_4_0_1_offset.h |   337 +
  .../include/asic_reg/oss/osssys_4_0_1_sh_mask.h|  1249 +
  drivers/gpu/drm/amd/include/atomfirmware.h |82 +-
  drivers/gpu/drm/amd/include/dm_pp_interface.h  | 2 +-
  drivers/gpu/drm/amd/powerplay/hwmgr/Makefile   | 4 +-
  drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c| 6 +
  drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   |   244 +-
  .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c|   262 +
  .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h|40 +
  drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c   |76 +
  drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h   |40 +
  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |87 +
  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |65 +
  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c |  2444 ++
  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |   470 +
  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h   |39 +
  .../gpu/drm/amd/powerplay/hwmgr/vega12_powertune.c |  1364 +
  

Re: [PATCH 16/42] drm/amdgpu/gmc9: fix vega12's athub golden setting.

2018-03-21 Thread Christian König

Am 21.03.2018 um 14:46 schrieb Alex Deucher:

From: Feifei Xu 

The athub's golden setting is for vega10 only now.
Remove it from vega12, where it was introduced by a branch merge.

Signed-off-by: Feifei Xu 
Reviewed-by: Ken Wang 
Signed-off-by: Alex Deucher 


Shouldn't that one be squashed into the predecessor?

Christian.


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index c4467742badd..e687363900bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -960,7 +960,6 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
 
 	switch (adev->asic_type) {
 	case CHIP_VEGA10:
-	case CHIP_VEGA12:
 		soc15_program_register_sequence(adev,
 						golden_settings_mmhub_1_0_0,
 						ARRAY_SIZE(golden_settings_mmhub_1_0_0));
@@ -968,6 +967,8 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
 						golden_settings_athub_1_0_0,
 						ARRAY_SIZE(golden_settings_athub_1_0_0));
 		break;
+	case CHIP_VEGA12:
+		break;
 	case CHIP_RAVEN:
 		soc15_program_register_sequence(adev,
 						golden_settings_athub_1_0_0,




Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Christian König

Am 21.03.2018 um 14:57 schrieb Marek Olšák:
On Wed, Mar 21, 2018 at 4:13 AM, Christian König 
> wrote:


Am 21.03.2018 um 06:08 schrieb Marek Olšák:

On Tue, Mar 20, 2018 at 4:16 PM, Christian König
> wrote:

That's what I meant with use up the otherwise unused VRAM. I
don't see any disadvantage of always setting GTT as second
domain on APUs.

My assumption was that we dropped this in userspace for
displayable surfaces, but Marek proved that wrong.

So what we should do is actually to add GTT as fallback to
all BOs on APUs in Mesa and only make sure that the kernel is
capable of handling GTT with optimal performance (e.g. have
user huge pages etc..).


VRAM|GTT is practically as good as GTT. VRAM with BO priorities
and eviction throttling is the true "VRAM|GTT".

I don't know how else to make use of VRAM intelligently.


Well why not set VRAM|GTT as default on APUs? That should still
save quite a bunch of moves even with throttling.


I explained why: VRAM|GTT is practically as good as GTT.


I mean there really shouldn't be any advantage to use VRAM any
more except that we want to use it up as long as it is available.


Why are you suggesting to use VRAM|GTT then? Let's just only use GTT 
on all APUs.


Then we don't use the memory stolen for VRAM.

See we want to get to a point where we have only ~16MB of stolen VRAM on 
APUs and everything else in GTT.


But we still have to support cases where we have 1GB stolen VRAM, and 
wasting those 1GB would suck a bit.


Christian.



Marek




Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Marek Olšák
On Wed, Mar 21, 2018 at 4:13 AM, Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:

> Am 21.03.2018 um 06:08 schrieb Marek Olšák:
>
> On Tue, Mar 20, 2018 at 4:16 PM, Christian König  > wrote:
>
>> That's what I meant with use up the otherwise unused VRAM. I don't see
>> any disadvantage of always setting GTT as second domain on APUs.
>>
>> My assumption was that we dropped this in userspace for displayable
>> surfaces, but Marek proved that wrong.
>>
>> So what we should do is actually to add GTT as fallback to all BOs on
>> APUs in Mesa and only make sure that the kernel is capable of handling GTT
>> with optimal performance (e.g. have user huge pages etc..).
>>
>
> VRAM|GTT is practically as good as GTT. VRAM with BO priorities and
> eviction throttling is the true "VRAM|GTT".
>
> I don't know how else to make use of VRAM intelligently.
>
>
> Well why not set VRAM|GTT as default on APUs? That should still save quite
> a bunch of moves even with throttling.
>

I explained why: VRAM|GTT is practically as good as GTT.


>
> I mean there really shouldn't be any advantage to use VRAM any more except
> that we want to use it up as long as it is available.
>

Why are you suggesting to use VRAM|GTT then? Let's just only use GTT on all
APUs.

Marek
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: fix "mitigate workaround for i915"

2018-03-21 Thread Deucher, Alexander
Acked-by: Alex Deucher 


From: amd-gfx  on behalf of Christian 
König 
Sent: Wednesday, March 21, 2018 9:08:18 AM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: fix "mitigate workaround for i915"

Mixed up exporter and importer here. E.g. while mapping the BO we need
to check the importer, not the exporter: dma_buf->ops always identifies the
exporter, so the old check never triggered for foreign importers such as i915.
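
Restated as a small helper for clarity (a sketch only, mirroring the hunks
below; the helper name is made up for illustration):

/* dma_buf->ops always identifies the exporter, so it cannot tell us who is
 * importing. Compare the attaching device's driver with our own instead.
 * Sketch of the logic used in the hunks below; not a separate new function.
 */
static bool amdgpu_is_self_import(struct amdgpu_device *adev,
				  struct dma_buf_attachment *attach)
{
	return attach->dev->driver == adev->dev->driver;
}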

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 1c9991738477..4b584cb75bf4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -132,6 +132,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 {
 struct drm_gem_object *obj = dma_buf->priv;
 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 long r;

 r = drm_gem_map_attach(dma_buf, target_dev, attach);
@@ -143,7 +144,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 goto error_detach;


-   if (dma_buf->ops != _dmabuf_ops) {
+   if (attach->dev->driver != adev->dev->driver) {
 /*
  * Wait for all shared fences to complete before we switch to 
future
  * use of exclusive fence on this prime shared bo.
@@ -162,7 +163,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 if (r)
 goto error_unreserve;

-   if (dma_buf->ops != _dmabuf_ops)
+   if (attach->dev->driver != adev->dev->driver)
 bo->prime_shared_count++;

 error_unreserve:
@@ -179,6 +180,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
 {
 struct drm_gem_object *obj = dma_buf->priv;
 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 int ret = 0;

 ret = amdgpu_bo_reserve(bo, true);
@@ -186,7 +188,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
 goto error;

 amdgpu_bo_unpin(bo);
-   if (dma_buf->ops != _dmabuf_ops && bo->prime_shared_count)
+   if (attach->dev->driver != adev->dev->driver && bo->prime_shared_count)
 bo->prime_shared_count--;
 amdgpu_bo_unreserve(bo);

--
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 42/42] drm/amdgpu: add vega12 pci ids (v2)

2018-03-21 Thread Alex Deucher
v2: add additional pci ids
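
For context, a rough sketch of how an entry's driver_data is consumed at probe
time; the probe function below is illustrative only and the flag handling is an
assumption — only the table entries in the hunk below come from this patch.

#include <linux/pci.h>

/* Hedged sketch: the asic type (e.g. CHIP_VEGA12) travels in
 * pci_device_id.driver_data, optionally OR'ed with flags such as AMD_IS_APU,
 * and the real driver forwards it to device init to set adev->asic_type.
 */
static int example_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *ent)
{
	unsigned long flags = ent->driver_data;

	if (flags & AMD_IS_APU)
		dev_info(&pdev->dev, "APU asic, device 0x%04x\n", pdev->device);

	return 0;
}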

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index e6709362994a..1bfce79bc074 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -544,6 +544,12 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x6868, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+   /* Vega 12 */
+   {0x1002, 0x69A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
+   {0x1002, 0x69A1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
+   {0x1002, 0x69A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
+   {0x1002, 0x69A3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
+   {0x1002, 0x69AF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
/* Raven */
{0x1002, 0x15dd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU},
 
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 40/42] drm/amd/powerplay: add the smu manager for vega12 (v4)

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Handles the driver interaction with the SMU firmware.

v2: squash in:
- s3 fix for firmware loading
- smu loading through the psp
- removal of unnecessary calls to is_smc_ram_running()
- smu table cleanups
v3: rebase
v4: rebase, smu bo allocation fixes, add dpm running callback
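
As orientation for how this interaction works, a hedged sketch of the usual
SOC15 message handshake. The C2PMSG_66/82 register names and the overall flow
are assumptions based on the vega10 smumgr of this era; only C2PMSG_90 and the
helpers actually appear in the new vega12_smumgr.c below.

/* Sketch of the driver->SMU message handshake (context: vega12_smumgr.c,
 * i.e. "smumgr.h", "vega12_inc.h" and "vega12_ppsmc.h" are assumed to be
 * included). The argument/message mailbox register names are assumptions.
 */
static int example_send_msg(struct pp_hwmgr *hwmgr, uint16_t msg, uint32_t parameter)
{
	uint32_t reg;

	/* 1. wait until the previous message has been consumed */
	vega12_wait_for_response(hwmgr);

	/* 2. clear the response register */
	reg = soc15_get_register_offset(MP1_HWID, 0,
			mmMP1_SMN_C2PMSG_90_BASE_IDX, mmMP1_SMN_C2PMSG_90);
	cgs_write_register(hwmgr->device, reg, 0);

	/* 3. write the 32-bit message argument */
	reg = soc15_get_register_offset(MP1_HWID, 0,
			mmMP1_SMN_C2PMSG_82_BASE_IDX, mmMP1_SMN_C2PMSG_82);
	cgs_write_register(hwmgr->device, reg, parameter);

	/* 4. writing the message id kicks off the firmware */
	reg = soc15_get_register_offset(MP1_HWID, 0,
			mmMP1_SMN_C2PMSG_66_BASE_IDX, mmMP1_SMN_C2PMSG_66);
	cgs_write_register(hwmgr->device, reg, msg);

	/* 5. poll C2PMSG_90 for the response code */
	return vega12_wait_for_response(hwmgr) == PPSMC_Result_OK ? 0 : -EIO;
}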

Signed-off-by: Evan Quan 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/smumgr/Makefile  |   3 +-
 .../gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c   | 561 +
 .../gpu/drm/amd/powerplay/smumgr/vega12_smumgr.h   |  62 +++
 3 files changed, 625 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.h

diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/Makefile 
b/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
index 735c38624ce1..958755075421 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
@@ -25,7 +25,8 @@
 
 SMU_MGR = smumgr.o smu8_smumgr.o tonga_smumgr.o fiji_smumgr.o \
  polaris10_smumgr.o iceland_smumgr.o \
- smu7_smumgr.o vega10_smumgr.o smu10_smumgr.o ci_smumgr.o
+ smu7_smumgr.o vega10_smumgr.o smu10_smumgr.o ci_smumgr.o \
+ vega12_smumgr.o
 
 AMD_PP_SMUMGR = $(addprefix $(AMD_PP_PATH)/smumgr/,$(SMU_MGR))
 
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c
new file mode 100644
index ..55cd204c1789
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/vega12_smumgr.c
@@ -0,0 +1,561 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "smumgr.h"
+#include "vega12_inc.h"
+#include "pp_soc15.h"
+#include "vega12_smumgr.h"
+#include "vega12_ppsmc.h"
+#include "vega12/smu9_driver_if.h"
+
+#include "ppatomctrl.h"
+#include "pp_debug.h"
+#include "smu_ucode_xfer_vi.h"
+#include "smu7_smumgr.h"
+
+/* MP Apertures */
+#define MP0_Public  0x0380
+#define MP0_SRAM0x0390
+#define MP1_Public  0x03b0
+#define MP1_SRAM0x03c4
+
+#define smnMP1_FIRMWARE_FLAGS  0x3010028
+#define smnMP0_FW_INTF         0x3010104
+#define smnMP1_PUB_CTRL        0x3010b14
+
+static bool vega12_is_smc_ram_running(struct pp_hwmgr *hwmgr)
+{
+   uint32_t mp1_fw_flags, reg;
+
+   reg = soc15_get_register_offset(NBIF_HWID, 0,
+   mmPCIE_INDEX2_BASE_IDX, mmPCIE_INDEX2);
+
+   cgs_write_register(hwmgr->device, reg,
+   (MP1_Public | (smnMP1_FIRMWARE_FLAGS & 0x)));
+
+   reg = soc15_get_register_offset(NBIF_HWID, 0,
+   mmPCIE_DATA2_BASE_IDX, mmPCIE_DATA2);
+
+   mp1_fw_flags = cgs_read_register(hwmgr->device, reg);
+
+   if ((mp1_fw_flags & MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED_MASK) >>
+   MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED__SHIFT)
+   return true;
+
+   return false;
+}
+
+/*
+ * Check if the SMC has responded to the previous message.
+ *
+ * @param hwmgr  the address of the powerplay hardware manager.
+ * @return TRUE if the SMC has responded, FALSE otherwise.
+ */
+static uint32_t vega12_wait_for_response(struct pp_hwmgr *hwmgr)
+{
+   uint32_t reg;
+
+   reg = soc15_get_register_offset(MP1_HWID, 0,
+   mmMP1_SMN_C2PMSG_90_BASE_IDX, mmMP1_SMN_C2PMSG_90);
+
+   phm_wait_for_register_unequal(hwmgr, reg,
+   0, 

[PATCH 33/42] drm/amd/powerplay: add vega12_inc.h

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Signed-off-by: Evan Quan 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h | 39 
 1 file changed, 39 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h
new file mode 100644
index ..30b278c50222
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef VEGA12_INC_H
+#define VEGA12_INC_H
+
+#include "asic_reg/thm/thm_9_0_default.h"
+#include "asic_reg/thm/thm_9_0_offset.h"
+#include "asic_reg/thm/thm_9_0_sh_mask.h"
+
+#include "asic_reg/mp/mp_9_0_offset.h"
+#include "asic_reg/mp/mp_9_0_sh_mask.h"
+
+#include "asic_reg/gc/gc_9_2_1_offset.h"
+#include "asic_reg/gc/gc_9_2_1_sh_mask.h"
+
+#include "asic_reg/nbio/nbio_6_1_offset.h"
+
+#endif
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 30/42] drm/amdgpu/soc15: update vega12 cg_flags

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Add the appropriate clockgating flags for vega12
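
A short note on how these flags are consumed: the individual IP blocks gate
their clockgating programming on adev->cg_flags, roughly as in the sketch below
(a paraphrase of the usual pattern, not part of this patch).

/* Sketch: a CG feature is only programmed when the asic advertises it. */
static void example_update_cgcg(struct amdgpu_device *adev, bool enable)
{
	if (!(adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG))
		return;		/* asic does not advertise coarse grain clockgating */

	/* ... program the CGCG registers only in that case ... */
}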

Signed-off-by: Evan Quan 
Acked-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 91b0ef579c75..0ad9272c7a5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -653,7 +653,24 @@ static int soc15_common_early_init(void *handle)
adev->external_rev_id = 0x1;
break;
case CHIP_VEGA12:
-   adev->cg_flags = 0;
+   adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+   AMD_CG_SUPPORT_GFX_MGLS |
+   AMD_CG_SUPPORT_GFX_CGCG |
+   AMD_CG_SUPPORT_GFX_CGLS |
+   AMD_CG_SUPPORT_GFX_3D_CGCG |
+   AMD_CG_SUPPORT_GFX_3D_CGLS |
+   AMD_CG_SUPPORT_GFX_CP_LS |
+   AMD_CG_SUPPORT_MC_LS |
+   AMD_CG_SUPPORT_MC_MGCG |
+   AMD_CG_SUPPORT_SDMA_MGCG |
+   AMD_CG_SUPPORT_SDMA_LS |
+   AMD_CG_SUPPORT_BIF_MGCG |
+   AMD_CG_SUPPORT_BIF_LS |
+   AMD_CG_SUPPORT_HDP_MGCG |
+   AMD_CG_SUPPORT_HDP_LS |
+   AMD_CG_SUPPORT_ROM_MGCG |
+   AMD_CG_SUPPORT_VCE_MGCG |
+   AMD_CG_SUPPORT_UVD_MGCG;
adev->pg_flags = 0;
adev->external_rev_id = 0x1; /* ??? */
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 34/42] drm/amd/powerplay: update atomfirmware.h (v2)

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Add the new smc_dpm_info data table (v4.1) and hook it up in the master data table list.

v2: update table format.
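
For orientation, the new list entry is what callers use to locate the table:
the index into the master data table list is the field offset divided by the
entry size. The lookup helper in the sketch below is hypothetical; only the
offsetof computation reflects this patch.

/* Sketch: index of smc_dpm_info inside atom_master_list_of_data_tables_v2_1.
 * get_atom_data_table() is a placeholder for the caller's ATOM lookup helper.
 */
static struct atom_smc_dpm_info_v4_1 *example_get_smc_dpm_info(void *bios_ctx)
{
	unsigned int index =
		offsetof(struct atom_master_list_of_data_tables_v2_1,
			 smc_dpm_info) / sizeof(uint16_t);

	return get_atom_data_table(bios_ctx, index);	/* hypothetical */
}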

Signed-off-by: Evan Quan 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/include/atomfirmware.h | 82 +-
 1 file changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h 
b/drivers/gpu/drm/amd/include/atomfirmware.h
index 7c92f4707085..3ae3da4e7c14 100644
--- a/drivers/gpu/drm/amd/include/atomfirmware.h
+++ b/drivers/gpu/drm/amd/include/atomfirmware.h
@@ -381,7 +381,7 @@ struct atom_rom_hw_function_header
 struct atom_master_list_of_data_tables_v2_1{
   uint16_t utilitypipeline;   /* Offest for the utility to get 
parser info,Don't change this position!*/
   uint16_t multimedia_info;   
-  uint16_t sw_datatable2;
+  uint16_t smc_dpm_info;
   uint16_t sw_datatable3; 
   uint16_t firmwareinfo;  /* Shared by various SW components */
   uint16_t sw_datatable5;
@@ -1198,6 +1198,86 @@ struct atom_smu_info_v3_1
   uint8_t  fw_ctf_polarity; // GPIO polarity for CTF
 };
 
+/*
+ ***
+   Data Table smc_dpm_info  structure
+ ***
+ */
+struct atom_smc_dpm_info_v4_1
+{
+  struct   atom_common_table_header  table_header;
+  uint8_t  liquid1_i2c_address;
+  uint8_t  liquid2_i2c_address;
+  uint8_t  vr_i2c_address;
+  uint8_t  plx_i2c_address;
+
+  uint8_t  liquid_i2c_linescl;
+  uint8_t  liquid_i2c_linesda;
+  uint8_t  vr_i2c_linescl;
+  uint8_t  vr_i2c_linesda;
+
+  uint8_t  plx_i2c_linescl;
+  uint8_t  plx_i2c_linesda;
+  uint8_t  vrsensorpresent;
+  uint8_t  liquidsensorpresent;
+
+  uint16_t maxvoltagestepgfx;
+  uint16_t maxvoltagestepsoc;
+
+  uint8_t  vddgfxvrmapping;
+  uint8_t  vddsocvrmapping;
+  uint8_t  vddmem0vrmapping;
+  uint8_t  vddmem1vrmapping;
+
+  uint8_t  gfxulvphasesheddingmask;
+  uint8_t  soculvphasesheddingmask;
+  uint8_t  padding8_v[2];
+
+  uint16_t gfxmaxcurrent;
+  uint8_t  gfxoffset;
+  uint8_t  padding_telemetrygfx;
+
+  uint16_t socmaxcurrent;
+  uint8_t  socoffset;
+  uint8_t  padding_telemetrysoc;
+
+  uint16_t mem0maxcurrent;
+  uint8_t  mem0offset;
+  uint8_t  padding_telemetrymem0;
+
+  uint16_t mem1maxcurrent;
+  uint8_t  mem1offset;
+  uint8_t  padding_telemetrymem1;
+
+  uint8_t  acdcgpio;
+  uint8_t  acdcpolarity;
+  uint8_t  vr0hotgpio;
+  uint8_t  vr0hotpolarity;
+
+  uint8_t  vr1hotgpio;
+  uint8_t  vr1hotpolarity;
+  uint8_t  padding1;
+  uint8_t  padding2;
+
+  uint8_t  ledpin0;
+  uint8_t  ledpin1;
+  uint8_t  ledpin2;
+  uint8_t  padding8_4;
+
+  uint8_t  gfxclkspreadenabled;
+  uint8_t  gfxclkspreadpercent;
+  uint16_t gfxclkspreadfreq;
+
+  uint8_t uclkspreadenabled;
+  uint8_t uclkspreadpercent;
+  uint16_t uclkspreadfreq;
+
+  uint8_t socclkspreadenabled;
+  uint8_t socclkspreadpercent;
+  uint16_t socclkspreadfreq;
+
+  uint32_t boardreserved[3];
+};
 
 
 /* 
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 32/42] drm/amdgpu/soc15: initialize reg base for vega12

2018-03-21 Thread Alex Deucher
From: Hawking Zhang 

Signed-off-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index e308c3c6ca4f..51cf8a30f6c2 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -508,6 +508,7 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
/* Set IP register base before any HW register access */
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
vega10_reg_base_init(adev);
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 39/42] drm/amd/powerplay: add new pp_psm infrastructure for vega12 (v2)

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Split the power state management (psm) code into legacy and new implementations; vega12 uses the new path while older asics keep the legacy one.

v2: rebase (Alex)

Signed-off-by: Evan Quan 
Acked-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile   |   2 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   | 244 +++
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c| 262 +
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h|  40 
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c   |  76 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h   |  40 
 6 files changed, 452 insertions(+), 212 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile 
b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
index f868b955da92..c1249e03c912 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
@@ -31,7 +31,7 @@ HARDWARE_MGR = hwmgr.o processpptables.o \
smu7_clockpowergating.o \
vega10_processpptables.o vega10_hwmgr.o vega10_powertune.o \
vega10_thermal.o smu10_hwmgr.o pp_psm.o\
-   pp_overdriver.o smu_helper.o
+   pp_overdriver.o smu_helper.o pp_psm_legacy.o pp_psm_new.o
 
 AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
index d0ef8f9c1361..295ab9fed3f0 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
@@ -21,243 +21,65 @@
  *
  */
 
-#include 
-#include 
-#include 
 #include "pp_psm.h"
+#include "pp_psm_legacy.h"
+#include "pp_psm_new.h"
 
 int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
 {
-   int result;
-   unsigned int i;
-   unsigned int table_entries;
-   struct pp_power_state *state;
-   int size;
-
-   if (hwmgr->hwmgr_func->get_num_of_pp_table_entries == NULL)
-   return -EINVAL;
-
-   if (hwmgr->hwmgr_func->get_power_state_size == NULL)
-   return -EINVAL;
-
-   hwmgr->num_ps = table_entries = 
hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
-
-   hwmgr->ps_size = size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
- sizeof(struct pp_power_state);
-
-   hwmgr->ps = kzalloc(size * table_entries, GFP_KERNEL);
-   if (hwmgr->ps == NULL)
-   return -ENOMEM;
-
-   hwmgr->request_ps = kzalloc(size, GFP_KERNEL);
-   if (hwmgr->request_ps == NULL) {
-   kfree(hwmgr->ps);
-   hwmgr->ps = NULL;
-   return -ENOMEM;
-   }
-
-   hwmgr->current_ps = kzalloc(size, GFP_KERNEL);
-   if (hwmgr->current_ps == NULL) {
-   kfree(hwmgr->request_ps);
-   kfree(hwmgr->ps);
-   hwmgr->request_ps = NULL;
-   hwmgr->ps = NULL;
-   return -ENOMEM;
-   }
-
-   state = hwmgr->ps;
-
-   for (i = 0; i < table_entries; i++) {
-   result = hwmgr->hwmgr_func->get_pp_table_entry(hwmgr, i, state);
-
-   if (state->classification.flags & 
PP_StateClassificationFlag_Boot) {
-   hwmgr->boot_ps = state;
-   memcpy(hwmgr->current_ps, state, size);
-   memcpy(hwmgr->request_ps, state, size);
-   }
-
-   state->id = i + 1; /* assigned unique num for every power state 
id */
-
-   if (state->classification.flags & 
PP_StateClassificationFlag_Uvd)
-   hwmgr->uvd_ps = state;
-   state = (struct pp_power_state *)((unsigned long)state + size);
-   }
-
-   return 0;
+   if (hwmgr->chip_id != CHIP_VEGA12)
+   return psm_legacy_init_power_state_table(hwmgr);
+   else
+   return psm_new_init_power_state_table(hwmgr);
 }
 
 int psm_fini_power_state_table(struct pp_hwmgr *hwmgr)
 {
-   if (hwmgr == NULL)
-   return -EINVAL;
-
-   kfree(hwmgr->current_ps);
-   kfree(hwmgr->request_ps);
-   kfree(hwmgr->ps);
-   hwmgr->request_ps = NULL;
-   hwmgr->ps = NULL;
-   hwmgr->current_ps = NULL;
-   return 0;
-}
-
-static int psm_get_ui_state(struct pp_hwmgr *hwmgr,
-   enum PP_StateUILabel ui_label,
-   unsigned long *state_id)
-{
-   struct pp_power_state *state;
-   int table_entries;
-   int i;
-
-   table_entries = hwmgr->num_ps;
-   state = hwmgr->ps;
-
-   for (i = 0; i < table_entries; i++) {
-   

[PATCH 36/42] drm/amd/powerplay: add vega12_ppsmc.h

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Signed-off-by: Evan Quan 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/inc/vega12_ppsmc.h | 123 +++
 1 file changed, 123 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega12_ppsmc.h

diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12_ppsmc.h 
b/drivers/gpu/drm/amd/powerplay/inc/vega12_ppsmc.h
new file mode 100644
index ..f985c78d746a
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/vega12_ppsmc.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef VEGA12_PP_SMC_H
+#define VEGA12_PP_SMC_H
+
+#pragma pack(push, 1)
+
+#define SMU_UCODE_VERSION  0x00270a00
+
+/* SMU Response Codes: */
+#define PPSMC_Result_OK0x1
+#define PPSMC_Result_Failed0xFF
+#define PPSMC_Result_UnknownCmd0xFE
+#define PPSMC_Result_CmdRejectedPrereq 0xFD
+#define PPSMC_Result_CmdRejectedBusy   0xFC
+
+#define PPSMC_MSG_TestMessage0x1
+#define PPSMC_MSG_GetSmuVersion  0x2
+#define PPSMC_MSG_GetDriverIfVersion 0x3
+#define PPSMC_MSG_SetAllowedFeaturesMaskLow  0x4
+#define PPSMC_MSG_SetAllowedFeaturesMaskHigh 0x5
+#define PPSMC_MSG_EnableAllSmuFeatures   0x6
+#define PPSMC_MSG_DisableAllSmuFeatures  0x7
+#define PPSMC_MSG_EnableSmuFeaturesLow   0x8
+#define PPSMC_MSG_EnableSmuFeaturesHigh  0x9
+#define PPSMC_MSG_DisableSmuFeaturesLow  0xA
+#define PPSMC_MSG_DisableSmuFeaturesHigh 0xB
+#define PPSMC_MSG_GetEnabledSmuFeaturesLow   0xC
+#define PPSMC_MSG_GetEnabledSmuFeaturesHigh  0xD
+#define PPSMC_MSG_SetWorkloadMask0xE
+#define PPSMC_MSG_SetPptLimit0xF
+#define PPSMC_MSG_SetDriverDramAddrHigh  0x10
+#define PPSMC_MSG_SetDriverDramAddrLow   0x11
+#define PPSMC_MSG_SetToolsDramAddrHigh   0x12
+#define PPSMC_MSG_SetToolsDramAddrLow0x13
+#define PPSMC_MSG_TransferTableSmu2Dram  0x14
+#define PPSMC_MSG_TransferTableDram2Smu  0x15
+#define PPSMC_MSG_UseDefaultPPTable  0x16
+#define PPSMC_MSG_UseBackupPPTable   0x17
+#define PPSMC_MSG_RunBtc 0x18
+#define PPSMC_MSG_RequestI2CBus  0x19
+#define PPSMC_MSG_ReleaseI2CBus  0x1A
+#define PPSMC_MSG_SetFloorSocVoltage 0x21
+#define PPSMC_MSG_SoftReset  0x22
+#define PPSMC_MSG_StartBacoMonitor   0x23
+#define PPSMC_MSG_CancelBacoMonitor  0x24
+#define PPSMC_MSG_EnterBaco  0x25
+#define PPSMC_MSG_SetSoftMinByFreq   0x26
+#define PPSMC_MSG_SetSoftMaxByFreq   0x27
+#define PPSMC_MSG_SetHardMinByFreq   0x28
+#define PPSMC_MSG_SetHardMaxByFreq   0x29
+#define PPSMC_MSG_GetMinDpmFreq  0x2A
+#define PPSMC_MSG_GetMaxDpmFreq  0x2B
+#define PPSMC_MSG_GetDpmFreqByIndex  0x2C
+#define PPSMC_MSG_GetDpmClockFreq0x2D
+#define PPSMC_MSG_GetSsVoltageByDpm  0x2E
+#define PPSMC_MSG_SetMemoryChannelConfig 0x2F
+#define PPSMC_MSG_SetGeminiMode  0x30
+#define PPSMC_MSG_SetGeminiApertureHigh  0x31
+#define PPSMC_MSG_SetGeminiApertureLow   0x32
+#define PPSMC_MSG_SetMinLinkDpmByIndex   0x33
+#define PPSMC_MSG_OverridePcieParameters 0x34
+#define PPSMC_MSG_OverDriveSetPercentage 0x35
+#define PPSMC_MSG_SetMinDeepSleepDcefclk 0x36
+#define PPSMC_MSG_ReenableAcDcInterrupt  0x37
+#define PPSMC_MSG_NotifyPowerSource  0x38
+#define PPSMC_MSG_SetUclkFastSwitch  0x39

[PATCH 37/42] drm/amd/powerplay: add vega12_pptable.h

2018-03-21 Thread Alex Deucher
From: Evan Quan 

Signed-off-by: Evan Quan 
Reviewed-by: Alex Deucher 
Signed-off-by: Alex Deucher 
---
 .../gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h   | 109 +
 1 file changed, 109 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h
new file mode 100644
index ..bf4f5095b80d
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h
@@ -0,0 +1,109 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _VEGA12_PPTABLE_H_
+#define _VEGA12_PPTABLE_H_
+
+#pragma pack(push, 1)
+
+#define ATOM_VEGA12_PP_THERMALCONTROLLER_NONE   0
+#define ATOM_VEGA12_PP_THERMALCONTROLLER_VEGA12 25
+
+#define ATOM_VEGA12_PP_PLATFORM_CAP_POWERPLAY   0x1
+#define ATOM_VEGA12_PP_PLATFORM_CAP_SBIOSPOWERSOURCE0x2
+#define ATOM_VEGA12_PP_PLATFORM_CAP_HARDWAREDC  0x4
+#define ATOM_VEGA12_PP_PLATFORM_CAP_BACO0x8
+#define ATOM_VEGA12_PP_PLATFORM_CAP_BAMACO  0x10
+#define ATOM_VEGA12_PP_PLATFORM_CAP_ENABLESHADOWPSTATE  0x20
+
+#define ATOM_VEGA12_TABLE_REVISION_VEGA12 9
+
+enum ATOM_VEGA12_ODSETTING_ID {
+  ATOM_VEGA12_ODSETTING_GFXCLKFMAX = 0,
+  ATOM_VEGA12_ODSETTING_GFXCLKFMIN,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEFREQ_P1,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEVOLTAGEOFFSET_P1,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEFREQ_P2,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEVOLTAGEOFFSET_P2,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEFREQ_P3,
+  ATOM_VEGA12_ODSETTING_VDDGFXCURVEVOLTAGEOFFSET_P3,
+  ATOM_VEGA12_ODSETTING_UCLKFMAX,
+  ATOM_VEGA12_ODSETTING_POWERPERCENTAGE,
+  ATOM_VEGA12_ODSETTING_FANRPMMIN,
+  ATOM_VEGA12_ODSETTING_FANRPMACOUSTICLIMIT,
+  ATOM_VEGA12_ODSETTING_FANTARGETTEMPERATURE,
+  ATOM_VEGA12_ODSETTING_OPERATINGTEMPMAX,
+  ATOM_VEGA12_ODSETTING_COUNT,
+};
+typedef enum ATOM_VEGA12_ODSETTING_ID ATOM_VEGA12_ODSETTING_ID;
+
+enum ATOM_VEGA12_PPCLOCK_ID {
+  ATOM_VEGA12_PPCLOCK_GFXCLK = 0,
+  ATOM_VEGA12_PPCLOCK_VCLK,
+  ATOM_VEGA12_PPCLOCK_DCLK,
+  ATOM_VEGA12_PPCLOCK_ECLK,
+  ATOM_VEGA12_PPCLOCK_SOCCLK,
+  ATOM_VEGA12_PPCLOCK_UCLK,
+  ATOM_VEGA12_PPCLOCK_DCEFCLK,
+  ATOM_VEGA12_PPCLOCK_DISPCLK,
+  ATOM_VEGA12_PPCLOCK_PIXCLK,
+  ATOM_VEGA12_PPCLOCK_PHYCLK,
+  ATOM_VEGA12_PPCLOCK_COUNT,
+};
+typedef enum ATOM_VEGA12_PPCLOCK_ID ATOM_VEGA12_PPCLOCK_ID;
+
+
+typedef struct _ATOM_VEGA12_POWERPLAYTABLE
+{
+  struct atom_common_table_header sHeader;
+  UCHAR  ucTableRevision;
+  USHORT usTableSize;
+  ULONG  ulGoldenPPID;
+  ULONG  ulGoldenRevision;
+  USHORT usFormatID;
+
+  ULONG  ulPlatformCaps;
+
+  UCHAR  ucThermalControllerType;
+
+  USHORT usSmallPowerLimit1;
+  USHORT usSmallPowerLimit2;
+  USHORT usBoostPowerLimit;
+  USHORT usODTurboPowerLimit;
+  USHORT usODPowerSavePowerLimit;
+  USHORT usSoftwareShutdownTemp;
+
+  ULONG PowerSavingClockMax  [ATOM_VEGA12_PPCLOCK_COUNT];
+  ULONG PowerSavingClockMin  [ATOM_VEGA12_PPCLOCK_COUNT];
+
+  ULONG ODSettingsMax [ATOM_VEGA12_ODSETTING_COUNT];
+  ULONG ODSettingsMin [ATOM_VEGA12_ODSETTING_COUNT];
+
+  USHORT usReserve[5];
+
+  PPTable_t smcPPTable;
+
+} ATOM_Vega12_POWERPLAYTABLE;
+
+#pragma pack(pop)
+
+#endif
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 28/42] drm/amdgpu/gfx9: add golden setting for vega12 (v3)

2018-03-21 Thread Alex Deucher
From: Hawking Zhang 

Add gfx9_2_1 golden setting.

v2: switch to soc15_program_register_sequence for
golden setting programming
v3: squash in additional golden updates
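
For readers unfamiliar with the helper: soc15_program_register_sequence()
applies each SOC15_REG_GOLDEN_VALUE entry as a masked read-modify-write. The
sketch below paraphrases the effective per-entry behaviour (field names and
details are assumptions; check soc15.c for the actual implementation).

/* Sketch: effective behaviour per golden-register entry
 * (context: soc15_common.h / amdgpu.h assumed included).
 */
static void example_apply_golden(struct amdgpu_device *adev,
				 const struct soc15_reg_golden *e)
{
	u32 reg = adev->reg_offset[e->hwip][e->instance][e->segment] + e->reg;
	u32 tmp;

	if (e->and_mask == 0xffffffff) {
		tmp = e->or_mask;		/* full overwrite */
	} else {
		tmp = RREG32(reg);
		tmp &= ~e->and_mask;		/* clear the masked bits */
		tmp |= e->or_mask;		/* and set the golden value */
	}
	WREG32(reg, tmp);
}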

Signed-off-by: Feifei Xu 
Reviewed-by: Ken Wang 
Signed-off-by: Hawking Zhang 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 44 +--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 9ce1e9e552d9..1ae3de1094f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -151,7 +151,42 @@ static const struct soc15_reg_golden 
golden_settings_gc_9_x_common[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmGRBM_CAM_DATA, 0x, 0x2544c382)
 };
 
+static const struct soc15_reg_golden golden_settings_gc_9_2_1[] =
+{
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xf00f, 0x0420),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_GPU_ID, 0x000f, 0x),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_BINNER_EVENT_CNTL_3, 0x0003, 
0x82400024),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE, 0x3fff, 0x0001),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_LINE_STIPPLE_STATE, 0xff0f, 
0x),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSH_MEM_CONFIG, 0x1000, 0x1000),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_RESOURCE_RESERVE_CU_0, 0x0007, 
0x0800),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_RESOURCE_RESERVE_CU_1, 0x0007, 
0x0800),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_RESOURCE_RESERVE_EN_CU_0, 
0x01ff, 0xff87),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_RESOURCE_RESERVE_EN_CU_1, 
0x01ff, 0xff8f),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQC_CONFIG, 0x0300, 0x020a2000),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfeef, 0x010b),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_HI, 0x, 
0x4a2c0e68),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_LO, 0x, 
0xb5d3f197),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_CACHE_INVALIDATION, 0x3fff3af3, 
0x1920),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x0fff, 
0x03ff)
+};
+
+static const struct soc15_reg_golden golden_settings_gc_9_2_1_vg12[] =
+{
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_DCC_CONFIG, 0x0080, 0x0480),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL, 0xfffdf3cf, 0x00014104),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_2, 0x0f00, 
0x0a00),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x77ff, 0x24104041),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG_READ, 0x77ff, 
0x24104041),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_1, 0x, 
0x0404),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x03ff, 
0x01000107),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_HI, 0x, 
0x),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_LO, 0x, 
0x76325410),
+   SOC15_REG_GOLDEN_VALUE(GC, 0, mmTD_CNTL, 0x01bd9f33, 0x0100)
+};
+
 #define VEGA10_GB_ADDR_CONFIG_GOLDEN 0x2a114042
+#define VEGA12_GB_ADDR_CONFIG_GOLDEN 0x24104041
 #define RAVEN_GB_ADDR_CONFIG_GOLDEN 0x2442
 
 static void gfx_v9_0_set_ring_funcs(struct amdgpu_device *adev);
@@ -176,7 +211,12 @@ static void gfx_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
 
ARRAY_SIZE(golden_settings_gc_9_0_vg10));
break;
case CHIP_VEGA12:
-   DRM_ERROR("missing golden settings for gfx9 on vega12!\n");
+   soc15_program_register_sequence(adev,
+   golden_settings_gc_9_2_1,
+   
ARRAY_SIZE(golden_settings_gc_9_2_1));
+   soc15_program_register_sequence(adev,
+   golden_settings_gc_9_2_1_vg12,
+   
ARRAY_SIZE(golden_settings_gc_9_2_1_vg12));
break;
case CHIP_RAVEN:
soc15_program_register_sequence(adev,
@@ -987,7 +1027,7 @@ static void gfx_v9_0_gpu_early_init(struct amdgpu_device 
*adev)
adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
-   gb_addr_config = VEGA10_GB_ADDR_CONFIG_GOLDEN;
+   gb_addr_config = VEGA12_GB_ADDR_CONFIG_GOLDEN;
DRM_INFO("fix gfx.config for vega12\n");
break;
case CHIP_RAVEN:
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org

[PATCH 26/42] drm/amdgpu/gfx9: add support for vega12

2018-03-21 Thread Alex Deucher
Same as vega10 and raven.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 5f6113ebfc3f..673b81841500 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -1271,6 +1271,7 @@ static int gfx_v9_0_sw_init(void *handle)
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
adev->gfx.mec.num_mec = 2;
break;
@@ -4475,6 +4476,7 @@ static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device 
*adev)
 {
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
adev->gfx.rlc.funcs = _v9_0_rlc_funcs;
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 29/42] drm/amdgpu/soc15: add support for vega12

2018-03-21 Thread Alex Deucher
Add the IP blocks, clock and powergating flags, and
common clockgating support.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 242c30b72b10..91b0ef579c75 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -527,6 +527,7 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
amdgpu_device_ip_block_add(adev, _common_ip_block);
amdgpu_device_ip_block_add(adev, _v9_0_ip_block);
amdgpu_device_ip_block_add(adev, _ih_ip_block);
@@ -651,6 +652,11 @@ static int soc15_common_early_init(void *handle)
adev->pg_flags = 0;
adev->external_rev_id = 0x1;
break;
+   case CHIP_VEGA12:
+   adev->cg_flags = 0;
+   adev->pg_flags = 0;
+   adev->external_rev_id = 0x1; /* ??? */
+   break;
case CHIP_RAVEN:
adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
AMD_CG_SUPPORT_GFX_MGLS |
@@ -883,6 +889,7 @@ static int soc15_common_set_clockgating_state(void *handle,
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
adev->nbio_funcs->update_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
adev->nbio_funcs->update_medium_grain_light_sleep(adev,
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 31/42] drm/amd/soc15: Add external_rev_id for vega12.

2018-03-21 Thread Alex Deucher
From: Feifei Xu 

Add external_rev_id for vega12.

Signed-off-by: Feifei Xu 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 0ad9272c7a5d..e308c3c6ca4f 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -672,7 +672,7 @@ static int soc15_common_early_init(void *handle)
AMD_CG_SUPPORT_VCE_MGCG |
AMD_CG_SUPPORT_UVD_MGCG;
adev->pg_flags = 0;
-   adev->external_rev_id = 0x1; /* ??? */
+   adev->external_rev_id = adev->rev_id + 0x14;
break;
case CHIP_RAVEN:
adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 25/42] drm/amdgpu/gfx9: add gfx config for vega12

2018-03-21 Thread Alex Deucher
Just a placeholder for now.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 5eb609d455a8..5f6113ebfc3f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -981,6 +981,15 @@ static void gfx_v9_0_gpu_early_init(struct amdgpu_device 
*adev)
adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
gb_addr_config = VEGA10_GB_ADDR_CONFIG_GOLDEN;
break;
+   case CHIP_VEGA12:
+   adev->gfx.config.max_hw_contexts = 8;
+   adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+   adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+   adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+   adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+   gb_addr_config = VEGA10_GB_ADDR_CONFIG_GOLDEN;
+   DRM_INFO("fix gfx.config for vega12\n");
+   break;
case CHIP_RAVEN:
adev->gfx.config.max_hw_contexts = 8;
adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 27/42] drm/amdgpu/gfx9: add clockgating support for vega12

2018-03-21 Thread Alex Deucher
Same as vega10 and raven.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 673b81841500..9ce1e9e552d9 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3505,6 +3505,7 @@ static int gfx_v9_0_set_clockgating_state(void *handle,
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
gfx_v9_0_update_gfx_clock_gating(adev,
 state == AMD_CG_STATE_GATE ? 
true : false);
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 22/42] drm/amdgpu/sdma4: Update vega12 sdma golden setting.

2018-03-21 Thread Alex Deucher
From: Feifei Xu 

Update vega12 sdma golden setting.

Signed-off-by: Feifei Xu 
Reviewed-by: Ken Wang 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 106b9813f7ee..2a8184082cd1 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -87,10 +87,10 @@ static const struct soc15_reg_golden 
golden_settings_sdma_vg10[] = {
 };
 
 static const struct soc15_reg_golden golden_settings_sdma_vg12[] = {
-   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 
0x00104002),
-   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104002),
-   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 
0x00104002),
-   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104002)
+   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 
0x00104001),
+   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104001),
+   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 
0x00104001),
+   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104001)
 };
 
 static const struct soc15_reg_golden golden_settings_sdma_4_1[] =
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 20/42] drm/amdgpu/sdma4: add clockgating support for vega12

2018-03-21 Thread Alex Deucher
Same as vega10 for now.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 4eddd850b72d..3d059ecd8758 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -1497,6 +1497,7 @@ static int sdma_v4_0_set_clockgating_state(void *handle,
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
sdma_v4_0_update_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 12/42] drm/amd/display/dm: add vega12 support

2018-03-21 Thread Alex Deucher
Add support for vega12 to the display manager.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 9e2cdc97dc89..68ab325ce6f2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1130,6 +1130,7 @@ static int dce110_register_irq_handlers(struct 
amdgpu_device *adev)
unsigned client_id = AMDGPU_IH_CLIENTID_LEGACY;
 
if (adev->asic_type == CHIP_VEGA10 ||
+   adev->asic_type == CHIP_VEGA12 ||
adev->asic_type == CHIP_RAVEN)
client_id = SOC15_IH_CLIENTID_DCE;
 
@@ -1501,6 +1502,7 @@ static int amdgpu_dm_initialize_drm_device(struct 
amdgpu_device *adev)
case CHIP_POLARIS10:
case CHIP_POLARIS12:
case CHIP_VEGA10:
+   case CHIP_VEGA12:
if (dce110_register_irq_handlers(dm->adev)) {
DRM_ERROR("DM: Failed to initialize IRQ\n");
goto fail;
@@ -1703,6 +1705,7 @@ static int dm_early_init(void *handle)
adev->mode_info.plane_type = dm_plane_type_default;
break;
case CHIP_VEGA10:
+   case CHIP_VEGA12:
adev->mode_info.num_crtc = 6;
adev->mode_info.num_hpd = 6;
adev->mode_info.num_dig = 6;
@@ -1950,6 +1953,7 @@ static int fill_plane_attributes_from_fb(struct 
amdgpu_device *adev,
AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
 
if (adev->asic_type == CHIP_VEGA10 ||
+   adev->asic_type == CHIP_VEGA12 ||
adev->asic_type == CHIP_RAVEN) {
/* Fill GFX9 params */
plane_state->tiling_info.gfx9.num_pipes =
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 24/42] drm/amdgpu/gfx9: Add placeholder for vega12 golden settings

2018-03-21 Thread Alex Deucher
Fill these in when we get them.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index b91ff70bbee8..5eb609d455a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -175,6 +175,9 @@ static void gfx_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
 golden_settings_gc_9_0_vg10,
 
ARRAY_SIZE(golden_settings_gc_9_0_vg10));
break;
+   case CHIP_VEGA12:
+   DRM_ERROR("missing golden settings for gfx9 on vega12!\n");
+   break;
case CHIP_RAVEN:
soc15_program_register_sequence(adev,
 golden_settings_gc_9_1,
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 19/42] drm/amdgpu/sdma4: Add placeholder for vega12 golden settings

2018-03-21 Thread Alex Deucher
Fill these in when we get them.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index e00b6ff566f6..4eddd850b72d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -124,6 +124,9 @@ static void sdma_v4_0_init_golden_registers(struct 
amdgpu_device *adev)
 golden_settings_sdma_vg10,
 
ARRAY_SIZE(golden_settings_sdma_vg10));
break;
+   case CHIP_VEGA12:
+   DRM_ERROR("todo: Missing SDMA4 golden settings for vega12\n");
+   break;
case CHIP_RAVEN:
soc15_program_register_sequence(adev,
 golden_settings_sdma_4_1,
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 23/42] drm/amdgpu/gfx9: add support for vega12 firmware

2018-03-21 Thread Alex Deucher
Declare and fetch the appropriate firmware files.
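
For context, a hedged sketch of the declare/fetch pattern behind this: the
MODULE_FIRMWARE() lines only record the dependency, while the microcode init
path builds the file name from chip_name and loads it. The fields and helpers
below follow the usual amdgpu pattern and are assumptions here, not this patch.

#include <linux/firmware.h>

/* Sketch: fetching one of the files declared above. */
static int example_fetch_pfp(struct amdgpu_device *adev, const char *chip_name)
{
	char fw_name[30];
	int err;

	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_pfp.bin", chip_name);
	err = request_firmware(&adev->gfx.pfp_fw, fw_name, adev->dev);
	if (err)
		return err;

	return amdgpu_ucode_validate(adev->gfx.pfp_fw);
}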

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index d1d2c27156b2..b91ff70bbee8 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -57,6 +57,13 @@ MODULE_FIRMWARE("amdgpu/vega10_mec.bin");
 MODULE_FIRMWARE("amdgpu/vega10_mec2.bin");
 MODULE_FIRMWARE("amdgpu/vega10_rlc.bin");
 
+MODULE_FIRMWARE("amdgpu/vega12_ce.bin");
+MODULE_FIRMWARE("amdgpu/vega12_pfp.bin");
+MODULE_FIRMWARE("amdgpu/vega12_me.bin");
+MODULE_FIRMWARE("amdgpu/vega12_mec.bin");
+MODULE_FIRMWARE("amdgpu/vega12_mec2.bin");
+MODULE_FIRMWARE("amdgpu/vega12_rlc.bin");
+
 MODULE_FIRMWARE("amdgpu/raven_ce.bin");
 MODULE_FIRMWARE("amdgpu/raven_pfp.bin");
 MODULE_FIRMWARE("amdgpu/raven_me.bin");
@@ -369,6 +376,9 @@ static int gfx_v9_0_init_microcode(struct amdgpu_device 
*adev)
case CHIP_VEGA10:
chip_name = "vega10";
break;
+   case CHIP_VEGA12:
+   chip_name = "vega12";
+   break;
case CHIP_RAVEN:
chip_name = "raven";
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 13/42] drm/amd/display: Add bios firmware info version for VG12

2018-03-21 Thread Alex Deucher
From: "Jerry (Fangzhi) Zuo" 

VG12 reports a firmware info table minor revision of 2, which is not handled
in the bios_parser_get_firmware_info() routine.

Signed-off-by: Jerry (Fangzhi) Zuo 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c 
b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index e7680c41f117..985fe8c22875 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -1321,6 +1321,7 @@ static enum bp_result bios_parser_get_firmware_info(
case 3:
switch (revision.minor) {
case 1:
+   case 2:
result = get_firmware_info_v3_1(bp, info);
break;
default:
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 21/42] drm/amdgpu/sdma4: add sdma4_0_1 support for vega12 (v3)

2018-03-21 Thread Alex Deucher
From: Hawking Zhang 

Add sdma golden setting for vega12.

v2: switch to soc15_program_register_sequence for
golden register programming
v3: squash in unused declaration fix

Signed-off-by: Feifei Xu 
Reviewed-by: Alex Deucher 
Reviewed-by: Christian König 
Signed-off-by: Hawking Zhang 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 18 +++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 3d059ecd8758..106b9813f7ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -86,6 +86,13 @@ static const struct soc15_reg_golden 
golden_settings_sdma_vg10[] = {
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104002)
 };
 
+static const struct soc15_reg_golden golden_settings_sdma_vg12[] = {
+   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 
0x00104002),
+   SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104002),
+   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 
0x00104002),
+   SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 
0x0018773f, 0x00104002)
+};
+
 static const struct soc15_reg_golden golden_settings_sdma_4_1[] =
 {
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 
0x02831d07),
@@ -125,7 +132,12 @@ static void sdma_v4_0_init_golden_registers(struct 
amdgpu_device *adev)
 
ARRAY_SIZE(golden_settings_sdma_vg10));
break;
case CHIP_VEGA12:
-   DRM_ERROR("todo: Missing SDMA4 golden settings for vega12\n");
+   soc15_program_register_sequence(adev,
+   golden_settings_sdma_4,
+   
ARRAY_SIZE(golden_settings_sdma_4));
+   soc15_program_register_sequence(adev,
+   golden_settings_sdma_vg12,
+   
ARRAY_SIZE(golden_settings_sdma_vg12));
break;
case CHIP_RAVEN:
soc15_program_register_sequence(adev,
@@ -1627,7 +1639,7 @@ static void sdma_v4_0_set_irq_funcs(struct amdgpu_device 
*adev)
  * @dst_offset: dst GPU address
  * @byte_count: number of bytes to xfer
  *
- * Copy GPU buffers using the DMA engine (VEGA10).
+ * Copy GPU buffers using the DMA engine (VEGA10/12).
  * Used by the amdgpu ttm implementation to move pages if
  * registered as the asic copy callback.
  */
@@ -1654,7 +1666,7 @@ static void sdma_v4_0_emit_copy_buffer(struct amdgpu_ib 
*ib,
  * @dst_offset: dst GPU address
  * @byte_count: number of bytes to xfer
  *
- * Fill GPU buffers using the DMA engine (VEGA10).
+ * Fill GPU buffers using the DMA engine (VEGA10/12).
  */
 static void sdma_v4_0_emit_fill_buffer(struct amdgpu_ib *ib,
   uint32_t src_data,
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 18/42] drm/amdgpu/sdma4: specify vega12 firmware

2018-03-21 Thread Alex Deucher
Declare the firmware and fetch the proper files.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 9448c45d1b60..e00b6ff566f6 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -40,6 +40,8 @@
 
 MODULE_FIRMWARE("amdgpu/vega10_sdma.bin");
 MODULE_FIRMWARE("amdgpu/vega10_sdma1.bin");
+MODULE_FIRMWARE("amdgpu/vega12_sdma.bin");
+MODULE_FIRMWARE("amdgpu/vega12_sdma1.bin");
 MODULE_FIRMWARE("amdgpu/raven_sdma.bin");
 
 #define SDMA0_POWER_CNTL__ON_OFF_CONDITION_HOLD_TIME_MASK  0x00F8L
@@ -162,6 +164,9 @@ static int sdma_v4_0_init_microcode(struct amdgpu_device 
*adev)
case CHIP_VEGA10:
chip_name = "vega10";
break;
+   case CHIP_VEGA12:
+   chip_name = "vega12";
+   break;
case CHIP_RAVEN:
chip_name = "raven";
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 16/42] drm/amdgpu/gmc9: fix vega12's athub golden setting.

2018-03-21 Thread Alex Deucher
From: Feifei Xu 

The athub golden settings currently apply to vega10 only.
Remove them from the vega12 case; they were introduced there by a branch merge.

Signed-off-by: Feifei Xu 
Reviewed-by: Ken Wang 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index c4467742badd..e687363900bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -960,7 +960,6 @@ static void gmc_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
 
switch (adev->asic_type) {
case CHIP_VEGA10:
-   case CHIP_VEGA12:
soc15_program_register_sequence(adev,
golden_settings_mmhub_1_0_0,

ARRAY_SIZE(golden_settings_mmhub_1_0_0));
@@ -968,6 +967,8 @@ static void gmc_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
golden_settings_athub_1_0_0,

ARRAY_SIZE(golden_settings_athub_1_0_0));
break;
+   case CHIP_VEGA12:
+   break;
case CHIP_RAVEN:
soc15_program_register_sequence(adev,
golden_settings_athub_1_0_0,
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 15/42] drm/amdgpu/gmc9: add vega12 support

2018-03-21 Thread Alex Deucher
Same as vega10.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index a70cbc45c4c1..c4467742badd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -791,6 +791,7 @@ static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
if (amdgpu_gart_size == -1) {
switch (adev->asic_type) {
case CHIP_VEGA10:  /* all engines support GPUVM */
+   case CHIP_VEGA12:  /* all engines support GPUVM */
default:
adev->gmc.gart_size = 512ULL << 20;
break;
@@ -849,6 +850,7 @@ static int gmc_v9_0_sw_init(void *handle)
}
break;
case CHIP_VEGA10:
+   case CHIP_VEGA12:
/*
 * To fulfill 4-level page support,
 * vm size is 256TB (48bit), maximum size of Vega10,
@@ -958,6 +960,7 @@ static void gmc_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
soc15_program_register_sequence(adev,
golden_settings_mmhub_1_0_0,

ARRAY_SIZE(golden_settings_mmhub_1_0_0));
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 17/42] drm/amdgpu/mmhub: add clockgating support for vega12

2018-03-21 Thread Alex Deucher
Treat it the same as vega10 for now.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
index 3dd5816495a5..43f925773b57 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
@@ -733,6 +733,7 @@ int mmhub_v1_0_set_clockgating(struct amdgpu_device *adev,
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
case CHIP_RAVEN:
mmhub_v1_0_update_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 10/42] drm/amdgpu: specify vega12 vce firmware

2018-03-21 Thread Alex Deucher
Declare firmware and add support for the file.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 9152478d7528..a33804bd3314 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -55,6 +55,7 @@
 #define FIRMWARE_POLARIS12 "amdgpu/polaris12_vce.bin"
 
 #define FIRMWARE_VEGA10"amdgpu/vega10_vce.bin"
+#define FIRMWARE_VEGA12"amdgpu/vega12_vce.bin"
 
 #ifdef CONFIG_DRM_AMDGPU_CIK
 MODULE_FIRMWARE(FIRMWARE_BONAIRE);
@@ -72,6 +73,7 @@ MODULE_FIRMWARE(FIRMWARE_POLARIS11);
 MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
 MODULE_FIRMWARE(FIRMWARE_VEGA10);
+MODULE_FIRMWARE(FIRMWARE_VEGA12);
 
 static void amdgpu_vce_idle_work_handler(struct work_struct *work);
 
@@ -127,11 +129,14 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, 
unsigned long size)
case CHIP_POLARIS11:
fw_name = FIRMWARE_POLARIS11;
break;
+   case CHIP_POLARIS12:
+   fw_name = FIRMWARE_POLARIS12;
+   break;
case CHIP_VEGA10:
fw_name = FIRMWARE_VEGA10;
break;
-   case CHIP_POLARIS12:
-   fw_name = FIRMWARE_POLARIS12;
+   case CHIP_VEGA12:
+   fw_name = FIRMWARE_VEGA12;
break;
 
default:
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 14/42] drm/amdgpu: add vega12 to dc support check

2018-03-21 Thread Alex Deucher
DC is used for modesetting on vega12.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 781ea7dc09c0..60e577ce36b0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1765,6 +1765,7 @@ bool amdgpu_device_asic_has_dc_support(enum amd_asic_type 
asic_type)
return amdgpu_dc != 0;
 #endif
case CHIP_VEGA10:
+   case CHIP_VEGA12:
 #if defined(CONFIG_DRM_AMD_DC_DCN1_0)
case CHIP_RAVEN:
 #endif
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 09/42] drm/amdgpu: specify vega12 uvd firmware

2018-03-21 Thread Alex Deucher
Declare firmware and add support for the file.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index f3c459b7c0bb..627542b22ae4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -68,6 +68,7 @@
 #define FIRMWARE_POLARIS12 "amdgpu/polaris12_uvd.bin"
 
 #define FIRMWARE_VEGA10"amdgpu/vega10_uvd.bin"
+#define FIRMWARE_VEGA12"amdgpu/vega12_uvd.bin"
 
 #define mmUVD_GPCOM_VCPU_DATA0_VEGA10 (0x03c4 + 0x7e00)
 #define mmUVD_GPCOM_VCPU_DATA1_VEGA10 (0x03c5 + 0x7e00)
@@ -110,6 +111,7 @@ MODULE_FIRMWARE(FIRMWARE_POLARIS11);
 MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
 MODULE_FIRMWARE(FIRMWARE_VEGA10);
+MODULE_FIRMWARE(FIRMWARE_VEGA12);
 
 static void amdgpu_uvd_idle_work_handler(struct work_struct *work);
 
@@ -161,11 +163,14 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
case CHIP_POLARIS11:
fw_name = FIRMWARE_POLARIS11;
break;
+   case CHIP_POLARIS12:
+   fw_name = FIRMWARE_POLARIS12;
+   break;
case CHIP_VEGA10:
fw_name = FIRMWARE_VEGA10;
break;
-   case CHIP_POLARIS12:
-   fw_name = FIRMWARE_POLARIS12;
+   case CHIP_VEGA12:
+   fw_name = FIRMWARE_VEGA12;
break;
default:
return -EINVAL;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 06/42] drm/amdgpu/psp: initial vega12 support

2018-03-21 Thread Alex Deucher
Same as vega10 for now.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 1 +
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c   | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 9a75410cd576..19e71f4a8ac2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -51,6 +51,7 @@ static int psp_sw_init(void *handle)
 
switch (adev->asic_type) {
case CHIP_VEGA10:
+   case CHIP_VEGA12:
psp_v3_1_set_psp_funcs(psp);
break;
case CHIP_RAVEN:
diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c 
b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
index 690b9766d8ae..5c824a38982b 100644
--- a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
@@ -39,6 +39,8 @@
 
 MODULE_FIRMWARE("amdgpu/vega10_sos.bin");
 MODULE_FIRMWARE("amdgpu/vega10_asd.bin");
+MODULE_FIRMWARE("amdgpu/vega12_sos.bin");
+MODULE_FIRMWARE("amdgpu/vega12_asd.bin");
 
 #define smnMP1_FIRMWARE_FLAGS 0x3010028
 
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 03/42] drm/amdgpu: add gpu_info firmware for vega12

2018-03-21 Thread Alex Deucher
Stores gpu configuration details.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8f4e2d13545f..aebf199ed178 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -59,6 +59,7 @@
 #include "amdgpu_pm.h"
 
 MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");
+MODULE_FIRMWARE("amdgpu/vega12_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/raven_gpu_info.bin");
 
 #define AMDGPU_RESUME_MS   2000
@@ -1158,6 +1159,9 @@ static int amdgpu_device_parse_gpu_info_fw(struct 
amdgpu_device *adev)
case CHIP_VEGA10:
chip_name = "vega10";
break;
+   case CHIP_VEGA12:
+   chip_name = "vega12";
+   break;
case CHIP_RAVEN:
chip_name = "raven";
break;
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 08/42] drm/amdgpu: add vega12 ucode loading method

2018-03-21 Thread Alex Deucher
From: Feifei Xu 

Same as vega10.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 474f88fbafce..dd6f98921918 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -271,6 +271,7 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int 
load_type)
return AMDGPU_FW_LOAD_SMU;
case CHIP_VEGA10:
case CHIP_RAVEN:
+   case CHIP_VEGA12:
if (!load_type)
return AMDGPU_FW_LOAD_DIRECT;
else
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 04/42] drm/amdgpu: set asic family and ip blocks for vega12

2018-03-21 Thread Alex Deucher
soc15 just like vega10 and raven.

Signed-off-by: Alex Deucher 
Reviewed-by: Feifei Xu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index aebf199ed178..781ea7dc09c0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1275,8 +1275,9 @@ static int amdgpu_device_ip_early_init(struct 
amdgpu_device *adev)
return r;
break;
 #endif
-   case  CHIP_VEGA10:
-   case  CHIP_RAVEN:
+   case CHIP_VEGA10:
+   case CHIP_VEGA12:
+   case CHIP_RAVEN:
if (adev->asic_type == CHIP_RAVEN)
adev->family = AMDGPU_FAMILY_RV;
else
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 02/42] drm/amdgpu: add vega12 to asic_type enum

2018-03-21 Thread Alex Deucher
From: Feifei Xu 

Add vega12 to amd_asic_type enum and amdgpu_asic_name[].

Signed-off-by: Alex Deucher 
Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
 include/drm/amd_asic_type.h| 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 00919ab47306..8f4e2d13545f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -83,6 +83,7 @@ static const char *amdgpu_asic_name[] = {
"POLARIS11",
"POLARIS12",
"VEGA10",
+   "VEGA12",
"RAVEN",
"LAST",
 };
diff --git a/include/drm/amd_asic_type.h b/include/drm/amd_asic_type.h
index 599028f66585..6c731c52c071 100644
--- a/include/drm/amd_asic_type.h
+++ b/include/drm/amd_asic_type.h
@@ -45,6 +45,7 @@ enum amd_asic_type {
CHIP_POLARIS11,
CHIP_POLARIS12,
CHIP_VEGA10,
+   CHIP_VEGA12,
CHIP_RAVEN,
CHIP_LAST,
 };
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 00/42] Add vega12 support

2018-03-21 Thread Alex Deucher
Vega12 is a new GPU from AMD.  This adds support for it.

Patch 1 just adds new register headers and is pretty big,
so I haven't sent it to the mailing list.  The entire
series can be viewed here:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next-vega12

Alex Deucher (20):
  drm/amdgpu: add gpu_info firmware for vega12
  drm/amdgpu: set asic family and ip blocks for vega12
  drm/amdgpu/psp: initial vega12 support
  drm/amdgpu: specify vega12 uvd firmware
  drm/amdgpu: specify vega12 vce firmware
  drm/amdgpu/virtual_dce: add vega12 support
  drm/amd/display/dm: add vega12 support
  drm/amdgpu: add vega12 to dc support check
  drm/amdgpu/gmc9: add vega12 support
  drm/amdgpu/mmhub: add clockgating support for vega12
  drm/amdgpu/sdma4: specify vega12 firmware
  drm/amdgpu/sdma4: Add placeholder for vega12 golden settings
  drm/amdgpu/sdma4: add clockgating support for vega12
  drm/amdgpu/gfx9: add support for vega12 firmware
  drm/amdgpu/gfx9: Add placeholder for vega12 golden settings
  drm/amdgpu/gfx9: add gfx config for vega12
  drm/amdgpu/gfx9: add support for vega12
  drm/amdgpu/gfx9: add clockgating support for vega12
  drm/amdgpu/soc15: add support for vega12
  drm/amdgpu: add vega12 pci ids (v2)

Evan Quan (11):
  drm/amdgpu: initilize vega12 psp firmwares
  drm/amdgpu/soc15: update vega12 cg_flags
  drm/amd/powerplay: add vega12_inc.h
  drm/amd/powerplay: update atomfirmware.h (v2)
  drm/amd/powerplay: add new smu9_driver_if.h for vega12 (v2)
  drm/amd/powerplay: add vega12_ppsmc.h
  drm/amd/powerplay: add vega12_pptable.h
  drm/amd/powerplay: update ppatomfwctl (v2)
  drm/amd/powerplay: add new pp_psm infrastructure for vega12 (v2)
  drm/amd/powerplay: add the smu manager for vega12 (v4)
  drm/amd/powerplay: add the hw manager for vega12 (v4)

Feifei Xu (6):
  drm/amd/include: Add ip header files for vega12.
  drm/amdgpu: add vega12 to asic_type enum
  drm/amdgpu: add vega12 ucode loading method
  drm/amdgpu/gmc9: fix vega12's athub golden setting.
  drm/amdgpu/sdma4: Update vega12 sdma golden setting.
  drm/amd/soc15: Add external_rev_id for vega12.

Hawking Zhang (4):
  drm/amdgpu: vega12 to smu firmware
  drm/amdgpu/sdma4: add sdma4_0_1 support for vega12 (v3)
  drm/amdgpu/gfx9: add golden setting for vega12 (v3)
  drm/amdgpu/soc15: initialize reg base for vega12

Jerry (Fangzhi) Zuo (1):
  drm/amd/display: Add bios firmware info version for VG12

 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c| 3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |11 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c| 6 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c| 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c| 9 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c| 9 +-
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c   | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  |65 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  | 4 +
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c| 1 +
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c  | 5 +
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c |25 +-
 drivers/gpu/drm/amd/amdgpu/soc15.c |25 +
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 4 +
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 1 +
 .../drm/amd/include/asic_reg/gc/gc_9_2_1_offset.h  |  7497 +
 .../drm/amd/include/asic_reg/gc/gc_9_2_1_sh_mask.h | 31160 +++
 .../include/asic_reg/mmhub/mmhub_9_3_0_offset.h|  1991 ++
 .../include/asic_reg/mmhub/mmhub_9_3_0_sh_mask.h   | 10265 ++
 .../amd/include/asic_reg/oss/osssys_4_0_1_offset.h |   337 +
 .../include/asic_reg/oss/osssys_4_0_1_sh_mask.h|  1249 +
 drivers/gpu/drm/amd/include/atomfirmware.h |82 +-
 drivers/gpu/drm/amd/include/dm_pp_interface.h  | 2 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile   | 4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c| 6 +
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   |   244 +-
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c|   262 +
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h|40 +
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c   |76 +
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h   |40 +
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |87 +
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |65 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c |  2444 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |   470 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_inc.h   |39 +
 .../gpu/drm/amd/powerplay/hwmgr/vega12_powertune.c |  1364 +
 .../gpu/drm/amd/powerplay/hwmgr/vega12_powertune.h |53 +
 .../gpu/drm/amd/powerplay/hwmgr/vega12_pptable.h   |   109 +
 .../amd/powerplay/hwmgr/vega12_processpptables.c   |   430 +
 .../amd/powerplay/hwmgr/vega12_processpptables.h   |58 

[PATCH 05/42] drm/amdgpu: vega12 to smu firmware

2018-03-21 Thread Alex Deucher
From: Hawking Zhang 

Add the cgs interface to query the smu firmware for vega12
and declare the firmware.

Signed-off-by: Alex Deucher 
Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c   | 3 +++
 drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index 5b37c1ac725c..a8a0fd927da2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
@@ -654,6 +654,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device 
*cgs_device,
else
strcpy(fw_name, 
"amdgpu/vega10_smc.bin");
break;
+   case CHIP_VEGA12:
+   strcpy(fw_name, "amdgpu/vega12_smc.bin");
+   break;
default:
DRM_ERROR("SMC firmware not supported\n");
return -EINVAL;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
index 04c45c236a73..c28b60aae5f8 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
@@ -43,6 +43,7 @@ MODULE_FIRMWARE("amdgpu/polaris11_k_smc.bin");
 MODULE_FIRMWARE("amdgpu/polaris12_smc.bin");
 MODULE_FIRMWARE("amdgpu/vega10_smc.bin");
 MODULE_FIRMWARE("amdgpu/vega10_acg_smc.bin");
+MODULE_FIRMWARE("amdgpu/vega12_smc.bin");
 
 int smum_thermal_avfs_enable(struct pp_hwmgr *hwmgr)
 {
-- 
2.13.6

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/pp: Add new asic(vega12) support in pp_psm.c

2018-03-21 Thread Deucher, Alexander
Acked-by: Alex Deucher 


From: amd-gfx  on behalf of Rex Zhu 

Sent: Wednesday, March 21, 2018 7:34:34 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: [PATCH] drm/amd/pp: Add new asic(vega12) support in pp_psm.c

On new ASICs there is no power state management in the driver,
so there is no need to implement the related callback functions.
Add some power state (ps) checks in pp_psm.c.

Revert "drm/amd/powerplay: add new pp_psm infrastructure for vega12 (v2)"
This reverts commit 7d1a63f3aa331b853e41f92d0e7890ed31de8c13.

Change-Id: Ic31d3f475f94399d3136bff8be454f290e3c1e50
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile   |   4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   | 270 ++---
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c| 262 
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h|  40 ---
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c   |  76 --
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h   |  40 ---
 6 files changed, 239 insertions(+), 453 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile 
b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
index 9446dbc475..faf9c88 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
@@ -31,9 +31,9 @@ HARDWARE_MGR = hwmgr.o processpptables.o \
 smu7_clockpowergating.o \
 vega10_processpptables.o vega10_hwmgr.o vega10_powertune.o \
 vega10_thermal.o smu10_hwmgr.o pp_psm.o\
-   pp_overdriver.o smu_helper.o pp_psm_legacy.o pp_psm_new.o \
 vega12_processpptables.o vega12_hwmgr.o \
-   vega12_powertune.o vega12_thermal.o
+   vega12_powertune.o vega12_thermal.o \
+   pp_overdriver.o smu_helper.o

 AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
index 295ab9f..0f2851b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
@@ -21,65 +21,269 @@
  *
  */

+#include 
+#include 
+#include 
 #include "pp_psm.h"
-#include "pp_psm_legacy.h"
-#include "pp_psm_new.h"

 int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
 {
-   if (hwmgr->chip_id != CHIP_VEGA12)
-   return psm_legacy_init_power_state_table(hwmgr);
-   else
-   return psm_new_init_power_state_table(hwmgr);
+   int result;
+   unsigned int i;
+   unsigned int table_entries;
+   struct pp_power_state *state;
+   int size;
+
+   if (hwmgr->hwmgr_func->get_num_of_pp_table_entries == NULL)
+   return 0;
+
+   if (hwmgr->hwmgr_func->get_power_state_size == NULL)
+   return 0;
+
+   hwmgr->num_ps = table_entries = 
hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
+
+   hwmgr->ps_size = size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
+ sizeof(struct pp_power_state);
+
+   if (table_entries == 0 || size == 0) {
+   pr_warn("Please check whether power state management is 
suppported on this asic\n");
+   return 0;
+   }
+
+   hwmgr->ps = kzalloc(size * table_entries, GFP_KERNEL);
+   if (hwmgr->ps == NULL)
+   return -ENOMEM;
+
+   hwmgr->request_ps = kzalloc(size, GFP_KERNEL);
+   if (hwmgr->request_ps == NULL) {
+   kfree(hwmgr->ps);
+   hwmgr->ps = NULL;
+   return -ENOMEM;
+   }
+
+   hwmgr->current_ps = kzalloc(size, GFP_KERNEL);
+   if (hwmgr->current_ps == NULL) {
+   kfree(hwmgr->request_ps);
+   kfree(hwmgr->ps);
+   hwmgr->request_ps = NULL;
+   hwmgr->ps = NULL;
+   return -ENOMEM;
+   }
+
+   state = hwmgr->ps;
+
+   for (i = 0; i < table_entries; i++) {
+   result = hwmgr->hwmgr_func->get_pp_table_entry(hwmgr, i, state);
+
+   if (state->classification.flags & 
PP_StateClassificationFlag_Boot) {
+   hwmgr->boot_ps = state;
+   memcpy(hwmgr->current_ps, state, size);
+   memcpy(hwmgr->request_ps, state, size);
+   }
+
+   state->id = i + 1; /* assigned unique num for every power state 
id */
+
+   if (state->classification.flags & 
PP_StateClassificationFlag_Uvd)
+   hwmgr->uvd_ps = state;
+   state = (struct 

Re: [PATCH] drm/amd/pp: Clean up powerplay code on Vega12

2018-03-21 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 


From: amd-gfx  on behalf of Rex Zhu 

Sent: Wednesday, March 21, 2018 5:46:34 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: [PATCH] drm/amd/pp: Clean up powerplay code on Vega12

Change-Id: I792a0c6170115867b99d7112d8eba9ff2faf39d7
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 482 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |  32 --
 2 files changed, 1 insertion(+), 513 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index da2053e..15ce1e8 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -48,7 +48,6 @@
 #include "pp_overdriver.h"
 #include "pp_thermal.h"

-static const ULONG PhwVega12_Magic = (ULONG)(PHM_VIslands_Magic);

 static int vega12_force_clock_level(struct pp_hwmgr *hwmgr,
 enum pp_clock_type type, uint32_t mask);
@@ -57,26 +56,6 @@ static int vega12_get_clock_ranges(struct pp_hwmgr *hwmgr,
 PPCLK_e clock_select,
 bool max);

-struct vega12_power_state *cast_phw_vega12_power_state(
- struct pp_hw_power_state *hw_ps)
-{
-   PP_ASSERT_WITH_CODE((PhwVega12_Magic == hw_ps->magic),
-   "Invalid Powerstate Type!",
-return NULL;);
-
-   return (struct vega12_power_state *)hw_ps;
-}
-
-const struct vega12_power_state *cast_const_phw_vega12_power_state(
-const struct pp_hw_power_state *hw_ps)
-{
-   PP_ASSERT_WITH_CODE((PhwVega12_Magic == hw_ps->magic),
-   "Invalid Powerstate Type!",
-return NULL;);
-
-   return (const struct vega12_power_state *)hw_ps;
-}
-
 static void vega12_set_default_registry_data(struct pp_hwmgr *hwmgr)
 {
 struct vega12_hwmgr *data =
@@ -590,7 +569,7 @@ static int vega12_setup_default_dpm_tables(struct pp_hwmgr 
*hwmgr)
 }

 vega12_init_dpm_state(&(dpm_table->dpm_state));
-/* Initialize Mclk DPM table based on allow Mclk values */
+   /* Initialize Mclk DPM table based on allow Mclk values */
 dpm_table = &(data->dpm_table.mem_table);

 PP_ASSERT_WITH_CODE(vega12_get_number_dpm_level(hwmgr, PPCLK_UCLK,
@@ -953,262 +932,12 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr 
*hwmgr)
 return result;
 }

-static int vega12_get_power_state_size(struct pp_hwmgr *hwmgr)
-{
-   return sizeof(struct vega12_power_state);
-}
-
-static int vega12_get_number_of_pp_table_entries(struct pp_hwmgr *hwmgr)
-{
-   return 0;
-}
-
 static int vega12_patch_boot_state(struct pp_hwmgr *hwmgr,
  struct pp_hw_power_state *hw_ps)
 {
 return 0;
 }

-static int vega12_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
-   struct pp_power_state  *request_ps,
-   const struct pp_power_state *current_ps)
-{
-   struct vega12_power_state *vega12_ps =
-   
cast_phw_vega12_power_state(_ps->hardware);
-   uint32_t sclk;
-   uint32_t mclk;
-   struct PP_Clocks minimum_clocks = {0};
-   bool disable_mclk_switching;
-   bool disable_mclk_switching_for_frame_lock;
-   bool disable_mclk_switching_for_vr;
-   bool force_mclk_high;
-   struct cgs_display_info info = {0};
-   const struct phm_clock_and_voltage_limits *max_limits;
-   uint32_t i;
-   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
-   struct phm_ppt_v2_information *table_info =
-   (struct phm_ppt_v2_information *)(hwmgr->pptable);
-   int32_t count;
-   uint32_t stable_pstate_sclk_dpm_percentage;
-   uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
-   uint32_t latency;
-
-   data->battery_state = (PP_StateUILabel_Battery ==
-   request_ps->classification.ui_label);
-
-   if (vega12_ps->performance_level_count != 2)
-   pr_info("VI should always have 2 performance levels");
-
-   max_limits = (PP_PowerSource_AC == hwmgr->power_source) ?
-   &(hwmgr->dyn_state.max_clock_voltage_on_ac) :
-   &(hwmgr->dyn_state.max_clock_voltage_on_dc);
-
-   /* Cap clock DPM tables at DC MAX if it is in DC. */
-   if (PP_PowerSource_DC == hwmgr->power_source) {
-   for (i = 0; i < vega12_ps->performance_level_count; i++) {
-   if (vega12_ps->performance_levels[i].mem_clock >
-   max_limits->mclk)
-   vega12_ps->performance_levels[i].mem_clock =
-   max_limits->mclk;
-  

Re: [PATCH] drm/amd/pp: Fix set wrong temperature range on smu7

2018-03-21 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 


From: amd-gfx  on behalf of Rex Zhu 

Sent: Wednesday, March 21, 2018 3:56:19 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: [PATCH] drm/amd/pp: Fix set wrong temperature range on smu7

Fix an issue where the "GPU under temperature range detected" thermal irq
was always triggered.

The low temperature limit in the default thermal policy
is set to -273, so the low temperature parameter needs to use a signed int type.
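
For illustration only (not part of the patch): a stand-alone sketch of the pitfall
with made-up values, showing how an unsigned low limit wraps to a huge positive
number and makes the under-temperature check fire for any normal reading.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* illustration only, not the driver code */
	uint32_t low_unsigned = -273;	/* wraps to 4294967023 */
	int32_t  low_signed   = -273;
	int32_t  gpu_temp     = 45;	/* example reading, degrees C */

	/* gpu_temp is converted to unsigned here, so the check always fires */
	printf("unsigned limit: below range? %d\n", gpu_temp < low_unsigned);
	/* with a signed limit the comparison behaves as intended */
	printf("signed limit:   below range? %d\n", gpu_temp < low_signed);
	return 0;
}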

Change-Id: I1141b2698233ecd1e984b80eaf371966ab1aeef0
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
index 4dd26eb..44527755 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
@@ -308,7 +308,7 @@ int smu7_thermal_get_temperature(struct pp_hwmgr *hwmgr)
 * @exception PP_Result_BadInput if the input data is not valid.
 */
 static int smu7_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
-   uint32_t low_temp, uint32_t high_temp)
+   int low_temp, int high_temp)
 {
 int low = SMU7_THERMAL_MINIMUM_ALERT_TEMP *
 PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
--
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: fix "mitigate workaround for i915"

2018-03-21 Thread Christian König
Mixed up exporter and importer here. E.g. while mapping the BO we need
to check the importer not the exporter.
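
To make the direction of the check clearer, here is a simplified stand-alone model
(made-up types, not the kernel structures): the dma_buf's ops identify the exporter,
while the attachment's device identifies the importer, and the workaround must key
off the latter.

#include <stdbool.h>
#include <stdio.h>

/* simplified model, not the real dma-buf structures */
struct driver  { const char *name; };
struct device  { const struct driver *driver; };
struct dma_buf { const void *ops; };		/* ops identify the exporter */
struct attach  { const struct device *dev; };	/* dev identifies the importer */

/* the mitigation should trigger when the *importer* is a foreign driver */
static bool importer_is_foreign(const struct attach *a,
				const struct device *exporter_dev)
{
	return a->dev->driver != exporter_dev->driver;
}

int main(void)
{
	struct driver amdgpu = { "amdgpu" }, other = { "i915" };
	struct device exp_dev = { &amdgpu };
	struct device imp_dev = { &other };
	struct attach attach  = { &imp_dev };

	printf("foreign importer: %s\n",
	       importer_is_foreign(&attach, &exp_dev) ? "yes" : "no");
	return 0;
}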

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 1c9991738477..4b584cb75bf4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -132,6 +132,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
long r;
 
r = drm_gem_map_attach(dma_buf, target_dev, attach);
@@ -143,7 +144,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
goto error_detach;
 
 
-   if (dma_buf->ops != &amdgpu_dmabuf_ops) {
+   if (attach->dev->driver != adev->dev->driver) {
/*
 * Wait for all shared fences to complete before we switch to 
future
 * use of exclusive fence on this prime shared bo.
@@ -162,7 +163,7 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
if (r)
goto error_unreserve;
 
-   if (dma_buf->ops != &amdgpu_dmabuf_ops)
+   if (attach->dev->driver != adev->dev->driver)
bo->prime_shared_count++;
 
 error_unreserve:
@@ -179,6 +180,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
 {
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
int ret = 0;
 
ret = amdgpu_bo_reserve(bo, true);
@@ -186,7 +188,7 @@ static void amdgpu_gem_map_detach(struct dma_buf *dma_buf,
goto error;
 
amdgpu_bo_unpin(bo);
-   if (dma_buf->ops != &amdgpu_dmabuf_ops && bo->prime_shared_count)
+   if (attach->dev->driver != adev->dev->driver && bo->prime_shared_count)
bo->prime_shared_count--;
amdgpu_bo_unreserve(bo);
 
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [Linaro-mm-sig] [PATCH 1/5] dma-buf: add optional invalidate_mappings callback v2

2018-03-21 Thread Christian König

On 21.03.2018 at 09:28, Daniel Vetter wrote:

On Tue, Mar 20, 2018 at 06:47:57PM +0100, Christian König wrote:

On 20.03.2018 at 15:08, Daniel Vetter wrote:

[SNIP]
For the in-driver reservation path (CS) having a slow-path that grabs a
temporary reference, drops the vram lock and then locks the reservation
normally (using the acquire context used already for the entire CS) is a
bit tricky, but totally feasible. Ttm doesn't do that though.

That is exactly what we do in amdgpu as well, it's just not very efficient
nor reliable to retry getting the right pages for a submission over and over
again.

Out of curiosity, where's that code? I did read the ttm eviction code way
back, and that one definitely didn't do that. Would be interesting to
update my understanding.


That is in amdgpu_cs.c. amdgpu_cs_parser_bos() does a horrible dance 
with grabbing, releasing and regrabbing locks in a loop.


Then in amdgpu_cs_submit() we grab a lock preventing page table updates 
and check whether all pages are still the ones we want to have:

    amdgpu_mn_lock(p->mn);
    if (p->bo_list) {
        for (i = p->bo_list->first_userptr;
             i < p->bo_list->num_entries; ++i) {
            struct amdgpu_bo *bo = p->bo_list->array[i].robj;

            if (amdgpu_ttm_tt_userptr_needs_pages(bo->tbo.ttm)) {
                amdgpu_mn_unlock(p->mn);
                return -ERESTARTSYS;
            }
        }
    }


If anything changed on the page tables we restart the whole IOCTL using 
-ERESTARTSYS and try again.
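
To spell that out, here is a condensed stand-alone model of that flow (all names
and the locking are simplified for illustration; this is not the actual amdgpu code):

/* Simplified model of the submit path described above: pin the userptr
 * pages optimistically, build the job, then re-check under the notifier
 * lock and restart the whole submission if an invalidation raced in.
 */
#include <stdbool.h>
#include <stdio.h>

static int invalidation_count;	/* bumped by a (simulated) MMU notifier */
static int snapshot;		/* state of the pages when they were pinned */

static void collect_user_pages(void)
{
	snapshot = invalidation_count;
	if (invalidation_count == 0)
		invalidation_count++;	/* simulate one racing invalidation */
}

static void build_job(void)
{
	/* command stream validation and setup would happen here */
}

static bool pages_still_valid(void)
{
	return snapshot == invalidation_count;
}

static int submit(void)
{
	collect_user_pages();		/* amdgpu_cs_parser_bos() stage */
	build_job();

	/* the real code takes amdgpu_mn_lock(p->mn) around this check */
	if (!pages_still_valid())
		return -1;		/* -ERESTARTSYS: redo the whole IOCTL */
	/* commit the job to the hardware, then drop the notifier lock */
	return 0;
}

int main(void)
{
	while (submit() != 0)
		;			/* the IOCTL restart loop, simplified */
	printf("submitted after re-validating user pages\n");
	return 0;
}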


Regards,
Christian.


-Daniel


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/pp: Add new asic(vega12) support in pp_psm.c

2018-03-21 Thread Rex Zhu
On new ASICs there is no power state management in the driver,
so there is no need to implement the related callback functions.
Add some power state (ps) checks in pp_psm.c.

Revert "drm/amd/powerplay: add new pp_psm infrastructure for vega12 (v2)"
This reverts commit 7d1a63f3aa331b853e41f92d0e7890ed31de8c13.
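
As a rough sketch of the resulting pattern (illustrative stand-ins only, not the
real hwmgr structures): ASICs that leave the power-state callbacks unimplemented
simply skip the table setup, and the psm helpers return early.

#include <stdio.h>
#include <stdlib.h>

/* illustrative stand-ins for the hwmgr structures, not the real ones */
struct hwmgr_func {
	int (*get_num_of_pp_table_entries)(void);
	int (*get_power_state_size)(void);
};

struct hwmgr {
	const struct hwmgr_func *func;
	void *ps;	/* power state table, stays NULL when unused */
};

static int psm_init_power_state_table(struct hwmgr *hwmgr)
{
	/* new ASICs (e.g. vega12) leave these callbacks NULL: nothing to do */
	if (!hwmgr->func->get_num_of_pp_table_entries ||
	    !hwmgr->func->get_power_state_size)
		return 0;

	hwmgr->ps = calloc((size_t)hwmgr->func->get_num_of_pp_table_entries(),
			   (size_t)hwmgr->func->get_power_state_size());
	return hwmgr->ps ? 0 : -1;
}

int main(void)
{
	static const struct hwmgr_func vega12_funcs = { NULL, NULL };
	struct hwmgr hwmgr = { &vega12_funcs, NULL };

	printf("init returned %d, power state table %s\n",
	       psm_init_power_state_table(&hwmgr),
	       hwmgr.ps ? "allocated" : "skipped");
	return 0;
}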

Change-Id: Ic31d3f475f94399d3136bff8be454f290e3c1e50
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile   |   4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   | 270 ++---
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c| 262 
 .../gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h|  40 ---
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c   |  76 --
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h   |  40 ---
 6 files changed, 239 insertions(+), 453 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.c
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_legacy.h
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.c
 delete mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm_new.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile 
b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
index 9446dbc475..faf9c88 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
@@ -31,9 +31,9 @@ HARDWARE_MGR = hwmgr.o processpptables.o \
smu7_clockpowergating.o \
vega10_processpptables.o vega10_hwmgr.o vega10_powertune.o \
vega10_thermal.o smu10_hwmgr.o pp_psm.o\
-   pp_overdriver.o smu_helper.o pp_psm_legacy.o pp_psm_new.o \
vega12_processpptables.o vega12_hwmgr.o \
-   vega12_powertune.o vega12_thermal.o
+   vega12_powertune.o vega12_thermal.o \
+   pp_overdriver.o smu_helper.o
 
 AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
index 295ab9f..0f2851b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c
@@ -21,65 +21,269 @@
  *
  */
 
+#include 
+#include 
+#include 
 #include "pp_psm.h"
-#include "pp_psm_legacy.h"
-#include "pp_psm_new.h"
 
 int psm_init_power_state_table(struct pp_hwmgr *hwmgr)
 {
-   if (hwmgr->chip_id != CHIP_VEGA12)
-   return psm_legacy_init_power_state_table(hwmgr);
-   else
-   return psm_new_init_power_state_table(hwmgr);
+   int result;
+   unsigned int i;
+   unsigned int table_entries;
+   struct pp_power_state *state;
+   int size;
+
+   if (hwmgr->hwmgr_func->get_num_of_pp_table_entries == NULL)
+   return 0;
+
+   if (hwmgr->hwmgr_func->get_power_state_size == NULL)
+   return 0;
+
+   hwmgr->num_ps = table_entries = 
hwmgr->hwmgr_func->get_num_of_pp_table_entries(hwmgr);
+
+   hwmgr->ps_size = size = hwmgr->hwmgr_func->get_power_state_size(hwmgr) +
+ sizeof(struct pp_power_state);
+
+   if (table_entries == 0 || size == 0) {
+   pr_warn("Please check whether power state management is 
suppported on this asic\n");
+   return 0;
+   }
+
+   hwmgr->ps = kzalloc(size * table_entries, GFP_KERNEL);
+   if (hwmgr->ps == NULL)
+   return -ENOMEM;
+
+   hwmgr->request_ps = kzalloc(size, GFP_KERNEL);
+   if (hwmgr->request_ps == NULL) {
+   kfree(hwmgr->ps);
+   hwmgr->ps = NULL;
+   return -ENOMEM;
+   }
+
+   hwmgr->current_ps = kzalloc(size, GFP_KERNEL);
+   if (hwmgr->current_ps == NULL) {
+   kfree(hwmgr->request_ps);
+   kfree(hwmgr->ps);
+   hwmgr->request_ps = NULL;
+   hwmgr->ps = NULL;
+   return -ENOMEM;
+   }
+
+   state = hwmgr->ps;
+
+   for (i = 0; i < table_entries; i++) {
+   result = hwmgr->hwmgr_func->get_pp_table_entry(hwmgr, i, state);
+
+   if (state->classification.flags & 
PP_StateClassificationFlag_Boot) {
+   hwmgr->boot_ps = state;
+   memcpy(hwmgr->current_ps, state, size);
+   memcpy(hwmgr->request_ps, state, size);
+   }
+
+   state->id = i + 1; /* assigned unique num for every power state 
id */
+
+   if (state->classification.flags & 
PP_StateClassificationFlag_Uvd)
+   hwmgr->uvd_ps = state;
+   state = (struct pp_power_state *)((unsigned long)state + size);
+   }
+
+   return 0;
 }
 
 int psm_fini_power_state_table(struct pp_hwmgr *hwmgr)
 {
-   if (hwmgr->chip_id != CHIP_VEGA12)
-   return psm_legacy_fini_power_state_table(hwmgr);
-   else
-   return psm_new_fini_power_state_table(hwmgr);
+   if (hwmgr == 

Re: [PATCH] drm/amdgpu: Don't change preferred domian when fallback GTT v5

2018-03-21 Thread Christian König

On 21.03.2018 at 11:27, Chunming Zhou wrote:

v2: add sanity checking
v3: make code open
v4: also handle visible to invisible fallback
v5: Since two fallback cases, re-use goto retry

Change-Id: I2cf672ad36b8b4cc1a6b2e704f786bf6a155d9ce
Signed-off-by: Chunming Zhou 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 16 ++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 18 +++---
  2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 6e6570ff9f8b..8328684aee06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -76,23 +76,11 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
}
}
  
-retry:

r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
 flags, NULL, resv, );
if (r) {
-   if (r != -ERESTARTSYS) {
-   if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
-   flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   goto retry;
-   }
-
-   if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
-   initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
-   goto retry;
-   }
-   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, 
%d)\n",
- size, initial_domain, alignment, r);
-   }
+   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, %d)\n",
+ size, initial_domain, alignment, r);
return r;
}
*obj = >gem_base;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index b3310219e0ac..e57656301b37 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -371,6 +371,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
enum ttm_bo_type type;
unsigned long page_align;
size_t acc_size;
+   u32 domains;
int r;
  
  	page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;

@@ -440,12 +441,23 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
  #endif
  
  	bo->tbo.bdev = >mman.bdev;

-   amdgpu_ttm_placement_from_domain(bo, domain);
-
+   domains = bo->preferred_domains;
+retry:
+   amdgpu_ttm_placement_from_domain(bo, domains);
r = ttm_bo_init_reserved(>mman.bdev, >tbo, size, type,
 >placement, page_align, , acc_size,
 sg, resv, _ttm_bo_destroy);
-   if (unlikely(r != 0))
+
+   if (unlikely(r && r != -ERESTARTSYS)) {
+   if (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
+   bo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+   goto retry;
+   } else if (domains != bo->preferred_domains) {
+   domains = bo->allowed_domains;
+   goto retry;
+   }
+   }
+   if (unlikely(r))
return r;
  
  	if (adev->gmc.visible_vram_size < adev->gmc.real_vram_size &&


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: re-validate per VM BOs if required

2018-03-21 Thread zhoucm1



On 2018年03月20日 17:13, zhoucm1 wrote:



On 2018年03月20日 15:49, zhoucm1 wrote:



On 2018年03月19日 18:50, Christian König wrote:

If a per VM BO ends up in a allowed domain it never moves back into the
prefered domain.

Signed-off-by: Christian König 
Yeah, it's better than mine, Reviewed-by: Chunming Zhou 



the left problem is BOs validation order.
For old bo list usage, it has fixed order for BOs in bo list,
but for per-vm-bo feature, the order isn't fixed, which will result 
in the performance is undulate.
e.g. steam game F1 generally is 40fps when using old bo list, it's 
very stable, but when enabling per-vm-bo feature, the fps is between 
37~40fps.

even worse, sometime, fps could drop to 18fps.
the root cause is some *KEY* BOs are randomly placed to allowed domain 
without fixed validation order.
For old bo list case, its later BOs can be evictable, so the front BOs 
are validated with preferred domain first, that is also why the 
performance is stable to 40fps when using old bo list.


Some more thinking:
Could user space pass validation order for per-vm BOs? or set BOs 
index for every per-vm BO?

Ping...
If no objection, I will try to make a bo list for per-vm case to 
determine the validation order.


Regards,
David Zhou


Any comment?


Regards,
David Zhou



Any thought?

Regards,
David Zhou


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 15 +--
  1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

index 24474294c92a..e8b515dd032c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1770,14 +1770,16 @@ int amdgpu_vm_handle_moved(struct 
amdgpu_device *adev,

    spin_lock(>status_lock);
  while (!list_empty(>moved)) {
-    struct amdgpu_bo_va *bo_va;
  struct reservation_object *resv;
+    struct amdgpu_bo_va *bo_va;
+    struct amdgpu_bo *bo;
    bo_va = list_first_entry(>moved,
  struct amdgpu_bo_va, base.vm_status);
  spin_unlock(>status_lock);
  -    resv = bo_va->base.bo->tbo.resv;
+    bo = bo_va->base.bo;
+    resv = bo->tbo.resv;
    /* Per VM BOs never need to bo cleared in the page 
tables */

  if (resv == vm->root.base.bo->tbo.resv)
@@ -1797,6 +1799,15 @@ int amdgpu_vm_handle_moved(struct 
amdgpu_device *adev,

  reservation_object_unlock(resv);
    spin_lock(>status_lock);
+
+    /* If the BO prefers to be in VRAM, but currently isn't add it
+ * back to the evicted list so that it gets validated again on
+ * the next command submission.
+ */
+    if (resv == vm->root.base.bo->tbo.resv &&
+    bo->preferred_domains == AMDGPU_GEM_DOMAIN_VRAM &&
+    bo->tbo.mem.mem_type != TTM_PL_VRAM)
+    list_add_tail(_va->base.vm_status, >evicted);
  }
  spin_unlock(>status_lock);






___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: Don't change preferred domian when fallback GTT v5

2018-03-21 Thread Chunming Zhou
v2: add sanity checking
v3: make code open
v4: also handle visible to invisible fallback
v5: Since two fallback cases, re-use goto retry

Change-Id: I2cf672ad36b8b4cc1a6b2e704f786bf6a155d9ce
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 16 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 18 +++---
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 6e6570ff9f8b..8328684aee06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -76,23 +76,11 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
}
}
 
-retry:
r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
 flags, NULL, resv, );
if (r) {
-   if (r != -ERESTARTSYS) {
-   if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
-   flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   goto retry;
-   }
-
-   if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
-   initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
-   goto retry;
-   }
-   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, 
%d)\n",
- size, initial_domain, alignment, r);
-   }
+   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, %d)\n",
+ size, initial_domain, alignment, r);
return r;
}
*obj = >gem_base;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index b3310219e0ac..e57656301b37 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -371,6 +371,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
enum ttm_bo_type type;
unsigned long page_align;
size_t acc_size;
+   u32 domains;
int r;
 
page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
@@ -440,12 +441,23 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 #endif
 
bo->tbo.bdev = >mman.bdev;
-   amdgpu_ttm_placement_from_domain(bo, domain);
-
+   domains = bo->preferred_domains;
+retry:
+   amdgpu_ttm_placement_from_domain(bo, domains);
r = ttm_bo_init_reserved(>mman.bdev, >tbo, size, type,
 >placement, page_align, , acc_size,
 sg, resv, _ttm_bo_destroy);
-   if (unlikely(r != 0))
+
+   if (unlikely(r && r != -ERESTARTSYS)) {
+   if (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
+   bo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+   goto retry;
+   } else if (domains != bo->preferred_domains) {
+   domains = bo->allowed_domains;
+   goto retry;
+   }
+   }
+   if (unlikely(r))
return r;
 
if (adev->gmc.visible_vram_size < adev->gmc.real_vram_size &&
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Don't change preferred domian when fallback GTT v4

2018-03-21 Thread Christian König

On 20.03.2018 at 08:55, Chunming Zhou wrote:

v2: add sanity checking
v3: make code open
v4: also handle visible to invisible fallback

Change-Id: I2cf672ad36b8b4cc1a6b2e704f786bf6a155d9ce
Signed-off-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 16 ++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 19 ---
  2 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 6e6570ff9f8b..8328684aee06 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -76,23 +76,11 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
}
}
  
-retry:

r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
 flags, NULL, resv, );
if (r) {
-   if (r != -ERESTARTSYS) {
-   if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
-   flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
-   goto retry;
-   }
-
-   if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
-   initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
-   goto retry;
-   }
-   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, 
%d)\n",
- size, initial_domain, alignment, r);
-   }
+   DRM_DEBUG("Failed to allocate GEM object (%ld, %d, %u, %d)\n",
+ size, initial_domain, alignment, r);
return r;
}
*obj = >gem_base;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index b3310219e0ac..84c5e9db1b39 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -440,12 +440,25 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
  #endif
  
  	bo->tbo.bdev = >mman.bdev;

-   amdgpu_ttm_placement_from_domain(bo, domain);
-
+retry:
+   amdgpu_ttm_placement_from_domain(bo, bo->preferred_domains);
r = ttm_bo_init_reserved(>mman.bdev, >tbo, size, type,
 >placement, page_align, , acc_size,
 sg, resv, _ttm_bo_destroy);
-   if (unlikely(r != 0))
+
+   if (unlikely(r && r != -ERESTARTSYS)) {
+   if (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
+   bo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+   goto retry;
+   } else if (bo->allowed_domains != bo->preferred_domains) {
+   amdgpu_ttm_placement_from_domain(bo, bo->allowed_domains);
+   r = ttm_bo_init_reserved(>mman.bdev, >tbo, size,
+type, >placement, page_align,
+, acc_size, sg, resv,
+_ttm_bo_destroy);
+   }
+   }
+   if (unlikely(r))


Mhm, again this ugly retry label. But since we now handled two cases 
open coding this becomes to lengthly as well.


Let's go back to your original approach. How about the following code:

domains = bo->preferred_domains;
retry:
    amdgpu_ttm_placement_from_domain(bo, domains);
    r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, type,
        &bo->placement, page_align, &ctx, acc_size,
        sg, resv, &amdgpu_ttm_bo_destroy);

    if (unlikely(r && r != -ERESTARTSYS)) {
        if (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
            bo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
            goto retry;
        } else if (domains != bo->preferred_domains) {
            domains = bo->preferred_domains;
            goto retry;
        }
    }
    if (unlikely(r))
...

That shouldn't loop even if it fails with preferred_domains, and it handles 
the case gracefully where we first try to clear the flag and then move the 
BO to GTT.


Regards,
Christian.
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/pp: Clean up powerplay code on Vega12

2018-03-21 Thread Rex Zhu
Change-Id: I792a0c6170115867b99d7112d8eba9ff2faf39d7
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 482 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |  32 --
 2 files changed, 1 insertion(+), 513 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index da2053e..15ce1e8 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -48,7 +48,6 @@
 #include "pp_overdriver.h"
 #include "pp_thermal.h"
 
-static const ULONG PhwVega12_Magic = (ULONG)(PHM_VIslands_Magic);
 
 static int vega12_force_clock_level(struct pp_hwmgr *hwmgr,
enum pp_clock_type type, uint32_t mask);
@@ -57,26 +56,6 @@ static int vega12_get_clock_ranges(struct pp_hwmgr *hwmgr,
PPCLK_e clock_select,
bool max);
 
-struct vega12_power_state *cast_phw_vega12_power_state(
- struct pp_hw_power_state *hw_ps)
-{
-   PP_ASSERT_WITH_CODE((PhwVega12_Magic == hw_ps->magic),
-   "Invalid Powerstate Type!",
-return NULL;);
-
-   return (struct vega12_power_state *)hw_ps;
-}
-
-const struct vega12_power_state *cast_const_phw_vega12_power_state(
-const struct pp_hw_power_state *hw_ps)
-{
-   PP_ASSERT_WITH_CODE((PhwVega12_Magic == hw_ps->magic),
-   "Invalid Powerstate Type!",
-return NULL;);
-
-   return (const struct vega12_power_state *)hw_ps;
-}
-
 static void vega12_set_default_registry_data(struct pp_hwmgr *hwmgr)
 {
struct vega12_hwmgr *data =
@@ -590,7 +569,7 @@ static int vega12_setup_default_dpm_tables(struct pp_hwmgr 
*hwmgr)
}
 
vega12_init_dpm_state(&(dpm_table->dpm_state));
-/* Initialize Mclk DPM table based on allow Mclk values */
+   /* Initialize Mclk DPM table based on allow Mclk values */
dpm_table = &(data->dpm_table.mem_table);
 
PP_ASSERT_WITH_CODE(vega12_get_number_dpm_level(hwmgr, PPCLK_UCLK,
@@ -953,262 +932,12 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr 
*hwmgr)
return result;
 }
 
-static int vega12_get_power_state_size(struct pp_hwmgr *hwmgr)
-{
-   return sizeof(struct vega12_power_state);
-}
-
-static int vega12_get_number_of_pp_table_entries(struct pp_hwmgr *hwmgr)
-{
-   return 0;
-}
-
 static int vega12_patch_boot_state(struct pp_hwmgr *hwmgr,
 struct pp_hw_power_state *hw_ps)
 {
return 0;
 }
 
-static int vega12_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
-   struct pp_power_state  *request_ps,
-   const struct pp_power_state *current_ps)
-{
-   struct vega12_power_state *vega12_ps =
-   
cast_phw_vega12_power_state(_ps->hardware);
-   uint32_t sclk;
-   uint32_t mclk;
-   struct PP_Clocks minimum_clocks = {0};
-   bool disable_mclk_switching;
-   bool disable_mclk_switching_for_frame_lock;
-   bool disable_mclk_switching_for_vr;
-   bool force_mclk_high;
-   struct cgs_display_info info = {0};
-   const struct phm_clock_and_voltage_limits *max_limits;
-   uint32_t i;
-   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
-   struct phm_ppt_v2_information *table_info =
-   (struct phm_ppt_v2_information *)(hwmgr->pptable);
-   int32_t count;
-   uint32_t stable_pstate_sclk_dpm_percentage;
-   uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
-   uint32_t latency;
-
-   data->battery_state = (PP_StateUILabel_Battery ==
-   request_ps->classification.ui_label);
-
-   if (vega12_ps->performance_level_count != 2)
-   pr_info("VI should always have 2 performance levels");
-
-   max_limits = (PP_PowerSource_AC == hwmgr->power_source) ?
-   &(hwmgr->dyn_state.max_clock_voltage_on_ac) :
-   &(hwmgr->dyn_state.max_clock_voltage_on_dc);
-
-   /* Cap clock DPM tables at DC MAX if it is in DC. */
-   if (PP_PowerSource_DC == hwmgr->power_source) {
-   for (i = 0; i < vega12_ps->performance_level_count; i++) {
-   if (vega12_ps->performance_levels[i].mem_clock >
-   max_limits->mclk)
-   vega12_ps->performance_levels[i].mem_clock =
-   max_limits->mclk;
-   if (vega12_ps->performance_levels[i].gfx_clock >
-   max_limits->sclk)
-   vega12_ps->performance_levels[i].gfx_clock =
-   max_limits->sclk;
-   }
-   }
-
-   cgs_get_active_displays_info(hwmgr->device, );
-
-   

Re: [Linaro-mm-sig] [PATCH 1/5] dma-buf: add optional invalidate_mappings callback v2

2018-03-21 Thread Christian König

On 21.03.2018 at 09:18, Daniel Vetter wrote:

[SNIP]
They're both in i915_gem_userptr.c, somewhat interleaved. Would be
interesting if you could show what you think is going wrong in there
compared to amdgpu_mn.c.


i915 implements only one callback:

static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
    .invalidate_range_start = i915_gem_userptr_mn_invalidate_range_start,
};
For correct operation you always need to implement invalidate_range_end 
as well and add some lock/completion work. Otherwise get_user_pages() can 
grab a reference to the wrong page again.
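
For reference, a stripped-down model of that pairing (purely illustrative, not the
i915 or amdgpu code, and not the real kernel signatures): range_start/range_end
bracket the invalidation, and the submission path re-checks a sequence count under
a lock before it trusts the pages it pinned earlier.

/* toy model, not kernel API: invalidate_range_start/_end bracket the
 * CPU-side invalidation, and submission rejects stale get_user_pages()
 * results by comparing a sequence counter under a lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mn_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long mn_seq;		/* bumped on every invalidation */
static bool invalidation_active;

static void invalidate_range_start(void)
{
	pthread_mutex_lock(&mn_lock);
	invalidation_active = true;
	mn_seq++;
	pthread_mutex_unlock(&mn_lock);
}

static void invalidate_range_end(void)
{
	pthread_mutex_lock(&mn_lock);
	invalidation_active = false;
	pthread_mutex_unlock(&mn_lock);
}

static int submit(unsigned long pages_seq)
{
	int r = 0;

	pthread_mutex_lock(&mn_lock);
	if (invalidation_active || pages_seq != mn_seq)
		r = -1;		/* pages may be stale: redo get_user_pages() */
	/* else: commit the job while still holding the lock */
	pthread_mutex_unlock(&mn_lock);
	return r;
}

int main(void)
{
	unsigned long seq = mn_seq;	/* taken when the pages were pinned */

	invalidate_range_start();
	invalidate_range_end();
	printf("submit with stale pages: %d\n", submit(seq));
	printf("submit with fresh pages: %d\n", submit(mn_seq));
	return 0;
}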


The next problem seems to be that cancel_userptr() doesn't prevent any 
new command submission. E.g.

i915_gem_object_wait(obj, I915_WAIT_ALL, MAX_SCHEDULE_TIMEOUT, NULL);
What prevents new command submissions from using the GEM object directly 
after you have finished waiting here?



I get a feeling we're talking past each other here.

Yeah, agree. In addition to that, I don't know the i915 code very well.


Can you perhaps explain what exactly the race is you're seeing? The i915 
userptr code is
fairly convoluted and pushes a lot of stuff to workers (but then syncs
with those workers again later on), so easily possible you've overlooked
one of these lines that might guarantee already what you think needs to be
guaranteed. We're definitely not aiming to allow userspace to allow
writing to random pages all over.


You don't read/write random pages; there is still a reference to the 
page, so the page can't be reused until you are done.


The problem is rather that you can't guarantee that you write to the 
page which is mapped into the process at that location. E.g. the CPU and 
the GPU might see two different things.



Leaking the IOMMU mappings otoh means rogue userspace could do a bunch of
stray writes (I don't see anywhere code in amdgpu_mn.c to unmap at least
the gpu side PTEs to make stuff inaccessible) and wreak the core kernel's
book-keeping.

In i915 we guarantee that we call set_page_dirty/mark_page_accessed only
after all the mappings are really gone (both GPU PTEs and sg mapping),
guaranteeing that any stray writes from either the GPU or IOMMU will
result in faults (except bugs in the IOMMU, but can't have it all, "IOMMU
actually works" is an assumption behind device isolation).

Well exactly that's the point, the handling in i915 looks incorrect to me.
You need to call set_page_dirty/mark_page_accessed way before the mapping is
destroyed.

To be more precise for userptrs it must be called from the
invalidate_range_start, but i915 seems to delegate everything into a
background worker to avoid the locking problems.

Yeah, and at the end of the function there's a flush_work to make sure the
worker has caught up.

Ah, yes haven't seen that.

But then grabbing the obj->base.dev->struct_mutex lock in 
cancel_userptr() is rather evil. You just silenced lockdep because you 
offloaded that into a work item.


So no matter how you put it i915 is clearly doing something wrong here :)


I know. i915 gem has tons of fallbacks and retry loops (we restart the
entire CS if needed), and i915 userptr pushes the entire get_user_pages
dance off into a worker if the fastpath doesn't succeed and we run out of
memory or hit contended locks. We also have obscene amounts of
__GFP_NORETRY and __GFP_NOWARN all over the place to make sure the core mm
code doesn't do something we don't want it do to do in the fastpaths
(because there's really no point in spending lots of time trying to make
memory available if we have a slowpath fallback with much less
constraints).
Well I haven't audited the code, but I'm pretty sure that just mitigates 
the problem and silenced lockdep instead of really fixing the issue.



We're also not limiting ourselves to GFP_NOIO, but instead have a
recursion detection in our own shrinker callback to avoid these
deadlocks.


Which if you ask me is absolutely horrible. I mean the comment in 
linux/mutex.h sums it up pretty well:
 * This function should not be used, _ever_. It is purely for 
hysterical GEM

 * raisins, and once those are gone this will be removed.


Regards,
Christian.
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [Linaro-mm-sig] [PATCH 1/5] dma-buf: add optional invalidate_mappings callback v2

2018-03-21 Thread Daniel Vetter
On Tue, Mar 20, 2018 at 06:47:57PM +0100, Christian König wrote:
> Am 20.03.2018 um 15:08 schrieb Daniel Vetter:
> > [SNIP]
> > For the in-driver reservation path (CS) having a slow-path that grabs a
> > temporary reference, drops the vram lock and then locks the reservation
> > normally (using the acquire context used already for the entire CS) is a
> > bit tricky, but totally feasible. Ttm doesn't do that though.
> 
> That is exactly what we do in amdgpu as well, it's just not very efficient
> nor reliable to retry getting the right pages for a submission over and over
> again.

Out of curiosity, where's that code? I did read the ttm eviction code way
back, and that one definitely didn't do that. Would be interesting to
update my understanding.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [Linaro-mm-sig] [PATCH 1/5] dma-buf: add optional invalidate_mappings callback v2

2018-03-21 Thread Daniel Vetter
On Tue, Mar 20, 2018 at 06:47:57PM +0100, Christian König wrote:
> Am 20.03.2018 um 15:08 schrieb Daniel Vetter:
> > [SNIP]
> > For the in-driver reservation path (CS) having a slow-path that grabs a
> > temporary reference, drops the vram lock and then locks the reservation
> > normally (using the acquire context used already for the entire CS) is a
> > bit tricky, but totally feasible. Ttm doesn't do that though.
> 
> That is exactly what we do in amdgpu as well, it's just not very efficient
> nor reliable to retry getting the right pages for a submission over and over
> again.
> 
> > [SNIP]
> > Note that there are 2 paths for i915 userptr. One is the mmu notifier, the
> > other one is the root-only hack we have for dubious reasons (or that I
> > really don't see the point in myself).
> 
> Well I'm referring to i915_gem_userptr.c, if that isn't what you are
> exposing then just feel free to ignore this whole discussion.

They're both in i915_gem_userptr.c, somewhat interleaved. Would be
interesting if you could show what you think is going wrong in there
compared to amdgpu_mn.c.

> > > For coherent usage you need to install some lock to prevent concurrent
> > > get_user_pages(), command submission and
> > > invalidate_range_start/invalidate_range_end from the MMU notifier.
> > > 
> > > Otherwise you can't guarantee that you are actually accessing the right 
> > > page
> > > in the case of a fork() or mprotect().
> > Yeah doing that with a full lock will create endless amounts of issues,
> > but I don't see why we need that. Userspace racing stuff with itself gets
> > to keep all the pieces. This is like racing DIRECT_IO against mprotect and
> > fork.
> 
> First of all I strongly disagree on that. A thread calling fork() because it
> wants to run a command is not something we can forbid just because we have a
> gfx stack loaded. That the video driver is not capable of handling that
> correct is certainly not the problem of userspace.
> 
> Second it's not only userspace racing here, you can get into this kind of
> issues just because of transparent huge page support where the background
> daemon tries to reallocate the page tables into bigger chunks.
> 
> And if I'm not completely mistaken you can also open up quite a bunch of
> security problems if you suddenly access the wrong page.

I get a feeling we're talking past each another here. Can you perhaps
explain what exactly the race is you're seeing? The i915 userptr code is
fairly convoluted and pushes a lot of stuff to workers (but then syncs
with those workers again later on), so easily possible you've overlooked
one of these lines that might guarantee already what you think needs to be
guaranteed. We're definitely not aiming to allow userspace to write to
random pages all over.
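
To make the kind of coherency interlock being discussed concrete, here is a
hedged sketch of the common mmu_notifier "sequence number + retry" pattern. It
is not the i915 or amdgpu code; the names my_userptr and my_pin_user_range are
invented for the example.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/mutex.h>

struct my_userptr {
	struct mutex lock;		/* serializes commit vs. invalidate */
	unsigned long notifier_seq;	/* bumped in invalidate_range_start */
};

static int my_pin_user_range(struct my_userptr *u, unsigned long start,
			     int npages, struct page **pages)
{
	unsigned long seq;
	int i, pinned;

again:
	seq = READ_ONCE(u->notifier_seq);

	pinned = get_user_pages_fast(start, npages, FOLL_WRITE, pages);
	if (pinned < 0)
		return pinned;

	mutex_lock(&u->lock);
	if (READ_ONCE(u->notifier_seq) != seq) {
		/* An invalidation raced with the gup; drop and retry. */
		mutex_unlock(&u->lock);
		for (i = 0; i < pinned; i++)
			put_page(pages[i]);
		goto again;
	}
	/* Still under u->lock: safe to program the GPU PTEs / sg table. */
	mutex_unlock(&u->lock);
	return pinned;
}

The matching invalidate_range_start() side would take u->lock, bump
notifier_seq and tear down the GPU mappings, so a submission that races with it
either sees the new sequence number and retries, or was committed under the
lock and gets invalidated properly.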

> > Leaking the IOMMU mappings otoh means rogue userspace could do a bunch of
> > stray writes (I don't see anywhere code in amdgpu_mn.c to unmap at least
> > the gpu side PTEs to make stuff inaccessible) and wreak the core kernel's
> > book-keeping.
> > 
> > In i915 we guarantee that we call set_page_dirty/mark_page_accessed only
> > after all the mappings are really gone (both GPU PTEs and sg mapping),
> > guaranteeing that any stray writes from either the GPU or IOMMU will
> > result in faults (except bugs in the IOMMU, but can't have it all, "IOMMU
> > actually works" is an assumption behind device isolation).
> Well exactly that's the point, the handling in i915 looks incorrect to me.
> You need to call set_page_dirty/mark_page_accessed way before the mapping is
> destroyed.
> 
> To be more precise for userptrs it must be called from the
> invalidate_range_start, but i915 seems to delegate everything into a
> background worker to avoid the locking problems.

Yeah, and at the end of the function there's a flush_work to make sure the
worker has caught up.

The set_page_dirty is also there, but hidden very deep in the call chain
as part of the vma unmapping and backing storage unpinning. But I do think
we guarantee what you expect needs to happen.
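
Just as a reference point for the ordering being debated here, a sketch of
doing the page cleanup directly in the notifier. The names my_userptr_bo and
my_unmap_gpu_and_sg are invented, and no claim is made that either driver works
exactly like this.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/swap.h>

struct my_userptr_bo {
	struct page **pages;
	unsigned long npages;
	bool gpu_wrote;		/* GPU had a writable mapping */
};

static void my_unmap_gpu_and_sg(struct my_userptr_bo *bo)
{
	/* quiesce the engine, clear GPU PTEs, unmap the sg/IOMMU mapping */
}

/* Tear down the device mappings first, then report the CPU page state and
 * drop the page references, all before invalidate_range_start() returns. */
static void my_invalidate_range_start(struct my_userptr_bo *bo)
{
	unsigned long i;

	my_unmap_gpu_and_sg(bo);

	for (i = 0; i < bo->npages; i++) {
		if (bo->gpu_wrote)
			set_page_dirty_lock(bo->pages[i]);
		mark_page_accessed(bo->pages[i]);
		put_page(bo->pages[i]);
	}
	bo->npages = 0;
}

i915's worker-based variant defers this loop, but as noted above the notifier
flushes the worker before returning.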

> > > Felix and I hammered for quite some time on amdgpu until all of this was
> > > handled correctly, see drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c.
> > Maybe we should have more shared code in this, it seems to be a source of
> > endless amounts of fun ...
> > 
> > > I can try to gather the lockdep splat from my mail history, but it
> > > essentially took us multiple years to get rid of all of them.
> > I'm very much interested in specifically the splat that makes it
> > impossible for you folks to remove the sg mappings. That one sounds bad.
> > And would essentially make mmu_notifiers useless for their primary use
> > case, which is handling virtual machines where you really have to make
> > sure the IOMMU mapping is gone before you claim it's gone, because there's
> > no 2nd level of device checks (like GPU PTEs, or command checker) catching
> > stray writes.
> 
> Well to be more precise the problem is not that we 

[PATCH] drm/amd/pp: Fix set wrong temperature range on smu7

2018-03-21 Thread Rex Zhu
Fix an issue where the thermal irq was always triggered
when the GPU was below the detected temperature range.

The low temp in the default thermal policy is set to -273,
so a signed int type is needed for the low temp parameter.

Change-Id: I1141b2698233ecd1e984b80eaf371966ab1aeef0
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
index 4dd26eb..44527755 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
@@ -308,7 +308,7 @@ int smu7_thermal_get_temperature(struct pp_hwmgr *hwmgr)
 * @exception PP_Result_BadInput if the input data is not valid.
 */
 static int smu7_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
-   uint32_t low_temp, uint32_t high_temp)
+   int low_temp, int high_temp)
 {
int low = SMU7_THERMAL_MINIMUM_ALERT_TEMP *
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
-- 
1.9.1
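
For anyone wondering why the unsigned parameter breaks here, a small
stand-alone illustration (hypothetical values, not driver code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int      low_policy = -273;		/* default "no lower limit" */
	uint32_t low_u      = (uint32_t)low_policy;
	int      low_s      = low_policy;

	/* As unsigned, -273 wraps to ~4.29 billion, so any "temp < low"
	 * style check fires immediately and the thermal irq never stops. */
	printf("as uint32_t: %u\n", low_u);	/* 4294967023 */
	printf("as int:      %d\n", low_s);	/* -273       */
	return 0;
}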

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Christian König

On 21.03.2018 at 06:08, Marek Olšák wrote:
On Tue, Mar 20, 2018 at 4:16 PM, Christian König 
> wrote:


That's what I meant with use up the otherwise unused VRAM. I don't
see any disadvantage of always setting GTT as second domain on APUs.

My assumption was that we dropped this in userspace for
displayable surfaces, but Marek proved that wrong.

So what we should do is actually to add GTT as fallback to all BOs
on APUs in Mesa and only make sure that the kernel is capable of
handling GTT with optimal performance (e.g. have user huge pages
etc..).


VRAM|GTT is practically as good as GTT. VRAM with BO priorities and 
eviction throttling is the true "VRAM|GTT".


I don't know how else to make use of VRAM intelligently.


Well why not set VRAM|GTT as default on APUs? That should still save 
quite a bunch of moves even with throttling.


I mean there really shouldn't be any advantage to using VRAM any more, 
except that we want to use it up as long as it is available.


Christian.



Marek
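
For reference, roughly what a VRAM|GTT default would look like from user space
through libdrm. This is a hedged sketch only, not the actual Mesa code, and it
assumes the kernel accepts the GTT fallback for displayable BOs as discussed
above; the helper name alloc_vram_gtt_bo is invented.

#include <amdgpu.h>
#include <amdgpu_drm.h>
#include <stdint.h>
#include <string.h>

/* Allocate a BO with VRAM preferred and GTT as an allowed fallback,
 * roughly the "VRAM|GTT" default being discussed. */
static int alloc_vram_gtt_bo(amdgpu_device_handle dev, uint64_t size,
			     amdgpu_bo_handle *out)
{
	struct amdgpu_bo_alloc_request req;

	memset(&req, 0, sizeof(req));
	req.alloc_size = size;
	req.phys_alignment = 4096;
	req.preferred_heap = AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT;
	req.flags = 0;

	return amdgpu_bo_alloc(dev, &req, out);
}

Whether the BO then actually ends up in VRAM or GTT is left to the kernel's
placement and eviction logic, which is exactly the priority/throttling
behaviour Marek describes.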


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display support

2018-03-21 Thread Christian König
But for CZ/ST, due to hardware limitation as discussed before, we 
either use VRAM or GTT, not both.
That is actually not correct; as far as I have read up on it, the overhead of 
switching between VRAM and GTT placement is minimal.


We should just make sure that we don't do this on every page flip, e.g. 
have double or triple buffering where one BO is in GTT and one in VRAM.


Regards,
Christian.

On 20.03.2018 at 21:38, Li, Samuel wrote:


> I think we can also have the case of systems with similar amounts of 
> carve out and system ram.  E.g., on a system with 4 GB of system 
> memory with 1 GB carved out for vram.  It would be a big waste not to 
> use VRAM.  We'll probably need a heuristic at some point.


Agreed. But for CZ/ST, due to hardware limitation as discussed before, 
we either use VRAM or GTT, not both. That might be changed for later 
ASICs, but it is beyond the scope of this patch.


Regards,

Samuel Li

*From:*Koenig, Christian
*Sent:* Tuesday, March 20, 2018 4:17 PM
*To:* Deucher, Alexander ; Marek Olšák 

*Cc:* Alex Deucher ; Michel Dänzer 
; Li, Samuel ; amd-gfx list 

*Subject:* Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather display 
support


That's what I meant with use up the otherwise unused VRAM. I don't see 
any disadvantage of always setting GTT as second domain on APUs.


My assumption was that we dropped this in userspace for displayable 
surfaces, but Marek proved that wrong.


So what we should do is actually to add GTT as fallback to all BOs on 
APUs in Mesa and only make sure that the kernel is capable of handling 
GTT with optimal performance (e.g. have user huge pages etc..).


Christian.

On 20.03.2018 at 21:04, Deucher, Alexander wrote:

I think we can also have the case of systems with similar amounts
of carve out and system ram.  E.g., on a system with 4 GB of
system memory with 1 GB carved out for vram.  It would be a big
waste not to use VRAM.  We'll probably need a heuristic at some point.

Alex
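
Purely as an illustration of the kind of heuristic mentioned above, with
made-up thresholds and a hypothetical helper name (pick_display_domains); this
is not kernel or Mesa code.

#include <stdbool.h>
#include <stdint.h>

static uint32_t pick_display_domains(uint64_t vram_bytes,
				     uint64_t sysram_bytes,
				     bool sg_display_supported)
{
	const uint32_t DOMAIN_VRAM = 0x4;	/* AMDGPU_GEM_DOMAIN_VRAM */
	const uint32_t DOMAIN_GTT  = 0x2;	/* AMDGPU_GEM_DOMAIN_GTT  */

	if (!sg_display_supported)
		return DOMAIN_VRAM;

	/* e.g. 1 GB carved out of 4 GB: too much to leave idle, keep VRAM
	 * as the preferred placement with GTT as fallback. */
	if (vram_bytes * 4 >= sysram_bytes)
		return DOMAIN_VRAM | DOMAIN_GTT;

	/* Tiny carve-out: scan out of GTT and keep VRAM for what needs it. */
	return DOMAIN_GTT;
}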



*From:*Christian König 

*Sent:* Tuesday, March 20, 2018 2:32:49 PM
*To:* Marek Olšák; Koenig, Christian
*Cc:* Alex Deucher; Deucher, Alexander; Michel Dänzer; Li, Samuel;
amd-gfx list
*Subject:* Re: [PATCH 1/2] drm/amdgpu: Enable scatter gather
display support

I don't think that is a good idea.

Ideally GTT should now have the same performance as VRAM on APUs
and we should use VRAM only for things where we absolutely have to
and to actually use up the otherwise unused VRAM.

Can you run some tests with all BOs forced to GTT and see if there
is any performance regression?

Christian.

On 20.03.2018 at 15:51, Marek Olšák wrote:

On Tue, Mar 20, 2018 at 9:55 AM, Christian König
> wrote:

Yes, exactly. And if I remember correctly Mesa used to
always set GTT as fallback on APUs, correct?

"used to" is the key part. Mesa doesn't force GTT on APUs
anymore. It expects that the combination of BO priorities and
BO move throttling will result in optimal BO placements over time.

Marek


The problem seems to be that this fallback isn't set for
displayable BOs.

So what needs to be done is to just enable this fallback
for displayable BOs as well if the kernel can handle it.

Christian.



On 20.03.2018 at 00:01, Marek Olšák wrote:

In theory, Mesa doesn't have to do anything. It can
continue setting VRAM and if the kernel has to put a
display buffer into GTT, it doesn't matter (for Mesa).
Whether the VRAM placement is really used is largely
determined by BO priorities.

The way I understand scatter/gather is that it only
allows the GTT placement. It doesn't force the GTT
placement. Mesa also doesn't force the GTT placement.

Marek

On Mon, Mar 19, 2018 at 5:12 PM, Alex Deucher
>
wrote:

On Mon, Mar 19, 2018 at 4:29 PM, Li, Samuel
> wrote:
>>to my earlier point, there may be cases where it
is advantageous to put
>> display buffers in vram even if s/g display is
supported
>
> Agreed. That is also why the patch has the
options to let 

Re: [PATCH 00/20] Add KFD GPUVM support for dGPUs v4

2018-03-21 Thread Oded Gabbay
On Mon, Mar 19, 2018 at 9:05 PM, Felix Kuehling  wrote:
> On 2018-03-19 12:39 PM, Christian König wrote:
>> So coming back to this series once more.
>>
>> Patch #1, #3 are Reviewed-by: Christian König .
>>
>> Patch #2, #4 - #13 and #18-#19 are Acked-by: Christian König
>> .
>>
>> Patch #14: What's the difference to setting vramlimit=$size_of_bar ?
>
> The debug_largebar option only affects KFD. Graphics can still use all
> memory.
>
>>
>> Patch #15 & #20: Why is that actually still needed? I thought we have
>> fixed all dependencies and can now use the "standard" way of attaching
>> fences to reservation objects to do this.
>
> Patch 15 adds a KFD-specific MMU notifier, because the graphics MMU
> notifier deals with amdgpu_cs command submission. We need a completely
> different treatment of MMU notifiers for KFD. We need to stop user mode
> queues.
>
> Patch 20 implements the user mode queue preemption mechanism, and the
> corresponding restore function that re-validates userptr BOs before
> restarting user mode queues.
>
> I think you're implying that the graphics MMU notifier would wait for
> the eviction fence and trigger a regular eviction in KFD. I haven't
> tried that. The MMU notifier and userptr eviction mechanism was
> implemented in KFD before we had the TTM evictions. We didn't go back
> and revisit it after that. There are a few major differences that make
> me want to keep the two types of evictions separate, though:
>
>   * TTM evictions are the result of memory pressure (triggered by
> another process)
>   o Most MMU notifiers are triggered by something in the same
> process (fork, mprotect, etc.)
>   o Thus the restore delay after MMU notifiers can be much shorter
>   * TTM evictions are done asynchronously in a delayed worker
>   o MMU notifiers are synchronous, queues need to be stopped before
> returning
>   * KFD's TTM eviction/restore code doesn't handle userptrs
> (get_user_pages, etc)
>   o MMU notifier restore worker is specialized to handle just userptrs
>
>
>>
>> Saying so I still need to take a closer look at patch #20.
>>
>> Patch #16: Looks good to me in general, but I think it would be safer
>> if we grab a reference to the task structure. Otherwise grabbing pages
>> from a mm_struct sounds a bit scary to me.
>
> You're right. I've never seen it cause problems when testing process
> termination, probably because during process termination KFD cancels the
> delayed worker that calls this function. But I'll fix this and take a
> proper reference.
>
>>
>> Patch #17: I think it would be better to allocate the node when the
>> locks are not held and free it when we find that it isn't used, but no
>> big deal.
>
> OK. I'll change that.
>
> Thanks,
>   Felix
>
>>
>> Regards,
>> Christian.
>>
>> On 15.03.2018 at 22:27, Felix Kuehling wrote:
>>> Rebased and integrated review feedback from v3:
>>> * Removed vm->vm_context field
>>> * Use uninterruptible waiting in initial PD validation to avoid
>>> ERESTARTSYS
>>> * Return number of successful map/unmap operations in failure cases
>>> * Facilitate partial retry after failed map/unmap
>>> * Added comments with parameter descriptions to new APIs
>>> * Defined AMDKFD_IOC_FREE_MEMORY_OF_GPU write-only
>>>
>>> This patch series also adds Userptr support in patches 15-20.
>>>
>>> Felix Kuehling (19):
>>>drm/amdgpu: Move KFD-specific fields into struct amdgpu_vm
>>>drm/amdgpu: Fix initial validation of PD BO for KFD VMs
>>>drm/amdgpu: Add helper to turn an existing VM into a compute VM
>>>drm/amdgpu: Add kfd2kgd interface to acquire an existing VM
>>>drm/amdkfd: Create KFD VMs on demand
>>>drm/amdkfd: Remove limit on number of GPUs
>>>drm/amdkfd: Aperture setup for dGPUs
>>>drm/amdkfd: Add per-process IDR for buffer handles
>>>drm/amdkfd: Allocate CWSR trap handler memory for dGPUs
>>>drm/amdkfd: Add TC flush on VMID deallocation for Hawaii
>>>drm/amdkfd: Add ioctls for GPUVM memory management
>>>drm/amdkfd: Kmap event page for dGPUs
>>>drm/amdkfd: Add module option for testing large-BAR functionality
>>>drm/amdgpu: Add MMU notifier type for KFD userptr
>>>drm/amdgpu: Enable amdgpu_ttm_tt_get_user_pages in worker threads
>>>drm/amdgpu: GFP_NOIO while holding locks taken in MMU notifier
>>>drm/amdkfd: GFP_NOIO while holding locks taken in MMU notifier
>>>drm/amdkfd: Add quiesce_mm and resume_mm to kgd2kfd_calls
>>>drm/amdgpu: Add userptr support for KFD
>>>
>>> Oak Zeng (1):
>>>drm/amdkfd: Populate DRM render device minor
>>>
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h |  37 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c  |   1 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c  |   1 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 818
>>> ++---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  
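
Regarding the point above about taking a proper reference before using another
process's pages (patch #16), a minimal sketch of the usual task/mm reference
pattern. Illustrative only, not the KFD code; pin_pages_of is an invented
helper name.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
#include <linux/sched/task.h>

static int pin_pages_of(struct task_struct *task)
{
	struct mm_struct *mm;
	int ret = 0;

	get_task_struct(task);		/* keep the task alive             */
	mm = get_task_mm(task);		/* takes an mm_users ref, or NULL  */
	if (!mm) {
		ret = -ESRCH;		/* task already exited             */
		goto out_put_task;
	}

	/* ... take the mmap lock and call a get_user_pages_remote()-style
	 * helper here, from the worker thread ... */

	mmput(mm);
out_put_task:
	put_task_struct(task);
	return ret;
}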

Re: [PATCH 2/2] drm/amd/pp: Register smu irq for legacy asics

2018-03-21 Thread Deucher, Alexander
Series is:

Reviewed-by: Alex Deucher 


From: amd-gfx  on behalf of Rex Zhu 

Sent: Wednesday, March 21, 2018 1:51:28 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: [PATCH 2/2] drm/amd/pp: Register smu irq for legacy asics

Change-Id: I1927175adfecbcfe99908f06959f8c0a507d3278
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 33 
 1 file changed, 33 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 8a81360..5323f74 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -3996,8 +3996,41 @@ static int smu7_set_max_fan_rpm_output(struct pp_hwmgr 
*hwmgr, uint16_t us_max_f
 PPSMC_MSG_SetFanRpmMax, us_max_fan_rpm);
 }

+static const struct amdgpu_irq_src_funcs smu7_irq_funcs = {
+   .process = phm_irq_process,
+};
+
 static int smu7_register_irq_handlers(struct pp_hwmgr *hwmgr)
 {
+   struct amdgpu_irq_src *source =
+   kzalloc(sizeof(struct amdgpu_irq_src), GFP_KERNEL);
+
+   if (!source)
+   return -ENOMEM;
+
+   source->funcs = &smu7_irq_funcs;
+
+   if (hwmgr->thermal_controller.ucType ==
+   hwmgr->default_thermal_ctrl_type ||
+   hwmgr->thermal_controller.ucType ==
+   ATOM_TONGA_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   230,
+   source);
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   231,
+   source);
+   }
+
+   /* Register CTF(GPIO_19) interrupt */
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   83,
+   source);
+
 return 0;
 }

--
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/2] drm/amd/pp: Register smu irq for legacy asics

2018-03-21 Thread Rex Zhu
Change-Id: I1927175adfecbcfe99908f06959f8c0a507d3278
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 33 
 1 file changed, 33 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 8a81360..5323f74 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -3996,8 +3996,41 @@ static int smu7_set_max_fan_rpm_output(struct pp_hwmgr 
*hwmgr, uint16_t us_max_f
PPSMC_MSG_SetFanRpmMax, us_max_fan_rpm);
 }
 
+static const struct amdgpu_irq_src_funcs smu7_irq_funcs = {
+   .process = phm_irq_process,
+};
+
 static int smu7_register_irq_handlers(struct pp_hwmgr *hwmgr)
 {
+   struct amdgpu_irq_src *source =
+   kzalloc(sizeof(struct amdgpu_irq_src), GFP_KERNEL);
+
+   if (!source)
+   return -ENOMEM;
+
+   source->funcs = &smu7_irq_funcs;
+
+   if (hwmgr->thermal_controller.ucType ==
+   hwmgr->default_thermal_ctrl_type ||
+   hwmgr->thermal_controller.ucType ==
+   ATOM_TONGA_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   230,
+   source);
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   231,
+   source);
+   }
+
+   /* Register CTF(GPIO_19) interrupt */
+   amdgpu_irq_add_id((struct amdgpu_device *)(hwmgr->adev),
+   AMDGPU_IH_CLIENTID_LEGACY,
+   83,
+   source);
+
return 0;
 }
 
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 1/2] drm/amd/pp: Initialize default thermal control type for each asic

2018-03-21 Thread Rex Zhu
Signed-off-by: Rex Zhu 

Change-Id: I4e1b3f4bc66f28cc6a015182452d426ddd611224
---
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c| 9 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 2 +-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h  | 1 +
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
index 6318438..3f1e822 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
@@ -33,6 +33,8 @@
 #include "ppsmc.h"
 #include "amd_acpi.h"
 #include "pp_psm.h"
+#include "atombios.h"
+#include "pptable.h"
 
 extern const struct pp_smumgr_func ci_smu_funcs;
 extern const struct pp_smumgr_func smu8_smu_funcs;
@@ -87,6 +89,7 @@ int hwmgr_early_init(struct pp_hwmgr *hwmgr)
hwmgr->fan_ctrl_is_in_default_mode = true;
hwmgr->reload_fw = 1;
hwmgr_init_workload_prority(hwmgr);
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_NONE;
 
switch (hwmgr->chip_family) {
case AMDGPU_FAMILY_CI:
@@ -139,6 +142,7 @@ int hwmgr_early_init(struct pp_hwmgr *hwmgr)
case AMDGPU_FAMILY_AI:
switch (hwmgr->chip_id) {
case CHIP_VEGA10:
+   hwmgr->default_thermal_ctrl_type = 
ATOM_PP_THERMALCONTROLLER_VEGA10;
hwmgr->smumgr_funcs = &vega10_smu_funcs;
vega10_hwmgr_init(hwmgr);
break;
@@ -418,6 +422,7 @@ int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)
phm_cap_set(hwmgr->platform_descriptor.platformCaps,

PHM_PlatformCaps_TCPRamping);
}
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_POLARIS10;
return 0;
 }
 
@@ -433,6 +438,7 @@ int fiji_set_asic_special_caps(struct pp_hwmgr *hwmgr)
PHM_PlatformCaps_TDRamping);
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_TCPRamping);
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_FIJI;
return 0;
 }
 
@@ -453,6 +459,7 @@ int tonga_set_asic_special_caps(struct pp_hwmgr *hwmgr)
  PHM_PlatformCaps_UVDPowerGating);
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
  PHM_PlatformCaps_VCEPowerGating);
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_TONGA;
return 0;
 }
 
@@ -468,6 +475,7 @@ int topaz_set_asic_special_caps(struct pp_hwmgr *hwmgr)
PHM_PlatformCaps_TDRamping);
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_TCPRamping);
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_ICELAND;
return 0;
 }
 
@@ -485,5 +493,6 @@ int ci_set_asic_special_caps(struct pp_hwmgr *hwmgr)
PHM_PlatformCaps_MemorySpreadSpectrumSupport);
phm_cap_set(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_EngineSpreadSpectrumSupport);
+   hwmgr->default_thermal_ctrl_type = ATOM_PP_THERMALCONTROLLER_CISLANDS;
return 0;
 }
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index 7bb9dd9..f177183 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -4830,7 +4830,7 @@ static int vega10_register_irq_handlers(struct pp_hwmgr 
*hwmgr)
source->funcs = _irq_funcs;
 
if (hwmgr->thermal_controller.ucType ==
-   ATOM_VEGA10_PP_THERMALCONTROLLER_VEGA10 ||
+   hwmgr->default_thermal_ctrl_type ||
hwmgr->thermal_controller.ucType ==
ATOM_VEGA10_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) 
{
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h 
b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index efdcf31..f32d5db 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -720,6 +720,7 @@ struct pp_hwmgr {
uint32_t usec_timeout;
void *pptable;
struct phm_platform_descriptor platform_descriptor;
+   uint8_t default_thermal_ctrl_type;
void *backend;
 
void *smu_backend;
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx