Re: [PATCH] drm/amdgpu: Move IB pool init and fini.

2019-03-07 Thread Koenig, Christian
On 07.03.19 at 17:59, Grodzovsky, Andrey wrote:
> On 3/7/19 6:29 AM, Christian König wrote:
>> On 06.03.19 at 22:25, Andrey Grodzovsky wrote:
>>> Problem:
>>> Using SDMA for TLB invalidation in certain ASICs exposed a problem
>>> of the IB pool not being ready while SDMA is already up on init, and
>>> already shut down while SDMA is still running on fini. This caused
>>> IB allocation failures. A temporary fix was committed into a
>>> bringup branch, but this is the generic fix.
>>>
>>> Fix:
>>> Init the IB pool right after the GMC is ready but before SDMA is ready.
>>> Do the opposite for fini.
>>>
>>> Signed-off-by: Andrey Grodzovsky 
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 20 +++++++++++---------
>>>    1 file changed, 11 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index 00def57..c05a551 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -1686,6 +1686,13 @@ static int amdgpu_device_ip_init(struct
>>> amdgpu_device *adev)
>>>    }
>>>    }
>>>    +    r = amdgpu_ib_pool_init(adev);
>>> +    if (r) {
>>> +    dev_err(adev->dev, "IB initialization failed (%d).\n", r);
>>> +    amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
>>> +    goto init_failed;
>>> +    }
>>> +
>>>    r = amdgpu_ucode_create_bo(adev); /* create ucode bo when
>>> sw_init complete*/
>>>    if (r)
>>>    goto init_failed;
>>> @@ -1888,7 +1895,8 @@ static int amdgpu_device_ip_fini(struct
>>> amdgpu_device *adev)
>>>    for (i = 0; i < adev->num_ip_blocks; i++) {
>>>    if (!adev->ip_blocks[i].status.hw)
>>>    continue;
>>> -    if (adev->ip_blocks[i].version->type ==
>>> AMD_IP_BLOCK_TYPE_SMC) {
>>> +    if (adev->ip_blocks[i].version->type ==
>>> AMD_IP_BLOCK_TYPE_SMC ||
>>> +    adev->ip_blocks[i].version->type ==
>>> AMD_IP_BLOCK_TYPE_SDMA) {
>> Why is that necessary?
>
> I assumed SDMA hw fini would call a TLB flush, but after testing with
> this removed it didn't, so OK.
>
>>>    r = adev->ip_blocks[i].version->funcs->hw_fini((void
>>> *)adev);
>>>    /* XXX handle errors */
>>>    if (r) {
>>> @@ -1900,6 +1908,8 @@ static int amdgpu_device_ip_fini(struct
>>> amdgpu_device *adev)
>>>    }
>>>    }
>>>    +    amdgpu_ib_pool_fini(adev);
>> Thinking more about it this should probably be done right before GMC
>> SW fini.
>>
>> IIRC we already have an extra case for this somewhere.
>>
>> Christian.
>
> Why do you think that's the proper place? It seemed to work fine where it was.

IIRC we already freed up a couple of other BOs at this place, e.g. the 
ucode etc.

And you moved the ib_pool_init directly before the ucode allocation, so 
that seems to be a good choice for freeing it up again.

Christian.

>
> Anyway, I moved it and resent v2.
>
> Andrey
>
>
>>> +
>>>    for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>>>    if (!adev->ip_blocks[i].status.hw)
>>>    continue;
>>> @@ -2651,13 +2661,6 @@ int amdgpu_device_init(struct amdgpu_device
>>> *adev,
>>>    /* Get a log2 for easy divisions. */
>>>    adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
>>>    -    r = amdgpu_ib_pool_init(adev);
>>> -    if (r) {
>>> -    dev_err(adev->dev, "IB initialization failed (%d).\n", r);
>>> -    amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
>>> -    goto failed;
>>> -    }
>>> -
>>>    amdgpu_fbdev_init(adev);
>>>      r = amdgpu_pm_sysfs_init(adev);
>>> @@ -2735,7 +2738,6 @@ void amdgpu_device_fini(struct amdgpu_device
>>> *adev)
>>>    else
>>>    drm_atomic_helper_shutdown(adev->ddev);
>>>    }
>>> -    amdgpu_ib_pool_fini(adev);
>>>    amdgpu_fence_driver_fini(adev);
>>>    amdgpu_pm_sysfs_fini(adev);
>>>    amdgpu_fbdev_fini(adev);


Re: [PATCH v2] drm/amdgpu: Move IB pool init and fini v2

2019-03-07 Thread Koenig, Christian
On 07.03.19 at 17:57, Andrey Grodzovsky wrote:
> Problem:
> Using SDMA for TLB invalidation in certain ASICs exposed a problem
> of the IB pool not being ready while SDMA is already up on init, and
> already shut down while SDMA is still running on fini. This caused
> IB allocation failures. A temporary fix was committed into a
> bringup branch, but this is the generic fix.
>
> Fix:
> Init the IB pool right after the GMC is ready but before SDMA is ready.
> Do the opposite for fini.
>
> v2: Remove restriction on SDMA early init and move amdgpu_ib_pool_fini
>
> Signed-off-by: Andrey Grodzovsky 

Reviewed-by: Christian König 

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 16 ++++++++--------
>   1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 00def57..0c4a580 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1686,6 +1686,13 @@ static int amdgpu_device_ip_init(struct amdgpu_device 
> *adev)
>   }
>   }
>   
> + r = amdgpu_ib_pool_init(adev);
> + if (r) {
> + dev_err(adev->dev, "IB initialization failed (%d).\n", r);
> + amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
> + goto init_failed;
> + }
> +
>   r = amdgpu_ucode_create_bo(adev); /* create ucode bo when sw_init 
> complete*/
>   if (r)
>   goto init_failed;
> @@ -1924,6 +1931,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device 
> *adev)
>   amdgpu_free_static_csa(&adev->virt.csa_obj);
>   amdgpu_device_wb_fini(adev);
>   amdgpu_device_vram_scratch_fini(adev);
> + amdgpu_ib_pool_fini(adev);
>   }
>   
>   r = adev->ip_blocks[i].version->funcs->sw_fini((void *)adev);
> @@ -2651,13 +2659,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   /* Get a log2 for easy divisions. */
>   adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
>   
> - r = amdgpu_ib_pool_init(adev);
> - if (r) {
> - dev_err(adev->dev, "IB initialization failed (%d).\n", r);
> - amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
> - goto failed;
> - }
> -
>   amdgpu_fbdev_init(adev);
>   
>   r = amdgpu_pm_sysfs_init(adev);
> @@ -2735,7 +2736,6 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
>   else
>   drm_atomic_helper_shutdown(adev->ddev);
>   }
> - amdgpu_ib_pool_fini(adev);
>   amdgpu_fence_driver_fini(adev);
>   amdgpu_pm_sysfs_fini(adev);
>   amdgpu_fbdev_fini(adev);


RE: [PATCH] remove amdgpu_vrr_atom

2019-03-07 Thread Cui, Flora
Drop amd-gfx.

Hi Michel, 
Thanks for your info.

Hi Hui,
Could you make similar changes in ugl?

-Original Message-
From: Michel Dänzer  
Sent: Thursday, March 07, 2019 5:05 PM
To: Cui, Flora ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] remove amdgpu_vrr_atom


Hi Flora,


note that xf86-video-amdgpu patches are reviewed as GitLab merge requests these 
days, as documented in README.md:

https://gitlab.freedesktop.org/xorg/driver/xf86-video-amdgpu/merge_requests


On 2019-03-07 4:25 a.m., Cui, Flora wrote:
> it doesn't work as expected

Why is that? Maybe you ran into the Mesa bug fixed by
https://gitlab.freedesktop.org/mesa/mesa/commit/c0a540f32067cc8cb126d9aa1eb12a11cf15373a?merge_request_iid=202
, or a similar bug elsewhere?


-- 
Earthling Michel Dänzer   |  https://www.amd.com
Libre software enthusiast | Mesa and X developer

Re: [PATCH 1/2] drm/amdgpu: use ring/hash for fault handling on GMC9 v2

2019-03-07 Thread Kuehling, Felix
Hmm, that's a clever (and elegant) little data structure. The series is 
Reviewed-by: Felix Kuehling 

Regards,
   Felix

On 3/7/2019 8:28 AM, Christian König wrote:
> Further testing showed that the idea with the chash doesn't work as expected.
> In particular, we can't predict when we can remove the entries from the hash
> again.
>
> So replace the chash with a ring buffer/hash mix where entries in the 
> container
> age automatically based on their timestamp.
>
> v2: use ring buffer / hash mix
>
> Signed-off-by: Christian König 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 49 
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h | 34 ++
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c   | 60 ++---
>   3 files changed, 86 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 5a32a0d2ad31..579cadd16886 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -240,3 +240,52 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, 
> struct amdgpu_gmc *mc)
>   dev_info(adev->dev, "AGP: %lluM 0x%016llX - 0x%016llX\n",
>   mc->agp_size >> 20, mc->agp_start, mc->agp_end);
>   }
> +
> +/**
> + * amdgpu_gmc_filter_faults - filter VM faults
> + *
> + * @adev: amdgpu device structure
> + * @addr: address of the VM fault
> + * @pasid: PASID of the process causing the fault
> + * @timestamp: timestamp of the fault
> + *
> + * Returns:
> + * True if the fault was filtered and should not be processed further.
> + * False if the fault is a new one and needs to be handled.
> + */
> +bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev, uint64_t addr,
> +   uint16_t pasid, uint64_t timestamp)
> +{
> + struct amdgpu_gmc *gmc = &adev->gmc;
> +
> + uint64_t stamp, key = addr << 4 | pasid;
> + struct amdgpu_gmc_fault *fault;
> + uint32_t hash;
> +
> + /* If we don't have space left in the ring buffer return immediately */
> + stamp = max(timestamp, AMDGPU_GMC_FAULT_TIMEOUT + 1) -
> + AMDGPU_GMC_FAULT_TIMEOUT;
> + if (gmc->fault_ring[gmc->last_fault].timestamp >= stamp)
> + return true;
> +
> + /* Try to find the fault in the hash */
> + hash = hash_64(key, AMDGPU_GMC_FAULT_HASH_ORDER);
> + fault = &gmc->fault_ring[gmc->fault_hash[hash].idx];
> + do {
> + if (fault->key == key)
> + return true;
> +
> + stamp = fault->timestamp;
> + fault = &gmc->fault_ring[fault->next];
> + } while (fault->timestamp < stamp);
> +
> + /* Add the fault to the ring */
> + fault = &gmc->fault_ring[gmc->last_fault];
> + fault->key = key;
> + fault->timestamp = timestamp;
> +
> + /* And update the hash */
> + fault->next = gmc->fault_hash[hash].idx;
> + gmc->fault_hash[hash].idx = gmc->last_fault++;
> + return false;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> index 6ce45664ff87..071145ac67b5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> @@ -43,8 +43,34 @@
>*/
>   #define AMDGPU_GMC_HOLE_MASK 0x0000ffffffffffffULL
>   
> +/*
> + * Ring size as power of two for the log of recent faults.
> + */
> +#define AMDGPU_GMC_FAULT_RING_ORDER  8
> +#define AMDGPU_GMC_FAULT_RING_SIZE   (1 << AMDGPU_GMC_FAULT_RING_ORDER)
> +
> +/*
> + * Hash size as power of two for the log of recent faults
> + */
> +#define AMDGPU_GMC_FAULT_HASH_ORDER  8
> +#define AMDGPU_GMC_FAULT_HASH_SIZE   (1 << AMDGPU_GMC_FAULT_HASH_ORDER)
> +
> +/*
> + * Number of IH timestamp ticks until a fault is considered handled
> + */
> +#define AMDGPU_GMC_FAULT_TIMEOUT 5000ULL
> +
>   struct firmware;
>   
> +/*
> + * GMC page fault information
> + */
> +struct amdgpu_gmc_fault {
> + uint64_ttimestamp;
> + uint64_tnext:AMDGPU_GMC_FAULT_RING_ORDER;
> + uint64_tkey:52;
> +};
> +
>   /*
>* VMHUB structures, functions & helpers
>*/
> @@ -141,6 +167,12 @@ struct amdgpu_gmc {
>   struct kfd_vm_fault_info *vm_fault_info;
>   atomic_tvm_fault_info_updated;
>   
> + struct amdgpu_gmc_fault fault_ring[AMDGPU_GMC_FAULT_RING_SIZE];
> + struct {
> + uint64_tidx:AMDGPU_GMC_FAULT_RING_ORDER;
> + } fault_hash[AMDGPU_GMC_FAULT_HASH_SIZE];
> + uint64_tlast_fault:AMDGPU_GMC_FAULT_RING_ORDER;
> +
>   const struct amdgpu_gmc_funcs   *gmc_funcs;
>   
>   struct amdgpu_xgmi xgmi;
> @@ -195,5 +227,7 @@ void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
> struct amdgpu_gmc *mc);
>   void amdgpu_gmc_agp_location(struct amdgpu_device *adev,
>struct amdgpu_gmc *mc);
> +bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev, uint64_t 

[PATCH] drm/amdgpu: Cosmetic change for calling func amdgpu_gmc_vram_location

2019-03-07 Thread Zeng, Oak
Use the function parameter mc as the second parameter of amdgpu_gmc_vram_location,
so the code looks more consistent.

Change-Id: Ib5b06d188ebc6e82eb0d4d2a57d995149bf5f7a5
Signed-off-by: Oak Zeng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index 8a812e1..2edb7fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -225,7 +225,7 @@ static void gmc_v6_0_vram_gtt_location(struct amdgpu_device 
*adev,
	u64 base = RREG32(mmMC_VM_FB_LOCATION) & 0xFFFF;
base <<= 24;
 
-   amdgpu_gmc_vram_location(adev, &adev->gmc, base);
+   amdgpu_gmc_vram_location(adev, mc, base);
amdgpu_gmc_gart_location(adev, mc);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
index f9f5bef..8a76bfe 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -242,7 +242,7 @@ static void gmc_v7_0_vram_gtt_location(struct amdgpu_device 
*adev,
	u64 base = RREG32(mmMC_VM_FB_LOCATION) & 0xFFFF;
base <<= 24;
 
-   amdgpu_gmc_vram_location(adev, &adev->gmc, base);
+   amdgpu_gmc_vram_location(adev, mc, base);
amdgpu_gmc_gart_location(adev, mc);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 34d547c..2880a14 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -433,7 +433,7 @@ static void gmc_v8_0_vram_gtt_location(struct amdgpu_device 
*adev,
	base = RREG32(mmMC_VM_FB_LOCATION) & 0xFFFF;
base <<= 24;
 
-   amdgpu_gmc_vram_location(adev, &adev->gmc, base);
+   amdgpu_gmc_vram_location(adev, mc, base);
amdgpu_gmc_gart_location(adev, mc);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 84904bd..0252345 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -1013,7 +1013,7 @@ static void gmc_v9_0_vram_gtt_location(struct 
amdgpu_device *adev,
base = mmhub_v1_0_get_fb_location(adev);
/* add the xgmi offset of the physical node */
base += adev->gmc.xgmi.physical_node_id * 
adev->gmc.xgmi.node_segment_size;
-   amdgpu_gmc_vram_location(adev, &adev->gmc, base);
+   amdgpu_gmc_vram_location(adev, mc, base);
amdgpu_gmc_gart_location(adev, mc);
if (!amdgpu_sriov_vf(adev))
amdgpu_gmc_agp_location(adev, mc);
-- 
2.7.4


[pull] amdgpu drm-next-5.1

2019-03-07 Thread Alex Deucher
Hi Dave, Daniel,

Fixes for 5.1:
- Powerplay fixes
- DC fixes
- Fix locking around indirect register access in some cases
- KFD MQD fix
- Disable BACO for vega20 for now (fixes pending)

The following changes since commit fbac3c48fa6b4cfa43eaae39d5a53269bff7ec5f:

  Merge branch 'drm-next-5.1' of git://people.freedesktop.org/~agd5f/linux into 
drm-next (2019-02-22 15:56:42 +1000)

are available in the Git repository at:

  git://people.freedesktop.org/~agd5f/linux drm-next-5.1

for you to fetch changes up to 59d3191f14dc18881fec1172c7096b7863622803:

  drm/amd/display: don't call dm_pp_ function from an fpu block (2019-03-06 
15:31:20 -0500)


Alex Deucher (1):
  drm/amdgpu/powerplay: add missing breaks in polaris10_smumgr

Anthony Koo (1):
  drm/amd/display: Fix issue with link_active state not correct for MST

Candice Li (1):
  Revert "drm/amdgpu: use BACO reset on vega20 if platform support"

Christian König (1):
  drm/amdgpu: clear PDs/PTs only after initializing them

Evan Quan (9):
  drm/amd/powerplay: fix the confusing ppfeature mask calculations
  drm/amd/powerplay: drop redundant soft min/max settings
  drm/amd/powerplay: need to reapply the dpm level settings
  drm/amd/powerplay: force FCLK to highest also for 5K or higher displays
  drm/amd/powerplay: overwrite ODSettingsMin for UCLK_FMAX feature
  drm/amd/powerplay: support retrieving clock information from other sysplls
  drm/amd/powerplay: set default fclk for no fclk dpm support case
  drm/amd/powerplay: honor the OD settings
  drm/amd/powerplay: show the right override pcie parameters

Harry Wentland (1):
  drm/amd/display: don't call dm_pp_ function from an fpu block

Huang Rui (2):
  drm/amd/powerplay: use REG32_PCIE wrapper instead for powerplay
  drm/amdgpu: use REG32_PCIE wrapper instead for psp

Kevin Wang (1):
  drm/amdkfd: use init_mqd function to allocate object for hid_mqd (CI)

Mathias Fröhlich (1):
  drm/amd/display: Fix reference counting for struct dc_sink.

Nathan Chancellor (1):
  drm/amd/display: Pass app_tf by value rather than by reference

shaoyunl (1):
  drm/powerplay: print current clock level when dpm is disabled on vg20

 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  11 +-
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c  |   4 +-
 drivers/gpu/drm/amd/amdgpu/soc15.c |   1 -
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c   |  52 +
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  43 +++-
 .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|   1 +
 drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c   |   8 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c  |  16 +-
 .../drm/amd/display/modules/freesync/freesync.c|   7 +-
 .../gpu/drm/amd/display/modules/inc/mod_freesync.h |   2 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/pp_psm.c   |   3 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |  30 +--
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |   4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   8 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c |   4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 222 +++--
 drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.h |   7 +
 .../drm/amd/powerplay/smumgr/polaris10_smumgr.c|   2 +
 drivers/gpu/drm/amd/powerplay/smumgr/smu9_smumgr.c |   6 +-
 .../gpu/drm/amd/powerplay/smumgr/vega20_smumgr.c   |   6 +-
 20 files changed, 227 insertions(+), 210 deletions(-)

Re: [PATCH] drm/amdgpu: Move IB pool init and fini.

2019-03-07 Thread Grodzovsky, Andrey

On 3/7/19 6:29 AM, Christian König wrote:
> On 06.03.19 at 22:25, Andrey Grodzovsky wrote:
>> Problem:
>> Using SDMA for TLB invalidation in certain ASICs exposed a problem
>> of the IB pool not being ready while SDMA is already up on init, and
>> already shut down while SDMA is still running on fini. This caused
>> IB allocation failures. A temporary fix was committed into a
>> bringup branch, but this is the generic fix.
>>
>> Fix:
>> Init the IB pool right after the GMC is ready but before SDMA is ready.
>> Do the opposite for fini.
>>
>> Signed-off-by: Andrey Grodzovsky 
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 20 +++++++++++---------
>>   1 file changed, 11 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index 00def57..c05a551 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -1686,6 +1686,13 @@ static int amdgpu_device_ip_init(struct 
>> amdgpu_device *adev)
>>   }
>>   }
>>   +    r = amdgpu_ib_pool_init(adev);
>> +    if (r) {
>> +    dev_err(adev->dev, "IB initialization failed (%d).\n", r);
>> +    amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
>> +    goto init_failed;
>> +    }
>> +
>>   r = amdgpu_ucode_create_bo(adev); /* create ucode bo when 
>> sw_init complete*/
>>   if (r)
>>   goto init_failed;
>> @@ -1888,7 +1895,8 @@ static int amdgpu_device_ip_fini(struct 
>> amdgpu_device *adev)
>>   for (i = 0; i < adev->num_ip_blocks; i++) {
>>   if (!adev->ip_blocks[i].status.hw)
>>   continue;
>> -    if (adev->ip_blocks[i].version->type == 
>> AMD_IP_BLOCK_TYPE_SMC) {
>> +    if (adev->ip_blocks[i].version->type == 
>> AMD_IP_BLOCK_TYPE_SMC ||
>> +    adev->ip_blocks[i].version->type == 
>> AMD_IP_BLOCK_TYPE_SDMA) {
>
> Why is that necessary?


I assumed SDMA hw fini would call a TLB flush, but after testing with this
removed it didn't, so OK.

>
>>   r = adev->ip_blocks[i].version->funcs->hw_fini((void 
>> *)adev);
>>   /* XXX handle errors */
>>   if (r) {
>> @@ -1900,6 +1908,8 @@ static int amdgpu_device_ip_fini(struct 
>> amdgpu_device *adev)
>>   }
>>   }
>>   +    amdgpu_ib_pool_fini(adev);
>
> Thinking more about it this should probably be done right before GMC 
> SW fini.
>
> IIRC we already have an extra case for this somewhere.
>
> Christian.


Why do you think that's the proper place? It seemed to work fine where it was.

Anyway, I moved it and resent v2.

Andrey


>
>> +
>>   for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>>   if (!adev->ip_blocks[i].status.hw)
>>   continue;
>> @@ -2651,13 +2661,6 @@ int amdgpu_device_init(struct amdgpu_device 
>> *adev,
>>   /* Get a log2 for easy divisions. */
>>   adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
>>   -    r = amdgpu_ib_pool_init(adev);
>> -    if (r) {
>> -    dev_err(adev->dev, "IB initialization failed (%d).\n", r);
>> -    amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
>> -    goto failed;
>> -    }
>> -
>>   amdgpu_fbdev_init(adev);
>>     r = amdgpu_pm_sysfs_init(adev);
>> @@ -2735,7 +2738,6 @@ void amdgpu_device_fini(struct amdgpu_device 
>> *adev)
>>   else
>>   drm_atomic_helper_shutdown(adev->ddev);
>>   }
>> -    amdgpu_ib_pool_fini(adev);
>>   amdgpu_fence_driver_fini(adev);
>>   amdgpu_pm_sysfs_fini(adev);
>>   amdgpu_fbdev_fini(adev);
>

[PATCH v2] drm/amdgpu: Move IB pool init and fini v2

2019-03-07 Thread Andrey Grodzovsky
Problem:
Using SDMA for TLB invalidation in certain ASICs exposed a problem
of the IB pool not being ready while SDMA is already up on init, and
already shut down while SDMA is still running on fini. This caused
IB allocation failures. A temporary fix was committed into a
bringup branch, but this is the generic fix.

Fix:
Init the IB pool right after the GMC is ready but before SDMA is ready.
Do the opposite for fini.

v2: Remove restriction on SDMA early init and move amdgpu_ib_pool_fini

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 00def57..0c4a580 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1686,6 +1686,13 @@ static int amdgpu_device_ip_init(struct amdgpu_device 
*adev)
}
}
 
+   r = amdgpu_ib_pool_init(adev);
+   if (r) {
+   dev_err(adev->dev, "IB initialization failed (%d).\n", r);
+   amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
+   goto init_failed;
+   }
+
r = amdgpu_ucode_create_bo(adev); /* create ucode bo when sw_init 
complete*/
if (r)
goto init_failed;
@@ -1924,6 +1931,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device 
*adev)
	amdgpu_free_static_csa(&adev->virt.csa_obj);
amdgpu_device_wb_fini(adev);
amdgpu_device_vram_scratch_fini(adev);
+   amdgpu_ib_pool_fini(adev);
}
 
r = adev->ip_blocks[i].version->funcs->sw_fini((void *)adev);
@@ -2651,13 +2659,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
/* Get a log2 for easy divisions. */
adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
 
-   r = amdgpu_ib_pool_init(adev);
-   if (r) {
-   dev_err(adev->dev, "IB initialization failed (%d).\n", r);
-   amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
-   goto failed;
-   }
-
amdgpu_fbdev_init(adev);
 
r = amdgpu_pm_sysfs_init(adev);
@@ -2735,7 +2736,6 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
else
drm_atomic_helper_shutdown(adev->ddev);
}
-   amdgpu_ib_pool_fini(adev);
amdgpu_fence_driver_fini(adev);
amdgpu_pm_sysfs_fini(adev);
amdgpu_fbdev_fini(adev);
-- 
2.7.4

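To make the ordering invariant of this fix concrete, here is a hedged,
self-contained sketch; the stub names (gmc_hw_init and friends) are
illustrative, not the real amdgpu entry points:

#include <stdio.h>

/* Illustrative stubs standing in for the real init/fini phases. */
static int  gmc_hw_init(void)   { puts("GMC up"); return 0; }
static int  ib_pool_init(void)  { puts("IB pool up"); return 0; }
static int  sdma_hw_init(void)  { puts("SDMA up, may now submit IBs"); return 0; }
static void sdma_hw_fini(void)  { puts("SDMA down"); }
static void ib_pool_fini(void)  { puts("IB pool down"); }
static void gmc_sw_fini(void)   { puts("GMC down"); }

int main(void)
{
	/* init: GMC -> IB pool -> SDMA, so the pool already exists when
	 * SDMA-based TLB invalidation first tries to allocate an IB */
	if (gmc_hw_init() || ib_pool_init() || sdma_hw_init())
		return 1;

	/* fini: strict reverse order, so SDMA is never still running
	 * after the IB pool is gone */
	sdma_hw_fini();
	ib_pool_fini();
	gmc_sw_fini();
	return 0;
}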

Re: [PATCH] drm/amd/display: avoid passing enum as NULL pointer

2019-03-07 Thread Nathan Chancellor
On Thu, Mar 07, 2019 at 11:34:29AM +0100, Arnd Bergmann wrote:
> The mod_freesync_build_vrr_infopacket() function uses rather obscure
> calling conventions, where an enum is passed in through a pointer,
> and a NULL pointer is expected to behave the same way as the zero-value
> (TRANSFER_FUNC_UNKNOWN).
> 
> Trying to build this with clang results in a warning:
> 
> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:4601:3: error: 
> expression which evaluates to zero treated
>   as a null pointer constant of type 'const enum color_transfer_func *' 
> [-Werror,-Wnon-literal-null-conversion]
> 
> Passing it by value instead of by reference makes the code simpler
> and more conventional but should not change the behavior at all.
> 
> Fixes: c2791297013e ("drm/amd/display: Add color bit info to freesync 
> infoframe")
> Signed-off-by: Arnd Bergmann 
> ---
>  drivers/gpu/drm/amd/display/modules/freesync/freesync.c | 7 +++
>  drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h  | 2 +-
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c 
> b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
> index 94a84bc57c7a..6f32fe129880 100644
> --- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
> +++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
> @@ -724,7 +724,7 @@ static void build_vrr_infopacket_v1(enum signal_type 
> signal,
>  
>  static void build_vrr_infopacket_v2(enum signal_type signal,
>   const struct mod_vrr_params *vrr,
> - const enum color_transfer_func *app_tf,
> + const enum color_transfer_func app_tf,
>   struct dc_info_packet *infopacket)
>  {
>   unsigned int payload_size = 0;
> @@ -732,8 +732,7 @@ static void build_vrr_infopacket_v2(enum signal_type 
> signal,
>   build_vrr_infopacket_header_v2(signal, infopacket, &payload_size);
>   build_vrr_infopacket_data(vrr, infopacket);
>  
> - if (app_tf != NULL)
> - build_vrr_infopacket_fs2_data(*app_tf, infopacket);
> + build_vrr_infopacket_fs2_data(app_tf, infopacket);
>  
>   build_vrr_infopacket_checksum(&payload_size, infopacket);
>  
> @@ -757,7 +756,7 @@ void mod_freesync_build_vrr_infopacket(struct 
> mod_freesync *mod_freesync,
>   const struct dc_stream_state *stream,
>   const struct mod_vrr_params *vrr,
>   enum vrr_packet_type packet_type,
> - const enum color_transfer_func *app_tf,
> + const enum color_transfer_func app_tf,
>   struct dc_info_packet *infopacket)
>  {
>   /* SPD info packet for FreeSync
> diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h 
> b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
> index 4222e403b151..645793b924cf 100644
> --- a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
> +++ b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
> @@ -145,7 +145,7 @@ void mod_freesync_build_vrr_infopacket(struct 
> mod_freesync *mod_freesync,
>   const struct dc_stream_state *stream,
>   const struct mod_vrr_params *vrr,
>   enum vrr_packet_type packet_type,
> - const enum color_transfer_func *app_tf,
> + const enum color_transfer_func app_tf,
>   struct dc_info_packet *infopacket);
>  
>  void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
> -- 
> 2.20.0
> 

Just as an FYI, I sent the same fix when the warning first hit, and it was
recently accepted:

https://cgit.freedesktop.org/~agd5f/linux/commit/?id=672e78cab819ebe31e3b9b8abac367be8a110472

Just waiting for it to hit mainline.

Cheers,
Nathan

[PATCH] drm/amd/display: avoid passing enum as NULL pointer

2019-03-07 Thread Arnd Bergmann
The mod_freesync_build_vrr_infopacket() function uses rather obscure
calling conventions, where an enum is passed in through a pointer,
and a NULL pointer is expected to behave the same way as the zero-value
(TRANSFER_FUNC_UNKNOWN).

Trying to build this with clang results in a warning:

drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:4601:3: error: 
expression which evaluates to zero treated
  as a null pointer constant of type 'const enum color_transfer_func *' 
[-Werror,-Wnon-literal-null-conversion]

Passing it by value instead of by reference makes the code simpler
and more conventional but should not change the behavior at all.

Fixes: c2791297013e ("drm/amd/display: Add color bit info to freesync 
infoframe")
Signed-off-by: Arnd Bergmann 
---
 drivers/gpu/drm/amd/display/modules/freesync/freesync.c | 7 +++
 drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h  | 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c 
b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
index 94a84bc57c7a..6f32fe129880 100644
--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
@@ -724,7 +724,7 @@ static void build_vrr_infopacket_v1(enum signal_type signal,
 
 static void build_vrr_infopacket_v2(enum signal_type signal,
const struct mod_vrr_params *vrr,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
struct dc_info_packet *infopacket)
 {
unsigned int payload_size = 0;
@@ -732,8 +732,7 @@ static void build_vrr_infopacket_v2(enum signal_type signal,
	build_vrr_infopacket_header_v2(signal, infopacket, &payload_size);
build_vrr_infopacket_data(vrr, infopacket);
 
-   if (app_tf != NULL)
-   build_vrr_infopacket_fs2_data(*app_tf, infopacket);
+   build_vrr_infopacket_fs2_data(app_tf, infopacket);
 
	build_vrr_infopacket_checksum(&payload_size, infopacket);
 
@@ -757,7 +756,7 @@ void mod_freesync_build_vrr_infopacket(struct mod_freesync 
*mod_freesync,
const struct dc_stream_state *stream,
const struct mod_vrr_params *vrr,
enum vrr_packet_type packet_type,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
struct dc_info_packet *infopacket)
 {
/* SPD info packet for FreeSync
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h 
b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
index 4222e403b151..645793b924cf 100644
--- a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
@@ -145,7 +145,7 @@ void mod_freesync_build_vrr_infopacket(struct mod_freesync 
*mod_freesync,
const struct dc_stream_state *stream,
const struct mod_vrr_params *vrr,
enum vrr_packet_type packet_type,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
struct dc_info_packet *infopacket);
 
 void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
-- 
2.20.0

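For reference, a minimal standalone reproduction of this warning class
(the names mirror the driver, but the snippet itself is illustrative):

enum color_transfer_func {
	TRANSFER_FUNC_UNKNOWN,	/* == 0 */
	TRANSFER_FUNC_SRGB,
};

void build_infopacket(const enum color_transfer_func *app_tf);

void caller(void)
{
	/* TRANSFER_FUNC_UNKNOWN is a constant expression evaluating to
	 * zero, so clang treats it as a null pointer constant here and
	 * emits -Wnon-literal-null-conversion. Passing the enum by value,
	 * as the patch does, removes the pointer conversion entirely. */
	build_infopacket(TRANSFER_FUNC_UNKNOWN);
}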

Re: [PATCH] drm/amd/display: avoid passing enum as NULL pointer

2019-03-07 Thread Deucher, Alexander
Thanks for the patch.  We already have the fix queued:
https://cgit.freedesktop.org/~agd5f/linux/commit/?h=drm-next-5.2-wip=672e78cab819ebe31e3b9b8abac367be8a110472

Alex

From: Arnd Bergmann 
Sent: Thursday, March 7, 2019 5:34 AM
To: Wentland, Harry; Li, Sun peng (Leo); Deucher, Alexander; Koenig, Christian; 
Zhou, David(ChunMing)
Cc: Nick Desaulniers; Arnd Bergmann; David Airlie; Daniel Vetter; Koo, Anthony; 
Cyr, Aric; Tatla, Harmanprit; Chalmers, Kenneth; Kumarasamy, Sivapiriyan; Kees 
Cook; Bayan Zabihiyan; Cheng, Tony; amd-gfx@lists.freedesktop.org; 
dri-de...@lists.freedesktop.org; linux-ker...@vger.kernel.org
Subject: [PATCH] drm/amd/display: avoid passing enum as NULL pointer

The mod_freesync_build_vrr_infopacket() function uses rather obscure
calling conventions, where an enum is passed in through a pointer,
and a NULL pointer is expected to behave the same way as the zero-value
(TRANSFER_FUNC_UNKNOWN).

Trying to build this with clang results in a warning:

drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:4601:3: error: 
expression which evaluates to zero treated
  as a null pointer constant of type 'const enum color_transfer_func *' 
[-Werror,-Wnon-literal-null-conversion]

Passing it by value instead of by reference makes the code simpler
and more conventional but should not change the behavior at all.

Fixes: c2791297013e ("drm/amd/display: Add color bit info to freesync 
infoframe")
Signed-off-by: Arnd Bergmann 
---
 drivers/gpu/drm/amd/display/modules/freesync/freesync.c | 7 +++
 drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h  | 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c 
b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
index 94a84bc57c7a..6f32fe129880 100644
--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
@@ -724,7 +724,7 @@ static void build_vrr_infopacket_v1(enum signal_type signal,

 static void build_vrr_infopacket_v2(enum signal_type signal,
 const struct mod_vrr_params *vrr,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
 struct dc_info_packet *infopacket)
 {
 unsigned int payload_size = 0;
@@ -732,8 +732,7 @@ static void build_vrr_infopacket_v2(enum signal_type signal,
 build_vrr_infopacket_header_v2(signal, infopacket, &payload_size);
 build_vrr_infopacket_data(vrr, infopacket);

-   if (app_tf != NULL)
-   build_vrr_infopacket_fs2_data(*app_tf, infopacket);
+   build_vrr_infopacket_fs2_data(app_tf, infopacket);

 build_vrr_infopacket_checksum(&payload_size, infopacket);

@@ -757,7 +756,7 @@ void mod_freesync_build_vrr_infopacket(struct mod_freesync 
*mod_freesync,
 const struct dc_stream_state *stream,
 const struct mod_vrr_params *vrr,
 enum vrr_packet_type packet_type,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
 struct dc_info_packet *infopacket)
 {
 /* SPD info packet for FreeSync
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h 
b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
index 4222e403b151..645793b924cf 100644
--- a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
@@ -145,7 +145,7 @@ void mod_freesync_build_vrr_infopacket(struct mod_freesync 
*mod_freesync,
 const struct dc_stream_state *stream,
 const struct mod_vrr_params *vrr,
 enum vrr_packet_type packet_type,
-   const enum color_transfer_func *app_tf,
+   const enum color_transfer_func app_tf,
 struct dc_info_packet *infopacket);

 void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
--
2.20.0


Re: [PATCH] drm/amdgpu: fix ras parameter descriptions

2019-03-07 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 

From: amd-gfx  on behalf of Evan Quan 

Sent: Thursday, March 7, 2019 2:00 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander; Pan, Xinhui; Quan, Evan; Zhang, Hawking
Subject: [PATCH] drm/amdgpu: fix ras parameter descriptions

The modinfo descriptions wrongly show two parameters
for each feature (see below). This patch fixes the
incorrect output.

parm:   amdgpu_ras_enable:Enable RAS features on the GPU (0 = disable, 
1 = enable, -1 = auto (default))
parm:   ras_enable:int
parm:   amdgpu_ras_mask:Mask of RAS features to enable (default 
0xffffffff), only valid when ras_enable == 1
parm:   ras_mask:uint

Change-Id: I04f7e505cecca991f196802befd2006dc49b3dcf
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 38dbf6115c15..e0a7712d5d7b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -512,18 +512,18 @@ MODULE_PARM_DESC(emu_mode, "Emulation mode, (1 = enable, 
0 = disable)");
 module_param_named(emu_mode, amdgpu_emu_mode, int, 0444);

 /*
- * DOC: amdgpu_ras_enable (int)
+ * DOC: ras_enable (int)
  * Enable RAS features on the GPU (0 = disable, 1 = enable, -1 = auto 
(default))
  */
-MODULE_PARM_DESC(amdgpu_ras_enable, "Enable RAS features on the GPU (0 = 
disable, 1 = enable, -1 = auto (default))");
+MODULE_PARM_DESC(ras_enable, "Enable RAS features on the GPU (0 = disable, 1 = 
enable, -1 = auto (default))");
 module_param_named(ras_enable, amdgpu_ras_enable, int, 0444);

 /**
- * DOC: amdgpu_ras_mask (uint)
+ * DOC: ras_mask (uint)
 * Mask of RAS features to enable (default 0xffffffff), only valid when 
ras_enable == 1
  * See the flags in drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h
  */
-MODULE_PARM_DESC(amdgpu_ras_mask, "Mask of RAS features to enable (default 
0xffffffff), only valid when ras_enable == 1");
+MODULE_PARM_DESC(ras_mask, "Mask of RAS features to enable (default 
0xffffffff), only valid when ras_enable == 1");
 module_param_named(ras_mask, amdgpu_ras_mask, uint, 0444);


--
2.21.0

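The underlying rule, as a hedged sketch: the name given to
MODULE_PARM_DESC() must match the externally visible parameter name,
i.e. the first argument of module_param_named(); otherwise modinfo prints
one entry carrying only the description and a second carrying only the type:

#include <linux/module.h>

static int amdgpu_ras_enable = -1;

/* 'ras_enable' is the name users see in modinfo/insmod, so the
 * description must be registered under that exact name; describing the
 * internal variable name instead yields the duplicated output above. */
module_param_named(ras_enable, amdgpu_ras_enable, int, 0444);
MODULE_PARM_DESC(ras_enable,
		 "Enable RAS features on the GPU (0 = disable, 1 = enable, -1 = auto (default))");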

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Zhou, David(ChunMing)
Yes, the per-submission BO list priority is already used by us, but per-VM
BOs are still in flight, with no priority on them.

-David

Sent from my phone

 Original Message 
Subject: Re: [PATCH] drm/amdgpu: enable bo priority setting from user space
From: "Koenig, Christian"
To: "Zhou, David(ChunMing)" ,amd-gfx@lists.freedesktop.org
CC:

Well you can already use the per submission priority for the BOs.

Additional to that as I said for per VM BOs we can add a priority to sort them 
in the LRU.

Not sure how effective both of those actually are.

Regards,
Christian.

On 07.03.19 at 14:09, Zhou, David(ChunMing) wrote:
Yes, you are right, thanks for pointing it out. I will see if there is another way.

-David

Sent from my phone

 Original Message 
Subject: Re: [PATCH] drm/amdgpu: enable bo priority setting from user space
From: Christian König
To: "Zhou, David(ChunMing)" 
,amd-gfx@lists.freedesktop.org
CC:

On 07.03.19 at 10:15, Chunming Zhou wrote:
> Signed-off-by: Chunming Zhou 

Well NAK to the whole approach.

The TTM priority is a global priority, but processes are only allowed to
specify the priority inside their own allocations. So this approach
will never fly upstream.

What you can do is to add a priority for per vm BOs to affect their sort
order on the LRU, but I doubt that this will have much of an effect.

Regards,
Christian.

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 13 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h|  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
>   include/drm/ttm/ttm_bo_driver.h|  9 -
>   include/uapi/drm/amdgpu_drm.h  |  3 +++
>   7 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 5cbde74b97dd..70a6baf20c22 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -144,6 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
> amdgpu_fbdev *rfbdev,
>size = mode_cmd->pitches[0] * height;
>aligned_size = ALIGN(size, PAGE_SIZE);
>ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
> +TTM_BO_PRIORITY_NORMAL,
>   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
>   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
>   AMDGPU_GEM_CREATE_VRAM_CLEARED,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index d21dd2f369da..7c1c2362c67e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -44,6 +44,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
>
>   int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
> int alignment, u32 initial_domain,
> +  enum ttm_bo_priority priority,
> u64 flags, enum ttm_bo_type type,
> struct reservation_object *resv,
> struct drm_gem_object **obj)
> @@ -60,6 +61,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
> unsigned long size,
>bp.type = type;
>bp.resv = resv;
>bp.preferred_domain = initial_domain;
> + bp.priority = priority;
>   retry:
>bp.flags = flags;
>bp.domain = initial_domain;
> @@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>return -EINVAL;
>
> + /* check priority */
> + if (args->in.priority == 0) {
> + /* default is normal */
> + args->in.priority = TTM_BO_PRIORITY_NORMAL;
> + } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
> + args->in.priority = TTM_MAX_BO_PRIORITY;
> + DRM_ERROR("priority specified from user space is over MAX 
> priority\n");
> + }
>/* create a gem object to contain this object in */
>if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
>AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
> @@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>
>r = amdgpu_gem_object_create(adev, size, args->in.alignment,
> (u32)(0xffffffff & args->in.domains),
> +  args->in.priority - 1,
> flags, ttm_bo_type_device, resv, &gobj);
>if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
>if (!r) {
> @@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>
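
One wrinkle worth spelling out from the clamp in the quoted hunk: the UAPI
field treats 0 as "unspecified" and is otherwise 1-based, while TTM levels
are 0-based, hence the later "args->in.priority - 1". A hedged helper
capturing the apparent intent (TTM_BO_PRIORITY_NORMAL is a constant the
patch itself introduces; note the quoted code seems to route the default
through the "- 1" as well, which looks like an off-by-one):

static unsigned int uapi_prio_to_ttm(unsigned int in_prio)
{
	if (in_prio == 0)
		return TTM_BO_PRIORITY_NORMAL;	/* default when unspecified */
	if (in_prio > TTM_MAX_BO_PRIORITY)
		in_prio = TTM_MAX_BO_PRIORITY;	/* clamp to the top level */
	return in_prio - 1;			/* 1-based UAPI -> 0-based TTM */
}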

[PATCH 1/2] drm/amdgpu: use ring/hash for fault handling on GMC9 v2

2019-03-07 Thread Christian König
Further testing showed that the idea with the chash doesn't work as expected.
In particular, we can't predict when we can remove the entries from the hash again.

So replace the chash with a ring buffer/hash mix where entries in the container
age automatically based on their timestamp.

v2: use ring buffer / hash mix

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 49 
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h | 34 ++
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c   | 60 ++---
 3 files changed, 86 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 5a32a0d2ad31..579cadd16886 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -240,3 +240,52 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, 
struct amdgpu_gmc *mc)
dev_info(adev->dev, "AGP: %lluM 0x%016llX - 0x%016llX\n",
mc->agp_size >> 20, mc->agp_start, mc->agp_end);
 }
+
+/**
+ * amdgpu_gmc_filter_faults - filter VM faults
+ *
+ * @adev: amdgpu device structure
+ * @addr: address of the VM fault
+ * @pasid: PASID of the process causing the fault
+ * @timestamp: timestamp of the fault
+ *
+ * Returns:
+ * True if the fault was filtered and should not be processed further.
+ * False if the fault is a new one and needs to be handled.
+ */
+bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev, uint64_t addr,
+ uint16_t pasid, uint64_t timestamp)
+{
+   struct amdgpu_gmc *gmc = &adev->gmc;
+
+   uint64_t stamp, key = addr << 4 | pasid;
+   struct amdgpu_gmc_fault *fault;
+   uint32_t hash;
+
+   /* If we don't have space left in the ring buffer return immediately */
+   stamp = max(timestamp, AMDGPU_GMC_FAULT_TIMEOUT + 1) -
+   AMDGPU_GMC_FAULT_TIMEOUT;
+   if (gmc->fault_ring[gmc->last_fault].timestamp >= stamp)
+   return true;
+
+   /* Try to find the fault in the hash */
+   hash = hash_64(key, AMDGPU_GMC_FAULT_HASH_ORDER);
+   fault = &gmc->fault_ring[gmc->fault_hash[hash].idx];
+   do {
+   if (fault->key == key)
+   return true;
+
+   stamp = fault->timestamp;
+   fault = &gmc->fault_ring[fault->next];
+   } while (fault->timestamp < stamp);
+
+   /* Add the fault to the ring */
+   fault = &gmc->fault_ring[gmc->last_fault];
+   fault->key = key;
+   fault->timestamp = timestamp;
+
+   /* And update the hash */
+   fault->next = gmc->fault_hash[hash].idx;
+   gmc->fault_hash[hash].idx = gmc->last_fault++;
+   return false;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index 6ce45664ff87..071145ac67b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -43,8 +43,34 @@
  */
 #define AMDGPU_GMC_HOLE_MASK   0x0000ffffffffffffULL
 
+/*
+ * Ring size as power of two for the log of recent faults.
+ */
+#define AMDGPU_GMC_FAULT_RING_ORDER8
+#define AMDGPU_GMC_FAULT_RING_SIZE (1 << AMDGPU_GMC_FAULT_RING_ORDER)
+
+/*
+ * Hash size as power of two for the log of recent faults
+ */
+#define AMDGPU_GMC_FAULT_HASH_ORDER8
+#define AMDGPU_GMC_FAULT_HASH_SIZE (1 << AMDGPU_GMC_FAULT_HASH_ORDER)
+
+/*
+ * Number of IH timestamp ticks until a fault is considered handled
+ */
+#define AMDGPU_GMC_FAULT_TIMEOUT   5000ULL
+
 struct firmware;
 
+/*
+ * GMC page fault information
+ */
+struct amdgpu_gmc_fault {
+   uint64_ttimestamp;
+   uint64_tnext:AMDGPU_GMC_FAULT_RING_ORDER;
+   uint64_tkey:52;
+};
+
 /*
  * VMHUB structures, functions & helpers
  */
@@ -141,6 +167,12 @@ struct amdgpu_gmc {
struct kfd_vm_fault_info *vm_fault_info;
atomic_tvm_fault_info_updated;
 
+   struct amdgpu_gmc_fault fault_ring[AMDGPU_GMC_FAULT_RING_SIZE];
+   struct {
+   uint64_tidx:AMDGPU_GMC_FAULT_RING_ORDER;
+   } fault_hash[AMDGPU_GMC_FAULT_HASH_SIZE];
+   uint64_tlast_fault:AMDGPU_GMC_FAULT_RING_ORDER;
+
const struct amdgpu_gmc_funcs   *gmc_funcs;
 
struct amdgpu_xgmi xgmi;
@@ -195,5 +227,7 @@ void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
  struct amdgpu_gmc *mc);
 void amdgpu_gmc_agp_location(struct amdgpu_device *adev,
 struct amdgpu_gmc *mc);
+bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev, uint64_t addr,
+ uint16_t pasid, uint64_t timestamp);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 84904bd680df..0c37c0afb1bd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -416,62 +416,6 @@ static int 
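
The gmc_v9_0.c hunk is cut off above. To summarize the container on its
own, here is a hedged user-space sketch of the aging ring buffer/hash mix;
it is simplified (the "ring full" timestamp guard from the patch is
omitted) and the names are illustrative:

#include <stdint.h>
#include <stdbool.h>

#define RING_ORDER 8
#define RING_SIZE  (1 << RING_ORDER)

struct fault {
	uint64_t timestamp;
	uint64_t next : RING_ORDER;	/* older entry in the same bucket */
	uint64_t key  : 52;
};

static struct fault ring[RING_SIZE];	/* oldest slot is overwritten first */
static uint8_t bucket[RING_SIZE];	/* hash -> newest ring index */
static uint8_t last;			/* next ring slot to (re)use */

/* Returns true if the key was seen recently and should be filtered. */
static bool filter(uint64_t key, uint64_t ts)
{
	/* 8-bit multiplicative hash; the patch uses hash_64() instead. */
	uint8_t hash = (uint8_t)((key * 0x9E3779B97F4A7C15ULL) >> 56);
	struct fault *f = &ring[bucket[hash]];
	uint64_t stamp;

	/* Walk the bucket chain backwards in time; stop as soon as the
	 * timestamps stop decreasing, because at that point the chain has
	 * left its own generation and points at recycled ring slots. */
	do {
		if (f->key == key)
			return true;
		stamp = f->timestamp;
		f = &ring[f->next];
	} while (f->timestamp < stamp);

	/* Record the fault and make it the new bucket head; uint8_t
	 * arithmetic wraps at RING_SIZE, so the oldest entry ages out. */
	ring[last].timestamp = ts;
	ring[last].key = key & ((1ULL << 52) - 1);
	ring[last].next = bucket[hash];
	bucket[hash] = last++;
	return false;
}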

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Koenig, Christian
Well you can already use the per submission priority for the BOs.

Additional to that as I said for per VM BOs we can add a priority to sort them 
in the LRU.

Not sure how effective both of those actually are.

Regards,
Christian.

On 07.03.19 at 14:09, Zhou, David(ChunMing) wrote:
Yes, you are right, thanks for pointing it out. I will see if there is another way.

-David

Sent from my phone

 Original Message 
Subject: Re: [PATCH] drm/amdgpu: enable bo priority setting from user space
From: Christian König
To: "Zhou, David(ChunMing)" 
,amd-gfx@lists.freedesktop.org
CC:

On 07.03.19 at 10:15, Chunming Zhou wrote:
> Signed-off-by: Chunming Zhou 

Well NAK to the whole approach.

The TTM priority is a global priority, but processes are only allowed to
specify the priority inside their own allocations. So this approach
will never fly upstream.

What you can do is to add a priority for per vm BOs to affect their sort
order on the LRU, but I doubt that this will have much of an effect.

Regards,
Christian.

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 13 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h|  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
>   include/drm/ttm/ttm_bo_driver.h|  9 -
>   include/uapi/drm/amdgpu_drm.h  |  3 +++
>   7 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 5cbde74b97dd..70a6baf20c22 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -144,6 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
> amdgpu_fbdev *rfbdev,
>size = mode_cmd->pitches[0] * height;
>aligned_size = ALIGN(size, PAGE_SIZE);
>ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
> +TTM_BO_PRIORITY_NORMAL,
>   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
>   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
>   AMDGPU_GEM_CREATE_VRAM_CLEARED,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index d21dd2f369da..7c1c2362c67e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -44,6 +44,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
>
>   int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
> int alignment, u32 initial_domain,
> +  enum ttm_bo_priority priority,
> u64 flags, enum ttm_bo_type type,
> struct reservation_object *resv,
> struct drm_gem_object **obj)
> @@ -60,6 +61,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
> unsigned long size,
>bp.type = type;
>bp.resv = resv;
>bp.preferred_domain = initial_domain;
> + bp.priority = priority;
>   retry:
>bp.flags = flags;
>bp.domain = initial_domain;
> @@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>return -EINVAL;
>
> + /* check priority */
> + if (args->in.priority == 0) {
> + /* default is normal */
> + args->in.priority = TTM_BO_PRIORITY_NORMAL;
> + } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
> + args->in.priority = TTM_MAX_BO_PRIORITY;
> + DRM_ERROR("priority specified from user space is over MAX 
> priority\n");
> + }
>/* create a gem object to contain this object in */
>if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
>AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
> @@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>
>r = amdgpu_gem_object_create(adev, size, args->in.alignment,
> (u32)(0xffffffff & args->in.domains),
> +  args->in.priority - 1,
> flags, ttm_bo_type_device, resv, &gobj);
>if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
>if (!r) {
> @@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>
>/* create a gem object to contain this object in */
>r = amdgpu_gem_object_create(adev, args->size, 0, 
> AMDGPU_GEM_DOMAIN_CPU,
> +  TTM_BO_PRIORITY_NORMAL,
> 0, ttm_bo_type_device, NULL, &gobj);
>if (r)
>return r;
> @@ -755,6 +767,7 @@ 

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Zhou, David(ChunMing)
Yes, you are right, thanks for pointing it out. I will see if there is another way.

-David

Sent from my phone

 Original Message 
Subject: Re: [PATCH] drm/amdgpu: enable bo priority setting from user space
From: Christian König
To: "Zhou, David(ChunMing)" ,amd-gfx@lists.freedesktop.org
CC:

On 07.03.19 at 10:15, Chunming Zhou wrote:
> Signed-off-by: Chunming Zhou 

Well NAK to the whole approach.

The TTM priority is a global priority, but processes are only allowed to
specify the priority inside their own allocations. So this approach
will never fly upstream.

What you can do is to add a priority for per vm BOs to affect their sort
order on the LRU, but I doubt that this will have much of an effect.

Regards,
Christian.

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 13 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h|  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
>   include/drm/ttm/ttm_bo_driver.h|  9 -
>   include/uapi/drm/amdgpu_drm.h  |  3 +++
>   7 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 5cbde74b97dd..70a6baf20c22 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -144,6 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
> amdgpu_fbdev *rfbdev,
>size = mode_cmd->pitches[0] * height;
>aligned_size = ALIGN(size, PAGE_SIZE);
>ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
> +TTM_BO_PRIORITY_NORMAL,
>   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
>   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
>   AMDGPU_GEM_CREATE_VRAM_CLEARED,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index d21dd2f369da..7c1c2362c67e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -44,6 +44,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
>
>   int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
> int alignment, u32 initial_domain,
> +  enum ttm_bo_priority priority,
> u64 flags, enum ttm_bo_type type,
> struct reservation_object *resv,
> struct drm_gem_object **obj)
> @@ -60,6 +61,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
> unsigned long size,
>bp.type = type;
>bp.resv = resv;
>bp.preferred_domain = initial_domain;
> + bp.priority = priority;
>   retry:
>bp.flags = flags;
>bp.domain = initial_domain;
> @@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>return -EINVAL;
>
> + /* check priority */
> + if (args->in.priority == 0) {
> + /* default is normal */
> + args->in.priority = TTM_BO_PRIORITY_NORMAL;
> + } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
> + args->in.priority = TTM_MAX_BO_PRIORITY;
> + DRM_ERROR("priority specified from user space is over MAX 
> priority\n");
> + }
>/* create a gem object to contain this object in */
>if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
>AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
> @@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>
>r = amdgpu_gem_object_create(adev, size, args->in.alignment,
> (u32)(0xffffffff & args->in.domains),
> +  args->in.priority - 1,
> flags, ttm_bo_type_device, resv, &gobj);
>if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
>if (!r) {
> @@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>
>/* create a gem object to contain this object in */
>r = amdgpu_gem_object_create(adev, args->size, 0, 
> AMDGPU_GEM_DOMAIN_CPU,
> +  TTM_BO_PRIORITY_NORMAL,
> 0, ttm_bo_type_device, NULL, &gobj);
>if (r)
>return r;
> @@ -755,6 +767,7 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
>domain = amdgpu_bo_get_preferred_pin_domain(adev,
>amdgpu_display_supported_domains(adev));
>r = amdgpu_gem_object_create(adev, args->size, 0, domain,
> +  TTM_BO_PRIORITY_NORMAL,
>  

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Christian König

On 07.03.19 at 10:15, Chunming Zhou wrote:

Signed-off-by: Chunming Zhou 


Well NAK to the whole approach.

The TTM priority is a global priority, but processes are only allowed to 
specify the priority inside their own allocations. So this approach 
will never fly upstream.


What you can do is to add a priority for per vm BOs to affect their sort 
order on the LRU, but I doubt that this will have much of an effect.


Regards,
Christian.


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 13 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h|  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
  include/drm/ttm/ttm_bo_driver.h|  9 -
  include/uapi/drm/amdgpu_drm.h  |  3 +++
  7 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 5cbde74b97dd..70a6baf20c22 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -144,6 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
amdgpu_fbdev *rfbdev,
size = mode_cmd->pitches[0] * height;
aligned_size = ALIGN(size, PAGE_SIZE);
ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
+  TTM_BO_PRIORITY_NORMAL,
   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
   AMDGPU_GEM_CREATE_VRAM_CLEARED,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index d21dd2f369da..7c1c2362c67e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -44,6 +44,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
  
  int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,

 int alignment, u32 initial_domain,
+enum ttm_bo_priority priority,
 u64 flags, enum ttm_bo_type type,
 struct reservation_object *resv,
 struct drm_gem_object **obj)
@@ -60,6 +61,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
bp.type = type;
bp.resv = resv;
bp.preferred_domain = initial_domain;
+   bp.priority = priority;
  retry:
bp.flags = flags;
bp.domain = initial_domain;
@@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
return -EINVAL;
  
+	/* check priority */

+   if (args->in.priority == 0) {
+   /* default is normal */
+   args->in.priority = TTM_BO_PRIORITY_NORMAL;
+   } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
+   args->in.priority = TTM_MAX_BO_PRIORITY;
+   DRM_ERROR("priority specified from user space is over MAX 
priority\n");
+   }
/* create a gem object to contain this object in */
if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
@@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
  
  	r = amdgpu_gem_object_create(adev, size, args->in.alignment,

 (u32)(0xffffffff & args->in.domains),
+args->in.priority - 1,
 flags, ttm_bo_type_device, resv, &gobj);
if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
if (!r) {
@@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
*data,
  
  	/* create a gem object to contain this object in */

r = amdgpu_gem_object_create(adev, args->size, 0, AMDGPU_GEM_DOMAIN_CPU,
+TTM_BO_PRIORITY_NORMAL,
 0, ttm_bo_type_device, NULL, &gobj);
if (r)
return r;
@@ -755,6 +767,7 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
domain = amdgpu_bo_get_preferred_pin_domain(adev,
amdgpu_display_supported_domains(adev));
r = amdgpu_gem_object_create(adev, args->size, 0, domain,
+TTM_BO_PRIORITY_NORMAL,
 AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED,
 ttm_bo_type_device, NULL, &gobj);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
index f1ddfc50bcc7..47b0a8190948 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
@@ -61,7 +61,7 @@ extern const struct dma_buf_ops amdgpu_dmabuf_ops;

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Michel Dänzer
On 2019-03-07 11:48 a.m., zhoucm1 wrote:
> On 2019-03-07 17:55, Michel Dänzer wrote:
>> On 2019-03-07 10:15 a.m., Chunming Zhou wrote:
>>> Signed-off-by: Chunming Zhou 
>> Please provide corresponding UMD patches showing how this is to be used.
> spec is here:
> https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html,
> please search "|VkMemoryPriorityAllocateInfoEXT|".
> 
> Fortunately, the Windows guys already implemented it before; otherwise I
> couldn't find any ready-made open source code. I hate this chicken-and-egg
> question:

Whether you like it or not, the rule documented in
Documentation/gpu/drm-uapi.rst is clear: New UAPI cannot go upstream
before corresponding userspace patches have been reviewed. This is
important to avoid adding broken/unmaintainable UAPI.


> https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/gpuMemory.cpp,
> please search "createInfo.priority".
> https://github.com/GPUOpen-Drivers/pal/blob/dev/inc/core/palGpuMemory.h,
> priority definition is here.

These don't show how the priority field of struct
drm_amdgpu_gem_create_in is used.


>>> @@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device
>>> *dev, void *data,
>>>   if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>>>   return -EINVAL;
>>>   +    /* check priority */
>>> +    if (args->in.priority == 0) {
>> Did you verify that this is 0 with old userspace compiled against struct
>> drm_amdgpu_gem_create_in without the priority field?
> Without the priority field, I don't think we can check here. Do you mean we
> need to add a new args struct?

I don't think so. Presumably this will be initialized to 0 if userspace
doesn't pass in a value, but this needs to be verified.
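
Side note: the DRM ioctl dispatcher zero-fills trailing bytes when old
userspace passes a shorter struct than the kernel-side definition, which
is why an absent priority field should indeed read as 0. A sketch of the
relevant logic, modelled on upstream drm_ioctl() in
drivers/gpu/drm/drm_ioctl.c (an approximation, not a verbatim quote):

	/* kdata is sized for the newest struct version (ksize bytes);
	 * usize is what userspace actually passed. Bytes that old
	 * userspace did not supply are cleared, so a missing priority
	 * field arrives in the driver as 0. */
	if (copy_from_user(kdata, (void __user *)arg, usize))
		return -EFAULT;
	if (ksize > usize)
		memset(kdata + usize, 0, ksize - usize);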


>>> @@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device
>>> *dev, void *data,
>>>     r = amdgpu_gem_object_create(adev, size, args->in.alignment,
>>>    (u32)(0xffffffff & args->in.domains),
>>> + args->in.priority - 1,
>>>    flags, ttm_bo_type_device, resv, &gobj);
>> It might be less confusing to subtract 1 after checking against
>> TTM_MAX_BO_PRIORITY instead of here. Still kind of confusing though. How
>> about this instead:
>>
>> Make the priority field of struct drm_amdgpu_gem_create_in signed. In
>> amdgpu_gem_create_ioctl, clamp the priority to the supported range:
>>
>> args->in.priority += TTM_BO_PRIORITY_NORMAL;
>> args->in.priority = max(args->in.priority, 0);
>> args->in.priority = min(args->in.priority,
>>     TTM_MAX_BO_PRIORITY - 1);
>>
>> This way userspace doesn't need to do a weird mapping of the priority
>> values (where 0 and 2 have the same meaning), and the range of supported
>> priorities could at least theoretically be extended without breaking
>> userspace.
> First, I want to explain a bit about the priority value from Vulkan:
> "    From the Vulkan spec, 0.0 <= value <= 1.0, and the granularity of the
> priorities is implementation-dependent.
>  One thing the spec does mandate is that if VkMemoryPriority is not
> specified, the default behavior is as if the priority value were 0.5.
>  Our strategy is to map 0.5 to
> GpuMemPriority::Normal-GpuMemPriorityOffset::Offset0,
>  which is consistent with MemoryPriorityDefault. We adopt
> GpuMemPriority::VeryLow, GpuMemPriority::Low,
>  GpuMemPriority::Normal, GpuMemPriority::High, 4 priority grades,
> each of which contains 8 offset steps.
>  This maps [0.0-1.0) to 32 steps in total. Finally, 1.0 maps to
> GpuMemPriority::VeryHigh.
> "

In that case, I'd suggest making the priority field signed, and mapping
the full integer range to the range of supported priorities:

* 0 maps to TTM_BO_PRIORITY_NORMAL
* INT_MIN maps to TTM_BO_PRIORITY_VERYLOW
* INT_MAX maps to TTM_BO_PRIORITY_VERYHIGH

Intermediate values are interpolated appropriately to the supported
range of priorities.


This way, the Vulkan float priority value can easily be converted to the
integer priority value, and the actual range of priorities used on the
kernel side can be changed arbitrarily without breaking userspace.
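
A minimal sketch of that interpolation (illustrative only; the helper name
is invented, and `levels` stands for however many priority levels the
kernel side supports):

	#include <stdint.h>

	/* Map a signed 32-bit userspace priority onto `levels` slots:
	 * INT32_MIN -> slot 0 (very low), 0 -> the middle slot (normal),
	 * INT32_MAX -> slot levels - 1 (very high). */
	static unsigned int prio_to_slot(int32_t user_prio, unsigned int levels)
	{
		/* Shift into an unsigned 0 .. 2^32 - 1 range first. */
		uint64_t off = (uint64_t)((int64_t)user_prio - INT32_MIN);

		return (unsigned int)((off * (levels - 1)) / UINT32_MAX);
	}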


> So my original purpose is directly use Priority enum defined in PAL,
> [...]

I don't think it's a good idea to let PAL dictate UAPI.


>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index fd9c4beeaaa4..c85304e03021 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -494,8 +494,9 @@ static int amdgpu_bo_do_create(struct
>>> amdgpu_device *adev,
>>>     bo->tbo.bdev = &adev->mman.bdev;
>>>   amdgpu_bo_placement_from_domain(bo, bp->domain);
>>> +    bo->tbo.priority = bp->priority;
>>>   if (bp->type == ttm_bo_type_kernel)
>>> -    bo->tbo.priority = 1;
>>> +    bo->tbo.priority = TTM_BO_PRIORITY_VERYHIGH;
>> if (bp->type == ttm_bo_type_kernel)
>>     bo->tbo.priority = TTM_BO_PRIORITY_VERYHIGH;
>>   

Re: [PATCH] drm/amdgpu: Move IB pool init and fini.

2019-03-07 Thread Christian König

On 06.03.19 at 22:25, Andrey Grodzovsky wrote:

Problem:
Using SDMA for TLB invalidation on certain ASICs exposed a problem:
the IB pool is not yet ready while SDMA is already up during init,
and is already shut down while SDMA is still running during fini.
This caused IB allocation failures. A temporary fix was committed
into a bringup branch, but this is the generic fix.

Fix:
Init the IB pool right after GMC is ready, but before SDMA is ready.
Do the opposite for fini.

Signed-off-by: Andrey Grodzovsky 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 20 +++-
  1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 00def57..c05a551 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1686,6 +1686,13 @@ static int amdgpu_device_ip_init(struct amdgpu_device 
*adev)
}
}
  
+	r = amdgpu_ib_pool_init(adev);

+   if (r) {
+   dev_err(adev->dev, "IB initialization failed (%d).\n", r);
+   amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
+   goto init_failed;
+   }
+
r = amdgpu_ucode_create_bo(adev); /* create ucode bo when sw_init 
complete*/
if (r)
goto init_failed;
@@ -1888,7 +1895,8 @@ static int amdgpu_device_ip_fini(struct amdgpu_device 
*adev)
for (i = 0; i < adev->num_ip_blocks; i++) {
if (!adev->ip_blocks[i].status.hw)
continue;
-   if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+   if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC ||
+   adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA) 
{


Why is that necessary?


r = adev->ip_blocks[i].version->funcs->hw_fini((void 
*)adev);
/* XXX handle errors */
if (r) {
@@ -1900,6 +1908,8 @@ static int amdgpu_device_ip_fini(struct amdgpu_device 
*adev)
}
}
  
+	amdgpu_ib_pool_fini(adev);


Thinking more about it, this should probably be done right before GMC SW 
fini.


IIRC we already have an extra case for this somewhere.

Christian.


+
for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
if (!adev->ip_blocks[i].status.hw)
continue;
@@ -2651,13 +2661,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
/* Get a log2 for easy divisions. */
adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
  
-	r = amdgpu_ib_pool_init(adev);

-   if (r) {
-   dev_err(adev->dev, "IB initialization failed (%d).\n", r);
-   amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_IB_INIT_FAIL, 0, r);
-   goto failed;
-   }
-
amdgpu_fbdev_init(adev);
  
  	r = amdgpu_pm_sysfs_init(adev);

@@ -2735,7 +2738,6 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
else
drm_atomic_helper_shutdown(adev->ddev);
}
-   amdgpu_ib_pool_fini(adev);
amdgpu_fence_driver_fini(adev);
amdgpu_pm_sysfs_fini(adev);
amdgpu_fbdev_fini(adev);



Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread zhoucm1



On 2019-03-07 17:55, Michel Dänzer wrote:

On 2019-03-07 10:15 a.m., Chunming Zhou wrote:

Signed-off-by: Chunming Zhou 

Please provide corresponding UMD patches showing how this is to be used.

spec is here:
https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html, 
please search "|VkMemoryPriorityAllocateInfoEXT|".


Fortunately, the Windows guys already implemented it before; otherwise I 
couldn't find any ready-made open source code. I hate this chicken-and-egg 
question:
https://github.com/GPUOpen-Drivers/pal/blob/dev/src/core/gpuMemory.cpp, 
please search "createInfo.priority".
https://github.com/GPUOpen-Drivers/pal/blob/dev/inc/core/palGpuMemory.h, 
priority definition is here.






@@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
return -EINVAL;
  
+	/* check priority */

+   if (args->in.priority == 0) {

Did you verify that this is 0 with old userspace compiled against struct
drm_amdgpu_gem_create_in without the priority field?
Without the priority field, I don't think we can check here. Do you mean we 
need to add a new args struct?






+   /* default is normal */
+   args->in.priority = TTM_BO_PRIORITY_NORMAL;
+   } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
+   args->in.priority = TTM_MAX_BO_PRIORITY;
+   DRM_ERROR("priority specified from user space is over MAX 
priority\n");

This must be DRM_DEBUG, or buggy/malicious userspace can spam dmesg.

Will change.





@@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
  
  	r = amdgpu_gem_object_create(adev, size, args->in.alignment,

 (u32)(0xffffffff & args->in.domains),
+args->in.priority - 1,
 flags, ttm_bo_type_device, resv, &gobj);

It might be less confusing to subtract 1 after checking against
TTM_MAX_BO_PRIORITY instead of here. Still kind of confusing though. How
about this instead:

Make the priority field of struct drm_amdgpu_gem_create_in signed. In
amdgpu_gem_create_ioctl, clamp the priority to the supported range:

args->in.priority += TTM_BO_PRIORITY_NORMAL;
args->in.priority = max(args->in.priority, 0);
args->in.priority = min(args->in.priority,
TTM_MAX_BO_PRIORITY - 1);

This way userspace doesn't need to do a weird mapping of the priority
values (where 0 and 2 have the same meaning), and the range of supported
priorities could at least theoretically be extended without breaking
userspace.

First, I want to explain a bit about the priority value from Vulkan:
"    From the Vulkan spec, 0.0 <= value <= 1.0, and the granularity of the 
priorities is implementation-dependent.
 One thing the spec does mandate is that if VkMemoryPriority is not 
specified, the default behavior is as if the priority value were 0.5.
 Our strategy is to map 0.5 to 
GpuMemPriority::Normal-GpuMemPriorityOffset::Offset0,
 which is consistent with MemoryPriorityDefault. We adopt 
GpuMemPriority::VeryLow, GpuMemPriority::Low,
 GpuMemPriority::Normal, GpuMemPriority::High, 4 priority grades, 
each of which contains 8 offset steps.
 This maps [0.0-1.0) to 32 steps in total. Finally, 1.0 maps to 
GpuMemPriority::VeryHigh.
"

So my original purpose was to directly use the Priority enum defined in PAL, 
like this:

 "
/// Specifies Base Level priority per GPU memory allocation as a hint to 
the memory manager in the event it needs to

/// select allocations to page out of their preferred heaps.
enum class GpuMemPriority : uint32
{
    Unused    = 0x0,  ///< Indicates that the allocation is not 
currently being used at all, and should be the first

  ///  choice to be paged out.
    VeryLow   = 0x1,  ///< Lowest priority to keep in its preferred heap.
    Low   = 0x2,  ///< Low priority to keep in its preferred heap.
    Normal    = 0x3,  ///< Normal priority to keep in its preferred heap.
    High  = 0x4,  ///< High priority to keep in its preferred heap 
(e.g., render targets).
    VeryHigh  = 0x5,  ///< Highest priority to keep in its preferred 
heap.  Last choice to be paged out (e.g., page

  ///  tables, displayable allocations).
    Count
};
"

If we go with your idea, we will need to convert it again when hooking 
up the Linux implementation.

So what do you think, can we still use unsigned?





@@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
*data,
  
  	/* create a gem object to contain this object in */

r = amdgpu_gem_object_create(adev, args->size, 0, AMDGPU_GEM_DOMAIN_CPU,
+TTM_BO_PRIORITY_NORMAL,
 0, ttm_bo_type_device, NULL, &gobj);

Should the userptr ioctl also allow setting the priority?


We can.





diff --git 

Re: [PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Michel Dänzer
On 2019-03-07 10:15 a.m., Chunming Zhou wrote:
> Signed-off-by: Chunming Zhou 

Please provide corresponding UMD patches showing how this is to be used.


> @@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>   if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>   return -EINVAL;
>  
> + /* check priority */
> + if (args->in.priority == 0) {

Did you verify that this is 0 with old userspace compiled against struct
drm_amdgpu_gem_create_in without the priority field?


> + /* default is normal */
> + args->in.priority = TTM_BO_PRIORITY_NORMAL;
> + } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
> + args->in.priority = TTM_MAX_BO_PRIORITY;
> + DRM_ERROR("priority specified from user space is over MAX 
> priority\n");

This must be DRM_DEBUG, or buggy/malicious userspace can spam dmesg.


> @@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>  
>   r = amdgpu_gem_object_create(adev, size, args->in.alignment,
>(u32)(0xffffffff & args->in.domains),
> +  args->in.priority - 1,
>flags, ttm_bo_type_device, resv, &gobj);

It might be less confusing to subtract 1 after checking against
TTM_MAX_BO_PRIORITY instead of here. Still kind of confusing though. How
about this instead:

Make the priority field of struct drm_amdgpu_gem_create_in signed. In
amdgpu_gem_create_ioctl, clamp the priority to the supported range:

args->in.priority += TTM_BO_PRIORITY_NORMAL;
args->in.priority = max(args->in.priority, 0);
args->in.priority = min(args->in.priority,
TTM_MAX_BO_PRIORITY - 1);

This way userspace doesn't need to do a weird mapping of the priority
values (where 0 and 2 have the same meaning), and the range of supported
priorities could at least theoretically be extended without breaking
userspace.


> @@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>  
>   /* create a gem object to contain this object in */
>   r = amdgpu_gem_object_create(adev, args->size, 0, AMDGPU_GEM_DOMAIN_CPU,
> +  TTM_BO_PRIORITY_NORMAL,
>0, ttm_bo_type_device, NULL, &gobj);

Should the userptr ioctl also allow setting the priority?


> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index fd9c4beeaaa4..c85304e03021 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -494,8 +494,9 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
>  
>   bo->tbo.bdev = &adev->mman.bdev;
>   amdgpu_bo_placement_from_domain(bo, bp->domain);
> + bo->tbo.priority = bp->priority;
>   if (bp->type == ttm_bo_type_kernel)
> - bo->tbo.priority = 1;
> + bo->tbo.priority = TTM_BO_PRIORITY_VERYHIGH;

if (bp->type == ttm_bo_type_kernel)
bo->tbo.priority = TTM_BO_PRIORITY_VERYHIGH;
else
bo->tbo.priority = bp->priority;

would be clearer I think.


-- 
Earthling Michel Dänzer   |  https://www.amd.com
Libre software enthusiast | Mesa and X developer

[PATCH] drm/amdgpu: enable bo priority setting from user space

2019-03-07 Thread Chunming Zhou
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 13 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
 include/drm/ttm/ttm_bo_driver.h|  9 -
 include/uapi/drm/amdgpu_drm.h  |  3 +++
 7 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 5cbde74b97dd..70a6baf20c22 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -144,6 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
amdgpu_fbdev *rfbdev,
size = mode_cmd->pitches[0] * height;
aligned_size = ALIGN(size, PAGE_SIZE);
ret = amdgpu_gem_object_create(adev, aligned_size, 0, domain,
+  TTM_BO_PRIORITY_NORMAL,
   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
   AMDGPU_GEM_CREATE_VRAM_CLEARED,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index d21dd2f369da..7c1c2362c67e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -44,6 +44,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
 
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 int alignment, u32 initial_domain,
+enum ttm_bo_priority priority,
 u64 flags, enum ttm_bo_type type,
 struct reservation_object *resv,
 struct drm_gem_object **obj)
@@ -60,6 +61,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
bp.type = type;
bp.resv = resv;
bp.preferred_domain = initial_domain;
+   bp.priority = priority;
 retry:
bp.flags = flags;
bp.domain = initial_domain;
@@ -229,6 +231,14 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
return -EINVAL;
 
+   /* check priority */
+   if (args->in.priority == 0) {
+   /* default is normal */
+   args->in.priority = TTM_BO_PRIORITY_NORMAL;
+   } else if (args->in.priority > TTM_MAX_BO_PRIORITY) {
+   args->in.priority = TTM_MAX_BO_PRIORITY;
+   DRM_ERROR("priority specified from user space is over MAX 
priority\n");
+   }
/* create a gem object to contain this object in */
if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
@@ -252,6 +262,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
 
r = amdgpu_gem_object_create(adev, size, args->in.alignment,
 (u32)(0xffffffff & args->in.domains),
+args->in.priority - 1,
 flags, ttm_bo_type_device, resv, &gobj);
if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
if (!r) {
@@ -304,6 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
*data,
 
/* create a gem object to contain this object in */
r = amdgpu_gem_object_create(adev, args->size, 0, AMDGPU_GEM_DOMAIN_CPU,
+TTM_BO_PRIORITY_NORMAL,
 0, ttm_bo_type_device, NULL, &gobj);
if (r)
return r;
@@ -755,6 +767,7 @@ int amdgpu_mode_dumb_create(struct drm_file *file_priv,
domain = amdgpu_bo_get_preferred_pin_domain(adev,
amdgpu_display_supported_domains(adev));
r = amdgpu_gem_object_create(adev, args->size, 0, domain,
+TTM_BO_PRIORITY_NORMAL,
 AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED,
 ttm_bo_type_device, NULL, &gobj);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
index f1ddfc50bcc7..47b0a8190948 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.h
@@ -61,7 +61,7 @@ extern const struct dma_buf_ops amdgpu_dmabuf_ops;
  */
 void amdgpu_gem_force_release(struct amdgpu_device *adev);
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
-int alignment, u32 initial_domain,
+int alignment, u32 initial_domain, u32 priority,
 u64 flags, enum ttm_bo_type type,
 struct reservation_object *resv,

Re: [PATCH] remove amdgpu_vrr_atom

2019-03-07 Thread Michel Dänzer

Hi Flora,


note that xf86-video-amdgpu patches are reviewed as GitLab merge
requests these days, as documented in README.md:

https://gitlab.freedesktop.org/xorg/driver/xf86-video-amdgpu/merge_requests


On 2019-03-07 4:25 a.m., Cui, Flora wrote:
> it doesn't work as expected

Why is that? Maybe you ran into the Mesa bug fixed by
https://gitlab.freedesktop.org/mesa/mesa/commit/c0a540f32067cc8cb126d9aa1eb12a11cf15373a?merge_request_iid=202
, or a similar bug elsewhere?


-- 
Earthling Michel Dänzer   |  https://www.amd.com
Libre software enthusiast | Mesa and X developer