Find out DCE version

2018-08-21 Thread Paul Menzel

Dear Linux folks,


How do I easily determine, from a running system, which version of the DCE
(Display Core Engine?) the graphics device has?



Kind regards,

Paul
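
For what it's worth, the DCE generation is tied to the ASIC family, so one rough
way to narrow it down from user space is to read the card's PCI device ID and
match it against the family. A minimal sketch, assuming /sys/class/drm/card0 is
the amdgpu device; the ID-to-DCE entries below are illustrative examples only,
not a complete or authoritative mapping:

/* Sketch: read card0's PCI device ID and map it to a DCE generation.
 * The table entries are illustrative examples, not a full list. */
#include <stdio.h>

struct dce_entry { unsigned int id; const char *dce; };

static const struct dce_entry table[] = {
	{ 0x67DF, "DCE 11.2 (Polaris10)" },	/* example entry */
	{ 0x687F, "DCE 12.0 (Vega10)" },	/* example entry */
};

int main(void)
{
	FILE *f = fopen("/sys/class/drm/card0/device/device", "r");
	unsigned int id, i;

	if (!f || fscanf(f, "%x", &id) != 1) {
		perror("reading PCI device id");
		return 1;
	}
	fclose(f);

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].id == id) {
			printf("0x%04X -> %s\n", id, table[i].dce);
			return 0;
		}
	}
	printf("0x%04X: not in this (incomplete) table\n", id);
	return 0;
}

Anything not covered by such a table still has to be looked up by hand against
the driver sources.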


Re: [PATCH] drm/amdgpu: Fix page fault and kasan warning on pci device remove.

2018-08-21 Thread Paul Menzel

Dear Andrey,


Am 21.08.2018 um 23:23 schrieb Andrey Grodzovsky:

Problem:
When executing echo 1 > /sys/class/drm/card0/device/remove kasan warning
as bellow and page fault happen because adev->gart.pages already freed by the
time amdgpu_gart_unbind is called.

BUG: KASAN: user-memory-access in amdgpu_gart_unbind+0x98/0x180 [amdgpu]
Write of size 8 at addr 3648 by task bash/1828
CPU: 2 PID: 1828 Comm: bash Tainted: GW  O  4.18.0-rc1-dev+ #29
Hardware name: Gigabyte Technology Co., Ltd. AX370-Gaming/AX370-Gaming-CF, BIOS 
F3 06/19/2017
Call Trace:
dump_stack+0x71/0xab
kasan_report+0x109/0x390
amdgpu_gart_unbind+0x98/0x180 [amdgpu]
ttm_tt_unbind+0x43/0x60 [ttm]
ttm_bo_move_ttm+0x83/0x1c0 [ttm]
ttm_bo_handle_move_mem+0xb97/0xd00 [ttm]
ttm_bo_evict+0x273/0x530 [ttm]
ttm_mem_evict_first+0x29c/0x360 [ttm]
ttm_bo_force_list_clean+0xfc/0x210 [ttm]
ttm_bo_clean_mm+0xe7/0x160 [ttm]
amdgpu_ttm_fini+0xda/0x1d0 [amdgpu]
amdgpu_bo_fini+0xf/0x60 [amdgpu]
gmc_v8_0_sw_fini+0x36/0x70 [amdgpu]
amdgpu_device_fini+0x2d0/0x7d0 [amdgpu]
amdgpu_driver_unload_kms+0x6a/0xd0 [amdgpu]
drm_dev_unregister+0x79/0x180 [drm]
amdgpu_pci_remove+0x2a/0x60 [amdgpu]
pci_device_remove+0x5b/0x100
device_release_driver_internal+0x236/0x360
pci_stop_bus_device+0xbf/0xf0
pci_stop_and_remove_bus_device_locked+0x16/0x30
remove_store+0xda/0xf0
kernfs_fop_write+0x186/0x220
  __vfs_write+0xcc/0x330
vfs_write+0xe6/0x250
ksys_write+0xb1/0x140
do_syscall_64+0x77/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f66ebbb32c0

Fix:
Split gmc_v{6,7,8,9}_0_gart_fini to pospone amdgpu_gart_fini to after


pos*t*pone


memory managers are shut down since gart unbind happens
as part of this procudure.


proc*e*dure

Also, I wouldn’t put a dot at the end of the commit message summary.


Signed-off-by: Andrey Grodzovsky 
---
  1 |  0
  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c |  9 ++---
  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 16 ++--
  5 files changed, 8 insertions(+), 49 deletions(-)
  create mode 100644 1

diff --git a/1 b/1
new file mode 100644
index 000..e69de29


What happened here? Is the file `1` needed?

[…]


Kind regards,

Paul


[PATCH 2/3] drm/amdgpu: Don't use kiq in interrupt

2018-08-21 Thread Emily Deng
Don't use the KIQ in interrupt context, as the KIQ path might sleep.
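
A minimal user-space analogue of the rule this enforces (stand-in names, not the
driver code): the KIQ register-wait path polls with msleep() and may therefore
sleep, so a caller that can run in interrupt context has to take the direct MMIO
path instead, which is what the added !in_interrupt() check below selects.

/* Toy sketch, stand-in names only: pick the non-sleeping path when in
 * interrupt context, because the KIQ path sleeps between retries. */
#include <stdbool.h>
#include <stdio.h>

static bool in_irq_context;			/* stand-in for in_interrupt() */

static void kiq_reg_write_reg_wait(void)	/* may sleep: msleep() poll loop */
{
	puts("KIQ path: poll the fence, msleep() between retries");
}

static void mmio_reg_write(void)		/* never sleeps, always safe */
{
	puts("direct MMIO path");
}

static void flush_gpu_tlb(void)
{
	if (!in_irq_context)		/* mirrors the added !in_interrupt() check */
		kiq_reg_write_reg_wait();
	else
		mmio_reg_write();
}

int main(void)
{
	in_irq_context = true;		/* e.g. called from an interrupt handler */
	flush_gpu_tlb();
	in_irq_context = false;		/* normal process context */
	flush_gpu_tlb();
	return 0;
}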

Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index fcdbacb..f49f5f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -331,12 +331,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
 
r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
 
-   /* don't wait anymore for IRQ context */
-   if (r < 1 && in_interrupt())
-   goto failed_kiq;
-
-   might_sleep();
-
while (r < 1 && cnt++ < MAX_KIQ_REG_TRY) {
msleep(MAX_KIQ_REG_BAILOUT_INTERVAL);
r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
@@ -381,7 +375,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
 
if (adev->gfx.kiq.ring.ready &&
(amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev)) &&
-   !adev->in_gpu_reset) {
+   !adev->in_gpu_reset &&
+   !in_interrupt()) {
r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
if (!r)
-- 
2.7.4



[PATCH 3/3] drm/amdgpu: Use warn to replace error report

2018-08-21 Thread Emily Deng
When the KIQ flush fails, it can fall back to an MMIO flush, so don't report an
error, just a warning.

Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index f49f5f3..6214ad3 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -342,7 +342,7 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
return 0;
 
 failed_kiq:
-   pr_err("failed to invalidate tlb with kiq\n");
+   pr_warn("failed to invalidate tlb with kiq\n");
return r;
 }
 
-- 
2.7.4



[PATCH 1/3] drm/amdgpu: Don't use kiq in gpu reset

2018-08-21 Thread Emily Deng
When in GPU reset, don't use the KIQ, as it will generate more TDRs.

Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index eec991f..fcdbacb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -331,15 +331,8 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
 
r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
 
-   /* don't wait anymore for gpu reset case because this way may
-* block gpu_recover() routine forever, e.g. this virt_kiq_rreg
-* is triggered in TTM and ttm_bo_lock_delayed_workqueue() will
-* never return if we keep waiting in virt_kiq_rreg, which cause
-* gpu_recover() hang there.
-*
-* also don't wait anymore for IRQ context
-* */
-   if (r < 1 && (adev->in_gpu_reset || in_interrupt()))
+   /* don't wait anymore for IRQ context */
+   if (r < 1 && in_interrupt())
goto failed_kiq;
 
might_sleep();
@@ -387,8 +380,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
 
if (adev->gfx.kiq.ring.ready &&
-   (amdgpu_sriov_runtime(adev) ||
-!amdgpu_sriov_vf(adev))) {
+   (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev)) &&
+   !adev->in_gpu_reset) {
r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
if (!r)
-- 
2.7.4



Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Huang Rui
On Tue, Aug 21, 2018 at 03:54:28PM +0200, Christian König wrote:
> Am 21.08.2018 um 15:43 schrieb Huang Rui:
> >On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:
> >>Am 20.08.2018 um 08:05 schrieb Huang Rui:
> >>>On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:
> Am 17.08.2018 um 12:08 schrieb Huang Rui:
> >I continue to work for bulk moving that based on the proposal by 
> >Christian.
> >
> >Background:
> >amdgpu driver will move all PD/PT and PerVM BOs into idle list. Then 
> >move all of
> >them on the end of LRU list one by one. Thus, that cause so many BOs 
> >moved to
> >the end of the LRU, and impact performance seriously.
> >
> >Then Christian provided a workaround to not move PD/PT BOs on LRU with 
> >below
> >patch:
> >"drm/amdgpu: band aid validating VM PTs"
> >Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> >
> >However, the final solution should bulk move all PD/PT and PerVM BOs on 
> >the LRU
> >instead of one by one.
> >
> >Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which 
> >need to be
> >validated we move all BOs together to the end of the LRU without 
> >dropping the
> >lock for the LRU.
> >
> >While doing so we note the beginning and end of this block in the LRU 
> >list.
> >
> >Now when amdgpu_vm_validate_pt_bos() is called and we don't have 
> >anything to do,
> >we don't move every BO one by one, but instead cut the LRU list into 
> >pieces so
> >that we bulk move everything to the end in just one operation.
> >
> >Test data:
> >+--------------+-----------------+-----------+---------------------------------------+
> >|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> >|              |Principle(Vulkan)|           |                                       |
> >+--------------+-----------------+-----------+---------------------------------------+
> >|              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> >| Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> >+--------------+-----------------+-----------+---------------------------------------+
> >| Orignial + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> >|(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> >|PT BOs on LRU)|                 |           |                                       |
> >+--------------+-----------------+-----------+---------------------------------------+
> >| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> >|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> >+--------------+-----------------+-----------+---------------------------------------+
> >
> >After test them with above three benchmarks include vulkan and opencl. 
> >We can
> >see the visible improvement than original, and even better than original 
> >with
> >workaround.
> >
> >v2: move all BOs include idle, relocated, and moved list to the end of 
> >LRU and
> >put them together.
> >v3: remove unused parameter and use list_for_each_entry instead of the 
> >one with
> >save entry.
> >v4: move the amdgpu_vm_move_to_lru_tail after command submission, at 
> >that time,
> >all bo will be back on idle list.
> >
> >Signed-off-by: Christian König 
> >Signed-off-by: Huang Rui 
> >Tested-by: Mike Lothian 
> >Tested-by: Dieter Nützel 
> >Acked-by: Chunming Zhou 
> >---
> >drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 11 ++
> >drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 71 
> > ++
> >drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
> >3 files changed, 75 insertions(+), 18 deletions(-)
> >
> >diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> >b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >index 502b94f..9fbdf02 100644
> >--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >@@ -1260,6 +1260,16 @@ static int amdgpu_cs_submit(struct 
> >amdgpu_cs_parser *p,
> > return 0;
> >}
> >+static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> >+ struct amdgpu_cs_parser *p)
> >+{
> >+struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> >+struct amdgpu_vm *vm = &fpriv->vm;
> >+
> >+if (vm->validated)
> That check belongs inside amdgpu_vm_move_to_lru_tail().
> 
> >+amdgpu_vm_move_to_lru_tail(adev, vm);
> >+}
> >+
> >int 

Re: [PATCH] drm/amdgpu: Adjust the VM size based on system memory size

2018-08-21 Thread Zhang, Jerry (Junwei)

On 08/22/2018 05:25 AM, Felix Kuehling wrote:

Set the VM size based on system memory size between the ASIC-specific
limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their
default VM size of 256TB (48 bit). Only older GPUs will adjust VM size
depending on system memory size.

This makes more VM space available for ROCm applications on GFXv8 GPUs
that want to map all available VRAM and system memory in their SVM
address space.

Signed-off-by: Felix Kuehling 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 26 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 +-
  2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 662e8a3..48971cc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2482,28 +2482,46 @@ static uint32_t amdgpu_vm_get_block_size(uint64_t 
vm_size)
   * amdgpu_vm_adjust_size - adjust vm size, block size and fragment size
   *
   * @adev: amdgpu_device pointer
- * @vm_size: the default vm size if it's set auto
+ * @min_vm_size: the minimum vm size in GB if it's set auto
   * @fragment_size_default: Default PTE fragment size
   * @max_level: max VMPT level
   * @max_bits: max address space size in bits
   *
   */
-void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
+void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
   uint32_t fragment_size_default, unsigned max_level,
   unsigned max_bits)
  {
+   unsigned int max_size = 1 << (max_bits - 30);
+   unsigned int vm_size;
uint64_t tmp;

/* adjust vm size first */
if (amdgpu_vm_size != -1) {
-   unsigned max_size = 1 << (max_bits - 30);
-
vm_size = amdgpu_vm_size;
if (vm_size > max_size) {
dev_warn(adev->dev, "VM size (%d) too large, max is %u 
GB\n",
 amdgpu_vm_size, max_size);
vm_size = max_size;
}
+   } else {
+   struct sysinfo si;
+   unsigned int phys_ram_gb;
+
+   /* Optimal VM size depends on the amount of physical
+* RAM available. Underlying requirements and
+* assumptions:
+*
+*  - Need to map system memory and VRAM from all GPUs
+* - VRAM from other GPUs not known here
+* - Assume VRAM <= system memory
+*  - On GFX8 and older, VM space can be segmented for
+*different MTYPEs
+*  - Need to allow room for fragmentation, guard pages etc.
+*/
+   si_meminfo(&si);
+   phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit) >> 30;
+   vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size);


The patch looks good to me.
Curious about the constant "3": is it an experimental value, or is there any
verification for it?

Regards,
Jerry


}

adev->vm_manager.max_pfn = (uint64_t)vm_size << 18;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1162c2b..ab1d23e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -345,7 +345,7 @@ struct amdgpu_bo_va_mapping 
*amdgpu_vm_bo_lookup_mapping(struct amdgpu_vm *vm,
  void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct ww_acquire_ctx 
*ticket);
  void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
  struct amdgpu_bo_va *bo_va);
-void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
+void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
   uint32_t fragment_size_default, unsigned max_level,
   unsigned max_bits);
  int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file 
*filp);




Re: [PATCH] drm/amdgpu: Fix page fault and kasan warning on pci device remove.

2018-08-21 Thread Zhang, Jerry (Junwei)

On 08/22/2018 05:23 AM, Andrey Grodzovsky wrote:

Problem:
When executing echo 1 > /sys/class/drm/card0/device/remove kasan warning
as bellow and page fault happen because adev->gart.pages already freed by the
time amdgpu_gart_unbind is called.

BUG: KASAN: user-memory-access in amdgpu_gart_unbind+0x98/0x180 [amdgpu]
Write of size 8 at addr 3648 by task bash/1828
CPU: 2 PID: 1828 Comm: bash Tainted: GW  O  4.18.0-rc1-dev+ #29
Hardware name: Gigabyte Technology Co., Ltd. AX370-Gaming/AX370-Gaming-CF, BIOS 
F3 06/19/2017
Call Trace:
dump_stack+0x71/0xab
kasan_report+0x109/0x390
amdgpu_gart_unbind+0x98/0x180 [amdgpu]
ttm_tt_unbind+0x43/0x60 [ttm]
ttm_bo_move_ttm+0x83/0x1c0 [ttm]
ttm_bo_handle_move_mem+0xb97/0xd00 [ttm]
ttm_bo_evict+0x273/0x530 [ttm]
ttm_mem_evict_first+0x29c/0x360 [ttm]
ttm_bo_force_list_clean+0xfc/0x210 [ttm]
ttm_bo_clean_mm+0xe7/0x160 [ttm]
amdgpu_ttm_fini+0xda/0x1d0 [amdgpu]
amdgpu_bo_fini+0xf/0x60 [amdgpu]
gmc_v8_0_sw_fini+0x36/0x70 [amdgpu]
amdgpu_device_fini+0x2d0/0x7d0 [amdgpu]
amdgpu_driver_unload_kms+0x6a/0xd0 [amdgpu]
drm_dev_unregister+0x79/0x180 [drm]
amdgpu_pci_remove+0x2a/0x60 [amdgpu]
pci_device_remove+0x5b/0x100
device_release_driver_internal+0x236/0x360
pci_stop_bus_device+0xbf/0xf0
pci_stop_and_remove_bus_device_locked+0x16/0x30
remove_store+0xda/0xf0
kernfs_fop_write+0x186/0x220
  __vfs_write+0xcc/0x330
vfs_write+0xe6/0x250
ksys_write+0xb1/0x140
do_syscall_64+0x77/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f66ebbb32c0

Fix:
Split gmc_v{6,7,8,9}_0_gart_fini to pospone amdgpu_gart_fini to after
memory managers are shut down since gart unbind happens
as part of this procudure.

Signed-off-by: Andrey Grodzovsky 

Reviewed-by: Junwei Zhang 


---
  1 |  0
  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c |  9 ++---
  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 16 ++--
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 16 ++--
  5 files changed, 8 insertions(+), 49 deletions(-)
  create mode 100644 1

diff --git a/1 b/1
new file mode 100644
index 000..e69de29
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index c14cf1c..0a0a4dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -633,12 +633,6 @@ static void gmc_v6_0_gart_disable(struct amdgpu_device 
*adev)
amdgpu_gart_table_vram_unpin(adev);
  }

-static void gmc_v6_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
  static void gmc_v6_0_vm_decode_fault(struct amdgpu_device *adev,
 u32 status, u32 addr, u32 mc_client)
  {
@@ -936,8 +930,9 @@ static int gmc_v6_0_sw_fini(void *handle)

amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
-   gmc_v6_0_gart_fini(adev);
+   amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);
+   amdgpu_gart_fini(adev);
release_firmware(adev->gmc.fw);
adev->gmc.fw = NULL;

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
index 0c3a161..afbadfc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -750,19 +750,6 @@ static void gmc_v7_0_gart_disable(struct amdgpu_device 
*adev)
  }

  /**
- * gmc_v7_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v7_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
-/**
   * gmc_v7_0_vm_decode_fault - print human readable fault info
   *
   * @adev: amdgpu_device pointer
@@ -1091,8 +1078,9 @@ static int gmc_v7_0_sw_fini(void *handle)

amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
-   gmc_v7_0_gart_fini(adev);
+   amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);
+   amdgpu_gart_fini(adev);
release_firmware(adev->gmc.fw);
adev->gmc.fw = NULL;

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 274c932..d871dae 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -969,19 +969,6 @@ static void gmc_v8_0_gart_disable(struct amdgpu_device 
*adev)
  }

  /**
- * gmc_v8_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v8_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
-/**
   * gmc_v8_0_vm_decode_fault - print human readable fault info
   *
   * @adev: amdgpu_device pointer
@@ -1192,8 +1179,9 @@ static int gmc_v8_0_sw_fini(void *handle)

amdgpu_gem_force_release(adev);

Re: [PATCH] drm/amdgpu: Adjust the VM size based on system memory size

2018-08-21 Thread Felix Kuehling
This patch is meant for amd-staging-drm-next. It's part of my effort to
reduce differences between kfd-staging and upstream and eventually
replace all our memory manager hacks with upstream solutions.

This commit should only affect GFXv8 and older. It should not have any
negative impact, except bigger GPUVM page tables on systems with lots of
memory. Conversely, this could actually allow making the GPUVM page
tables smaller on systems with little memory, if we wanted to take
advantage of that opportunity.

The current minimum is 64GB, which I believe means 32KB page tables and
page directories. The smallest VM size that makes sense with 2-levels of
4KB page tables would be 1GB. If we really want to go that small, I
should also factor the local VRAM size into the equation.

Regards,
  Felix
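
To make the sizing rule concrete, here is a small standalone sketch of the
computation in the patch quoted below; the min_vm_size = 64 GB and max_bits = 40
parameters are assumptions picked for illustration, not values taken from a
specific ASIC:

/* Standalone sketch of the auto VM sizing rule:
 *   vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size)
 * The parameters used in main() are illustrative assumptions. */
#include <stdio.h>

static unsigned int adjust_vm_size(unsigned int phys_ram_gb,
				   unsigned int min_vm_size_gb,
				   unsigned int max_bits)
{
	unsigned int max_size = 1u << (max_bits - 30);	/* address space in GB */
	unsigned int vm_size = phys_ram_gb * 3;

	if (vm_size < min_vm_size_gb)
		vm_size = min_vm_size_gb;
	if (vm_size > max_size)
		vm_size = max_size;
	return vm_size;
}

int main(void)
{
	unsigned int ram_gb[] = { 8, 32, 256 };
	unsigned int i;

	for (i = 0; i < sizeof(ram_gb) / sizeof(ram_gb[0]); i++)
		printf("%3u GB RAM -> %4u GB VM space (min 64 GB, max 1024 GB)\n",
		       ram_gb[i], adjust_vm_size(ram_gb[i], 64, 40));
	/* 8 GB -> 64 GB (clamped up), 32 GB -> 96 GB, 256 GB -> 768 GB */
	return 0;
}

So an 8 GB machine is clamped up to the 64 GB minimum, a 32 GB machine gets
96 GB of VM space, and very large systems are capped by the address-space limit.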


On 2018-08-21 05:25 PM, Felix Kuehling wrote:
> Set the VM size based on system memory size between the ASIC-specific
> limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their
> default VM size of 256TB (48 bit). Only older GPUs will adjust VM size
> depending on system memory size.
>
> This makes more VM space available for ROCm applications on GFXv8 GPUs
> that want to map all available VRAM and system memory in their SVM
> address space.
>
> Signed-off-by: Felix Kuehling 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 26 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 +-
>  2 files changed, 23 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 662e8a3..48971cc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2482,28 +2482,46 @@ static uint32_t amdgpu_vm_get_block_size(uint64_t 
> vm_size)
>   * amdgpu_vm_adjust_size - adjust vm size, block size and fragment size
>   *
>   * @adev: amdgpu_device pointer
> - * @vm_size: the default vm size if it's set auto
> + * @min_vm_size: the minimum vm size in GB if it's set auto
>   * @fragment_size_default: Default PTE fragment size
>   * @max_level: max VMPT level
>   * @max_bits: max address space size in bits
>   *
>   */
> -void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
> +void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>  uint32_t fragment_size_default, unsigned max_level,
>  unsigned max_bits)
>  {
> + unsigned int max_size = 1 << (max_bits - 30);
> + unsigned int vm_size;
>   uint64_t tmp;
>  
>   /* adjust vm size first */
>   if (amdgpu_vm_size != -1) {
> - unsigned max_size = 1 << (max_bits - 30);
> -
>   vm_size = amdgpu_vm_size;
>   if (vm_size > max_size) {
>   dev_warn(adev->dev, "VM size (%d) too large, max is %u 
> GB\n",
>amdgpu_vm_size, max_size);
>   vm_size = max_size;
>   }
> + } else {
> + struct sysinfo si;
> + unsigned int phys_ram_gb;
> +
> + /* Optimal VM size depends on the amount of physical
> +  * RAM available. Underlying requirements and
> +  * assumptions:
> +  *
> +  *  - Need to map system memory and VRAM from all GPUs
> +  * - VRAM from other GPUs not known here
> +  * - Assume VRAM <= system memory
> +  *  - On GFX8 and older, VM space can be segmented for
> +  *different MTYPEs
> +  *  - Need to allow room for fragmentation, guard pages etc.
> +  */
> + si_meminfo(&si);
> + phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit) >> 30;
> + vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size);
>   }
>  
>   adev->vm_manager.max_pfn = (uint64_t)vm_size << 18;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 1162c2b..ab1d23e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -345,7 +345,7 @@ struct amdgpu_bo_va_mapping 
> *amdgpu_vm_bo_lookup_mapping(struct amdgpu_vm *vm,
>  void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct ww_acquire_ctx 
> *ticket);
>  void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
> struct amdgpu_bo_va *bo_va);
> -void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
> +void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
>  uint32_t fragment_size_default, unsigned max_level,
>  unsigned max_bits);
>  int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file 
> *filp);



[PATCH] drm/amdgpu: Adjust the VM size based on system memory size

2018-08-21 Thread Felix Kuehling
Set the VM size based on system memory size between the ASIC-specific
limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their
default VM size of 256TB (48 bit). Only older GPUs will adjust VM size
depending on system memory size.

This makes more VM space available for ROCm applications on GFXv8 GPUs
that want to map all available VRAM and system memory in their SVM
address space.

Signed-off-by: Felix Kuehling 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 26 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  2 +-
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 662e8a3..48971cc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2482,28 +2482,46 @@ static uint32_t amdgpu_vm_get_block_size(uint64_t 
vm_size)
  * amdgpu_vm_adjust_size - adjust vm size, block size and fragment size
  *
  * @adev: amdgpu_device pointer
- * @vm_size: the default vm size if it's set auto
+ * @min_vm_size: the minimum vm size in GB if it's set auto
  * @fragment_size_default: Default PTE fragment size
  * @max_level: max VMPT level
  * @max_bits: max address space size in bits
  *
  */
-void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
+void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
   uint32_t fragment_size_default, unsigned max_level,
   unsigned max_bits)
 {
+   unsigned int max_size = 1 << (max_bits - 30);
+   unsigned int vm_size;
uint64_t tmp;
 
/* adjust vm size first */
if (amdgpu_vm_size != -1) {
-   unsigned max_size = 1 << (max_bits - 30);
-
vm_size = amdgpu_vm_size;
if (vm_size > max_size) {
dev_warn(adev->dev, "VM size (%d) too large, max is %u 
GB\n",
 amdgpu_vm_size, max_size);
vm_size = max_size;
}
+   } else {
+   struct sysinfo si;
+   unsigned int phys_ram_gb;
+
+   /* Optimal VM size depends on the amount of physical
+* RAM available. Underlying requirements and
+* assumptions:
+*
+*  - Need to map system memory and VRAM from all GPUs
+* - VRAM from other GPUs not known here
+* - Assume VRAM <= system memory
+*  - On GFX8 and older, VM space can be segmented for
+*different MTYPEs
+*  - Need to allow room for fragmentation, guard pages etc.
+*/
+   si_meminfo(&si);
+   phys_ram_gb = ((uint64_t)si.totalram * si.mem_unit) >> 30;
+   vm_size = min(max(phys_ram_gb * 3, min_vm_size), max_size);
}
 
adev->vm_manager.max_pfn = (uint64_t)vm_size << 18;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 1162c2b..ab1d23e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -345,7 +345,7 @@ struct amdgpu_bo_va_mapping 
*amdgpu_vm_bo_lookup_mapping(struct amdgpu_vm *vm,
 void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct ww_acquire_ctx 
*ticket);
 void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
  struct amdgpu_bo_va *bo_va);
-void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t vm_size,
+void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
   uint32_t fragment_size_default, unsigned max_level,
   unsigned max_bits);
 int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
-- 
2.7.4



[PATCH] drm/amdgpu: Fix page fault and kasan warning on pci device remove.

2018-08-21 Thread Andrey Grodzovsky
Problem:
When executing echo 1 > /sys/class/drm/card0/device/remove kasan warning
as bellow and page fault happen because adev->gart.pages already freed by the
time amdgpu_gart_unbind is called.

BUG: KASAN: user-memory-access in amdgpu_gart_unbind+0x98/0x180 [amdgpu]
Write of size 8 at addr 3648 by task bash/1828
CPU: 2 PID: 1828 Comm: bash Tainted: GW  O  4.18.0-rc1-dev+ #29
Hardware name: Gigabyte Technology Co., Ltd. AX370-Gaming/AX370-Gaming-CF, BIOS 
F3 06/19/2017
Call Trace:
dump_stack+0x71/0xab
kasan_report+0x109/0x390
amdgpu_gart_unbind+0x98/0x180 [amdgpu]
ttm_tt_unbind+0x43/0x60 [ttm]
ttm_bo_move_ttm+0x83/0x1c0 [ttm]
ttm_bo_handle_move_mem+0xb97/0xd00 [ttm]
ttm_bo_evict+0x273/0x530 [ttm]
ttm_mem_evict_first+0x29c/0x360 [ttm]
ttm_bo_force_list_clean+0xfc/0x210 [ttm]
ttm_bo_clean_mm+0xe7/0x160 [ttm]
amdgpu_ttm_fini+0xda/0x1d0 [amdgpu]
amdgpu_bo_fini+0xf/0x60 [amdgpu]
gmc_v8_0_sw_fini+0x36/0x70 [amdgpu]
amdgpu_device_fini+0x2d0/0x7d0 [amdgpu]
amdgpu_driver_unload_kms+0x6a/0xd0 [amdgpu]
drm_dev_unregister+0x79/0x180 [drm]
amdgpu_pci_remove+0x2a/0x60 [amdgpu]
pci_device_remove+0x5b/0x100
device_release_driver_internal+0x236/0x360
pci_stop_bus_device+0xbf/0xf0
pci_stop_and_remove_bus_device_locked+0x16/0x30
remove_store+0xda/0xf0
kernfs_fop_write+0x186/0x220
 __vfs_write+0xcc/0x330
vfs_write+0xe6/0x250
ksys_write+0xb1/0x140
do_syscall_64+0x77/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f66ebbb32c0

Fix:
Split gmc_v{6,7,8,9}_0_gart_fini to pospone amdgpu_gart_fini to after
memory managers are shut down since gart unbind happens
as part of this procudure.
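
To illustrate the ordering issue with a toy standalone sketch (stand-in names,
not the driver code, and not part of this patch): freeing the GART page array
before the memory-manager teardown that still unbinds from it leaves that
teardown touching freed memory, so the free has to move after it.

/* Toy use-after-free sketch of the teardown ordering (stand-in names):
 * free the gart page array only after the step that may still unbind
 * pages from it. */
#include <stdio.h>
#include <stdlib.h>

struct gart { void **pages; };

static void gart_unbind(struct gart *g)
{
	/* would dereference g->pages; KASAN/page fault if already freed */
	if (!g->pages)
		puts("gart_unbind: pages already gone, would fault here");
	else
		puts("gart_unbind: ok");
}

static void gart_fini(struct gart *g) { free(g->pages); g->pages = NULL; }

static void bo_fini(struct gart *g)	/* evicts BOs, which unbinds GART */
{
	gart_unbind(g);
}

int main(void)
{
	struct gart g = { .pages = calloc(16, sizeof(void *)) };

	/* fixed order: tear down the memory manager first, then the GART */
	bo_fini(&g);
	gart_fini(&g);
	return 0;
}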

Signed-off-by: Andrey Grodzovsky 
---
 1 |  0
 drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c |  9 ++---
 drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 16 ++--
 drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 16 ++--
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 16 ++--
 5 files changed, 8 insertions(+), 49 deletions(-)
 create mode 100644 1

diff --git a/1 b/1
new file mode 100644
index 000..e69de29
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
index c14cf1c..0a0a4dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
@@ -633,12 +633,6 @@ static void gmc_v6_0_gart_disable(struct amdgpu_device 
*adev)
amdgpu_gart_table_vram_unpin(adev);
 }
 
-static void gmc_v6_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
 static void gmc_v6_0_vm_decode_fault(struct amdgpu_device *adev,
 u32 status, u32 addr, u32 mc_client)
 {
@@ -936,8 +930,9 @@ static int gmc_v6_0_sw_fini(void *handle)
 
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
-   gmc_v6_0_gart_fini(adev);
+   amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);
+   amdgpu_gart_fini(adev);
release_firmware(adev->gmc.fw);
adev->gmc.fw = NULL;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
index 0c3a161..afbadfc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -750,19 +750,6 @@ static void gmc_v7_0_gart_disable(struct amdgpu_device 
*adev)
 }
 
 /**
- * gmc_v7_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v7_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
-/**
  * gmc_v7_0_vm_decode_fault - print human readable fault info
  *
  * @adev: amdgpu_device pointer
@@ -1091,8 +1078,9 @@ static int gmc_v7_0_sw_fini(void *handle)
 
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
-   gmc_v7_0_gart_fini(adev);
+   amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);
+   amdgpu_gart_fini(adev);
release_firmware(adev->gmc.fw);
adev->gmc.fw = NULL;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 274c932..d871dae 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -969,19 +969,6 @@ static void gmc_v8_0_gart_disable(struct amdgpu_device 
*adev)
 }
 
 /**
- * gmc_v8_0_gart_fini - vm fini callback
- *
- * @adev: amdgpu_device pointer
- *
- * Tears down the driver GART/VM setup (CIK).
- */
-static void gmc_v8_0_gart_fini(struct amdgpu_device *adev)
-{
-   amdgpu_gart_table_vram_free(adev);
-   amdgpu_gart_fini(adev);
-}
-
-/**
  * gmc_v8_0_vm_decode_fault - print human readable fault info
  *
  * @adev: amdgpu_device pointer
@@ -1192,8 +1179,9 @@ static int gmc_v8_0_sw_fini(void *handle)
 
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
-   gmc_v8_0_gart_fini(adev);
+   

Re: [PATCH] drm/amdgpu/display: disable eDP fast boot optimization on DCE8

2018-08-21 Thread Harry Wentland
On 2018-08-21 03:47 PM, Alex Deucher wrote:
> On Tue, Aug 21, 2018 at 3:38 PM Harry Wentland  wrote:
>>
>> On 2018-08-16 04:38 PM, Alex Deucher wrote:
>>> Seems to cause blank screens.
>>>
>>> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106940
>>> Signed-off-by: Alex Deucher 
>>
>> Your SOB is using a different email from the author. Mind if I change it to 
>> your gmail?
> 
> Weird.  not sure what happened there.  I'd prefer you just override
> the author to my amd address.
> 

Done and pushed.

It looks like you sent the patch from your gmail, which is why that's shown as 
author.

Harry

> Alex
> 
>>
>> Harry
>>
>>> ---
>>>  drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 8 +++-
>>>  1 file changed, 7 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
>>> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
>>> index 350ee3e3e34d..2e3f85ceeaa9 100644
>>> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
>>> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
>>> @@ -1562,7 +1562,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, 
>>> struct dc_state *context)
>>>   bool can_eDP_fast_boot_optimize = false;
>>>
>>>   if (edp_link) {
>>> - can_eDP_fast_boot_optimize =
>>> + /* this seems to cause blank screens on DCE8 */
>>> + if ((dc->ctx->dce_version == DCE_VERSION_8_0) ||
>>> + (dc->ctx->dce_version == DCE_VERSION_8_1) ||
>>> + (dc->ctx->dce_version == DCE_VERSION_8_3))
>>> + can_eDP_fast_boot_optimize = false;
>>> + else
>>> + can_eDP_fast_boot_optimize =
>>>   
>>> edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc);
>>>   }
>>>
>>>


Re: [PATCH] drm/amdgpu/display: add support for LVDS (v5)

2018-08-21 Thread Harry Wentland
On 2018-08-21 03:45 PM, Alex Deucher wrote:
> On Tue, Aug 21, 2018 at 3:36 PM Harry Wentland  wrote:
>>
>>
>>
>> On 2018-08-16 09:31 AM, Alex Deucher wrote:
>>> This adds support for LVDS displays.
>>>
>>> v2: add support for spread spectrum, sink detect
>>> v3: clean up enable_lvds_output
>>> v4: fix up link_detect
>>> v5: remove assert on 888 format
>>>
>>> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=105880
>>> Signed-off-by: Alex Deucher 
>>> ---
>>>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  2 +
>>>  drivers/gpu/drm/amd/display/dc/core/dc_link.c  | 45 
>>> ++
>>>  .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  | 10 +
>>>  .../gpu/drm/amd/display/dc/dce/dce_clock_source.h  |  2 +
>>>  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.c  | 34 
>>>  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |  6 +++
>>>  .../drm/amd/display/dc/dce/dce_stream_encoder.c| 24 
>>>  .../gpu/drm/amd/display/dc/inc/hw/link_encoder.h   |  3 ++
>>>  .../gpu/drm/amd/display/dc/inc/hw/stream_encoder.h |  4 ++
>>>  drivers/gpu/drm/amd/display/include/signal_types.h |  5 +++
>>>  10 files changed, 135 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
>>> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> index 1828e4382b24..818b5ec32f37 100644
>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> @@ -3360,6 +3360,8 @@ static int to_drm_connector_type(enum signal_type st)
>>>   return DRM_MODE_CONNECTOR_HDMIA;
>>>   case SIGNAL_TYPE_EDP:
>>>   return DRM_MODE_CONNECTOR_eDP;
>>> + case SIGNAL_TYPE_LVDS:
>>> + return DRM_MODE_CONNECTOR_LVDS;
>>>   case SIGNAL_TYPE_RGB:
>>>   return DRM_MODE_CONNECTOR_VGA;
>>>   case SIGNAL_TYPE_DISPLAY_PORT:
>>> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
>>> b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
>>> index 981f7cbd31cc..0f044fd5baf4 100644
>>> --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
>>> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
>>> @@ -203,6 +203,11 @@ static bool detect_sink(struct dc_link *link, enum 
>>> dc_connection_type *type)
>>>   uint32_t is_hpd_high = 0;
>>>   struct gpio *hpd_pin;
>>>
>>> + if (link->connector_signal == SIGNAL_TYPE_LVDS) {
>>> + *type = dc_connection_single;
>>> + return true;
>>> + }
>>> +
>>
>> Would it be better to still do HPD detection? Or is this failing here?
> 
> It fails there.  I don't think LVDS normally has a HPD pin assigned.
> There's no hotplug per se.
> 

Ah, I thought so. We can take care of this if it ever becomes an issue. It 
looks like DAL2 code still checked HPD but I only had a cursory look.

>>
>>>   /* todo: may need to lock gpio access */
>>>   hpd_pin = get_hpd_gpio(link->ctx->dc_bios, link->link_id, 
>>> link->ctx->gpio_service);
>>>   if (hpd_pin == NULL)
>>> @@ -616,6 +621,10 @@ bool dc_link_detect(struct dc_link *link, enum 
>>> dc_detect_reason reason)
>>>   link->local_sink)
>>>   return true;
>>>
>>> + if (link->connector_signal == SIGNAL_TYPE_LVDS &&
>>> + link->local_sink)
>>> + return true;
>>> +
>>>   prev_sink = link->local_sink;
>>>   if (prev_sink != NULL) {
>>>   dc_sink_retain(prev_sink);
>>> @@ -649,6 +658,12 @@ bool dc_link_detect(struct dc_link *link, enum 
>>> dc_detect_reason reason)
>>>   break;
>>>   }
>>>
>>> + case SIGNAL_TYPE_LVDS: {
>>> + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C;
>>> + sink_caps.signal = SIGNAL_TYPE_LVDS;
>>> + break;
>>> + }
>>> +
>>>   case SIGNAL_TYPE_EDP: {
>>>   detect_edp_sink_caps(link);
>>>   sink_caps.transaction_type =
>>> @@ -1087,6 +1102,9 @@ static bool construct(
>>>   dal_irq_get_rx_source(hpd_gpio);
>>>   }
>>>   break;
>>> + case CONNECTOR_ID_LVDS:
>>> + link->connector_signal = SIGNAL_TYPE_LVDS;
>>> + break;
>>>   default:
>>>   DC_LOG_WARNING("Unsupported Connector type:%d!\n", 
>>> link->link_id.id);
>>>   goto create_fail;
>>> @@ -1920,6 +1938,24 @@ static void enable_link_hdmi(struct pipe_ctx 
>>> *pipe_ctx)
>>>   dal_ddc_service_read_scdc_data(link->ddc);
>>>  }
>>>
>>> +static void enable_link_lvds(struct pipe_ctx *pipe_ctx)
>>> +{
>>> + struct dc_stream_state *stream = pipe_ctx->stream;
>>> + struct dc_link *link = stream->sink->link;
>>> +
>>> + if (stream->phy_pix_clk == 0)
>>> + stream->phy_pix_clk = stream->timing.pix_clk_khz;
>>> +
>>> + memset(&stream->sink->link->cur_link_settings, 0,
>>> +

Re: [PATCH] drm/amdgpu: fix sdma doorbell range setting

2018-08-21 Thread Deucher, Alexander
Yes, this is for amd-staging-drm-next and ultimately upstream.


Alex


From: Kuehling, Felix
Sent: Tuesday, August 21, 2018 4:10:07 PM
To: Quan, Evan; amd-gfx@lists.freedesktop.org; Russell, Kent
Cc: Deucher, Alexander; Zhang, Hawking
Subject: Re: [PATCH] drm/amdgpu: fix sdma doorbell range setting

[+Kent]

Which branch is this for?

amd-staging-drm-next doesn't have Shaoyun's other changes for changing
the doorbell layout, so this change looks reasonable for
amd-staging-drm-next.

Kent, we should not merge this change into amd-kfd-staging, because it
would break KFD's 8 SDMA queues per engine. I'm preparing to upstream the
Vega20 changes for KFD. That will include reverting this commit.

Regards,
  Felix

On 2018-08-21 04:21 AM, Evan Quan wrote:
> Use the old doorbell range setting until the driver is
> able to support more sdma queues.
>
> Change-Id: I80fc067fc64878d3c7dc3d38bbe1c6c94bec397f
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
> b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> index 89ea92075b6b..2e65447637c6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> @@ -76,7 +76,7 @@ static void nbio_v7_4_sdma_doorbell_range(struct 
> amdgpu_device *adev, int instan
>
>if (use_doorbell) {
>doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
> - doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 8);
> + doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
>} else
>doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);
>



Re: [PATCH] drm/amdgpu: fix sdma doorbell range setting

2018-08-21 Thread Felix Kuehling
[+Kent]

Which branch is this for?

amd-staging-drm-next doesn't have Shaoyun's other changes for changing
the doorbell layout, so this change looks reasonable for
amd-staging-drm-next.

Kent, we should not merge this change into amd-kfd-staging, because it
would break KFD's 8 SDMA queues per engine. I'm preparing to upstream the
Vega20 changes for KFD. That will include reverting this commit.

Regards,
  Felix

On 2018-08-21 04:21 AM, Evan Quan wrote:
> Use the old doorbell range setting until the driver is
> able to support more sdma queues.
>
> Change-Id: I80fc067fc64878d3c7dc3d38bbe1c6c94bec397f
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
> b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> index 89ea92075b6b..2e65447637c6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
> @@ -76,7 +76,7 @@ static void nbio_v7_4_sdma_doorbell_range(struct 
> amdgpu_device *adev, int instan
>  
>   if (use_doorbell) {
>   doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
> - doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 8);
> + doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
>   } else
>   doorbell_range = REG_SET_FIELD(doorbell_range, 
> BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);
>  



Re: [PATCH] drm/amdgpu/display: disable eDP fast boot optimization on DCE8

2018-08-21 Thread Alex Deucher
On Tue, Aug 21, 2018 at 3:38 PM Harry Wentland  wrote:
>
> On 2018-08-16 04:38 PM, Alex Deucher wrote:
> > Seems to cause blank screens.
> >
> > Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106940
> > Signed-off-by: Alex Deucher 
>
> Your SOB is using a different email from the author. Mind if I change it to 
> your gmail?

Weird.  not sure what happened there.  I'd prefer you just override
the author to my amd address.

Alex

>
> Harry
>
> > ---
> >  drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 8 +++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
> > b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> > index 350ee3e3e34d..2e3f85ceeaa9 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> > @@ -1562,7 +1562,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, 
> > struct dc_state *context)
> >   bool can_eDP_fast_boot_optimize = false;
> >
> >   if (edp_link) {
> > - can_eDP_fast_boot_optimize =
> > + /* this seems to cause blank screens on DCE8 */
> > + if ((dc->ctx->dce_version == DCE_VERSION_8_0) ||
> > + (dc->ctx->dce_version == DCE_VERSION_8_1) ||
> > + (dc->ctx->dce_version == DCE_VERSION_8_3))
> > + can_eDP_fast_boot_optimize = false;
> > + else
> > + can_eDP_fast_boot_optimize =
> >   
> > edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc);
> >   }
> >
> >


Re: [PATCH] drm/amdgpu/display: add support for LVDS (v5)

2018-08-21 Thread Alex Deucher
On Tue, Aug 21, 2018 at 3:36 PM Harry Wentland  wrote:
>
>
>
> On 2018-08-16 09:31 AM, Alex Deucher wrote:
> > This adds support for LVDS displays.
> >
> > v2: add support for spread spectrum, sink detect
> > v3: clean up enable_lvds_output
> > v4: fix up link_detect
> > v5: remove assert on 888 format
> >
> > Bug: https://bugs.freedesktop.org/show_bug.cgi?id=105880
> > Signed-off-by: Alex Deucher 
> > ---
> >  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  2 +
> >  drivers/gpu/drm/amd/display/dc/core/dc_link.c  | 45 
> > ++
> >  .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  | 10 +
> >  .../gpu/drm/amd/display/dc/dce/dce_clock_source.h  |  2 +
> >  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.c  | 34 
> >  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |  6 +++
> >  .../drm/amd/display/dc/dce/dce_stream_encoder.c| 24 
> >  .../gpu/drm/amd/display/dc/inc/hw/link_encoder.h   |  3 ++
> >  .../gpu/drm/amd/display/dc/inc/hw/stream_encoder.h |  4 ++
> >  drivers/gpu/drm/amd/display/include/signal_types.h |  5 +++
> >  10 files changed, 135 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index 1828e4382b24..818b5ec32f37 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -3360,6 +3360,8 @@ static int to_drm_connector_type(enum signal_type st)
> >   return DRM_MODE_CONNECTOR_HDMIA;
> >   case SIGNAL_TYPE_EDP:
> >   return DRM_MODE_CONNECTOR_eDP;
> > + case SIGNAL_TYPE_LVDS:
> > + return DRM_MODE_CONNECTOR_LVDS;
> >   case SIGNAL_TYPE_RGB:
> >   return DRM_MODE_CONNECTOR_VGA;
> >   case SIGNAL_TYPE_DISPLAY_PORT:
> > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
> > b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> > index 981f7cbd31cc..0f044fd5baf4 100644
> > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> > @@ -203,6 +203,11 @@ static bool detect_sink(struct dc_link *link, enum 
> > dc_connection_type *type)
> >   uint32_t is_hpd_high = 0;
> >   struct gpio *hpd_pin;
> >
> > + if (link->connector_signal == SIGNAL_TYPE_LVDS) {
> > + *type = dc_connection_single;
> > + return true;
> > + }
> > +
>
> Would it be better to still do HPD detection? Or is this failing here?

It fails there.  I don't think LVDS normally has a HPD pin assigned.
There's no hotplug per se.

>
> >   /* todo: may need to lock gpio access */
> >   hpd_pin = get_hpd_gpio(link->ctx->dc_bios, link->link_id, 
> > link->ctx->gpio_service);
> >   if (hpd_pin == NULL)
> > @@ -616,6 +621,10 @@ bool dc_link_detect(struct dc_link *link, enum 
> > dc_detect_reason reason)
> >   link->local_sink)
> >   return true;
> >
> > + if (link->connector_signal == SIGNAL_TYPE_LVDS &&
> > + link->local_sink)
> > + return true;
> > +
> >   prev_sink = link->local_sink;
> >   if (prev_sink != NULL) {
> >   dc_sink_retain(prev_sink);
> > @@ -649,6 +658,12 @@ bool dc_link_detect(struct dc_link *link, enum 
> > dc_detect_reason reason)
> >   break;
> >   }
> >
> > + case SIGNAL_TYPE_LVDS: {
> > + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C;
> > + sink_caps.signal = SIGNAL_TYPE_LVDS;
> > + break;
> > + }
> > +
> >   case SIGNAL_TYPE_EDP: {
> >   detect_edp_sink_caps(link);
> >   sink_caps.transaction_type =
> > @@ -1087,6 +1102,9 @@ static bool construct(
> >   dal_irq_get_rx_source(hpd_gpio);
> >   }
> >   break;
> > + case CONNECTOR_ID_LVDS:
> > + link->connector_signal = SIGNAL_TYPE_LVDS;
> > + break;
> >   default:
> >   DC_LOG_WARNING("Unsupported Connector type:%d!\n", 
> > link->link_id.id);
> >   goto create_fail;
> > @@ -1920,6 +1938,24 @@ static void enable_link_hdmi(struct pipe_ctx 
> > *pipe_ctx)
> >   dal_ddc_service_read_scdc_data(link->ddc);
> >  }
> >
> > +static void enable_link_lvds(struct pipe_ctx *pipe_ctx)
> > +{
> > + struct dc_stream_state *stream = pipe_ctx->stream;
> > + struct dc_link *link = stream->sink->link;
> > +
> > + if (stream->phy_pix_clk == 0)
> > + stream->phy_pix_clk = stream->timing.pix_clk_khz;
> > +
> > + memset(&stream->sink->link->cur_link_settings, 0,
> > + sizeof(struct dc_link_settings));
> > +
>
> The cur_link_settings. should only by used by eDP/DP/MST. Shouldn't need to 
> touch them here.

Ok.  enable_link_hdmi() clears that structure as well, so I did it

Re: [PATCH] drm/amdgpu/display: disable eDP fast boot optimization on DCE8

2018-08-21 Thread Harry Wentland
On 2018-08-16 04:38 PM, Alex Deucher wrote:
> Seems to cause blank screens.
> 
> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106940
> Signed-off-by: Alex Deucher 

Your SOB is using a different email from the author. Mind if I change it to 
your gmail?

Harry

> ---
>  drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> index 350ee3e3e34d..2e3f85ceeaa9 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> @@ -1562,7 +1562,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, 
> struct dc_state *context)
>   bool can_eDP_fast_boot_optimize = false;
>  
>   if (edp_link) {
> - can_eDP_fast_boot_optimize =
> + /* this seems to cause blank screens on DCE8 */
> + if ((dc->ctx->dce_version == DCE_VERSION_8_0) ||
> + (dc->ctx->dce_version == DCE_VERSION_8_1) ||
> + (dc->ctx->dce_version == DCE_VERSION_8_3))
> + can_eDP_fast_boot_optimize = false;
> + else
> + can_eDP_fast_boot_optimize =
>   
> edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc);
>   }
>  
> 


Re: [PATCH] drm/amdgpu/display: add support for LVDS (v5)

2018-08-21 Thread Harry Wentland


On 2018-08-16 09:31 AM, Alex Deucher wrote:
> This adds support for LVDS displays.
> 
> v2: add support for spread spectrum, sink detect
> v3: clean up enable_lvds_output
> v4: fix up link_detect
> v5: remove assert on 888 format
> 
> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=105880
> Signed-off-by: Alex Deucher 
> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  2 +
>  drivers/gpu/drm/amd/display/dc/core/dc_link.c  | 45 
> ++
>  .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  | 10 +
>  .../gpu/drm/amd/display/dc/dce/dce_clock_source.h  |  2 +
>  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.c  | 34 
>  .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |  6 +++
>  .../drm/amd/display/dc/dce/dce_stream_encoder.c| 24 
>  .../gpu/drm/amd/display/dc/inc/hw/link_encoder.h   |  3 ++
>  .../gpu/drm/amd/display/dc/inc/hw/stream_encoder.h |  4 ++
>  drivers/gpu/drm/amd/display/include/signal_types.h |  5 +++
>  10 files changed, 135 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 1828e4382b24..818b5ec32f37 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3360,6 +3360,8 @@ static int to_drm_connector_type(enum signal_type st)
>   return DRM_MODE_CONNECTOR_HDMIA;
>   case SIGNAL_TYPE_EDP:
>   return DRM_MODE_CONNECTOR_eDP;
> + case SIGNAL_TYPE_LVDS:
> + return DRM_MODE_CONNECTOR_LVDS;
>   case SIGNAL_TYPE_RGB:
>   return DRM_MODE_CONNECTOR_VGA;
>   case SIGNAL_TYPE_DISPLAY_PORT:
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> index 981f7cbd31cc..0f044fd5baf4 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> @@ -203,6 +203,11 @@ static bool detect_sink(struct dc_link *link, enum 
> dc_connection_type *type)
>   uint32_t is_hpd_high = 0;
>   struct gpio *hpd_pin;
>  
> + if (link->connector_signal == SIGNAL_TYPE_LVDS) {
> + *type = dc_connection_single;
> + return true;
> + }
> +

Would it be better to still do HPD detection? Or is this failing here?

>   /* todo: may need to lock gpio access */
>   hpd_pin = get_hpd_gpio(link->ctx->dc_bios, link->link_id, 
> link->ctx->gpio_service);
>   if (hpd_pin == NULL)
> @@ -616,6 +621,10 @@ bool dc_link_detect(struct dc_link *link, enum 
> dc_detect_reason reason)
>   link->local_sink)
>   return true;
>  
> + if (link->connector_signal == SIGNAL_TYPE_LVDS &&
> + link->local_sink)
> + return true;
> +
>   prev_sink = link->local_sink;
>   if (prev_sink != NULL) {
>   dc_sink_retain(prev_sink);
> @@ -649,6 +658,12 @@ bool dc_link_detect(struct dc_link *link, enum 
> dc_detect_reason reason)
>   break;
>   }
>  
> + case SIGNAL_TYPE_LVDS: {
> + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C;
> + sink_caps.signal = SIGNAL_TYPE_LVDS;
> + break;
> + }
> +
>   case SIGNAL_TYPE_EDP: {
>   detect_edp_sink_caps(link);
>   sink_caps.transaction_type =
> @@ -1087,6 +1102,9 @@ static bool construct(
>   dal_irq_get_rx_source(hpd_gpio);
>   }
>   break;
> + case CONNECTOR_ID_LVDS:
> + link->connector_signal = SIGNAL_TYPE_LVDS;
> + break;
>   default:
>   DC_LOG_WARNING("Unsupported Connector type:%d!\n", 
> link->link_id.id);
>   goto create_fail;
> @@ -1920,6 +1938,24 @@ static void enable_link_hdmi(struct pipe_ctx *pipe_ctx)
>   dal_ddc_service_read_scdc_data(link->ddc);
>  }
>  
> +static void enable_link_lvds(struct pipe_ctx *pipe_ctx)
> +{
> + struct dc_stream_state *stream = pipe_ctx->stream;
> + struct dc_link *link = stream->sink->link;
> +
> + if (stream->phy_pix_clk == 0)
> + stream->phy_pix_clk = stream->timing.pix_clk_khz;
> +
> + memset(&stream->sink->link->cur_link_settings, 0,
> + sizeof(struct dc_link_settings));
> +

The cur_link_settings. should only by used by eDP/DP/MST. Shouldn't need to 
touch them here.

Otherwise the change looks good.

Harry

> + link->link_enc->funcs->enable_lvds_output(
> + link->link_enc,
> + pipe_ctx->clock_source->id,
> + stream->phy_pix_clk);
> +
> +}
> +
>  /enable_link***/
>  static enum dc_status enable_link(
>   struct dc_state *state,
> @@ -1943,6 +1979,10 @@ static enum 

Re: [PATCH] drm/amdgpu: fix preamble handling

2018-08-21 Thread Alex Deucher
On Tue, Aug 21, 2018 at 9:11 AM Christian König
 wrote:
>
> At this point the command submission can still be interrupted.
>
> Signed-off-by: Christian König 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 16 +---
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index d42d1c8f78f6..313ac971eaaf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1015,13 +1015,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device 
> *adev,
> if (r)
> return r;
>
> -   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
> -   parser->job->preamble_status |= 
> AMDGPU_PREAMBLE_IB_PRESENT;
> -   if (!parser->ctx->preamble_presented) {
> -   parser->job->preamble_status |= 
> AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
> -   parser->ctx->preamble_presented = true;
> -   }
> -   }
> +   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
> +   parser->job->preamble_status |=
> +   AMDGPU_PREAMBLE_IB_PRESENT;
>
> if (parser->entity && parser->entity != entity)
> return -EINVAL;
> @@ -1244,6 +1240,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>
> amdgpu_cs_post_dependencies(p);
>
> +   if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
> +   !p->ctx->preamble_presented) {
> +   job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
> +   p->ctx->preamble_presented = true;
> +   }
> +
> cs->out.handle = seq;
> job->uf_sequence = seq;
>
> --
> 2.14.1
>


Re: [PATCH] drm/amdgpu/display: disable eDP fast boot optimization on DCE8

2018-08-21 Thread Harry Wentland
On 2018-08-16 04:38 PM, Alex Deucher wrote:
> Seems to cause blank screens.
> 
> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106940
> Signed-off-by: Alex Deucher 

Reviewed-by: Harry Wentland 

Harry

> ---
>  drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> index 350ee3e3e34d..2e3f85ceeaa9 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> @@ -1562,7 +1562,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, 
> struct dc_state *context)
>   bool can_eDP_fast_boot_optimize = false;
>  
>   if (edp_link) {
> - can_eDP_fast_boot_optimize =
> + /* this seems to cause blank screens on DCE8 */
> + if ((dc->ctx->dce_version == DCE_VERSION_8_0) ||
> + (dc->ctx->dce_version == DCE_VERSION_8_1) ||
> + (dc->ctx->dce_version == DCE_VERSION_8_3))
> + can_eDP_fast_boot_optimize = false;
> + else
> + can_eDP_fast_boot_optimize =
>   
> edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc);
>   }
>  
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: indent an if statement

2018-08-21 Thread Harry Wentland
On 2018-08-14 05:09 AM, Dan Carpenter wrote:
> The if statement isn't indented and it makes static checkers complain.
> 
> Signed-off-by: Dan Carpenter 

Reviewed-by: Harry Wentland 

Harry

> 
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> index 4ca41d6e3bcf..d82ba58c720f 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> @@ -349,7 +349,7 @@ static bool is_dp_and_hdmi_sharable(
>  
>   if (stream1->clamping.c_depth != COLOR_DEPTH_888 ||
>   stream2->clamping.c_depth != COLOR_DEPTH_888)
> - return false;
> + return false;
>  
>   return true;
>  
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: fix a compile warning

2018-08-21 Thread Alex Deucher
On Fri, Aug 17, 2018 at 3:01 PM Randy Dunlap  wrote:
>
> On 08/16/2018 08:09 PM, Wen Yang wrote:
> > Fix compile warning like,
> >   CC [M]  drivers/gpu/drm/i915/gvt/execlist.o
> >   CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.o
> >   CC [M]  drivers/gpu/drm/radeon/btc_dpm.o
> >   CC [M]  drivers/isdn/hisax/avm_a1p.o
> >   CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_dpp.o
> > drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hw_sequencer.c: In 
> > function ‘dcn10_update_mpcc’:
> > drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hw_sequencer.c:1903:9: 
> > warning: missing braces around initializer [-Wmissing-braces]
> >   struct mpcc_blnd_cfg blnd_cfg = {0};
> >  ^
> > drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hw_sequencer.c:1903:9: 
> > warning: (near initialization for ‘blnd_cfg.black_color’) [-Wmissing-braces]
> >
> > Signed-off-by: Wen Yang 
> > Reviewed-by: Jiang Biao 
>
> works for me.  Thanks.
>
> Acked-by: Randy Dunlap 
>

Applied.  thanks!

Alex

>
> > ---
> >  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
> > b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
> > index cfcc54f..a06a035 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
> > @@ -1900,7 +1900,7 @@ static void update_dpp(struct dpp *dpp, struct 
> > dc_plane_state *plane_state)
> >  static void dcn10_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
> >  {
> >   struct hubp *hubp = pipe_ctx->plane_res.hubp;
> > - struct mpcc_blnd_cfg blnd_cfg = {0};
> > + struct mpcc_blnd_cfg blnd_cfg = {{0}};
> >   bool per_pixel_alpha = pipe_ctx->plane_state->per_pixel_alpha && 
> > pipe_ctx->bottom_pipe;
> >   int mpcc_id;
> >   struct mpcc *new_mpcc;
> >
>
>
> --
> ~Randy
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
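
For anyone wanting to reproduce the warning outside the driver, a tiny
stand-alone example (illustrative only; whether the warning actually fires
depends on the compiler and version - older gcc and clang complain, newer gcc
accepts the plain {0} initializer):

/* struct whose first member is itself a struct, like mpcc_blnd_cfg */
struct color { unsigned int r, g, b, a; };
struct cfg   { struct color black_color; int mode; };

static struct cfg may_warn = {0};    /* -Wmissing-braces on some compilers */
static struct cfg silent   = {{0}};  /* explicitly brace the nested member */

int main(void)
{
	(void)may_warn;
	(void)silent;
	return 0;
}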


Re: [PATCH] drm/amdgpu: fix sdma doorbell range setting

2018-08-21 Thread Deucher, Alexander
Alternatively, we could cherry-pick the change to the doorbell ranges from 
Shaoyun.  Might also be useful to leave a comment here about the doorbell range 
for vega20 sdma.


Alex


From: Zhang, Hawking
Sent: Tuesday, August 21, 2018 5:02:24 AM
To: Quan, Evan; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander; Quan, Evan
Subject: RE: [PATCH] drm/amdgpu: fix sdma doorbell range setting

Reviewed-by: Hawking Zhang 

Regards,
Hawking
-Original Message-
From: amd-gfx  On Behalf Of Evan Quan
Sent: 2018-08-21 16:21
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Quan, Evan 
; Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: fix sdma doorbell range setting

Use the old doorbell range setting until the driver is able to support more 
sdma queues.

Change-Id: I80fc067fc64878d3c7dc3d38bbe1c6c94bec397f
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
index 89ea92075b6b..2e65447637c6 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
@@ -76,7 +76,7 @@ static void nbio_v7_4_sdma_doorbell_range(struct 
amdgpu_device *adev, int instan

 if (use_doorbell) {
 doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
-   doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 8);
+   doorbell_range = REG_SET_FIELD(doorbell_range,
+BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
 } else
 doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);

--
2.18.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
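
A sketch of what the suggested comment could look like in
nbio_v7_4_sdma_doorbell_range() (illustrative only, not a submitted patch; it
simply restates the hunk above with the rationale written down):

	if (use_doorbell) {
		doorbell_range = REG_SET_FIELD(doorbell_range,
				BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
		/* Keep the old 2-doorbell range until the driver can make use
		 * of the additional vega20 sdma queues; bump this again once
		 * they are supported. */
		doorbell_range = REG_SET_FIELD(doorbell_range,
				BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
	} else
		doorbell_range = REG_SET_FIELD(doorbell_range,
				BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);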


Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Christian König

On 21.08.2018 at 15:43, Huang Rui wrote:

On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:

On 20.08.2018 at 08:05, Huang Rui wrote:

On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:

On 17.08.2018 at 12:08, Huang Rui wrote:

I continue to work for bulk moving that based on the proposal by Christian.

Background:
amdgpu driver will move all PD/PT and PerVM BOs into idle list. Then move all of
them on the end of LRU list one by one. Thus, that cause so many BOs moved to
the end of the LRU, and impact performance seriously.

Then Christian provided a workaround to not move PD/PT BOs on LRU with below
patch:
"drm/amdgpu: band aid validating VM PTs"
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae

However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
instead of one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated we move all BOs together to the end of the LRU without dropping the
lock for the LRU.

While doing so we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.

Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Original     |  147.7 FPS      |  76.86 us |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
|              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
|(don't move   |  162.1 FPS      |  42.15 us |0.223 ms(8K) 0.204 ms(16K)             |
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+

After test them with above three benchmarks include vulkan and opencl. We can
see the visible improvement than original, and even better than original with
workaround.

v2: move all BOs include idle, relocated, and moved list to the end of LRU and
put them together.
v3: remove unused parameter and use list_for_each_entry instead of the one with
save entry.
v4: move the amdgpu_vm_move_to_lru_tail after command submission, at that time,
all bo will be back on idle list.

Signed-off-by: Christian König 
Signed-off-by: Huang Rui 
Tested-by: Mike Lothian 
Tested-by: Dieter Nützel 
Acked-by: Chunming Zhou 
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 11 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 71 
++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
3 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 502b94f..9fbdf02 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1260,6 +1260,16 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
return 0;
}

+static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,

+struct amdgpu_cs_parser *p)
+{
+   struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+   struct amdgpu_vm *vm = &fpriv->vm;
+
+   if (vm->validated)

That check belongs inside amdgpu_vm_move_to_lru_tail().


+   amdgpu_vm_move_to_lru_tail(adev, vm);
+}
+
int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file 
*filp)
{
struct amdgpu_device *adev = dev->dev_private;
@@ -1310,6 +1320,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)

	r = amdgpu_cs_submit(&parser, cs);

+	amdgpu_cs_vm_move_on_lru(adev, &parser);

out:
	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..037cfbc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,53 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
}

/**

+ * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
+ *
+ * @vm: vm providing the BOs
+ * @list: the list that stored BOs
+ *
+ * Move one 
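
Conceptually the series replaces N individual list_move_tail() calls with one
splice of a remembered [first, last] block. The self-contained toy below
illustrates just that splice on a plain doubly linked list (sentinel-based; it
is not the real TTM/amdgpu structures, which also have to handle per-memory-type
LRUs and locking):

#include <stdio.h>

/* toy LRU entry - stands in for a buffer object */
struct node {
	int id;
	struct node *prev, *next;
};

/* sentinel-based circular list: head.next is the front, head.prev the tail */
struct lru {
	struct node head;
};

static void lru_init(struct lru *l)
{
	l->head.prev = l->head.next = &l->head;
}

static void lru_add_tail(struct lru *l, struct node *n)
{
	n->prev = l->head.prev;
	n->next = &l->head;
	l->head.prev->next = n;
	l->head.prev = n;
}

/* The bulk move: detach the whole [first, last] block and splice it in front
 * of the sentinel (i.e. at the tail) without touching the entries in between. */
static void lru_bulk_move_tail(struct lru *l, struct node *first, struct node *last)
{
	first->prev->next = last->next;
	last->next->prev = first->prev;

	first->prev = l->head.prev;
	last->next = &l->head;
	l->head.prev->next = first;
	l->head.prev = last;
}

int main(void)
{
	struct lru lru;
	struct node n[6];
	int i;

	lru_init(&lru);
	for (i = 0; i < 6; i++) {
		n[i].id = i;
		lru_add_tail(&lru, &n[i]);
	}

	/* pretend n[1]..n[3] are one VM's PD/PT BOs: move them in one operation */
	lru_bulk_move_tail(&lru, &n[1], &n[3]);

	for (struct node *p = lru.head.next; p != &lru.head; p = p->next)
		printf("%d ", p->id);	/* prints: 0 4 5 1 2 3 */
	printf("\n");
	return 0;
}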

Re: [PATCH v4 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v4)

2018-08-21 Thread Huang Rui
On Mon, Aug 20, 2018 at 09:17:12PM +0800, Christian König wrote:
> Am 20.08.2018 um 08:05 schrieb Huang Rui:
> > On Fri, Aug 17, 2018 at 06:38:16PM +0800, Koenig, Christian wrote:
> >> Am 17.08.2018 um 12:08 schrieb Huang Rui:
> >>> I continue to work for bulk moving that based on the proposal by 
> >>> Christian.
> >>>
> >>> Background:
> >>> amdgpu driver will move all PD/PT and PerVM BOs into idle list. Then move 
> >>> all of
> >>> them on the end of LRU list one by one. Thus, that cause so many BOs 
> >>> moved to
> >>> the end of the LRU, and impact performance seriously.
> >>>
> >>> Then Christian provided a workaround to not move PD/PT BOs on LRU with 
> >>> below
> >>> patch:
> >>> "drm/amdgpu: band aid validating VM PTs"
> >>> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
> >>>
> >>> However, the final solution should bulk move all PD/PT and PerVM BOs on 
> >>> the LRU
> >>> instead of one by one.
> >>>
> >>> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need 
> >>> to be
> >>> validated we move all BOs together to the end of the LRU without dropping 
> >>> the
> >>> lock for the LRU.
> >>>
> >>> While doing so we note the beginning and end of this block in the LRU 
> >>> list.
> >>>
> >>> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything 
> >>> to do,
> >>> we don't move every BO one by one, but instead cut the LRU list into 
> >>> pieces so
> >>> that we bulk move everything to the end in just one operation.
> >>>
> >>> Test data:
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> >>> |              |Principle(Vulkan)|           |                                       |
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>> | Original     |  147.7 FPS      |  76.86 us |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> >>> |              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
> >>> |(don't move   |  162.1 FPS      |  42.15 us |0.223 ms(8K) 0.204 ms(16K)             |
> >>> |PT BOs on LRU)|                 |           |                                       |
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> >>> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> >>> +--------------+-----------------+-----------+---------------------------------------+
> >>>
> >>> After test them with above three benchmarks include vulkan and opencl. We 
> >>> can
> >>> see the visible improvement than original, and even better than original 
> >>> with
> >>> workaround.
> >>>
> >>> v2: move all BOs include idle, relocated, and moved list to the end of 
> >>> LRU and
> >>> put them together.
> >>> v3: remove unused parameter and use list_for_each_entry instead of the 
> >>> one with
> >>> save entry.
> >>> v4: move the amdgpu_vm_move_to_lru_tail after command submission, at that 
> >>> time,
> >>> all bo will be back on idle list.
> >>>
> >>> Signed-off-by: Christian König 
> >>> Signed-off-by: Huang Rui 
> >>> Tested-by: Mike Lothian 
> >>> Tested-by: Dieter Nützel 
> >>> Acked-by: Chunming Zhou 
> >>> ---
> >>>drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 11 ++
> >>>drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 71 
> >>> ++
> >>>drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +-
> >>>3 files changed, 75 insertions(+), 18 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>> index 502b94f..9fbdf02 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>> @@ -1260,6 +1260,16 @@ static int amdgpu_cs_submit(struct 
> >>> amdgpu_cs_parser *p,
> >>>   return 0;
> >>>}
> >>>
> >>> +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> >>> +  struct amdgpu_cs_parser *p)
> >>> +{
> >>> + struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> >>> + struct amdgpu_vm *vm = &fpriv->vm;
> >>> +
> >>> + if (vm->validated)
> >> That check belongs inside amdgpu_vm_move_to_lru_tail().
> >>
> >>> + amdgpu_vm_move_to_lru_tail(adev, vm);
> >>> +}
> >>> +
> >>>int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct 
> >>> drm_file *filp)
> >>>{
> >>>   struct amdgpu_device *adev = dev->dev_private;
> >>> @@ -1310,6 +1320,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void 
> >>> *data, struct drm_file *filp)
> 

[PATCH] drm/amdgpu: fix preamble handling

2018-08-21 Thread Christian König
At this point the command submission can still be interrupted.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d42d1c8f78f6..313ac971eaaf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1015,13 +1015,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
if (r)
return r;
 
-   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
-   parser->job->preamble_status |= 
AMDGPU_PREAMBLE_IB_PRESENT;
-   if (!parser->ctx->preamble_presented) {
-   parser->job->preamble_status |= 
AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
-   parser->ctx->preamble_presented = true;
-   }
-   }
+   if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
+   parser->job->preamble_status |=
+   AMDGPU_PREAMBLE_IB_PRESENT;
 
if (parser->entity && parser->entity != entity)
return -EINVAL;
@@ -1244,6 +1240,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
 
amdgpu_cs_post_dependencies(p);
 
+   if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
+   !p->ctx->preamble_presented) {
+   job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+   p->ctx->preamble_presented = true;
+   }
+
cs->out.handle = seq;
job->uf_sequence = seq;
 
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
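
A self-contained toy that shows why the flag handling has to move past the last
point where the ioctl can be interrupted and restarted (names and helpers are
made up for the illustration; this is not the driver code):

#include <stdbool.h>
#include <stdio.h>

#define PREAMBLE_IB_PRESENT        (1 << 0)
#define PREAMBLE_IB_PRESENT_FIRST  (1 << 1)

struct toy_ctx { bool preamble_presented; };
struct toy_job { unsigned int preamble_status; };

/* earlier stage: may be interrupted, in which case the ioctl is restarted */
static int toy_ib_fill(struct toy_job *job, bool has_preamble, bool interrupted)
{
	if (has_preamble)
		job->preamble_status |= PREAMBLE_IB_PRESENT;
	return interrupted ? -1 : 0;	/* -1 stands in for -ERESTARTSYS */
}

/* final stage: past the point of no return, now it is safe to make the
 * per-context state sticky */
static void toy_submit(struct toy_ctx *ctx, struct toy_job *job)
{
	if ((job->preamble_status & PREAMBLE_IB_PRESENT) &&
	    !ctx->preamble_presented) {
		job->preamble_status |= PREAMBLE_IB_PRESENT_FIRST;
		ctx->preamble_presented = true;
	}
}

int main(void)
{
	struct toy_ctx ctx = { .preamble_presented = false };
	struct toy_job job1 = { 0 }, job2 = { 0 };

	/* first attempt gets interrupted: the context must stay untouched */
	if (toy_ib_fill(&job1, true, true) == 0)
		toy_submit(&ctx, &job1);

	/* the restarted submission is still recognised as the "first" one */
	toy_ib_fill(&job2, true, false);
	toy_submit(&ctx, &job2);

	printf("first flag on restarted job: %s\n",
	       (job2.preamble_status & PREAMBLE_IB_PRESENT_FIRST) ? "yes" : "no");
	return 0;
}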


Re: [PATCH v4] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Christian König

On 21.08.2018 at 12:03, Emily Deng wrote:

Use adev->gfx.kiq.ring.ready directly.

For sriov, don't use kiq in exclusive mode, as don't know how long time
it will take, some times it will occur exclusive timeout.

Signed-off-by: Emily Deng 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
  1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index f71615e..eec991f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -320,9 +320,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
	struct amdgpu_ring *ring = &kiq->ring;
  
-	if (!ring->ready)

-   return -EINVAL;
-
	spin_lock_irqsave(&kiq->ring_lock, flags);
  
  	amdgpu_ring_alloc(ring, 32);

@@ -389,10 +386,14 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
	struct amdgpu_vmhub *hub = &adev->vmhub[i];
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
  
-		r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + eng,

-   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
-   if (!r)
-   continue;
+   if (adev->gfx.kiq.ring.ready &&
+   (amdgpu_sriov_runtime(adev) ||
+!amdgpu_sriov_vf(adev))) {
+   r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
+   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
+   if (!r)
+   continue;
+   }
  
  		spin_lock(&adev->gmc.invalidate_lock);
  


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: add ip_block_mask user option for static builds

2018-08-21 Thread Vishwakarma, Pratik

s/blokcs/blocks

s/dyanamic/dynamic

Regards

Pratik

On 8/21/2018 3:23 PM, Shirish S wrote:

This patch extends amdgpu.ip_block_mask to a Kconfig option as
well, that can be altered by user at build time for OS' that
do not permit passing dyanamic loading of amdgpu driver and also
passing command line arguments.

Note: This option to be used purely for debugging purposes and
amdgpu driver is not productised/tested extensively with any of its
blokcs disabled.
The default value of this option enables all IP's.

Signed-off-by: Shirish S 
---
  drivers/gpu/drm/amd/amdgpu/Kconfig  | 7 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
  2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig 
b/drivers/gpu/drm/amd/amdgpu/Kconfig
index e8af1f5..3f94ae5 100644
--- a/drivers/gpu/drm/amd/amdgpu/Kconfig
+++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
@@ -23,6 +23,13 @@ config DRM_AMDGPU_CIK
  
  	  radeon.cik_support=0 amdgpu.cik_support=1
  
+config DRM_AMDGPU_IP_BLOCK_MASK

+   hex "AMDGPU IP Block Mask"
+   depends on DRM_AMDGPU
+   default "0x"
+   help
+ Modify this option to disable any IP block of amdgpu.
+
  config DRM_AMDGPU_USERPTR
bool "Always enable userptr write support"
depends on DRM_AMDGPU
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 2221f6b..bd0a876 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -93,7 +93,7 @@ int amdgpu_dpm = -1;
  int amdgpu_fw_load_type = -1;
  int amdgpu_aspm = -1;
  int amdgpu_runtime_pm = -1;
-uint amdgpu_ip_block_mask = 0x;
+uint amdgpu_ip_block_mask = CONFIG_DRM_AMDGPU_IP_BLOCK_MASK;
  int amdgpu_bapm = -1;
  int amdgpu_deep_color = 0;
  int amdgpu_vm_size = -1;


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: add ip_block_mask user option for static builds

2018-08-21 Thread Michel Dänzer
On 2018-08-21 11:53 a.m., Shirish S wrote:
> This patch extends amdgpu.ip_block_mask to a Kconfig option as
> well, that can be altered by user at build time for OS' that
> do not permit passing dyanamic loading of amdgpu driver and also
> passing command line arguments.
> 
> Note: This option to be used purely for debugging purposes and
> amdgpu driver is not productised/tested extensively with any of its
> blokcs disabled.
> The default value of this option enables all IP's.
> 
> Signed-off-by: Shirish S 
> ---
>  drivers/gpu/drm/amd/amdgpu/Kconfig  | 7 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig 
> b/drivers/gpu/drm/amd/amdgpu/Kconfig
> index e8af1f5..3f94ae5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/Kconfig
> +++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
> @@ -23,6 +23,13 @@ config DRM_AMDGPU_CIK
>  
> radeon.cik_support=0 amdgpu.cik_support=1
>  
> +config DRM_AMDGPU_IP_BLOCK_MASK
> + hex "AMDGPU IP Block Mask"
> + depends on DRM_AMDGPU
> + default "0x"
> + help
> +   Modify this option to disable any IP block of amdgpu.

As I said before, IMO this doesn't belong upstream, as it's a workaround
for a downstream issue.


Also, this isn't generally usable on a system with multiple GPUs
supported by amdgpu, as the same block mask value may have different
effects with different GPUs.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v4] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Emily Deng
Use adev->gfx.kiq.ring.ready directly.

For sriov, don't use kiq in exclusive mode, as don't know how long time
it will take, some times it will occur exclusive timeout.

Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index f71615e..eec991f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -320,9 +320,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
	struct amdgpu_ring *ring = &kiq->ring;
 
-   if (!ring->ready)
-   return -EINVAL;
-
	spin_lock_irqsave(&kiq->ring_lock, flags);
 
amdgpu_ring_alloc(ring, 32);
@@ -389,10 +386,14 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
	struct amdgpu_vmhub *hub = &adev->vmhub[i];
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
 
-   r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + 
eng,
-   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
-   if (!r)
-   continue;
+   if (adev->gfx.kiq.ring.ready &&
+   (amdgpu_sriov_runtime(adev) ||
+!amdgpu_sriov_vf(adev))) {
+   r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
+   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
+   if (!r)
+   continue;
+   }
 
	spin_lock(&adev->gmc.invalidate_lock);
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: add ip_block_mask user option for static builds

2018-08-21 Thread Shirish S
This patch extends amdgpu.ip_block_mask to a Kconfig option as
well, that can be altered by user at build time for OS' that
do not permit passing dyanamic loading of amdgpu driver and also
passing command line arguments.

Note: This option to be used purely for debugging purposes and
amdgpu driver is not productised/tested extensively with any of its
blokcs disabled.
The default value of this option enables all IP's.

Signed-off-by: Shirish S 
---
 drivers/gpu/drm/amd/amdgpu/Kconfig  | 7 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig 
b/drivers/gpu/drm/amd/amdgpu/Kconfig
index e8af1f5..3f94ae5 100644
--- a/drivers/gpu/drm/amd/amdgpu/Kconfig
+++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
@@ -23,6 +23,13 @@ config DRM_AMDGPU_CIK
 
  radeon.cik_support=0 amdgpu.cik_support=1
 
+config DRM_AMDGPU_IP_BLOCK_MASK
+   hex "AMDGPU IP Block Mask"
+   depends on DRM_AMDGPU
+   default "0x"
+   help
+ Modify this option to disable any IP block of amdgpu.
+
 config DRM_AMDGPU_USERPTR
bool "Always enable userptr write support"
depends on DRM_AMDGPU
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 2221f6b..bd0a876 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -93,7 +93,7 @@ int amdgpu_dpm = -1;
 int amdgpu_fw_load_type = -1;
 int amdgpu_aspm = -1;
 int amdgpu_runtime_pm = -1;
-uint amdgpu_ip_block_mask = 0x;
+uint amdgpu_ip_block_mask = CONFIG_DRM_AMDGPU_IP_BLOCK_MASK;
 int amdgpu_bapm = -1;
 int amdgpu_deep_color = 0;
 int amdgpu_vm_size = -1;
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
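
For completeness, this is how a build would pin the value in its kernel config
(the mask below is purely illustrative - as noted in the review, the same bit
can map to different IP blocks on different ASICs, so any non-default value is
strictly a debugging aid):

# .config / defconfig fragment
CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_IP_BLOCK_MASK=0xffffffef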


RE: [PATCH v3] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Deng, Emily
>-Original Message-
>From: Christian König 
>Sent: Tuesday, August 21, 2018 5:05 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: Re: [PATCH v3] drm/amdgpu/sriov: Only sriov runtime support use kiq
>
>On 21.08.2018 at 10:41, Emily Deng wrote:
>> Move the check into the caller instead of returning an error code here
>>
>> For sriov, don't use kiq in exclusive mode, as don't know how long
>> time it will take, some times it will occur exclusive timeout.
>>
>> Signed-off-by: Emily Deng 
>> ---
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 19 +++
>>   1 file changed, 11 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> index 0bf8439..de1467e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -321,9 +321,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct
>amdgpu_device *adev,
>>  struct amdgpu_kiq *kiq = &adev->gfx.kiq;
>>  struct amdgpu_ring *ring = &kiq->ring;
>>
>> -if (!ring->ready)
>> -return -EINVAL;
>> -
>>  spin_lock_irqsave(&kiq->ring_lock, flags);
>>
>>  amdgpu_ring_alloc(ring, 32);
>> @@ -389,11 +386,17 @@ static void gmc_v9_0_flush_gpu_tlb(struct
>amdgpu_device *adev,
>>  for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
>>  struct amdgpu_vmhub *hub = &adev->vmhub[i];
>>  u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
>> -
>> -r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + eng,
>> -hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
>> -if (!r)
>> -continue;
>> +struct amdgpu_kiq *kiq = &adev->gfx.kiq;
>> +struct amdgpu_ring *ring = &kiq->ring;
>> +
>> +if (ring->ready &&
>
>Do you really need the local variable here? Just check
>adev->gfx.kiq.ring.ready directly.
Ok, will modify.
>Apart from that the patch is Reviewed-by: Christian König
>.
>
>Regards,
>Christian.
>
>> +(amdgpu_sriov_runtime(adev) ||
>> + !amdgpu_sriov_vf(adev))) {
>> +r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + eng,
>> +hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
>> +if (!r)
>> +continue;
>> +}
>>
>>  spin_lock(&adev->gmc.invalidate_lock);
>>

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v3] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Christian König

On 21.08.2018 at 10:41, Emily Deng wrote:

Move the check into the caller instead of returning an error code here

For sriov, don't use kiq in exclusive mode, as don't know how long time
it will take, some times it will occur exclusive timeout.

Signed-off-by: Emily Deng 
---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 19 +++
  1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 0bf8439..de1467e 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -321,9 +321,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
	struct amdgpu_ring *ring = &kiq->ring;
  
-	if (!ring->ready)

-   return -EINVAL;
-
	spin_lock_irqsave(&kiq->ring_lock, flags);
  
  	amdgpu_ring_alloc(ring, 32);

@@ -389,11 +386,17 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
		struct amdgpu_vmhub *hub = &adev->vmhub[i];
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
-
-   r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + 
eng,
-   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
-   if (!r)
-   continue;
+   struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+   struct amdgpu_ring *ring = &kiq->ring;
+
+   if (ring->ready &&


Do you really need the local variable here? Just check 
adev->gfx.kiq.ring.ready directly.


Apart from that the patch is Reviewed-by: Christian König 
.


Regards,
Christian.


+   (amdgpu_sriov_runtime(adev) ||
+!amdgpu_sriov_vf(adev))) {
+   r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
+   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
+   if (!r)
+   continue;
+   }
  
  		spin_lock(&adev->gmc.invalidate_lock);
  


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: fix sdma doorbell range setting

2018-08-21 Thread Zhang, Hawking
Reviewed-by: Hawking Zhang 

Regards,
Hawking
-Original Message-
From: amd-gfx  On Behalf Of Evan Quan
Sent: 2018-08-21 16:21
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Quan, Evan 
; Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: fix sdma doorbell range setting

Use the old doorbell range setting until the driver is able to support more 
sdma queues.

Change-Id: I80fc067fc64878d3c7dc3d38bbe1c6c94bec397f
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
index 89ea92075b6b..2e65447637c6 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
@@ -76,7 +76,7 @@ static void nbio_v7_4_sdma_doorbell_range(struct 
amdgpu_device *adev, int instan
 
if (use_doorbell) {
doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
-   doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 8);
+   doorbell_range = REG_SET_FIELD(doorbell_range, 
+BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
} else
doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);
 
--
2.18.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v3] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Emily Deng
Move the check into the caller instead of returning an error code here

For sriov, don't use kiq in exclusive mode, as don't know how long time
it will take, some times it will occur exclusive timeout.

Signed-off-by: Emily Deng 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 19 +++
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 0bf8439..de1467e 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -321,9 +321,6 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
	struct amdgpu_ring *ring = &kiq->ring;
 
-   if (!ring->ready)
-   return -EINVAL;
-
	spin_lock_irqsave(&kiq->ring_lock, flags);
 
amdgpu_ring_alloc(ring, 32);
@@ -389,11 +386,17 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
*adev,
for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
	struct amdgpu_vmhub *hub = &adev->vmhub[i];
u32 tmp = gmc_v9_0_get_invalidate_req(vmid);
-
-   r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + 
eng,
-   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
-   if (!r)
-   continue;
+   struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+   struct amdgpu_ring *ring = &kiq->ring;
+
+   if (ring->ready &&
+   (amdgpu_sriov_runtime(adev) ||
+!amdgpu_sriov_vf(adev))) {
+   r = amdgpu_kiq_reg_write_reg_wait(adev, 
hub->vm_inv_eng0_req + eng,
+   hub->vm_inv_eng0_ack + eng, tmp, 1 << vmid);
+   if (!r)
+   continue;
+   }
 
	spin_lock(&adev->gmc.invalidate_lock);
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amdgpu: fix sdma doorbell range setting

2018-08-21 Thread Evan Quan
Use the old doorbell range setting until the driver is
able to support more sdma queues.

Change-Id: I80fc067fc64878d3c7dc3d38bbe1c6c94bec397f
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
index 89ea92075b6b..2e65447637c6 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
@@ -76,7 +76,7 @@ static void nbio_v7_4_sdma_doorbell_range(struct 
amdgpu_device *adev, int instan
 
if (use_doorbell) {
doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
-   doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 8);
+   doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
} else
doorbell_range = REG_SET_FIELD(doorbell_range, 
BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);
 
-- 
2.18.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH v2] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Deng, Emily
>-Original Message-
>From: Christian König 
>Sent: Tuesday, August 21, 2018 3:51 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: Re: [PATCH v2] drm/amdgpu/sriov: Only sriov runtime support use kiq
>
>On 21.08.2018 at 07:16, Emily Deng wrote:
>> Refine the code style, add brackets.
>>
>> For sriov, don't use kiq in exclusive mode, as don't know how long
>> time it will take, some times it will occur exclusive timeout.
>>
>> Signed-off-by: Emily Deng 
>> ---
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> index f71615e..adfd0bd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -320,7 +320,7 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct
>amdgpu_device *adev,
>>  struct amdgpu_kiq *kiq = &adev->gfx.kiq;
>>  struct amdgpu_ring *ring = &kiq->ring;
>>
>> -if (!ring->ready)
>> +if (!ring->ready || (!amdgpu_sriov_runtime(adev) &&
>> +amdgpu_sriov_vf(adev)))
>
>Please rework the code flow a bit to make it more obvious what happens here.
>
>E.g. move the check into the caller instead of returning an error code here.
Thanks, will refine.

>Christian.
>
>>  return -EINVAL;
>>
>>  spin_lock_irqsave(&kiq->ring_lock, flags);

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v2] drm/amdgpu/sriov: Only sriov runtime support use kiq

2018-08-21 Thread Christian König

On 21.08.2018 at 07:16, Emily Deng wrote:

Refine the code style, add brackets.

For sriov, don't use kiq in exclusive mode, as don't know how long time
it will take, some times it will occur exclusive timeout.

Signed-off-by: Emily Deng 
---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index f71615e..adfd0bd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -320,7 +320,7 @@ signed long  amdgpu_kiq_reg_write_reg_wait(struct 
amdgpu_device *adev,
	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
	struct amdgpu_ring *ring = &kiq->ring;
  
-	if (!ring->ready)

+   if (!ring->ready || (!amdgpu_sriov_runtime(adev) && 
amdgpu_sriov_vf(adev)))


Please rework the code flow a bit to make it more obvious what happens here.

E.g. move the check into the caller instead of returning an error code here.

Christian.


return -EINVAL;
  
  	spin_lock_irqsave(&kiq->ring_lock, flags);


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx