On 06.12.18 at 21:55, Andrey Grodzovsky wrote:
If a CS is submitted using a guilty ctx, we terminate amdgpu_cs_parser_init
before locking ctx->lock; later in amdgpu_cs_parser_fini we still
try to release the lock just because parser->ctx != NULL.
Signed-off-by: Andrey Grodzovsky
Reviewed-
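The gist of the fix, as a rough sketch (the flag name ctx_locked is a
placeholder; the actual patch may track this differently):

    static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *p)
    {
            if (p->ctx) {
                    /* only unlock if amdgpu_cs_parser_init got far
                     * enough to actually take ctx->lock */
                    if (p->ctx_locked)
                            mutex_unlock(&p->ctx->lock);
                    amdgpu_ctx_put(p->ctx);
            }
    }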
Some new variants require different firmwares.
Signed-off-by: Junwei Zhang
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index ceadeeadfa56..387f
Some new variants require different firmwares.
Signed-off-by: Junwei Zhang
---
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 1ad7e6b8ed1d..0edb8622
Using avfs cks-off voltages instead of EVV cks-off voltages can avoid
voltage overshoot when switching sclk.
Signed-off-by: Kenneth Feng
---
drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h | 2 ++
drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c | 6 ++
2 files changed, 8 inse
The XGMI hive change put kfd_pre_reset into amdgpu_device_lock_adev,
but outside req_full_gpu of sriov.
That would make sriov hang during reset.
Change-Id: I5b3e2a42c77b3b9635419df4470d021df7be34d1
Signed-off-by: Wentao Lou
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 10 ++
1 file changed, 6 ins
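A rough sketch of the intended ordering under sriov (the placement of
the calls is an assumption based on the description above):

    mutex_lock(&adev->lock_reset);          /* amdgpu_device_lock_adev */
    if (amdgpu_sriov_vf(adev))
            amdgpu_virt_request_full_gpu(adev, true);
    /* kfd_pre_reset now runs inside req_full_gpu for sriov */
    amdgpu_amdkfd_pre_reset(adev);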
This change seems to be breaking the build for me. I'm getting errors like this:
CC [M] drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.o
In file included from ../include/trace/events/tlb.h:9:0,
from ../arch/x86/include/asm/mmu_context.h:10,
from ../i
> -Original Message-
> From: dri-devel On Behalf Of
> Andrey Grodzovsky
> Sent: Friday, December 07, 2018 1:41 AM
> To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org;
> ckoenig.leichtzumer...@gmail.com; e...@anholt.net;
> etna...@lists.freedesktop.org
> Cc: Liu, Monk
>
SWDEV-173384: vm-mix reboot (4 VMs) fails on Tonga under sriov
Phenomenon: the compute_1.3.1 ring test fails on one VM
when it starts to do hw_init after being rebooted.
Root cause: RLC goes wrong in soft_reset under sriov.
Workaround: init the RLC csb, and skip RLC stop, reset, start.
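A sketch of the workaround's shape (gfx_v8_0_init_csb is a hypothetical
name; the Tonga gfx code may structure this differently):

    static int gfx_v8_0_rlc_resume(struct amdgpu_device *adev)
    {
            /* always initialize the clear state buffer */
            gfx_v8_0_init_csb(adev);

            if (amdgpu_sriov_vf(adev))
                    return 0;       /* skip RLC stop, reset, start */

            gfx_v8_0_rlc_stop(adev);
            gfx_v8_0_rlc_reset(adev);
            gfx_v8_0_rlc_start(adev);
            return 0;
    }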
Reviewed-by: Evan Quan
> -Original Message-
> From: amd-gfx On Behalf Of Alex
> Deucher
> Sent: December 7, 2018 0:29
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander
> Subject: [PATCH 3/3] drm/amdgpu/powerplay: check MC firmware for FFC
> support
>
> Check if the MC firmware sup
Reviewed-by: Evan Quan
> -Original Message-
> From: amd-gfx On Behalf Of Alex
> Deucher
> Sent: December 7, 2018 0:29
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander
> Subject: [PATCH 2/3] drm/amdgpu/powerplay: update smu7_ppsmc.h
>
> Add new messages for polaris.
>
> Signed-of
Comment inline
> -Original Message-
> From: amd-gfx On Behalf Of Alex
> Deucher
> Sent: December 7, 2018 0:29
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander
> Subject: [PATCH 1/3] drm/amdgpu/powerplay: Add special avfs cases for
> some polaris asics
>
> Add special avfs handling
On 2018-12-06 6:32 a.m., Rex Zhu wrote:
> used to manage the reserved vm space.
>
> Signed-off-by: Rex Zhu
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 ++--
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 4 +++-
> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 6 +-
> 3 files changed, 1
On 12/06/2018 01:33 PM, Christian König wrote:
> On 06.12.18 at 18:41, Andrey Grodzovsky wrote:
>> Decouple sched threads stop and start and ring mirror
>> list handling from the policy of what to do about the
>> guilty jobs.
>> When stopping the sched thread and detaching sched fences
>> from
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work; together with waiting for all
such fences to signal in drm_sched_stop, we guarantee that an
already signaled job will not be processed twice.
Remove the sched finish fence callback and just submit finis
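The core of the idea in drm_sched_stop, roughly (a sketch against the
scheduler structures of that time, not the literal patch):

    void drm_sched_stop(struct drm_gpu_scheduler *sched)
    {
            struct drm_sched_job *s_job, *tmp;

            kthread_park(sched->thread);
            list_for_each_entry_safe(s_job, tmp,
                                     &sched->ring_mirror_list, node) {
                    if (s_job->s_fence->parent &&
                        dma_fence_is_signaled(s_job->s_fence->parent))
                            /* already signaled: drop it now so it cannot
                             * be processed a second time after restart */
                            list_del_init(&s_job->node);
            }
    }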
Decouple sched threads stop and start and ring mirror
list handling from the policy of what to do about the
guilty jobs.
When stopping the sched thread and detaching sched fences
from non-signaled HW fences, wait for all signaled HW fences
to complete before rerunning the jobs.
v2: Fix resubmission
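With stop and start decoupled, a driver's reset path can apply its own
guilty-job policy in between; a usage sketch, assuming the drm_sched_*
entry points this series works towards:

    drm_sched_stop(&ring->sched);
    /* driver-specific policy: mark or drop the guilty job here */
    amdgpu_device_gpu_reset(adev); /* stand-in for the actual HW reset */
    drm_sched_resubmit_jobs(&ring->sched);
    drm_sched_start(&ring->sched, true);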
Use HMM helper function hmm_vma_fault() to get physical pages backing
userptr and start CPU page table update tracking of those pages. Then use
hmm_vma_range_done() to check if those pages are updated before
amdgpu_cs_submit for gfx or before user queues are resumed for kfd.
If userptr pages are upda
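The flow with the HMM helpers of that era looks roughly like this (a
sketch; range setup and the retry loop are omitted):

    struct hmm_range range = { 0 };
    int r;

    /* fault in the userptr pages and start CPU page table tracking */
    r = hmm_vma_fault(&range, true);
    if (r)
            return r;

    /* ... set up the CS / user queue with the faulted pages ... */

    /* right before submit: did the CPU page tables change meanwhile? */
    if (!hmm_vma_range_done(&range))
            goto retry;     /* pages were updated, redo the fault */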
There is circular lock between gfx and kfd path with HMM change:
lock(dqm) -> bo::reserve -> amdgpu_mn_lock
To avoid this, move init/uninit_mqd() out of lock(dqm), to remove nested
locking between mmap_sem and bo::reserve. The locking order
is: bo::reserve -> amdgpu_mn_lock(p->mn)
Change-Id: I2ec0
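Schematically, the before/after lock ordering:

    /* before: lock(dqm)
     *           init_mqd()           (bo::reserve -> amdgpu_mn_lock)
     *         unlock(dqm)            => nests bo::reserve under dqm
     *
     * after:  init_mqd()             (bo::reserve -> amdgpu_mn_lock)
     *         lock(dqm)
     *           queue bookkeeping only, no BO reservation
     *         unlock(dqm)
     */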
Replace our MMU notifier with hmm_mirror_ops.sync_cpu_device_pagetables
callback. Enable CONFIG_HMM and CONFIG_HMM_MIRROR as a dependency in
DRM_AMDGPU_USERPTR Kconfig.
It supports both KFD userptr and gfx userptr paths.
The dependent HMM patchset from Jérôme Glisse is fully merged into the 4.20.0
kerne
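Registration then hangs off an hmm_mirror instead of a raw mmu_notifier;
roughly (the callback name comes from the commit text, the surrounding
names are assumptions):

    static const struct hmm_mirror_ops amdgpu_mirror_ops = {
            /* called by HMM whenever the CPU page tables change */
            .sync_cpu_device_pagetables = amdgpu_mn_sync_pagetables,
    };

    static int amdgpu_mn_register_mirror(struct amdgpu_mn *amn,
                                         struct mm_struct *mm)
    {
            amn->mirror.ops = &amdgpu_mirror_ops;
            return hmm_mirror_register(&amn->mirror, mm);
    }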
On 2018-12-03 8:52 p.m., Kuehling, Felix wrote:
> See comments inline. I didn't review the amdgpu_cs and amdgpu_gem parts
> as I don't know them very well.
>
> On 2018-12-03 3:19 p.m., Yang, Philip wrote:
>> Use HMM helper function hmm_vma_fault() to get physical pages backing
>> userptr and start
If a CS is submitted using a guilty ctx, we terminate amdgpu_cs_parser_init
before locking ctx->lock; later in amdgpu_cs_parser_fini we still
try to release the lock just because parser->ctx != NULL.
Signed-off-by: Andrey Grodzovsky
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
1 file
On 06.12.18 at 18:41, Andrey Grodzovsky wrote:
Decouple sched threads stop and start and ring mirror
list handling from the policy of what to do about the
guilty jobs.
When stopping the sched thread and detaching sched fences
from non-signaled HW fences, wait for all signaled HW fences
to complet
On 12/06/2018 12:41 PM, Andrey Grodzovsky wrote:
> Expedite job deletion from the ring mirror list to the HW fence signal
> callback instead of from finish_work; together with waiting for all
> such fences to signal in drm_sched_stop, we guarantee that an
> already signaled job will not be processed twice.
>
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work; together with waiting for all
such fences to signal in drm_sched_stop, we guarantee that an
already signaled job will not be processed twice.
Remove the sched finish fence callback and just submit finis
Decouple sched threads stop and start and ring mirror
list handling from the policy of what to do about the
guilty jobs.
When stopping the sched thread and detaching sched fences
from non-signaled HW fences, wait for all signaled HW fences
to complete before rerunning the jobs.
Suggested-by: Christ
Add new messages for polaris.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
b/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
index 62f36ba2435b..d11d6a797ce4 10
Check if the MC firmware supports FFC and tell the SMC so
mclk switching is handled properly.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
b/drivers/g
Add special avfs handling for some polaris variants.
Signed-off-by: Alex Deucher
---
.../drm/amd/powerplay/smumgr/polaris10_smumgr.c| 51 ++
1 file changed, 51 insertions(+)
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
b/drivers/gpu/drm/amd/power
Ok - the change is Acked-by: Andrey Grodzovsky
Andrey
On 12/06/2018 10:59 AM, Nicholas Kazlauskas wrote:
> On 2018-12-06 10:36 a.m., Grodzovsky, Andrey wrote:
>> Not an expert on Freesync, so maybe a stupid question, but from the comment
>> looks like this pipe locking is only for the sake of Freesy
Hi, Rex,
Change is done according to your suggestion.
Thanks,
Hersen
commit 41b6aadc991481986e6db278121065c5f7e08195
Author: hersen wu
Date: Wed Nov 28 16:55:47 2018 -0500
drm/amd/powerplay: rv dal-pplib interface refactor powerplay part
[WHY] clarify dal input parameters to pp
On 2018-12-06 10:36 a.m., Grodzovsky, Andrey wrote:
> Not an expert on Freesync, so maybe a stupid question, but from the comment
> it looks like this pipe locking is only there for the sake of Freesync mode
> - why is it then called unconditionally, without checking if you even run in
> Freesync mode?
>
> An
Not an expert on Freesync, so maybe a stupid question, but from the comment
it looks like this pipe locking is only there for the sake of Freesync mode
- why is it then called unconditionally, without checking if you even run in
Freesync mode?
Andrey
On 12/06/2018 08:42 AM, Kazlauskas, Nicholas wrote:
>
KASAN caught something during today's piglit run, see the attached dmesg
excerpt.
Looks like amdgpu destroys the VM while the scheduler still has a
reference to its entity?
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast |
Hi Rex,
ok still doesn't make sense to me. VM updates are pipelined with command
submissions.
So you just need to create the mapping for the specific ID when the CTX
is created.
And when the CTX is destroyed you unmap the CSA again.
Or is that for each queue inside the context? Please clarify
Hi Christian,
We allocate and map a csa per ctx, and need to record the used/free vm space.
So use a bitmap to manage the reserved vm space.
Also add resv_space_id in ctx.
When amdgpu_ctx_fini, we can clear the bit in the bitmap.
Best Regards
Rex
> -Original Message-
> From: Christian König
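A sketch of the bitmap bookkeeping Rex describes (the field names are
guesses based on the description):

    /* amdgpu_ctx_init: grab a free reserved-VA slot */
    id = find_first_zero_bit(adev->vm_manager.resv_bitmap,
                             AMDGPU_VM_MAX_NUM_CTX);
    if (id == AMDGPU_VM_MAX_NUM_CTX)
            return -ENOSPC;
    set_bit(id, adev->vm_manager.resv_bitmap);
    ctx->resv_space_id = id;

    /* amdgpu_ctx_fini: return the slot */
    clear_bit(ctx->resv_space_id, adev->vm_manager.resv_bitmap);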
> -Original Message-
> From: Christian König
> Sent: Thursday, December 6, 2018 8:33 PM
> To: Zhu, Rex ; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH 3/9] drm/amdgpu: Use dynamical reserved vm size
>
> On 06.12.18 at 13:14, Rex Zhu wrote:
> > Use a dynamically reserved vm size insta
On Thu, Dec 6, 2018 at 1:12 AM Kenneth Feng wrote:
>
> Using avfs cks-off voltages instead of EVV cks-off voltages can avoid
> voltage overshoot when switching sclk.
>
> Signed-off-by: Kenneth Feng
Acked-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h | 2 ++
>
On 12/5/18 4:01 PM, Kazlauskas, Nicholas wrote:
> On 2018-12-05 3:44 p.m., Grodzovsky, Andrey wrote:
>>
>>
>> On 12/05/2018 03:42 PM, Kazlauskas, Nicholas wrote:
>>> On 2018-12-05 3:26 p.m., Grodzovsky, Andrey wrote:
On 12/05/2018 02:59 PM, Nicholas Kazlauskas wrote:
> [Why]
> Leg
On 06.12.18 at 13:14, Rex Zhu wrote:
create and map a csa for the gfx/sdma engines to save the
middle command buffer when gpu preemption is triggered.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 9 +++---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 55
On 06.12.18 at 13:14, Rex Zhu wrote:
to support cp gfx mid-command buffer preemption in baremetal
Signed-off-by: Rex Zhu
Acked-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 15 ++-
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 15 ++-
2 files change
On 06.12.18 at 13:14, Rex Zhu wrote:
add a pointer to struct amdgpu_job in the emit_cntxcntl
interface in order to get the csa mc address per ctx
when emitting ce metadata in baremetal.
Signed-off-by: Rex Zhu
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 2 +-
driv
On 06.12.18 at 13:14, Rex Zhu wrote:
save the csa mc address in the job, so we can patch the
address into pm4 in emit_ib even if the ctx was freed.
Suggested by Christian.
Signed-off-by: Rex Zhu
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 ++
drivers/gpu/drm/amd
On 06.12.18 at 13:14, Rex Zhu wrote:
1. meet the kfd request
2. align with baremetal; in baremetal, the driver maps a csa
per ctx.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 19 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 8
2 files changed, 1
On 06.12.18 at 13:14, Rex Zhu wrote:
used to manage the reserved vm space.
Why do we need that?
Christian.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 4 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 6 +
On 06.12.18 at 13:14, Rex Zhu wrote:
Use a dynamically reserved vm size instead of a hardcoded one.
The driver always reserves AMDGPU_VA_RESERVED_SIZE at the
bottom of the VM space.
For the gfx/sdma mcbp feature,
reserve AMDGPU_VA_RESERVED_SIZE * AMDGPU_VM_MAX_NUM_CTX
at the top of the VM space.
For sriov, only need to re
On 06.12.18 at 13:14, Rex Zhu wrote:
on baremetal, the driver creates a csa per ctx,
so add a function argument ctx_id to
get the csa gpu addr.
In sriov, the driver creates a csa per process,
so the ctx id is always 1.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 5 +++--
drivers/gpu/drm/am
On 06.12.18 at 13:13, Rex Zhu wrote:
the driver needs to reserve resources for each ctx for
some hw features, so add this limitation.
Signed-off-by: Rex Zhu
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 2 +-
2 file
on baremetal, the driver creates a csa per ctx,
so add a function argument ctx_id to
get the csa gpu addr.
In sriov, the driver creates a csa per process,
so the ctx id is always 1.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 5 +++--
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.h | 2 +-
drivers/gpu/
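The shape of the change to amdgpu_csa_vaddr, roughly (based on the
existing helper; the per-ctx slot math is an assumption):

    u64 amdgpu_csa_vaddr(struct amdgpu_device *adev, u32 ctx_id)
    {
            u64 addr = adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT;

            /* one reserved slot per ctx, counted down from the top of
             * the VA range; under sriov ctx_id is always 1 */
            addr -= ctx_id * AMDGPU_VA_RESERVED_SIZE;

            return addr & AMDGPU_GMC_HOLE_MASK;
    }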
1. meet the kfd request
2. align with baremetal; in baremetal, the driver maps a csa
per ctx.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 19 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 8
2 files changed, 18 insertions(+), 9 deletions(-)
diff --git
save the csa mc address in the job, so we can patch the
address into pm4 in emit_ib even if the ctx was freed.
Suggested by Christian.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(
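In essence, the CS path stashes the address in the job (field name
hypothetical):

    job->csa_mc_addr = amdgpu_csa_vaddr(adev, ctx->resv_space_id);

and emit_ib then patches that value into the pm4 stream instead of
reaching back into a ctx that may already be freed.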
create and map a csa for the gfx/sdma engines to save the
middle command buffer when gpu preemption is triggered.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 9 +++---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 55 -
drivers/gpu/drm/amd/amdgpu/amdg
add a pointer to struct amdgpu_job in the emit_cntxcntl
interface in order to get the csa mc address per ctx
when emitting ce metadata in baremetal.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 ++--
drivers/gpu/drm/amd/amdgpu/g
to support cp gfx mid-command buffer preemption in baremetal
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 15 ++-
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 15 ++-
2 files changed, 20 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/amd
Use a dynamically reserved vm size instead of a hardcoded one.
The driver always reserves AMDGPU_VA_RESERVED_SIZE at the
bottom of the VM space.
For the gfx/sdma mcbp feature,
reserve AMDGPU_VA_RESERVED_SIZE * AMDGPU_VM_MAX_NUM_CTX
at the top of the VM space.
For sriov, only need to reserve AMDGPU_VA_RESERVED_SIZE
at the top
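The resulting reservation, schematically:

    /* bottom of the VA space, as before */
    reserved_bottom = AMDGPU_VA_RESERVED_SIZE;

    /* top of the VA space: one csa slot per possible ctx on baremetal,
     * a single slot under sriov (csa is per process there) */
    reserved_top = amdgpu_sriov_vf(adev) ?
                   AMDGPU_VA_RESERVED_SIZE :
                   AMDGPU_VA_RESERVED_SIZE * AMDGPU_VM_MAX_NUM_CTX;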
the driver needs to reserve resources for each ctx for
some hw features, so add this limitation.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/a
on baremetal, the driver creates a csa per ctx,
so add a function argument ctx_id to
get the csa gpu addr.
In sriov, the driver creates a csa per process,
so the ctx id is always 1.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 5 +++--
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.h | 2 +-
drivers/gpu/
used to manage the reserved vm space.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 4 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 6 +-
3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/g
save the csa mc address in the job, so we can patch the
address into pm4 in emit_ib even if the ctx was freed.
Suggested by Christian.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(
1. meet the kfd request
2. align with baremetal; in baremetal, the driver maps a csa
per ctx.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 19 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 8
2 files changed, 18 insertions(+), 9 deletions(-)
diff --git
to support cp gfx mid-command buffer preemption in baremetal
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 15 ++-
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 15 ++-
2 files changed, 20 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/amd
create and map a csa for the gfx/sdma engines to save the
middle command buffer when gpu preemption is triggered.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 9 +++---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 55 -
drivers/gpu/drm/amd/amdgpu/amdg
used to manage the reserved vm space.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 8 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 4 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 6 +-
3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/g
the driver needs to reserve resources for each ctx for
some hw features, so add this limitation.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/a
add a pointer to struct amdgpu_job in the emit_cntxcntl
interface in order to get the csa mc address per ctx
when emitting ce metadata in baremetal.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 ++--
drivers/gpu/drm/amd/amdgpu/g
on baremetal, the driver creates a csa per ctx,
so add a function argument ctx_id to
get the csa gpu addr.
In sriov, the driver creates a csa per process,
so the ctx id is always 1.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c | 5 +++--
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.h | 2 +-
drivers/gpu/
Use a dynamically reserved vm size instead of a hardcoded one.
The driver always reserves AMDGPU_VA_RESERVED_SIZE at the
bottom of the VM space.
For the gfx/sdma mcbp feature,
reserve AMDGPU_VA_RESERVED_SIZE * AMDGPU_VM_MAX_NUM_CTX
at the top of the VM space.
For sriov, only need to reserve AMDGPU_VA_RESERVED_SIZE
at the top
For a lot of use cases we need 64bit sequence numbers. Currently drivers
overload the dma_fence structure to store the additional bits.
Stop doing that and make the sequence number in the dma_fence always
64bit.
For compatibility with hardware which can do only 32bit sequences the
comparisons in
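For such hardware the comparison has to stay wrap-safe by trusting only
the low bits; one plausible form (a sketch, not necessarily the literal
patch):

    static inline bool __dma_fence_is_later(u64 f1, u64 f2)
    {
            /* compare in 32-bit space so a sequence-number wraparound
             * on 32-bit-only hardware still orders correctly */
            return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
    }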
This completes "drm/syncobj: Drop add/remove_callback from driver
interface" and cleans up the implementation a bit.
Signed-off-by: Christian König
---
drivers/gpu/drm/drm_syncobj.c | 91 ++-
include/drm/drm_syncobj.h | 21 --
2 files changed,
Lockless container implementation similar to a dma_fence_array, but with
only two elements per node and automatic garbage collection.
v2: properly document dma_fence_chain_for_each, add dma_fence_chain_find_seqno,
drop prev reference during garbage collection if it's not a chain fence.
v3: use
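Typical traversal with the helpers named above, roughly (assuming a
monotonic timeline, where older points signal first):

    struct dma_fence *iter;

    /* walks from the chain head towards older points, handling
     * references as it goes */
    dma_fence_chain_for_each(iter, chain_head) {
            if (dma_fence_is_signaled(iter))
                    break;  /* everything older has signaled too */
    }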
Implement finding the right timeline point in drm_syncobj_find_fence.
v2: return -EINVAL when the point is not submitted yet.
v3: fix reference counting bug, add flags handling as well
Signed-off-by: Christian König
---
drivers/gpu/drm/drm_syncobj.c | 43
From: Chunming Zhou
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 90f474f98b6e..316bfc1a6a75 100644
--- a/dri
From: Chunming Zhou
user mode can query timeline payload.
v2: check return value of copy_to_user
v3: handle querying entry by entry
v4: rebase on new chain container, simplify interface
Signed-off-by: Chunming Zhou
Cc: Daniel Rakos
Cc: Jason Ekstrand
Cc: Bas Nieuwenhuizen
Cc: Dave Airlie
Cc
From: Chunming Zhou
syncobj wait/signal operation is appended in command submission.
v2: separate into two kinds of in/out_deps functions
Signed-off-by: Chunming Zhou
Cc: Daniel Rakos
Cc: Jason Ekstrand
Cc: Bas Nieuwenhuizen
Cc: Dave Airlie
Cc: Christian König
Cc: Chris Wilson
---
drivers/gp
Use the dma_fence_chain object to create a timeline of fence objects
instead of just replacing the existing fence.
v2: rebase and cleanup
Signed-off-by: Christian König
---
drivers/gpu/drm/drm_syncobj.c | 37 +
include/drm/drm_syncobj.h | 5 +
2 file
From: Chunming Zhou
the points array is a one-to-one match with the syncobjs array.
v2:
add separate ioctl for timeline point wait, otherwise break uapi.
v3:
userspace can specify two kinds of waits:
a. wait for the time point to be completed.
b. wait for the time point to become available.
v4:
rebase
v5:
add com
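The separate wait ioctl takes a points array parallel to the handles
array; the uapi struct looks roughly like this (layout as it later
landed upstream, shown as an illustration):

    struct drm_syncobj_timeline_wait {
            __u64 handles;          /* array of syncobj handles */
            __u64 points;           /* one timeline point per handle */
            __s64 timeout_nsec;     /* absolute timeout */
            __u32 count_handles;
            __u32 flags;
            __u32 first_signaled;   /* valid when not waiting for all */
            __u32 pad;
    };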
On 06.12.18 at 03:42, wentalou wrote:
amdgpu_ring_soft_recovery would hit a Call Trace
when s_fence->parent was NULL inside amdgpu_job_timedout.
Check the fence first, as drm_sched_hw_job_reset did.
Change-Id: Ibb062e36feb4e2522a59641fe0d2d76b9773cda7
Signed-off-by: Wentao Lou
Reviewed-by: Chris
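The check, schematically (close to the amdgpu helper of that time, with
the waiting loop elided):

    bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned vmid,
                                   struct dma_fence *fence)
    {
            /* nothing to recover without a fence; this also avoids the
             * NULL dereference seen from amdgpu_job_timedout */
            if (!ring->funcs->soft_recovery || !fence)
                    return false;

            /* ... spin until a deadline, nudging the wave, then ... */
            return dma_fence_is_signaled(fence);
    }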
On 06.12.18 at 00:25, Kuehling, Felix wrote:
On 2018-12-05 4:15 a.m., Christian König wrote:
This finally enables processing of ring 1 & 2.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/vega10_ih.c | 68 --
1 file changed, 63 insertions(+), 5 deletio
On 05.12.18 at 23:53, Kuehling, Felix wrote:
Depending on the interrupt ring, the IRQ dispatch and processing
functions will run in interrupt context or in a worker thread.
Is there a way for the processing functions to find out which context
they are running in? That may influence decisions whethe