[AMD Public Use]
Reviewed-by: Hawking Zhang
Regards,
Hawking
-----Original Message-----
From: Gao, Likun
Sent: Wednesday, July 8, 2020 13:48
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Gao, Likun
Subject: [v2] drm/amdgpu: remove unnecessary logic of ASIC check
From: Likun Gao
Remove some unused ASIC check logic.
Remove some definitions in amdgpu_device which are only used by
the removed ASIC check logic. (V2)
Signed-off-by: Likun Gao
Change-Id: I5b06a51b41790b4df1078099848025851f79c320
---
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 5 -
drivers/gpu/drm/
[AMD Public Use]
Reviewed-by: Hawking Zhang
Regards,
Hawking
-----Original Message-----
From: Gao, Likun
Sent: Wednesday, July 8, 2020 13:25
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Gao, Likun
Subject: [PATCH] drm/amdgpu: enable gpu recovery for sienna cichlid
From: Likun Gao
[AMD Public Use]
Reviewed-by: Hawking Zhang
Regards,
Hawking
-----Original Message-----
From: Gao, Likun
Sent: Wednesday, July 8, 2020 13:24
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Gao, Likun
Subject: [PATCH] drm/amdgpu: remove unnecessary logic of ASIC check
From: Likun Gao
Enable gpu recovery for sienna cichlid by default so that gpu
recovery is triggered when needed.
Signed-off-by: Likun Gao
Change-Id: Iaf3cfa145bdc8407771d5a26dabb413570980a85
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/dr
From: Likun Gao
Remove some unused ASIC check logic.
Signed-off-by: Likun Gao
Change-Id: Ief8bcb77392294b180473754e669b9e460a04826
---
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 4
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 7 +--
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 3 +--
3 files ch
Hi, Thomas, Joerg, Boris, Ingo, Baolu, and x86/iommu maintainers,
On Tue, Jun 30, 2020 at 04:44:30PM -0700, Fenghua Yu wrote:
> Typical hardware devices require a driver stack to translate application
> buffers to hardware addresses, and a kernel-user transition to notify the
> hardware of new wor
To improve coverage, also annotate the gpu reset code itself, since
that's called from places other than drm/scheduler (which is already
annotated). Annotations nest, so this doesn't break anything, and it
allows easier testing.
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: l
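The nesting pattern described above can be sketched roughly as follows (a simplified kernel-side illustration, not the actual patch; the wrapper name is an assumption, while `dma_fence_begin_signalling()`/`dma_fence_end_signalling()` and `amdgpu_device_gpu_recover()` are real kernel functions):

```c
/* Sketch: open a fence-signalling critical section around the reset
 * path.  The annotations nest, so reaching this from the already
 * annotated drm/scheduler callbacks is fine: the inner begin/end pair
 * simply becomes a no-op for lockdep. */
static int amdgpu_reset_annotated(struct amdgpu_device *adev)
{
	bool fence_cookie;
	int r;

	fence_cookie = dma_fence_begin_signalling();
	r = amdgpu_device_gpu_recover(adev, NULL); /* eventually signals fences */
	dma_fence_end_signalling(fence_cookie);

	return r;
}
```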
...
I think it's time to stop this little exercise.
The lockdep splat, for the record:
[  132.583381] ======================================================
[  132.584091] WARNING: possible circular locking dependency detected
[  132.584775] 5.7.0-rc3+ #346 Tainted: G        W
[  132.585461] ------------------------------------------------------
Trying to grab dma_resv_lock while in commit_tail before we've done
all the code that leads to the eventual signalling of the vblank event
(which can be a dma_fence) is deadlock-y. Don't do that.
Here the solution is easy because just grabbing locks to read
something races anyway. We don't need to
This is one from the department of "maybe play lottery if you hit
this, karma compensation might work". Or at least lockdep ftw!
This reverts commit 565d1941557756a584ac357d945bc374d5fcd1d0.
It's not quite as low-risk as the commit message claims, because this
grabs console_lock, which might be h
If the scheduler rt thread gets stuck on a mutex that we're holding
while waiting for gpu workloads to complete, we have a problem.
Add dma-fence annotations so that lockdep can check this for us.
I've tried to quite carefully review this, and I think it's at the
right spot. But obviously no exper
This is a bit tricky: since ->notifier_lock is held while calling
dma_fence_wait, we must ensure that the read side (i.e.
dma_fence_begin_signalling) is on the same side. If we mix this up,
lockdep complains, and that's again why we want to have these
annotations.
A nice side effect of this is
In the face of unprivileged userspace being able to submit bogus gpu
workloads, the kernel needs gpu timeout and reset (tdr) to guarantee
that dma_fences actually complete. Annotate this worker to make sure
we don't have any accidental locking inversions or other problems
lurking.
Originally this
My dma-fence lockdep annotations caught an inversion because we
allocate memory where we really shouldn't:
kmem_cache_alloc+0x2b/0x6d0
amdgpu_fence_emit+0x30/0x330 [amdgpu]
amdgpu_ib_schedule+0x306/0x550 [amdgpu]
amdgpu_job_run+0x10f/0x260 [amdgpu]
drm_sched
This is a bit disappointing since we need to split the annotations
over all the different parts.
I was considering just leaking the critical section into the
->atomic_commit_tail callback of each driver. But that would mean we
need to pass the fence_cookie into each driver (there's a total of 13
i
Not going to bother with a complete & pretty commit message, just the
offending backtrace:
kvmalloc_node+0x47/0x80
dc_create_state+0x1f/0x60 [amdgpu]
dc_commit_state+0xcb/0x9b0 [amdgpu]
amdgpu_dm_atomic_commit_tail+0xd31/0x2010 [amdgpu]
commit_tail+0xa4/0x140 [drm
I need a canary in a ttm-based atomic driver to make sure the
dma_fence_begin/end_signalling annotations actually work.
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-r...@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-...@lists.freedesktop.org
Cc: Chris
This is needed to signal the fences from page flips; annotate it
accordingly. We need to annotate the entire timer callback since if we
get stuck anywhere in there, then the timer stops, and hence fences stop.
Just annotating the top part that does the vblank handling isn't
enough.
Cc: linux-me...@vge
Comes up every few years, gets somewhat tedious to discuss, let's
write this down once and for all.
What I'm not sure about is whether the text should be more explicit in
flat out mandating the amdkfd eviction fences for long running compute
workloads or workloads where userspace fencing is allowe
This is rather overkill since currently all drivers call this from
hardirq (or at least timers). But maybe in the future we're going to
have threaded irq handlers and whatnot; it doesn't hurt to be prepared.
Plus this is an easy start for sprinkling these fence annotations into
shared code.
Cc: linux-
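The shared-code annotation described above can be sketched as wrapping the common signalling helper (a simplified, kernel-side illustration of the idea; error handling and details of the real function may differ):

```c
/* Sketch: annotating the shared dma_fence_signal() helper, so every
 * driver calling it -- from hardirq today or a threaded irq handler
 * in the future -- gets the fence-signalling critical section for
 * free, without per-driver changes. */
int dma_fence_signal(struct dma_fence *fence)
{
	unsigned long flags;
	bool cookie;
	int ret;

	if (!fence)
		return -EINVAL;

	cookie = dma_fence_begin_signalling();

	spin_lock_irqsave(fence->lock, flags);
	ret = dma_fence_signal_locked(fence);
	spin_unlock_irqrestore(fence->lock, flags);

	dma_fence_end_signalling(cookie);
	return ret;
}
```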
Design is similar to the lockdep annotations for workers, but with
some twists:
- We use a read-lock for the execution/worker/completion side, so that
this explicit annotation can be more liberally sprinkled around.
With read locks lockdep isn't going to complain if the read-side
isn't neste
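The read-lock design above can be sketched with a global lockdep map (an illustrative simplification, not the exact kernel implementation; `lock_map_acquire_read`, `lock_map_acquire`, and `lock_map_release` are real lockdep annotation macros):

```c
/* Sketch: signalling sections take the map as a read lock, so they
 * can be sprinkled liberally and nest without complaints; the wait
 * side takes it as a write lock, which is what lets lockdep report a
 * cycle when a dma_fence_wait() can block a signalling path. */
static struct lockdep_map dma_fence_lockdep_map = {
	.name = "dma_fence_map"
};

bool dma_fence_begin_signalling(void)
{
	lock_map_acquire_read(&dma_fence_lockdep_map);
	return false;	/* cookie, consumed by the end annotation */
}

void dma_fence_end_signalling(bool cookie)
{
	if (cookie)
		return;
	lock_map_release(&dma_fence_lockdep_map);
}

/* Called on the wait side, e.g. from dma_fence_wait(). */
static void dma_fence_wait_annotate(void)
{
	lock_map_acquire(&dma_fence_lockdep_map);
	lock_map_release(&dma_fence_lockdep_map);
}
```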
Two in one go:
- it is allowed to call dma_fence_wait() while holding a
dma_resv_lock(). This is fundamental to how eviction works with ttm,
so required.
- it is allowed to call dma_fence_wait() from memory reclaim contexts,
specifically from shrinker callbacks (which i915 does), and from mm
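The two allowed orderings can be sketched as follows (illustrative kernel-side fragments; `obj` and `fence` are assumed variables, while the locking and priming calls are real kernel APIs):

```c
/* Rule 1: waiting on a fence while holding a reservation lock is
 * legal -- TTM eviction depends on it -- so lockdep is primed to
 * accept this ordering instead of flagging it. */
dma_resv_lock(obj->resv, NULL);
dma_fence_wait(fence, false);
dma_resv_unlock(obj->resv);

/* Rule 2: a dma_fence_wait() from a memory-reclaim context (e.g. a
 * shrinker callback, as in i915) must also be legal, which the
 * fs_reclaim priming teaches lockdep. */
fs_reclaim_acquire(GFP_KERNEL);
dma_fence_wait(fence, false);
fs_reclaim_release(GFP_KERNEL);
```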
[AMD Public Use]
-----Original Message-----
From: amd-gfx On Behalf Of Christian König
Sent: Monday, July 6, 2020 11:18 PM
To: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Subject: [PATCH 1/2] drm/ttm: further cleanup ttm_mem_reg handling
Stop touching the backend private poin
On Tue, Jul 7, 2020 at 11:22 AM Alex Deucher wrote:
>
> On Tue, Jul 7, 2020 at 2:15 PM Lepton Wu wrote:
> >
> > I hit this when compiling amdgpu in kernel. amdgpu_driver_load_kms fail
> > to load firmwares since GPU was initialized before rootfs is ready.
> > Just gracefully fail in such cases.
>
On Tue, Jul 7, 2020 at 2:15 PM Lepton Wu wrote:
>
> I hit this when compiling amdgpu in kernel. amdgpu_driver_load_kms fail
> to load firmwares since GPU was initialized before rootfs is ready.
> Just gracefully fail in such cases.
>
> v2: Check return code of amdgpu_driver_load_kms
>
> Signed-off
I hit this when compiling amdgpu into the kernel. amdgpu_driver_load_kms fails
to load firmware since the GPU was initialized before rootfs is ready.
Just fail gracefully in such cases.
v2: Check return code of amdgpu_driver_load_kms
Signed-off-by: Lepton Wu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |
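The shape of the fix described above can be sketched as follows (the actual diff is truncated here, so this is a hypothetical fragment; the cleanup label is an assumption):

```c
/* Sketch: in the probe path, check the return code of
 * amdgpu_driver_load_kms() and unwind instead of continuing with a
 * half-initialized device when firmware cannot be loaded (e.g. a
 * built-in driver probing before rootfs is mounted). */
ret = amdgpu_driver_load_kms(adev, flags);
if (ret)
	goto err_pci;	/* hypothetical cleanup label */
```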
[AMD Official Use Only - Internal Distribution Only]
Reviewed-by: Bhawanpreet Lakha
From: Nicholas Kazlauskas
Sent: July 7, 2020 11:41 AM
To: amd-gfx@lists.freedesktop.org
Cc: Kazlauskas, Nicholas ; Lakha, Bhawanpreet
Subject: [PATCH] drm/amd/display: Add miss
[Why]
To support inbox1 in CW4 we need to actually program CW4 instead of
region 4 for newer firmware.
This is done correctly on DCN20/DCN21 but this code wasn't added to
DCN30.
[How]
Copy over the missing code. It doesn't need address translation since
DCN30 uses virtual addressing.
Cc: Bhawanp
Applied. Thanks!
Alex
On Mon, Jul 6, 2020 at 8:29 AM wrote:
>
> From: Tom Rix
>
> clang static analysis flags this error
>
> drivers/gpu/drm/radeon/ci_dpm.c:5652:9: warning: Use of memory after it is
> freed [unix.Malloc]
> kfree(rdev->pm.dpm.ps[i].ps_priv);
>
On 07.07.20 at 16:24, Michel Dänzer wrote:
On 2020-07-07 3:40 p.m., Christian König wrote:
There is a document about how the libdrm copy of the header is to be
updated.
Can't find it of hand, but our userspace guys should be able to point
out where it is.
https://gitlab.freedesktop.org/mesa/d
[AMD Public Use]
Ah, yes, indeed.
Thanks,
Alex
From: Quan, Evan
Sent: Monday, July 6, 2020 9:31 PM
To: Alex Deucher ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander
Subject: RE: [PATCH] drm/amdgpu/swSMU: drop pm_enabled check in set mp1 state
[AMD Offi
On 2020-07-07 3:40 p.m., Christian König wrote:
> There is a document about how the libdrm copy of the header is to be
> updated.
>
> Can't find it of hand, but our userspace guys should be able to point
> out where it is.
https://gitlab.freedesktop.org/mesa/drm/-/blob/master/include/drm/README
Reviewed-by: Christian König
On 07.07.20 at 16:14, Alex Deucher wrote:
Reviewed-by: Alex Deucher
On Mon, Jul 6, 2020 at 6:29 PM Marek Olšák wrote:
Please review. Thanks.
Marek
Reviewed-by: Alex Deucher
On Mon, Jul 6, 2020 at 6:29 PM Marek Olšák wrote:
>
> Please review. Thanks.
>
> Marek
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
There is a document about how the libdrm copy of the header is to be
updated.
Can't find it of hand, but our userspace guys should be able to point
out where it is.
One more comment below.
On 07.07.20 at 15:24, Zhang, Hawking wrote:
[AMD Public Use]
It seems to me the kernel amdgpu_drm.h
[AMD Public Use]
It seems to me the kernel amdgpu_drm.h doesn't sync up with libdrm amdgpu_drm.h
if your patch is based on drm-master.
I'd expect another patch to add the AMDGPU_IB_FLAG_EMIT_MEM_SYNC flag in libdrm,
since the kernel has already had the flag for a while.
Other than that, the patch is Review
From: Likun Gao
Set the AMDGPU_IB_FLAG_EMIT_MEM_SYNC flag for specific ASICs when testing
multi fence.
Signed-off-by: Likun Gao
Change-Id: I41e5cb19d9ca72c1d396cc28d1b54c31773fe4d5
---
include/drm/amdgpu_drm.h | 2 ++
tests/amdgpu/basic_tests.c | 6 ++
2 files changed, 8 insertions(+)
diff --gi
Hi Christian,
On 22.06.2020 15:27, Christian König wrote:
> On 19.06.20 at 12:36, Marek Szyprowski wrote:
>> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg()
>> function
>> returns the number of the created entries in the DMA address space.
>> However the subsequent calls to the
For APU, vram type is DDR4 and vram width is 64 bits.
For dGPU, vram type is GDDR6 and vram width is 128 bits.
Signed-off-by: Aaron Liu
---
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
b/drivers/gpu
I hit this when compiling amdgpu into the kernel. amdgpu_driver_load_kms fails
to load firmware since the GPU was initialized before rootfs is ready.
Just fail gracefully in such cases.
Signed-off-by: Lepton Wu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |