On 08/02/2018 01:50 PM, Nayan Deshmukh wrote:
On Thu, Aug 2, 2018 at 10:31 AM Zhang, Jerry (Junwei) <jerry.zh...@amd.com> wrote:
> On 07/12/2018 02:36 PM, Nayan Deshmukh wrote:
> > Signed-off-by: Nayan Deshmukh <nayan26deshm...@gmail.com>
> > ---
> >  drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
> >  include/drm/gpu_scheduler.h | 2 ++
> >  2 files changed, 5
On 08/01/2018 07:31 PM, Christian König wrote:
Start to use the scheduler load balancing for userspace SDMA
command submissions.
In this case, each SDMA context could load the rqs of all SDMA instances, and the UMD
will not specify a ring id.
If so, we may abstract a set of rings for each type of IP,
On 07/12/2018 02:36 PM, Nayan Deshmukh wrote:
Signed-off-by: Nayan Deshmukh
---
drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
include/drm/gpu_scheduler.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c
Another big question:
I agree the general idea of balancing scheduler load within the same ring family is good.
But when jobs from the same entity run on different schedulers, a later job
could complete ahead of an earlier one, right?
That would break the fence design: a later fence must be signaled after the front
On Wed, Aug 01, 2018 at 01:31:15PM +0200, Christian König wrote:
> Further unmangle amdgpu.h.
>
> Signed-off-by: Christian König
Reviewed-by: Huang Rui
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 59 +--
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 84
>
On Wed, Aug 01, 2018 at 09:06:29PM +0800, Christian König wrote:
> Yeah, I've actually added one before pushing it to amd-staging-drm-next.
>
> But thanks for the reminder, wanted to note that to Nayan as well :)
>
Yes, a soft reminder to Nayan. Thanks Nayan for the contribution. :-)
Thanks,
On Wed, Aug 1, 2018 at 2:29 PM, Christian König
wrote:
> Am 01.08.2018 um 19:59 schrieb Marek Olšák:
>>
>> On Wed, Aug 1, 2018 at 1:52 PM, Christian König
>> wrote:
>>>
>>> Am 01.08.2018 um 19:39 schrieb Marek Olšák:
On Wed, Aug 1, 2018 at 2:32 AM, Christian König
wrote:
>
Hi Dave,
Fixes for 4.19:
- Fix UVD 7.2 instance handling
- Fix UVD 7.2 harvesting
- GPU scheduler fix for when a process is killed
- TTM cleanups
- amdgpu CS bo_list fixes
- Powerplay fixes for polaris12 and CZ/ST
- DC fixes for link training with certain HMDs
- DC fix for vega10 blank screen in
[Why]
On POLARIS10 powerplay can fail to retrieve minimum memory clocks.
This cascades into failing to find the MinVddc value when
populating a single memory level. When this is called during
polaris10_populate_all_memory_levels the function exits early leaving
the previously used values in the
On 08/01/2018 03:51 PM, Harry Wentland wrote:
[Why]
Some boards seem to have a problem where HPD is high on HDMI even though
no display is connected. We don't want to report these as connected. DP
spec still requires us to report DP displays as connected when HPD is
high but we can't read the
[Why]
Some boards seem to have a problem where HPD is high on HDMI even though
no display is connected. We don't want to report these as connected. DP
spec still requires us to report DP displays as connected when HPD is
high but we can't read the EDID in order to go to fail-safe mode.
[How]
If
Am 01.08.2018 um 19:59 schrieb Marek Olšák:
On Wed, Aug 1, 2018 at 1:52 PM, Christian König
wrote:
Am 01.08.2018 um 19:39 schrieb Marek Olšák:
On Wed, Aug 1, 2018 at 2:32 AM, Christian König
wrote:
Am 01.08.2018 um 00:07 schrieb Marek Olšák:
Can this be implemented as a wrapper on top of
On Wed, Aug 1, 2018 at 1:52 PM, Christian König
wrote:
> Am 01.08.2018 um 19:39 schrieb Marek Olšák:
>>
>> On Wed, Aug 1, 2018 at 2:32 AM, Christian König
>> wrote:
>>>
>>> Am 01.08.2018 um 00:07 schrieb Marek Olšák:
Can this be implemented as a wrapper on top of libdrm? So that the
Am 01.08.2018 um 19:39 schrieb Marek Olšák:
On Wed, Aug 1, 2018 at 2:32 AM, Christian König
wrote:
Am 01.08.2018 um 00:07 schrieb Marek Olšák:
Can this be implemented as a wrapper on top of libdrm? So that the
tree (or hash table) isn't created for UMDs that don't need it.
No, the problem
Series is Acked-by: Andrey Grodzovsky
Andrey
On 08/01/2018 12:06 PM, Nayan Deshmukh wrote:
Yes, that is correct.
Nayan
On Wed, Aug 1, 2018, 9:05 PM Andrey Grodzovsky <andrey.grodzov...@amd.com> wrote:
Clarification question - if the run queues belong to different
On Wed, Aug 1, 2018 at 2:32 AM, Christian König
wrote:
> Am 01.08.2018 um 00:07 schrieb Marek Olšák:
>>
>> Can this be implemented as a wrapper on top of libdrm? So that the
>> tree (or hash table) isn't created for UMDs that don't need it.
>
>
> No, the problem is that an application gets a CPU
On Wed, Aug 1, 2018 at 4:25 AM, Mauro Rossi wrote:
> Add support for DRM_FORMAT_{A,X}BGR in atombios_crtc
> R6xx crossbar registers are defined and used based on ASIC_IS_DCE2 condition,
> for DCE1/R5xx AVIVO_D1GRPH_SWAP_RB bit is used to swap red and blue channels.
>
> Signed-off-by: Mauro
On Wed, Aug 1, 2018 at 4:25 AM, Mauro Rossi wrote:
> SURFACE_PIXEL_FORMAT_GRPH_ABGR already listed in
> amd/display/dc/dc_hw_types.h
> and the necessary crossbars register controls to swap red and blue channels
> are already implemented in drm/amd/display/dc/dce/dce_mem_input.c
>
> Logic to
Yes, that is correct.
Nayan
On Wed, Aug 1, 2018, 9:05 PM Andrey Grodzovsky
wrote:
> Clarification question - if the run queues belong to different
> schedulers they effectively point to different rings,
>
> it means we allow to move (reschedule) a drm_sched_entity from one ring
> to another -
On Wed, Aug 1, 2018 at 11:41 AM, Harry Wentland wrote:
> [Why]
> Some boards seem to have a problem where HPD is high on HDMI even though
> no display is connected. We don't want to report these as connected. DP
> spec still requires us to report DP displays as connected when HPD is
> high but we
Clarification question - if the run queues belong to different
schedulers they effectively point to different rings,
it means we allow moving (rescheduling) a drm_sched_entity from one ring
to another - I assume that was the idea in the first place: that
you have a set of HW rings and you can
Does anything use it? Might be useful for debugging, but I guess you'd
probably use something like umr in that case.
Alex
On Tue, Jul 31, 2018 at 9:15 PM, Zhu, Rex wrote:
> Hi Alex,
>
>
> Is it necessary to export an interface as "is_gfx_on" in powerplay to
> amdgpu?
>
> It can show the
Since we now deal with multiple rq we need to update all of them, not
just the current one.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 +--
drivers/gpu/drm/scheduler/gpu_scheduler.c | 36 ---
include/drm/gpu_scheduler.h
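The point of the commit above can be sketched in a small userspace mock (all names here are illustrative, not the kernel's actual drm_sched API): once an entity keeps a list of potential run queues, changing its priority must repoint every entry in that list to the matching-priority rq of its scheduler, not just the rq currently in use.

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_NUM_PRIOS 3

struct mock_rq { int id; };

struct mock_sched {
	struct mock_rq rq[MOCK_NUM_PRIOS];  /* one rq per priority level */
};

struct mock_entity {
	struct mock_sched **scheds;  /* schedulers this entity may run on */
	struct mock_rq **rq_list;    /* currently selected rq per scheduler */
	unsigned int num_rq_list;
};

/* Repoint each rq_list entry to the new priority's rq on its scheduler. */
static void mock_entity_set_priority(struct mock_entity *entity, int prio)
{
	for (unsigned int i = 0; i < entity->num_rq_list; i++)
		entity->rq_list[i] = &entity->scheds[i]->rq[prio];
}
```

Updating only `rq_list[0]` (the old single-rq behaviour) would leave the other candidate run queues at the stale priority, which is exactly the bug the commit message describes.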
Yeah, I've actually added one before pushing it to amd-staging-drm-next.
But thanks for the reminder, wanted to note that to Nayan as well :)
Christian.
Am 01.08.2018 um 15:15 schrieb Huang Rui:
On Wed, Aug 01, 2018 at 01:50:00PM +0530, Nayan Deshmukh wrote:
This needs a commit
On Wed, Aug 01, 2018 at 01:50:00PM +0530, Nayan Deshmukh wrote:
This needs a commit message.
Thanks,
Ray
> Signed-off-by: Nayan Deshmukh
> ---
> drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
> include/drm/gpu_scheduler.h | 2 ++
> 2 files changed, 5 insertions(+)
>
Not needed any more.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 98
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 5 --
2 files changed, 103 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
Instead of the fixed round-robin use, let the scheduler balance the load
of page table updates.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 12 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 7
Not needed any more since that is now done by the scheduler.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/Makefile | 3 +-
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 27 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c| 22 +-
Start to use the scheduler load balancing for userspace SDMA
command submissions.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 25 +
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
Further demangle ring from entity handling.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c| 66 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 53 ++---
Further unmangle amdgpu.h.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 59 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h | 84 +
2 files changed, 86 insertions(+), 57 deletions(-)
create mode 100644
Start to use the scheduler load balancing for userspace compute
command submissions.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
On Wed, Aug 01, 2018 at 02:59:13PM +0800, Christian König wrote:
> Am 01.08.2018 um 09:03 schrieb Huang Rui:
> > On Tue, Jul 31, 2018 at 06:46:04PM +0200, Paul Menzel wrote:
> >> From: Paul Menzel
> >> Date: Wed, 25 Jul 2018 12:54:19 +0200
> >>
> >> Improve commit d796d844 (drm/radeon/kms: make
Am 01.08.2018 um 10:19 schrieb Nayan Deshmukh:
These are the potential run queues on which the jobs from this
entity can be scheduled. We will use this to do load balancing.
Signed-off-by: Nayan Deshmukh
Reviewed-by: Christian König for the whole
series.
I also just pushed them into our
Add support for DRM_FORMAT_{A,X}BGR in atombios_crtc
R6xx crossbar registers are defined and used based on ASIC_IS_DCE2 condition,
for DCE1/R5xx AVIVO_D1GRPH_SWAP_RB bit is used to swap red and blue channels.
Signed-off-by: Mauro Rossi
---
drivers/gpu/drm/radeon/atombios_crtc.c | 25
Sending a respin for support of {A,X}RGB pixel formats in DCE1 and later,
with separate patches for amd dc, amdgpu and radeon.
Please review, taking into account the following doubts I have:
For amd dc, the crossbar register controls to swap red and blue channels
are already implemented in
SURFACE_PIXEL_FORMAT_GRPH_ABGR already listed in
amd/display/dc/dc_hw_types.h
and the necessary crossbars register controls to swap red and blue channels
are already implemented in drm/amd/display/dc/dce/dce_mem_input.c
Logic to handle new formats is added in amdgpu_dm and dce 8.0, 10.0,
Add support for DRM_FORMAT_{A,X}BGR in amdgpu for si with amd dc disabled.
Here it is necessary to define and set crossbar registers
to swap the red and blue channels.
Signed-off-by: Mauro Rossi
---
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c | 10 ++
drivers/gpu/drm/amd/amdgpu/si_enums.h | 20
This is the first attempt to move entities between schedulers to
have dynamic load balancing. We only move entities with no jobs for
now, as moving ones with jobs will lead to other complications,
like ensuring that the other scheduler does not remove a job from
the current entity while we are
Signed-off-by: Nayan Deshmukh
---
drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
include/drm/gpu_scheduler.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c
b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index
The function selects the run queue from the rq_list with the
least load. The load is decided by the number of jobs in a
scheduler.
v2: avoid using atomic read twice consecutively, instead store
it locally
Signed-off-by: Nayan Deshmukh
---
drivers/gpu/drm/scheduler/gpu_scheduler.c | 25
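The selection described above can be sketched as follows (a userspace mock with hypothetical names, not the kernel's code): walk the entity's rq_list and keep the run queue whose scheduler reports the fewest jobs, reading the counter once per rq and caching it locally, as the v2 note suggests.

```c
#include <assert.h>
#include <stddef.h>

struct sched_rq {
	int num_jobs;  /* stand-in for the scheduler's atomic job counter */
};

/* Return the run queue with the least load, or NULL if the list is empty. */
static struct sched_rq *pick_least_loaded_rq(struct sched_rq **rq_list,
					     unsigned int num_rqs)
{
	struct sched_rq *best = NULL;
	int best_jobs = 0;

	for (unsigned int i = 0; i < num_rqs; i++) {
		int jobs = rq_list[i]->num_jobs;  /* read once, store locally */

		if (!best || jobs < best_jobs) {
			best = rq_list[i];
			best_jobs = jobs;
		}
	}
	return best;
}
```

In the real patch the counter is an atomic, so caching the value avoids the double atomic_read the v2 changelog mentions; the comparison logic is otherwise the same.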
These are the potential run queues on which the jobs from this
entity can be scheduled. We will use this to do load balancing.
Signed-off-by: Nayan Deshmukh
---
drivers/gpu/drm/scheduler/gpu_scheduler.c | 8
include/drm/gpu_scheduler.h | 7 ++-
2 files changed, 14
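A minimal sketch of what this patch adds to entity setup (illustrative names only, assuming the first list entry serves as the initial rq until load balancing picks another):

```c
#include <assert.h>
#include <stddef.h>

struct entity_rq { int num_jobs; };

struct sched_entity {
	struct entity_rq *rq;        /* rq currently in use */
	struct entity_rq **rq_list;  /* all candidate rqs for this entity */
	unsigned int num_rq_list;
};

/* Initialize the entity with its full set of candidate run queues. */
static int sched_entity_init(struct sched_entity *entity,
			     struct entity_rq **rq_list,
			     unsigned int num_rq_list)
{
	if (!num_rq_list)
		return -1;  /* need at least one candidate rq */
	entity->rq_list = rq_list;
	entity->num_rq_list = num_rq_list;
	entity->rq = rq_list[0];
	return 0;
}
```

Passing a single-element list reproduces the old fixed-rq behaviour, which is how existing callers keep working while load balancing is introduced incrementally.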
Dear Rui,
Am 01.08.2018 um 09:03 schrieb Huang Rui:
On Tue, Jul 31, 2018 at 06:46:04PM +0200, Paul Menzel wrote:
From: Paul Menzel
Date: Wed, 25 Jul 2018 12:54:19 +0200
Improve commit d796d844 (drm/radeon/kms: make hibernate work on IGPs) to
only migrate VRAM objects if the Linux kernel is
Am 01.08.2018 um 09:03 schrieb Huang Rui:
On Tue, Jul 31, 2018 at 06:46:04PM +0200, Paul Menzel wrote:
From: Paul Menzel
Date: Wed, 25 Jul 2018 12:54:19 +0200
Improve commit d796d844 (drm/radeon/kms: make hibernate work on IGPs) to
only migrate VRAM objects if the Linux kernel is actually
On Tue, Jul 31, 2018 at 06:46:04PM +0200, Paul Menzel wrote:
> From: Paul Menzel
> Date: Wed, 25 Jul 2018 12:54:19 +0200
>
> Improve commit d796d844 (drm/radeon/kms: make hibernate work on IGPs) to
> only migrate VRAM objects if the Linux kernel is actually built with
> support for hibernation
Am 01.08.2018 um 07:49 schrieb Huang Rui:
This patch fixes the error seen when CONFIG_X86 is not configured; otherwise, the below
error will be encountered.
All errors (new ones prefixed by >>):
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c: In function
'ttm_set_pages_caching':
Am 01.08.2018 um 00:07 schrieb Marek Olšák:
Can this be implemented as a wrapper on top of libdrm? So that the
tree (or hash table) isn't created for UMDs that don't need it.
No, the problem is that an application gets a CPU pointer from one API
and tries to import that pointer into another
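The lookup being debated can be sketched like this (all names are hypothetical, and a linear scan stands in for the interval tree or hash table under discussion; only the semantics are shown): given a CPU pointer obtained from one API, find the buffer object whose CPU mapping contains it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct mapped_bo {
	uintptr_t cpu_addr;  /* start of the BO's CPU mapping */
	size_t size;         /* length of the mapping in bytes */
};

/* Return the BO whose mapping contains ptr, or NULL if none does. */
static struct mapped_bo *find_bo_by_cpu_mapping(struct mapped_bo *bos,
						size_t count, uintptr_t ptr)
{
	for (size_t i = 0; i < count; i++)
		if (ptr >= bos[i].cpu_addr &&
		    ptr - bos[i].cpu_addr < bos[i].size)
			return &bos[i];
	return NULL;  /* pointer not backed by any known BO */
}
```

The thread's disagreement is precisely about where this bookkeeping should live (libdrm wrapper vs. kernel) and what data structure backs it, so only the lookup contract is fixed here, not the implementation.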
On Tue, Jul 31, 2018 at 10:56:53AM -0400, Andrey Grodzovsky wrote:
> During debug sessions I encountered a need to trace
> a job dependency back a few steps to the first failing
> job. This trace helped me a lot.
>
> Signed-off-by: Andrey Grodzovsky
Series are Reviewed-by: Huang Rui
>