Re: [PATCH] drm/sched: Re-queue run job worker when drm_sched_entity_pop_job() returns NULL

2024-02-05 Thread Luben Tuikov
take this through the drm-fixes directly. This is indeed the case. I was going to push this patch through drm-misc-next, but the original/base patch (<20240124210811.1639040-1-matthew.br...@intel.com>) isn't there. If drm-misc maintainers back merge drm-fixes into drm-misc-next,

Re: [PATCH] drm/sched: Re-queue run job worker when drm_sched_entity_pop_job() returns NULL

2024-02-05 Thread Luben Tuikov
orker") > Signed-off-by: Matthew Brost Indeed, we cannot have any loops in the GPU scheduler work items, as we need to bounce between submit and free in the same work queue. (Coming from the original design before work items/queues were introduced). Thanks for fixing this, Matt! Reviewed-b

Re: [PATCH] drm/sched: Add Matthew Brost to maintainers

2024-02-05 Thread Luben Tuikov
On 2024-02-05 19:06, Luben Tuikov wrote: > On 2024-02-01 07:56, Christian König wrote: >> Am 31.01.24 um 18:11 schrieb Daniel Vetter: >>> On Tue, Jan 30, 2024 at 07:03:02PM -0800, Matthew Brost wrote: >>>> Add Matthew Brost to DRM scheduler maintainers. >>>

Re: [PATCH] drm/sched: Add Matthew Brost to maintainers

2024-02-05 Thread Luben Tuikov
On 2024-02-01 07:56, Christian König wrote: > Am 31.01.24 um 18:11 schrieb Daniel Vetter: >> On Tue, Jan 30, 2024 at 07:03:02PM -0800, Matthew Brost wrote: >>> Add Matthew Brost to DRM scheduler maintainers. >>> >>> Cc: Luben Tuikov >>> Cc: Daniel V

Re: [PATCH] drm/sched: Drain all entities in DRM sched run job worker

2024-01-29 Thread Luben Tuikov
On 2024-01-29 02:44, Christian König wrote: > Am 26.01.24 um 17:29 schrieb Matthew Brost: >> On Fri, Jan 26, 2024 at 11:32:57AM +0100, Christian König wrote: >>> Am 25.01.24 um 18:30 schrieb Matthew Brost: On Thu, Jan 25, 2024 at 04:12:58PM +0100, Christian König wrote: > Am 24.01.24 um

Re: [PATCH] drm/sched: Drain all entities in DRM sched run job worker

2024-01-28 Thread Luben Tuikov
>> It can perfectly be that we messed this up when switching from kthread to a >> work item. >> > > Right, that's what this patch does - the run worker does not go idle until > no ready entities are found. That was incorrect in the original patch > and fixed here. Do y

Re: [PATCH] drm/sched: Drain all entities in DRM sched run job worker

2024-01-28 Thread Luben Tuikov
On 2024-01-24 16:08, Matthew Brost wrote: > All entities must be drained in the DRM scheduler run job worker to > avoid the following case. An entity found that is ready, no job found > ready on entity, and run job worker goes idle with other entities + jobs > ready. Draining all ready entities
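
A sketch of the drain logic described above, with made-up toy types (not the actual patch): rather than giving up after the first ready entity yields no job, keep checking the remaining ready entities before letting the worker go idle.

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_entity { const char *name; bool ready; bool has_job; };

    /* One pass of the run-job worker: drain ready entities until a job is found. */
    static bool run_one(struct toy_entity *ents, int n)
    {
        for (int i = 0; i < n; i++) {
            struct toy_entity *e = &ents[i];

            if (!e->ready)
                continue;
            if (!e->has_job) {
                /* Entity looked ready but popped no job: don't go idle here,
                 * keep draining the other ready entities (the bug was stopping
                 * at this point with other entities + jobs still ready). */
                continue;
            }
            printf("running a job from %s\n", e->name);
            e->has_job = false;
            return true;
        }
        return false;   /* genuinely nothing ready: the worker may go idle */
    }

    int main(void)
    {
        struct toy_entity ents[] = {
            { "entity A", true,  false },  /* ready, but no job to pop */
            { "entity B", true,  true  },  /* ready with a job */
        };
        run_one(ents, 2);
        return 0;
    }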

Re: [PATCH 2/2] drm/sched: Return an error code only as a constant in drm_sched_init()

2024-01-07 Thread Luben Tuikov
On 2023-12-26 10:58, Markus Elfring wrote: > From: Markus Elfring > Date: Tue, 26 Dec 2023 16:37:37 +0100 > > Return an error code without storing it in an intermediate variable. > > Signed-off-by: Markus Elfring Thank you Markus for this patch. Reviewed-by: Luben Tuikov
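
For readers unfamiliar with this class of cleanup, a generic before/after illustration with hypothetical names (not the actual drm_sched_init() code):

    #include <errno.h>

    /* Before: the error code takes a detour through a local variable. */
    static int setup_before(void *p)
    {
        int err;

        if (!p) {
            err = -EINVAL;
            return err;
        }
        return 0;
    }

    /* After: return the constant directly. */
    static int setup_after(void *p)
    {
        if (!p)
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        return setup_before(0) == setup_after(0) ? 0 : 1;
    }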

Re: [PATCH 1/2] drm/sched: One function call less in drm_sched_init() after error detection

2024-01-07 Thread Luben Tuikov
nter. > This issue was detected by using the Coccinelle software. > > Thus adjust a jump target. > > Signed-off-by: Markus Elfring Thank you Markus for this patch. Reviewed-by: Luben Tuikov Pushed to drm-misc-next. -- Regards, Luben > --- > drivers/gpu/drm/scheduler/sche
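
As a generic illustration of the "adjust a jump target" idea (hypothetical names, not the sched_main.c code): when a later allocation fails, jumping to a label that skips cleanup of things that were never set up saves one function call on the error path.

    #include <stdlib.h>

    struct ctx { int *a; int *b; };

    static int ctx_init(struct ctx *c)
    {
        c->a = malloc(sizeof(*c->a));
        if (!c->a)
            return -1;

        c->b = malloc(sizeof(*c->b));
        if (!c->b)
            goto free_a;        /* jump target chosen so only 'a' gets freed */

        return 0;

    free_a:
        free(c->a);             /* no free(c->b) here: it was never allocated */
        return -1;
    }

    int main(void)
    {
        struct ctx c;

        if (ctx_init(&c))
            return 1;
        free(c.b);
        free(c.a);
        return 0;
    }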

Re: [PATCH] drm/scheduler: Unwrap job dependencies

2023-12-09 Thread Luben Tuikov
Hi, On 2023-12-05 14:02, Rob Clark wrote: > From: Rob Clark > > Container fences have burner contexts, which makes the trick to store at > most one fence per context somewhat useless if we don't unwrap array or > chain fences. > > Signed-off-by: Rob Clark Link:
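
A userspace model of the idea (toy structures, not the dma_fence API): dependencies are deduplicated by keeping at most one fence per context, which only works for container (array/chain) fences if their component fences are added individually, since the container itself sits in its own one-off context.

    #include <stdint.h>
    #include <stdio.h>

    struct toy_fence {
        uint64_t context;                 /* timeline this fence belongs to */
        uint64_t seqno;
        int nchildren;                    /* >0 marks a toy "container" fence */
        struct toy_fence *children[4];
    };

    /* Dependency set: at most one fence per context, keep the latest seqno. */
    static struct toy_fence *deps[16];
    static int ndeps;

    static void add_dep(struct toy_fence *f)
    {
        if (f->nchildren) {               /* unwrap: add the components instead */
            for (int i = 0; i < f->nchildren; i++)
                add_dep(f->children[i]);
            return;
        }
        for (int i = 0; i < ndeps; i++) {
            if (deps[i]->context == f->context) {
                if (f->seqno > deps[i]->seqno)
                    deps[i] = f;          /* same timeline: later fence wins */
                return;
            }
        }
        deps[ndeps++] = f;
    }

    int main(void)
    {
        struct toy_fence a = { .context = 1, .seqno = 10 };
        struct toy_fence b = { .context = 1, .seqno = 12 };
        struct toy_fence array = { .context = 99, .nchildren = 2,
                                   .children = { &a, &b } };

        add_dep(&array);
        printf("%d dependency (context %llu, seqno %llu)\n", ndeps,
               (unsigned long long)deps[0]->context,
               (unsigned long long)deps[0]->seqno);
        return 0;
    }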

Re: Radeon regression in 6.6 kernel

2023-11-29 Thread Luben Tuikov
On 2023-11-29 22:36, Luben Tuikov wrote: > On 2023-11-29 15:49, Alex Deucher wrote: >> On Wed, Nov 29, 2023 at 3:10 PM Alex Deucher wrote: >>> >>> Actually I think I see the problem. I'll try and send out a patch >>> later today to test. >> >

Re: Radeon regression in 6.6 kernel

2023-11-29 Thread Luben Tuikov
org/archives/amd-gfx/2023-November/101197.html Link: https://lore.kernel.org/r/87edgv4x3i@vps.thesusis.net Let's link the start of the thread. Regards, Luben > Signed-off-by: Alex Deucher > Cc: Phillip Susi > Cc: Luben Tuikov > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_d

Re: Radeon regression in 6.6 kernel

2023-11-29 Thread Luben Tuikov
On 2023-11-29 10:22, Alex Deucher wrote: > On Wed, Nov 29, 2023 at 8:50 AM Alex Deucher wrote: >> >> On Tue, Nov 28, 2023 at 11:45 PM Luben Tuikov wrote: >>> >>> On 2023-11-28 17:13, Alex Deucher wrote: >>>> On Mon, Nov 27, 2023 at 6:24 PM Phillip

Re: Radeon regression in 6.6 kernel

2023-11-29 Thread Luben Tuikov
On 2023-11-29 08:50, Alex Deucher wrote: > On Tue, Nov 28, 2023 at 11:45 PM Luben Tuikov wrote: >> >> On 2023-11-28 17:13, Alex Deucher wrote: >>> On Mon, Nov 27, 2023 at 6:24 PM Phillip Susi wrote: >>>> >>>> Alex Deucher writes: >>>&

Re: Radeon regression in 6.6 kernel

2023-11-28 Thread Luben Tuikov
On 2023-11-28 17:13, Alex Deucher wrote: > On Mon, Nov 27, 2023 at 6:24 PM Phillip Susi wrote: >> >> Alex Deucher writes: >> In that case those are the already known problems with the scheduler changes, aren't they? >>> >>> Yes. Those changes went into 6.7 though, not 6.6 AFAIK.

Re: [PATCH] drm/sched: Partial revert of "Qualify drm_sched_wakeup() by drm_sched_entity_is_ready()"

2023-11-28 Thread Luben Tuikov
struct drm_sched_entity *entity) > { > - if (drm_sched_entity_is_ready(entity)) > - if (drm_sched_can_queue(sched, entity)) > - drm_sched_run_job_queue(sched); > + if (drm_sched_can_queue(sched, entity)) > + drm_sched_run_job_q

Re: [PATCH] Revert "drm/sched: Qualify drm_sched_wakeup() by drm_sched_entity_is_ready()"

2023-11-27 Thread Luben Tuikov
Hi Bert, # The title of the patch should be: drm/sched: Partial revert of "Qualify drm_sched_wakeup() by drm_sched_entity_is_ready()" On 2023-11-27 08:30, Bert Karwatzki wrote: > Commit f3123c25 (in combination with the use of work queues by the gpu Commit f3123c2590005c, in combination with

Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW

2023-11-27 Thread Luben Tuikov
On 2023-11-27 09:20, Christian König wrote: > Am 27.11.23 um 15:13 schrieb Luben Tuikov: >> On 2023-11-27 08:55, Christian König wrote: >>> Hi Luben, >>> >>> Am 24.11.23 um 08:57 schrieb Christian König: >>>> Am 24.11.23 um 06:27 schrieb L

Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW

2023-11-27 Thread Luben Tuikov
On 2023-11-27 08:55, Christian König wrote: > Hi Luben, > > Am 24.11.23 um 08:57 schrieb Christian König: >> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. >>> >>> This mirrors DRM_SCHED_

Re: linux-next: build failure after merge of the drm-misc tree

2023-11-26 Thread Luben Tuikov
On 2023-11-26 18:38, Stephen Rothwell wrote: > Hi all, > > After merging the drm-misc tree, today's linux-next build (x86_64 > allmodconfig) failed like this: > > drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error: > 'DRM_SCHED_PRIORITY_MIN' undeclared here (not in a function); did you mean

Re: [PATCH] drm/sched: Fix compilation issues with DRM priority rename

2023-11-26 Thread Luben Tuikov
On 2023-11-25 14:22, Luben Tuikov wrote: > Fix compilation issues with DRM scheduler priority rename MIN to LOW. > > Signed-off-by: Luben Tuikov > Reported-by: kernel test robot > Closes: > https://lore.kernel.org/oe-kbuild-all/202311252109.wgbjsskg-...@intel.com/ > Cc: D

Re: drm scheduler redesign causes deadlocks [extended repost]

2023-11-25 Thread Luben Tuikov
On 2023-11-24 04:38, Bert Karwatzki wrote: > Am Mittwoch, dem 22.11.2023 um 18:02 -0500 schrieb Luben Tuikov: >> On 2023-11-21 04:00, Bert Karwatzki wrote: >>> Since linux-next-20231115 my linux system (debian sid on msi alpha 15 >>> laptop) >>> suffers from ra

[PATCH] drm/sched: Fix compilation issues with DRM priority rename

2023-11-25 Thread Luben Tuikov
Fix compilation issues with DRM scheduler priority rename MIN to LOW. Signed-off-by: Luben Tuikov Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202311252109.wgbjsskg-...@intel.com/ Cc: Danilo Krummrich Cc: Frank Binns Cc: Donald Robson Cc: Matt Coster Cc

Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration

2023-11-24 Thread Luben Tuikov
On 2023-11-24 04:38, Christian König wrote: > Am 24.11.23 um 09:22 schrieb Luben Tuikov: >> On 2023-11-24 03:04, Christian König wrote: >>> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>>> Reverse run-queue priority enumeration such that the highest priority is

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-24 Thread Luben Tuikov
On 2023-11-24 08:20, Jani Nikula wrote: > On Wed, 22 Nov 2023, Luben Tuikov wrote: >> On 2023-11-22 07:00, Maxime Ripard wrote: >>> Hi Luben, >>> >>> On Thu, Nov 16, 2023 at 09:27:58AM +0100, Daniel Vetter wrote: >>>> On Thu, Nov 16, 2023 at 09:

Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration

2023-11-24 Thread Luben Tuikov
On 2023-11-24 03:04, Christian König wrote: > Am 24.11.23 um 06:27 schrieb Luben Tuikov: >> Reverse run-queue priority enumeration such that the highest priority is now >> 0, >> and for each consecutive integer the priority diminishes. >> >> Run-queues corresp
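
A sketch of the end state of this two-patch series as I read it (toy names mirroring the DRM_SCHED_PRIORITY_* enumerators): 0 is the highest priority and larger values are lower, so run-queue index i means the same thing in any scheduler, however many run-queues it has.

    enum toy_sched_priority {
        TOY_SCHED_PRIORITY_KERNEL,   /* 0: highest priority */
        TOY_SCHED_PRIORITY_HIGH,
        TOY_SCHED_PRIORITY_NORMAL,
        TOY_SCHED_PRIORITY_LOW,      /* lowest (was "MIN" before patch 1/2) */

        TOY_SCHED_PRIORITY_COUNT
    };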

[PATCH 1/2] drm/sched: Rename priority MIN to LOW

2023-11-23 Thread Luben Tuikov
Kumar Cc: Dmitry Baryshkov Cc: Danilo Krummrich Cc: Alex Deucher Cc: Christian König Cc: linux-arm-...@vger.kernel.org Cc: freedr...@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Luben Tuikov --- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd

[PATCH 2/2] drm/sched: Reverse run-queue priority enumeration

2023-11-23 Thread Luben Tuikov
freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Luben Tuikov --- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/msm/msm_gpu.h| 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 7 --- drivers/gpu/drm/scheduler/sched_main.c | 15 +++

[PATCH 0/2] Make scheduling of the same index, the same

2023-11-23 Thread Luben Tuikov
The first patch renames priority MIN to LOW. The second patch makes the "priority" of the same run-queue index in any two schedulers, the same. This series sits on top on this fix https://patchwork.freedesktop.org/patch/568723/ which I sent yesterday. Luben Tuikov (2): drm/sch

[PATCH] drm/sched: Fix bounds limiting when given a malformed entity

2023-11-23 Thread Luben Tuikov
If we're given a malformed entity in drm_sched_entity_init()--shouldn't happen, but we verify--with out-of-bounds priority value, we set it to an allowed value. Fix the expression which sets this limit. Signed-off-by: Luben Tuikov Fixes: 56e449603f0ac5 ("drm/sched: Convert the GPU sche
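
A hedged sketch (toy names, not the actual fix) of what bounds limiting here means: an out-of-range priority from the caller is clamped to the last run-queue index the scheduler actually has, rather than to a constant that may exceed it.

    #include <stdio.h>

    struct toy_sched { unsigned int num_rqs; };

    /* Clamp a caller-supplied priority to a valid run-queue index
     * (assumes the scheduler has at least one run-queue). */
    static unsigned int clamp_priority(const struct toy_sched *s, unsigned int prio)
    {
        if (prio >= s->num_rqs)
            prio = s->num_rqs - 1;   /* last (lowest) priority available */
        return prio;
    }

    int main(void)
    {
        struct toy_sched s = { .num_rqs = 4 };

        printf("%u\n", clamp_priority(&s, 7));   /* prints 3 */
        return 0;
    }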

Re: Radeon regression in 6.6 kernel

2023-11-22 Thread Luben Tuikov
On 2023-11-21 17:05, Phillip Susi wrote: > Alex Deucher writes: > >> Does reverting 56e449603f0ac580700621a356d35d5716a62ce5 alone fix it? >> Can you also attach your full dmesg log for the failed suspend? > > No, it doesn't. Here is the full syslog from the boot with only that > revert: >

Re: drm scheduler redesign causes deadlocks [extended repost]

2023-11-22 Thread Luben Tuikov
On 2023-11-21 04:00, Bert Karwatzki wrote: > Since linux-next-20231115 my linux system (debian sid on msi alpha 15 laptop) > suffers from random deadlocks which can occur after 30-180 min of usage. > These > deadlocks can be actively provoked by creating high system load (usually by > compiling

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-22 Thread Luben Tuikov
On 2023-11-22 07:00, Maxime Ripard wrote: > Hi Luben, > > On Thu, Nov 16, 2023 at 09:27:58AM +0100, Daniel Vetter wrote: >> On Thu, Nov 16, 2023 at 09:11:43AM +0100, Maxime Ripard wrote: >>> On Tue, Nov 14, 2023 at 06:46:21PM -0500, Luben Tuikov wrote: >>>> O

Re: [PATCH 1/2] drm/scheduler: improve GPU scheduler documentation v2

2023-11-17 Thread Luben Tuikov
Hi, On 2023-11-16 09:15, Christian König wrote: > Start to improve the scheduler document. Especially document the > lifetime of each of the objects as well as the restrictions around > DMA-fence handling and userspace compatibility. > > v2: Some improvements suggested by Danilo, add section

[PATCH] drm/print: Handle NULL drm device in __drm_printk()

2023-11-16 Thread Luben Tuikov
is identical to if drm->dev had been NULL. Signed-off-by: Luben Tuikov --- include/drm/drm_print.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h index a93a387f8a1a15..dd4883df876a6d 100644 --- a/include/drm/drm_print.h ++

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-16 Thread Luben Tuikov
On 2023-11-16 04:22, Maxime Ripard wrote: > Hi, > > On Mon, Nov 13, 2023 at 09:56:32PM -0500, Luben Tuikov wrote: >> On 2023-11-13 21:45, Stephen Rothwell wrote: >>> Hi Luben, >>> >>> On Mon, 13 Nov 2023 20:32:40 -0500 Luben Tuikov wrote: >>

Re: [PATCH] drm/sched: Define pr_fmt() for DRM using pr_*()

2023-11-16 Thread Luben Tuikov
On 2023-11-15 03:24, Jani Nikula wrote: > On Tue, 14 Nov 2023, Luben Tuikov wrote: >> diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h >> index a93a387f8a1a15..ce784118e4f762 100644 >> --- a/include/drm/drm_print.h >> +++ b/include/drm/drm_print.h

Re: [PATCH] drm/scheduler: improve GPU scheduler documentation

2023-11-16 Thread Luben Tuikov
On 2023-11-13 07:38, Christian König wrote: > Start to improve the scheduler document. Especially document the > lifetime of each of the objects as well as the restrictions around > DMA-fence handling and userspace compatibility. Thanks Christian for doing this--much needed. > > Signed-off-by:

Re: [PATCH] drm/sched: Define pr_fmt() for DRM using pr_*()

2023-11-14 Thread Luben Tuikov
On 2023-11-14 07:20, Jani Nikula wrote: > On Mon, 13 Nov 2023, Luben Tuikov wrote: >> Hi Jani, >> >> On 2023-11-10 07:40, Jani Nikula wrote: >>> On Thu, 09 Nov 2023, Luben Tuikov wrote: >>>> Define pr_fmt() as "[drm] " for DRM code using pr_*

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-14 Thread Luben Tuikov
On 2023-11-13 22:08, Stephen Rothwell wrote: > Hi Luben, > > BTW, cherry picking commits does not avoid conflicts - in fact it can > cause conflicts if there are further changes to the files affected by > the cherry picked commit in either the tree/branch the commit was > cheery picked from or

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-13 Thread Luben Tuikov
On 2023-11-13 21:45, Stephen Rothwell wrote: > Hi Luben, > > On Mon, 13 Nov 2023 20:32:40 -0500 Luben Tuikov wrote: >> >> On 2023-11-13 20:08, Luben Tuikov wrote: >>> On 2023-11-13 15:55, Stephen Rothwell wrote: >>>> Hi all, >>>

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-13 Thread Luben Tuikov
On 2023-11-13 20:08, Luben Tuikov wrote: > On 2023-11-13 15:55, Stephen Rothwell wrote: >> Hi all, >> >> Commit >> >> 0da611a87021 ("dma-buf: add dma_fence_timestamp helper") >> >> is missing a Signed-off-by from its committer. >> >

Re: linux-next: Signed-off-by missing for commit in the drm-misc tree

2023-11-13 Thread Luben Tuikov
On 2023-11-13 15:55, Stephen Rothwell wrote: > Hi all, > > Commit > > 0da611a87021 ("dma-buf: add dma_fence_timestamp helper") > > is missing a Signed-off-by from its committer. > In order to merge the scheduler changes necessary for the Xe driver, those changes were based on drm-tip,

Re: [PATCH] drm/sched: Define pr_fmt() for DRM using pr_*()

2023-11-13 Thread Luben Tuikov
Hi Jani, On 2023-11-10 07:40, Jani Nikula wrote: > On Thu, 09 Nov 2023, Luben Tuikov wrote: >> Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially >> when no devices are available. This makes it easier to browse kernel logs. > > Please do

Re: [PATCH] Revert "drm/sched: Define pr_fmt() for DRM using pr_*()"

2023-11-13 Thread Luben Tuikov
On 2023-11-11 06:33, Jani Nikula wrote: > On Sat, 11 Nov 2023, Luben Tuikov wrote: >> From Jani: >> The drm_print.[ch] facilities use very few pr_*() calls directly. The >> users of pr_*() calls do not necessarily include <drm/drm_print.h> at >> all, and really don't have to. >>

[PATCH] Revert "drm/sched: Define pr_fmt() for DRM using pr_*()"

2023-11-11 Thread Luben Tuikov
nes pr_fmt() itself if not already defined. No, it's encouraged not to use pr_*() at all, and prefer drm device based logging, or device based logging. This reverts commit 36245bd02e88e68ac5955c2958c968879d7b75a9. Signed-off-by: Luben Tuikov Link: https://patchwork.freedesktop.org/patch/ms

[PATCH] Revert "drm/sched: Define pr_fmt() for DRM using pr_*()"

2023-11-10 Thread Luben Tuikov
This reverts commit 36245bd02e88e68ac5955c2958c968879d7b75a9. Signed-off-by: Luben Tuikov --- include/drm/drm_print.h | 14 -- 1 file changed, 14 deletions(-) diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h index e8fe60d0eb8783..a93a387f8a1a15 100644 --- a/include

Re: [PATCH] drm/sched: Define pr_fmt() for DRM using pr_*()

2023-11-10 Thread Luben Tuikov
On 2023-11-10 07:40, Jani Nikula wrote: > On Thu, 09 Nov 2023, Luben Tuikov wrote: >> Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially >> when no devices are available. This makes it easier to browse kernel logs. > > Please do not m

Re: [PATCH drm-misc-next v6] drm/sched: implement dynamic job-flow control

2023-11-09 Thread Luben Tuikov
On 2023-11-09 19:57, Luben Tuikov wrote: > On 2023-11-09 19:16, Danilo Krummrich wrote: [snip] >> @@ -667,6 +771,8 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs); >> * drm_sched_job_init - init a scheduler job >> * @job: scheduler job to init >> * @entity: scheduler en

Re: [PATCH drm-misc-next v6] drm/sched: implement dynamic job-flow control

2023-11-09 Thread Luben Tuikov
On 2023-11-09 19:16, Danilo Krummrich wrote: > Currently, job flow control is implemented simply by limiting the number > of jobs in flight. Therefore, a scheduler is initialized with a credit > limit that corresponds to the number of jobs which can be sent to the > hardware. > > This implies
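
A compact model of the credit scheme described in the commit message (toy names, not the proposed API): each job declares how many credits it consumes, and the scheduler only pushes a job to the hardware while the sum of in-flight credits stays within the credit limit.

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_sched { unsigned int credit_limit; unsigned int credit_count; };
    struct toy_job   { unsigned int credits; };

    static bool can_queue(const struct toy_sched *s, const struct toy_job *j)
    {
        /* Enough head-room between what is in flight and the limit? */
        return s->credit_count + j->credits <= s->credit_limit;
    }

    static void push_job(struct toy_sched *s, const struct toy_job *j)
    {
        s->credit_count += j->credits;
    }

    static void job_done(struct toy_sched *s, const struct toy_job *j)
    {
        s->credit_count -= j->credits;   /* credits flow back on completion */
    }

    int main(void)
    {
        struct toy_sched s = { .credit_limit = 4, .credit_count = 0 };
        struct toy_job small = { .credits = 1 }, big = { .credits = 3 };

        push_job(&s, &big);                                      /* 3/4 in flight */
        printf("big+small fits: %d\n", can_queue(&s, &small));   /* prints 1 */
        printf("two big jobs fit: %d\n", can_queue(&s, &big));   /* prints 0 */
        job_done(&s, &big);
        return 0;
    }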

[PATCH] drm/sched: Define pr_fmt() for DRM using pr_*()

2023-11-09 Thread Luben Tuikov
Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially when no devices are available. This makes it easier to browse kernel logs. Signed-off-by: Luben Tuikov --- include/drm/drm_print.h | 14 ++ 1 file changed, 14 insertions(+) diff --git a/i
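
For context, the pr_fmt() convention this relies on: the pr_*() helpers expand pr_fmt() around their format string, and the define must be in effect before the printk machinery is pulled in (otherwise the default empty pr_fmt() wins). A minimal kernel-style sketch of the idiom, not the patch itself (which put the define in drm_print.h and was later reverted, per the entries above):

    /* Must come before any header that ends up including <linux/printk.h>. */
    #define pr_fmt(fmt) "[drm] " fmt

    #include <linux/module.h>
    #include <linux/printk.h>

    static int __init toy_init(void)
    {
        pr_info("hello\n");   /* logs as "[drm] hello" */
        return 0;
    }

    static void __exit toy_exit(void)
    {
    }

    module_init(toy_init);
    module_exit(toy_exit);
    MODULE_LICENSE("GPL");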

[PATCH] drm/sched: Qualify drm_sched_wakeup() by drm_sched_entity_is_ready()

2023-11-09 Thread Luben Tuikov
Don't "wake up" the GPU scheduler unless the entity is ready, as well as we can queue to the scheduler, i.e. there is no point in waking up the scheduler for the entity unless the entity is ready. Signed-off-by: Luben Tuikov Fixes: bc8d6a9df99038 ("drm/sched: Don't disturb the en

Re: [PATCH] drm/sched: Don't disturb the entity when in RR-mode scheduling

2023-11-09 Thread Luben Tuikov
On 2023-11-09 18:41, Danilo Krummrich wrote: > On 11/9/23 20:24, Danilo Krummrich wrote: >> On 11/9/23 07:52, Luben Tuikov wrote: >>> Hi, >>> >>> On 2023-11-07 19:41, Danilo Krummrich wrote: >>>> On 11/7/23 05:10, Luben Tuikov

Re: [PATCH] drm/sched: fix potential page fault in drm_sched_job_init()

2023-11-09 Thread Luben Tuikov
On 2023-11-09 14:55, Danilo Krummrich wrote: > On 11/9/23 01:09, Danilo Krummrich wrote: >> On 11/8/23 06:46, Luben Tuikov wrote: >>> Hi, >>> >>> Could you please use my gmail address, the one one I'm responding from--I >>> don't want >>>

Re: [PATCH] drm/sched: Don't disturb the entity when in RR-mode scheduling

2023-11-08 Thread Luben Tuikov
Hi, On 2023-11-07 19:41, Danilo Krummrich wrote: > On 11/7/23 05:10, Luben Tuikov wrote: >> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact, >> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let >> it do just that, s

Re: [PATCH] drm/sched: fix potential page fault in drm_sched_job_init()

2023-11-08 Thread Luben Tuikov
On 2023-11-08 19:09, Danilo Krummrich wrote: > On 11/8/23 06:46, Luben Tuikov wrote: >> Hi, >> >> Could you please use my gmail address, the one I'm responding from--I >> don't want >> to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com email >

Re: [PATCH] drm/sched: fix potential page fault in drm_sched_job_init()

2023-11-07 Thread Luben Tuikov
On 2023-11-08 00:46, Luben Tuikov wrote: > Hi, > > Could you please use my gmail address, the one I'm responding from--I > don't want > to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com email should > bounce > as undeliverable. > > On 2023-11-07 21

Re: [PATCH] drm/sched: fix potential page fault in drm_sched_job_init()

2023-11-07 Thread Luben Tuikov
Hi, Could you please use my gmail address, the one I'm responding from--I don't want to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com email should bounce as undeliverable. On 2023-11-07 21:26, Danilo Krummrich wrote: > Commit 56e449603f0a ("drm/sched: Convert the GPU

Re: [PATCH] drm/sched: Don't disturb the entity when in RR-mode scheduling

2023-11-07 Thread Luben Tuikov
On 2023-11-07 12:53, Danilo Krummrich wrote: > On 11/7/23 05:10, Luben Tuikov wrote: >> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact, >> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let >> it do just that, s

Re: [PATCH] drm/sched: Don't disturb the entity when in RR-mode scheduling

2023-11-07 Thread Luben Tuikov
On 2023-11-07 06:48, Matthew Brost wrote: > On Mon, Nov 06, 2023 at 11:10:21PM -0500, Luben Tuikov wrote: >> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact, >> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let >> it d

[PATCH] drm/sched: Don't disturb the entity when in RR-mode scheduling

2023-11-06 Thread Luben Tuikov
. This commit fixes this by eliminating the call to drm_sched_select_entity() from drm_sched_run_job_queue(), and leaves it only in drm_sched_run_job_work(). v2: Rebased on top of Tvrtko's renames series of patches. (Luben) Add fixes-tag. (Tvrtko) Signed-off-by: Luben Tuikov Fixes: f7fe64ad0f22ff ("

Re: [PATCH 0/5] Some drm scheduler internal renames

2023-11-06 Thread Luben Tuikov
On 2023-11-06 07:41, Tvrtko Ursulin wrote: > > On 05/11/2023 01:51, Luben Tuikov wrote: >> On 2023-11-02 06:55, Tvrtko Ursulin wrote: >>> From: Tvrtko Ursulin >>> >>> I found some of the naming a bit inconsistent and unclear so just a small >>> at

Re: [PATCH 0/5] Some drm scheduler internal renames

2023-11-04 Thread Luben Tuikov
On 2023-11-02 06:55, Tvrtko Ursulin wrote: > From: Tvrtko Ursulin > > I found some of the naming a bit inconsistent and unclear so just a small > attempt to clarify and tidy some of them. See what people think if my first > stab improves things or not. > > Cc: Luben Tuikov

Re: [PATCH 0/5] Some drm scheduler internal renames

2023-11-04 Thread Luben Tuikov
tko Ursulin > > I found some of the naming a bit inconsistent and unclear so just a small > attempt to clarify and tidy some of them. See what people think if my first > stab improves things or not. > > Cc: Luben Tuikov > Cc: Matthew Brost > > Tvrtko Ursu

Re: [PATCH] drm/sched: Eliminate drm_sched_run_job_queue_if_ready()

2023-11-03 Thread Luben Tuikov
Hi Tvrtko, On 2023-11-03 06:39, Tvrtko Ursulin wrote: > > On 02/11/2023 22:46, Luben Tuikov wrote: >> Eliminate drm_sched_run_job_queue_if_ready() and instead just call >> drm_sched_run_job_queue() in drm_sched_free_job_work(). The problem is that >>

Re: [PATCH] drm/sched: Eliminate drm_sched_run_job_queue_if_ready()

2023-11-03 Thread Luben Tuikov
Hi Matt, :-) On 2023-11-03 11:13, Matthew Brost wrote: > On Thu, Nov 02, 2023 at 06:46:54PM -0400, Luben Tuikov wrote: >> Eliminate drm_sched_run_job_queue_if_ready() and instead just call >> drm_sched_run_job_queue() in drm_sched_free_job_work(). The problem is that >> the

Re: [PATCH v8 3/5] drm/sched: Split free_job into own work item

2023-11-02 Thread Luben Tuikov
On 2023-11-02 07:13, Tvrtko Ursulin wrote: > > On 31/10/2023 03:24, Matthew Brost wrote: >> Rather than call free_job and run_job in same work item have a dedicated >> work item for each. This aligns with the design and intended use of work >> queues. >> >> v2: >> - Test for
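
A skeletal kernel-style sketch of the split (toy structure, not the actual sched_main.c changes): run_job and free_job each get their own work_struct so they can be queued and re-queued independently on the scheduler's workqueue.

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    struct toy_sched {
        struct workqueue_struct *wq;
        struct work_struct work_run_job;    /* submits ready jobs to the hardware */
        struct work_struct work_free_job;   /* reclaims jobs whose fences signaled */
    };

    static void toy_run_job_work(struct work_struct *w)
    {
        struct toy_sched *s = container_of(w, struct toy_sched, work_run_job);

        /* ... pick a ready entity and push one job here ... */
        (void)s;
    }

    static void toy_free_job_work(struct work_struct *w)
    {
        struct toy_sched *s = container_of(w, struct toy_sched, work_free_job);

        /* ... free one finished job, then let the run side go again ... */
        queue_work(s->wq, &s->work_run_job);
    }

    static int toy_sched_setup(struct toy_sched *s)
    {
        s->wq = alloc_ordered_workqueue("toy-sched", 0);
        if (!s->wq)
            return -ENOMEM;
        INIT_WORK(&s->work_run_job, toy_run_job_work);
        INIT_WORK(&s->work_free_job, toy_free_job_work);
        return 0;
    }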

[PATCH] drm/sched: Eliminate drm_sched_run_job_queue_if_ready()

2023-11-02 Thread Luben Tuikov
drm_sched_select_entity(), then in the case of RR scheduling, that would result in calling select_entity() twice, which may result in skipping a ready entity if more than one entity is ready. This commit fixes this by eliminating the if_ready() variant. Signed-off-by: Luben Tuikov --- drivers/gpu/drm/scheduler
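
A small demonstration of why calling a round-robin selector twice per dispatch is a problem, using a toy selector with an advancing cursor (not the scheduler's actual code): the entity picked by the first call is effectively discarded, and the second call moves past it.

    #include <stdio.h>

    static const char *entities[] = { "A", "B", "C" };
    static int cursor;

    /* Toy round-robin selection: each call hands out the next entity. */
    static const char *select_entity(void)
    {
        const char *e = entities[cursor];

        cursor = (cursor + 1) % 3;
        return e;
    }

    int main(void)
    {
        /* Buggy pattern: select once to decide whether to queue work,
         * then select again inside the worker to actually run a job. */
        (void)select_entity();                       /* "A" is consumed ... */
        printf("job runs on %s\n", select_entity()); /* ... but "B" runs: A was skipped */

        return 0;
    }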

Re: [PATCH v8 0/5] DRM scheduler changes for Xe

2023-11-02 Thread Luben Tuikov
On 2023-10-30 23:24, Matthew Brost wrote: > As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we > have been asked to merge our common DRM scheduler patches first. > > This a continuation of a RFC [3] with all comments addressed, ready for > a full review, and hopefully in

Re: [PATCH v8 3/5] drm/sched: Split free_job into own work item

2023-11-02 Thread Luben Tuikov
- Do not move drm_sched_select_entity in file (Luben) > > Signed-off-by: Matthew Brost Reviewed-by: Luben Tuikov Regards, Luben > --- > drivers/gpu/drm/scheduler/sched_main.c | 146 + > include/drm/gpu_scheduler.h| 4 +- > 2 files ch

Re: [PATCH drm-misc-next v5] drm/sched: implement dynamic job-flow control

2023-11-01 Thread Luben Tuikov
s > credit count, which represents the number of credits a job contributes > to the scheduler's credit limit. > > Signed-off-by: Danilo Krummrich Reviewed-by: Luben Tuikov Regards, Luben > --- > Changes in V2: > == > - fixed up influence on scheduling fairness d

Re: [PATCH drm-misc-next v4] drm/sched: implement dynamic job-flow control

2023-11-01 Thread Luben Tuikov
On 2023-10-31 22:23, Danilo Krummrich wrote: > Hi Luben, > [snip] >>> @@ -187,12 +251,14 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq >>> *rq, >>> /** >>> * drm_sched_rq_select_entity_rr - Select an entity which could provide a >>> job to run >>> * >>> + * @sched: the gpu

Re: [PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-31 Thread Luben Tuikov
On 2023-10-31 09:33, Danilo Krummrich wrote: > > On 10/26/23 19:25, Luben Tuikov wrote: >> On 2023-10-26 12:39, Danilo Krummrich wrote: >>> On 10/23/23 05:22, Luben Tuikov wrote: >>>> The GPU scheduler has now a variable number of run-queues, which are set >

Re: [PATCH drm-misc-next v4] drm/sched: implement dynamic job-flow control

2023-10-31 Thread Luben Tuikov
Hi, (PSA: luben.tui...@amd.com should've bounced :-) I'm removing it from the To: field.) On 2023-10-30 20:26, Danilo Krummrich wrote: > Currently, job flow control is implemented simply by limiting the number > of jobs in flight. Therefore, a scheduler is initialized with a credit > limit that

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-27 Thread Luben Tuikov
On 2023-10-27 12:41, Boris Brezillon wrote: > On Fri, 27 Oct 2023 10:32:52 -0400 > Luben Tuikov wrote: > >> On 2023-10-27 04:25, Boris Brezillon wrote: >>> Hi Danilo, >>> >>> On Thu, 26 Oct 2023 18:13:00 +0200 >>> Danilo Krummrich wrote: >

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-27 Thread Luben Tuikov
On 2023-10-27 12:31, Boris Brezillon wrote: > On Fri, 27 Oct 2023 16:23:24 +0200 > Danilo Krummrich wrote: > >> On 10/27/23 10:25, Boris Brezillon wrote: >>> Hi Danilo, >>> >>> On Thu, 26 Oct 2023 18:13:00 +0200 >>> Danilo Krummrich wrote: >>> Currently, job flow control is implemented

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-27 Thread Luben Tuikov
Hi, On 2023-10-27 12:26, Boris Brezillon wrote: > On Fri, 27 Oct 2023 16:34:26 +0200 > Danilo Krummrich wrote: > >> On 10/27/23 09:17, Boris Brezillon wrote: >>> Hi Danilo, >>> >>> On Thu, 26 Oct 2023 18:13:00 +0200 >>> Danilo Krummrich wrote: >>> + + /** + *

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-27 Thread Luben Tuikov
Hi Danilo, On 2023-10-27 10:45, Danilo Krummrich wrote: > Hi Luben, > > On 10/26/23 23:13, Luben Tuikov wrote: >> On 2023-10-26 12:13, Danilo Krummrich wrote: >>> Currently, job flow control is implemented simply by limiting the number >>> of jobs in flight. Ther

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-27 Thread Luben Tuikov
On 2023-10-27 04:25, Boris Brezillon wrote: > Hi Danilo, > > On Thu, 26 Oct 2023 18:13:00 +0200 > Danilo Krummrich wrote: > >> Currently, job flow control is implemented simply by limiting the number >> of jobs in flight. Therefore, a scheduler is initialized with a credit >> limit that

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-26 Thread Luben Tuikov
On 2023-10-26 17:13, Luben Tuikov wrote: > On 2023-10-26 12:13, Danilo Krummrich wrote: >> Currently, job flow control is implemented simply by limiting the number >> of jobs in flight. Therefore, a scheduler is initialized with a credit >> limit that corresponds to the num

Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow control

2023-10-26 Thread Luben Tuikov
On 2023-10-26 12:13, Danilo Krummrich wrote: > Currently, job flow control is implemented simply by limiting the number > of jobs in flight. Therefore, a scheduler is initialized with a credit > limit that corresponds to the number of jobs which can be sent to the > hardware. > > This implies

[PATCH] MAINTAINERS: Update the GPU Scheduler email

2023-10-26 Thread Luben Tuikov
Update the GPU Scheduler maintainer email. Cc: Alex Deucher Cc: Christian König Cc: Daniel Vetter Cc: Dave Airlie Cc: AMD Graphics Cc: Direct Rendering Infrastructure - Development Signed-off-by: Luben Tuikov --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff

Re: [PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-26 Thread Luben Tuikov
On 2023-10-26 12:39, Danilo Krummrich wrote: > On 10/23/23 05:22, Luben Tuikov wrote: >> The GPU scheduler has now a variable number of run-queues, which are set up >> at >> drm_sched_init() time. This way, each driver announces how many run-queues it >> requires (suppo

Re: [PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-26 Thread Luben Tuikov
Hi, I've pushed this commit as I got a verbal Acked-by from Christian in our kernel meeting this morning. Matt, please rebase your patches to drm-misc-next. Regards, Luben On 2023-10-26 11:20, Luben Tuikov wrote: > Ping! > > On 2023-10-22 23:22, Luben Tuikov wrote: >> The GP

Re: [PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-26 Thread Luben Tuikov
Ping! On 2023-10-22 23:22, Luben Tuikov wrote: > The GPU scheduler has now a variable number of run-queues, which are set up at > drm_sched_init() time. This way, each driver announces how many run-queues it > requires (supports) per each GPU scheduler it creates. Note, that r
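
A toy model of the change (hypothetical names, not the drm_sched_init() signature): instead of a fixed-size array of run-queues, the scheduler allocates however many the driver asks for at init time.

    #include <stdio.h>
    #include <stdlib.h>

    struct toy_rq    { unsigned int index; };
    struct toy_sched { unsigned int num_rqs; struct toy_rq *rqs; };

    /* The driver announces how many run-queues this scheduler needs. */
    static int toy_sched_init(struct toy_sched *s, unsigned int num_rqs)
    {
        s->rqs = calloc(num_rqs, sizeof(*s->rqs));
        if (!s->rqs)
            return -1;
        s->num_rqs = num_rqs;
        for (unsigned int i = 0; i < num_rqs; i++)
            s->rqs[i].index = i;
        return 0;
    }

    int main(void)
    {
        struct toy_sched firmware_sched, legacy_sched;

        toy_sched_init(&firmware_sched, 1);   /* e.g. a firmware scheduler: one queue */
        toy_sched_init(&legacy_sched, 4);     /* e.g. four priority levels */
        printf("%u vs %u run-queues\n", firmware_sched.num_rqs, legacy_sched.num_rqs);

        free(firmware_sched.rqs);
        free(legacy_sched.rqs);
        return 0;
    }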

Re: [PATCH v7 3/6] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-26 Thread Luben Tuikov
Also note that there were no complaints from the "kernel test robot" when I posted my patch (this patch), but there is one now, which further shows that there are unwarranted changes. Just follow the steps I outlined below, and we should all be good. Thanks! Regards, Luben On 2023-10-26 05

Re: [PATCH v7 3/6] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-26 Thread Luben Tuikov
Hi, On 2023-10-26 02:33, kernel test robot wrote: > Hi Matthew, > > kernel test robot noticed the following build warnings: > > [auto build test WARNING on 201c8a7bd1f3f415920a2df4b8a8817e973f42fe] > > url: >

Re: [PATCH v7 4/6] drm/sched: Split free_job into own work item

2023-10-25 Thread Luben Tuikov
On 2023-10-26 00:12, Matthew Brost wrote: > Rather than call free_job and run_job in same work item have a dedicated > work item for each. This aligns with the design and intended use of work > queues. > > v2: >- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting > timestamp in

Re: [PATCH v7 6/6] drm/sched: Add a helper to queue TDR immediately

2023-10-25 Thread Luben Tuikov
) > - Adjust comment for drm_sched_tdr_queue_imm (Luben) > v4: > - Adjust commit message (Luben) > > Cc: Luben Tuikov > Signed-off-by: Matthew Brost > Reviewed-by: Luben Tuikov > --- > drivers/gpu/drm/scheduler/sched_main.c | 18 +- > include/drm/gpu_scheduler.h

Re: [PATCH v7 4/6] drm/sched: Split free_job into own work item

2023-10-25 Thread Luben Tuikov
On 2023-10-26 00:12, Matthew Brost wrote: > Rather than call free_job and run_job in same work item have a dedicated > work item for each. This aligns with the design and intended use of work > queues. > > v2: >- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting > timestamp in

Re: [PATCH v7 3/6] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-25 Thread Luben Tuikov
On 2023-10-26 00:12, Matthew Brost wrote: > From: Luben Tuikov > > The GPU scheduler has now a variable number of run-queues, which are set up at > drm_sched_init() time. This way, each driver announces how many run-queues it > requires (supports) per each GPU scheduler i

Re: [PATCH v7 0/6] DRM scheduler changes for Xe

2023-10-25 Thread Luben Tuikov
[1] https://gitlab.freedesktop.org/drm/xe/kernel > [2] https://patchwork.freedesktop.org/series/112188/ > [3] https://patchwork.freedesktop.org/series/116055/ > > Luben Tuikov (1): > drm/sched: Convert the GPU scheduler to variable number of run-queues > > Matthew Brost (5):

Re: [PATCH v6 4/7] drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy

2023-10-25 Thread Luben Tuikov
Hi Matt, On 2023-10-25 11:13, Matthew Brost wrote: > On Mon, Oct 23, 2023 at 11:50:26PM -0400, Luben Tuikov wrote: >> Hi, >> >> On 2023-10-17 11:09, Matthew Brost wrote: >>> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between >>> scheduler

Re: [PATCH v6 4/7] drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy

2023-10-23 Thread Luben Tuikov
Hi, On 2023-10-17 11:09, Matthew Brost wrote: > DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between > scheduler and entity. No priorities or run queue used in this mode. > Intended for devices with firmware schedulers. > > v2: > - Drop sched / rq union (Luben) > v3: > -
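
A rough structural sketch of what a 1:1 scheduler/entity mode implies (toy types; the policy as posted here is a proposal and may not match what ultimately landed): selection degenerates to returning the one attached entity, with no priorities or run-queues involved.

    #include <stddef.h>

    struct toy_entity;
    struct toy_rq { struct toy_entity *head; };

    enum toy_policy { TOY_POLICY_RR, TOY_POLICY_SINGLE_ENTITY };

    struct toy_sched {
        enum toy_policy policy;
        struct toy_entity *single_entity;   /* used only in single-entity mode */
        struct toy_rq *rqs;                 /* used only by the run-queue policies */
        unsigned int num_rqs;
    };

    struct toy_entity *toy_select_entity(struct toy_sched *s)
    {
        if (s->policy == TOY_POLICY_SINGLE_ENTITY)
            return s->single_entity;        /* firmware scheduler: nothing to arbitrate */

        /* the run-queue policies would scan s->rqs[0..num_rqs-1] here */
        return s->num_rqs ? s->rqs[0].head : NULL;
    }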

Re: [PATCH v6 3/7] drm/sched: Move schedule policy to scheduler

2023-10-23 Thread Luben Tuikov
v3d build (CI) > - s/bad_policies/drm_sched_policy_mismatch/ (Luben) > - Don't update modparam doc (Luben) > v4: > - Fix alignment in msm_ringbuffer_new (Luben / checkpatch) > > Signed-off-by: Matthew Brost > Reviewed-by: Luben Tuikov > --- > drivers/gpu/drm/amd/amdgpu/am

Re: [PATCH drm-misc-next v2] drm/sched: implement dynamic job-flow control

2023-10-23 Thread Luben Tuikov
On 2023-10-23 18:35, Danilo Krummrich wrote: > On Wed, Oct 11, 2023 at 09:52:36PM -0400, Luben Tuikov wrote: >> Hi, >> >> Thanks for fixing the title and submitting a v2 of this patch. Comments >> inlined below. >> >> On 2023-10-09 18:35, Danilo Krummrich

Re: [PATCH drm-misc-next v2] drm/sched: implement dynamic job-flow control

2023-10-23 Thread Luben Tuikov
On 2023-10-23 18:57, Danilo Krummrich wrote: > On Tue, Oct 10, 2023 at 09:41:51AM +0200, Boris Brezillon wrote: >> On Tue, 10 Oct 2023 00:35:53 +0200 >> Danilo Krummrich wrote: >> >>> Currently, job flow control is implemented simply by limiting the number >>> of jobs in flight. Therefore, a

[PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-22 Thread Luben Tuikov
ernel.org Cc: freedr...@lists.freedesktop.org Cc: nouv...@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Luben Tuikov --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 4 +- drivers/gpu/drm/etnaviv/etnaviv_sched.c| 1 +

Re: [PATCH v6 5/7] drm/sched: Split free_job into own work item

2023-10-21 Thread Luben Tuikov
Hi, On 2023-10-19 12:55, Matthew Brost wrote: > On Wed, Oct 18, 2023 at 09:25:36PM -0400, Luben Tuikov wrote: >> Hi, >> >> On 2023-10-17 11:09, Matthew Brost wrote: >>> Rather than call free_job and run_job in same work item have a dedicated >>> work ite

Re: [PATCH] drm/amdgpu: Remove redundant call to priority_is_valid()

2023-10-21 Thread Luben Tuikov
On 2023-10-20 12:37, Alex Deucher wrote: > On Tue, Oct 17, 2023 at 9:22 PM Luben Tuikov wrote: >> >> Remove a redundant call to amdgpu_ctx_priority_is_valid() from >> amdgpu_ctx_priority_permit(), which is called from amdgpu_ctx_init() which is >> called from amdgpu_
