On 08/05/2024 21:53, Lucas De Marchi wrote:
On Wed, May 08, 2024 at 09:23:17AM GMT, Tvrtko Ursulin wrote:
On 07/05/2024 22:35, Lucas De Marchi wrote:
On Fri, Apr 26, 2024 at 11:47:37AM GMT, Tvrtko Ursulin wrote:
On 24/04/2024 00:56, Lucas De Marchi wrote:
Print the accumulated runtime
On 08/05/2024 20:08, Friedrich Vock wrote:
On 08.05.24 20:09, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
The logic assumed any migration attempt worked and therefore would over-
account the amount of data migrated during buffer re-validation. As a
consequence a client can be unfairly
From: Tvrtko Ursulin
The logic assumed any migration attempt worked and therefore would over-
account the amount of data migrated during buffer re-validation. As a
consequence a client can be unfairly penalised by incorrectly considering
its migration budget spent.
Fix it by looking at the before
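The fix outlined above, charging the migration budget only when data actually moved, can be sketched roughly as below. The struct and function names are hypothetical stand-ins, not the driver's actual API; the real code compares TTM resource placements before and after validation.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical placement identifiers, standing in for TTM memory types. */
enum placement { PLACE_VRAM, PLACE_GTT };

struct buffer {
	enum placement place;
	size_t size;
};

/* Pretend to validate/migrate; migration_ok models whether the move succeeded. */
static void validate(struct buffer *bo, enum placement want, int migration_ok)
{
	if (migration_ok)
		bo->place = want;
}

/*
 * Charge the migration budget only when the placement really changed,
 * instead of assuming every migration attempt succeeded.
 */
static size_t account_migration(struct buffer *bo, enum placement want,
				int migration_ok)
{
	enum placement before = bo->place;

	validate(bo, want, migration_ok);

	return bo->place != before ? bo->size : 0;
}
```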
From: Tvrtko Ursulin
The current code appears to operate under the misconception that playing with
a buffer's allowed and preferred placements can control the decision on whether
backing store migration will be attempted or not.
Both from code inspection and from empirical experiments I see that this is
not the case
From: Tvrtko Ursulin
Over the last few days I was looking at the situation with VRAM
over-subscription: what happens versus what perhaps should happen. I was
browsing through the driver and running some simple experiments.
I ended up with this patch series which, as a disclaimer, may be completely
wrong but
From: Tvrtko Ursulin
Currently the driver appears to assume that it will attempt to
re-validate the evicted buffers on the next submission if they are not in
their preferred placement.
That however appears not to be true for the very common case of buffers
with allowed placements of
From: Tvrtko Ursulin
Currently the fallback placement flag can act as a hint that a buffer
should be migrated back to the non-fallback placement, however that only
works while there is no memory pressure. As soon as we reach full VRAM
utilisation, or worse overcommit, the logic is happy to leave
From: Tvrtko Ursulin
Now that TTM has the preferred placement flag, extend the current
workaround which assumes the GTT placement as fallback in the presence of
the additional VRAM placement.
By marking the VRAM placement as preferred we will make the buffer re-
validation phase actually
On 07/05/2024 23:45, Lucas De Marchi wrote:
From: Umesh Nerlige Ramappa
Add a helper to accumulate per-client runtime of all its
exec queues. This is called every time a sched job is finished.
v2:
- Use guc_exec_queue_free_job() and execlist_job_free() to accumulate
runtime when job
On 07/05/2024 22:35, Lucas De Marchi wrote:
On Fri, Apr 26, 2024 at 11:47:37AM GMT, Tvrtko Ursulin wrote:
On 24/04/2024 00:56, Lucas De Marchi wrote:
Print the accumulated runtime for client when printing fdinfo.
Each time a query is done it first does 2 things:
1) loop through all the
On 07/05/2024 00:23, Matthew Brost wrote:
On Thu, May 02, 2024 at 03:33:50PM +0100, Tvrtko Ursulin wrote:
Hi all,
Continuing after the brief IRC discussion yesterday regarding work queues
being prone to deadlocks or not, I had a browse around the code base and
ended up a bit confused.
When
On 03/05/2024 16:58, Alex Deucher wrote:
On Fri, May 3, 2024 at 11:33 AM Daniel Vetter wrote:
On Fri, May 03, 2024 at 01:58:38PM +0100, Tvrtko Ursulin wrote:
[And I forgot dri-devel.. doing well!]
On 03/05/2024 13:40, Tvrtko Ursulin wrote:
[Correcting Christian's email]
On 03/05
On 03/05/2024 14:39, Alex Deucher wrote:
On Fri, May 3, 2024 at 8:58 AM Tvrtko Ursulin wrote:
[And I forgot dri-devel.. doing well!]
On 03/05/2024 13:40, Tvrtko Ursulin wrote:
[Correcting Christian's email]
On 03/05/2024 13:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Current
[And I forgot dri-devel.. doing well!]
On 03/05/2024 13:40, Tvrtko Ursulin wrote:
[Correcting Christian's email]
On 03/05/2024 13:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Currently it is not well defined what drm-memory- is compared to other
categories.
In practice the only d
Hi all,
Continuing after the brief IRC discussion yesterday regarding work
queues being prone to deadlocks or not, I had a browse around the code
base and ended up a bit confused.
When drm_sched_init documents and allocates an *ordered* wq, if no
custom one was provided, could someone remi
Hi,
On 24/04/2024 15:48, Adrián Larumbe wrote:
Hi Tvrtko,
On 15.04.2024 13:50, Tvrtko Ursulin wrote:
On 05/04/2024 18:59, Rob Clark wrote:
On Wed, Apr 3, 2024 at 11:37 AM Adrián Larumbe
wrote:
Up to this day, all fdinfo-based GPU profilers must traverse the entire
/proc directory
type = get_fs_type("tmpfs");
if (!type)
goto err;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
On 26/04/2024 19:59, Umesh Nerlige Ramappa wrote:
On Fri, Apr 26, 2024 at 11:49:32AM +0100, Tvrtko Ursulin wrote:
On 24/04/2024 00:56, Lucas De Marchi wrote:
From: Umesh Nerlige Ramappa
Add a helper to accumulate per-client runtime of all its
exec queues. Currently that is done in 2
: Lucas De Marchi
Also Cc'ing maintainers
Thanks,
Acked-by: Tvrtko Ursulin
Regards,
Tvrtko
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index d6327dc12cb1..fbf7371a0bb0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10
On 24/04/2024 00:56, Lucas De Marchi wrote:
From: Umesh Nerlige Ramappa
Add a helper to accumulate per-client runtime of all its
exec queues. Currently that is done in 2 places:
1. when the exec_queue is destroyed
2. when the sched job is completed
Signed-off-by: Umesh Nerli
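The accumulation helper described above can be sketched as follows; the struct layouts and names are illustrative assumptions, not xe's actual types. The idea is to fold each exec queue's runtime into per-client, per-engine-class totals at queue destruction and job completion, so the fdinfo query only reads accumulated values.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_ENGINE_CLASSES 5

/* Hypothetical stand-ins for the driver's client and exec queue objects. */
struct exec_queue {
	unsigned int engine_class;
	uint64_t runtime;	/* derived from CTX_TIMESTAMP deltas */
};

struct client {
	uint64_t runtime[NUM_ENGINE_CLASSES];
};

/*
 * Fold one queue's runtime into the client totals.  Called when a queue
 * is destroyed and when a scheduler job completes.
 */
static void client_accumulate_runtime(struct client *c,
				      const struct exec_queue *q)
{
	c->runtime[q->engine_class] += q->runtime;
}
```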
On 24/04/2024 00:56, Lucas De Marchi wrote:
Print the accumulated runtime for client when printing fdinfo.
Each time a query is done it first does 2 things:
1) loop through all the exec queues for the current client and
accumulate the runtime, per engine class. CTX_TIMESTAMP is used for
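The accumulated runtime ends up printed in the fdinfo format from the drm-usage-stats convention ("drm-engine-<name>: <uint> ns"). A minimal sketch of emitting one such line, with a hypothetical helper name and buffer handling:

```c
#include <assert.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/*
 * Emit one fdinfo engine line in the "drm-engine-<name>: <ns> ns"
 * style documented in drm-usage-stats.  Illustrative only; drivers
 * print through their own seq_file/printer helpers.
 */
static int print_engine_line(char *buf, size_t len,
			     const char *name, uint64_t ns)
{
	return snprintf(buf, len, "drm-engine-%s:\t%llu ns\n",
			name, (unsigned long long)ns);
}
```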
On 21/04/2024 22:44, Maíra Canal wrote:
Add a modparam for turning off Big/Super Pages to make sure that if a
user doesn't want Big/Super Pages enabled, they can disable them by setting
the modparam to false.
Signed-off-by: Maíra Canal
---
drivers/gpu/drm/v3d/v3d_drv.c | 8
driver
align = SZ_64K;
+ else
+ align = SZ_4K;
V3d has one GPU address space, right? I wonder if one day fragmentation
could become an issue but it's a problem for another day. Patch looks
fine to me.
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
+
spin_lock(&v3d->
On 22/04/2024 10:57, Tvrtko Ursulin wrote:
On 21/04/2024 22:44, Maíra Canal wrote:
The V3D MMU also supports 64KB and 1MB pages, called big and super pages,
respectively. In order to set a 64KB page or 1MB page in the MMU, we need
to make sure that page table entries for all 4KB pages within
len -= page_size;
+ }
}
WARN_ON_ONCE(page - bo->node.start !=
It looks correct to me.
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
_runtime);
+ timestamp, jobs_completed, active_runtime);
}
return len;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
ck for GPU usage
stats")
Reported-by: Tvrtko Ursulin
Signed-off-by: Maíra Canal
---
drivers/gpu/drm/v3d/v3d_drv.c | 10 ++
drivers/gpu/drm/v3d/v3d_drv.h | 21 +
drivers/gpu/drm/v3d/v3d_gem.c | 7 +--
drivers/gpu/drm/v3d/v3d_sched.c | 7 +++
On 01/04/2024 18:58, Felix Kuehling wrote:
On 2024-04-01 12:56, Tvrtko Ursulin wrote:
On 01/04/2024 17:37, Felix Kuehling wrote:
On 2024-04-01 11:09, Tvrtko Ursulin wrote:
On 28/03/2024 20:42, Felix Kuehling wrote:
On 2024-03-28 12:03, Tvrtko Ursulin wrote:
Hi Felix,
I had one more
should be added just once. Therefore, delete the
leftovers from commit 509433d8146c ("drm/v3d: Expose the total GPU usage
stats on sysfs").
Fixes: 509433d8146c ("drm/v3d: Expose the total GPU usage stats on
sysfs")
Reported-by: Tvrtko Ursulin
Signed-off-by: Maíra Canal
---
dr
zillon
Cc: Tvrtko Ursulin
Cc: Christopher Healy
Signed-off-by: Adrián Larumbe
It does seem like a good idea.. idk if there is some precedent to
prefer binary vs ascii in sysfs, but having a way to avoid walking
_all_ processes is a good idea.
I naturally second that it is a needed feature, bu
ze_t size,
+ struct vfsmount
*gemfs);
void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
nd was once annoyed by trivial wrappers /
function calls. But some others are then annoyed by static inlines.. so
dunno.) For either flavour:
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
/**
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bae4865b2101..2ebf6e10cc44 1006
ts")
Reported-by: Tvrtko Ursulin
Signed-off-by: Maíra Canal
---
drivers/gpu/drm/v3d/v3d_drv.c | 16
drivers/gpu/drm/v3d/v3d_drv.h | 7 +++
drivers/gpu/drm/v3d/v3d_gem.c | 7 +--
drivers/gpu/drm/v3d/v3d_sched.c | 9 +
drivers/gpu/drm
On 05/04/2024 19:29, Maíra Canal wrote:
The V3D MMU also supports 64KB and 1MB pages, called big and super pages,
respectively. In order to set a 64KB page or 1MB page in the MMU, we need
to make sure that page table entries for all 4KB pages within a big/super
page are correctly configured
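The requirement above, that every 4KB page table entry covering a big/super page be configured consistently, can be sketched like this. The PTE encoding, flag bit, and function name are illustrative assumptions, not the V3D MMU's actual layout:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT_4K	12			/* 4 KB base pages */
#define BIG_PAGE_SIZE	(64u * 1024)		/* 64 KB big page */
#define PTE_BIG		(1u << 30)		/* hypothetical "big page" flag */

/*
 * Fill the 4 KB page table entries covering one 64 KB big page.  Each
 * entry still points at its own 4 KB sub-page frame; the BIG flag marks
 * the mapping so the MMU can use the larger translation.
 */
static void write_big_page(uint32_t *ptes, uint32_t pfn)
{
	unsigned int i, n = BIG_PAGE_SIZE >> PAGE_SHIFT_4K; /* 16 entries */

	for (i = 0; i < n; i++)
		ptes[i] = PTE_BIG | (pfn + i);
}
```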
_stats->start_ns;
- global_stats->jobs_sent++;
- global_stats->start_ns = 0;
+ v3d_stats_update(local_stats);
+ v3d_stats_update(global_stats);
}
static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
;%s\t%llu\t%llu\t%llu\n",
v3d_queue_to_string(queue),
timestamp,
- v3d->queue[queue].jobs_sent,
-v3d->queue[queue].enabled_ns +
active_runtime);
+stats->jobs_sent,
+stats->enabled_ns + active_runtime);
}
return len;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
eftovers from commit 509433d8146c ("drm/v3d: Expose the total GPU usage
stats on sysfs").
Fixes: 509433d8146c ("drm/v3d: Expose the total GPU usage stats on sysfs")
Reported-by: Tvrtko Ursulin
Signed-off-by: Maíra Canal
---
drivers/gpu/drm/v3d/v3d_irq.c | 4
1 file changed, 4 de
On 01/04/2024 17:37, Felix Kuehling wrote:
On 2024-04-01 11:09, Tvrtko Ursulin wrote:
On 28/03/2024 20:42, Felix Kuehling wrote:
On 2024-03-28 12:03, Tvrtko Ursulin wrote:
Hi Felix,
I had one more thought while browsing around the amdgpu CRIU plugin.
It appears it relies on the KFD
On 12/03/2024 13:56, Maxime Ripard wrote:
Hi,
On Tue, Feb 20, 2024 at 09:49:25AM +0100, Maxime Ripard wrote:
## Changing the default location repo
Dim gets its repos list in the drm-rerere nightly.conf file. We will
need to change that file to match the gitlab repo, and drop the old cgit
URL
On 28/03/2024 20:42, Felix Kuehling wrote:
On 2024-03-28 12:03, Tvrtko Ursulin wrote:
Hi Felix,
I had one more thought while browsing around the amdgpu CRIU plugin.
It appears it relies on the KFD support being compiled in and /dev/kfd
present, correct? AFAICT at least, it relies on that
On 01/04/2024 13:45, Christian König wrote:
Am 01.04.24 um 14:39 schrieb Tvrtko Ursulin:
On 29/03/2024 00:00, T.J. Mercier wrote:
On Thu, Mar 28, 2024 at 7:53 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
There is no point in compiling in the list and mutex operations
which are
only
On 29/03/2024 00:00, T.J. Mercier wrote:
On Thu, Mar 28, 2024 at 7:53 AM Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
There is no point in compiling in the list and mutex operations which are
only used from the dma-buf debugfs code, if debugfs is not compiled in.
Put the code in questions
for things like
re-assigned minor numbers due to driver reload.
Otherwise I am eagerly awaiting to hear more about the design specifics
around dma-buf handling. And also seeing how to extend to other DRM
related anonymous fds.
Regards,
Tvrtko
On 15/03/2024 18:36, Tvrtko Ursulin wrote:
On 15/03
From: Tvrtko Ursulin
There is no point in compiling in the list and mutex operations which are
only used from the dma-buf debugfs code, if debugfs is not compiled in.
Put the code in questions behind some kconfig guards and so save some text
and maybe even a pointer per object at runtime when
os of the 'dev_priv' name keep sneaking it
in.
Signed-off-by: Andi Shyti
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 4 ++--
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 +++---
drivers/gpu
On 20/03/2024 15:06, Andi Shyti wrote:
Ping! Any thoughts here?
I only casually observed the discussion after I saw Matt suggested
further simplifications. As I understood it, you will bring back the
uabi engine games when adding the dynamic behaviour and that is fine by me.
Regards,
Tvr
8/24 10:10, Christian König wrote:
Am 18.03.24 um 13:42 schrieb Maíra Canal:
Hi Christian,
On 3/12/24 10:48, Christian König wrote:
Am 12.03.24 um 14:09 schrieb Tvrtko Ursulin:
On 12/03/2024 10:37, Christian König wrote:
Am 12.03.24 um 11:31 schrieb Tvrtko Ursulin:
On 12/03/2024 10:23, Christian K
On 15/03/2024 02:33, Felix Kuehling wrote:
On 2024-03-12 5:45, Tvrtko Ursulin wrote:
On 11/03/2024 14:48, Tvrtko Ursulin wrote:
Hi Felix,
On 06/12/2023 21:23, Felix Kuehling wrote:
Executive Summary: We need to add CRIU support to DRM render nodes
in order to maintain CRIU support for
Hi Maira,
On 11/03/2024 10:06, Maíra Canal wrote:
The V3D MMU also supports 1MB pages, called super pages. In order to
set a 1MB page in the MMU, we need to make sure that page table entries
for all 4KB pages within a super page are correctly configured.
Therefore, if the BO is larger tha
On 12/03/2024 10:37, Christian König wrote:
Am 12.03.24 um 11:31 schrieb Tvrtko Ursulin:
On 12/03/2024 10:23, Christian König wrote:
Am 12.03.24 um 10:30 schrieb Tvrtko Ursulin:
On 12/03/2024 08:59, Christian König wrote:
Am 12.03.24 um 09:51 schrieb Tvrtko Ursulin:
Hi Maira,
On 11/03
On 12/03/2024 10:23, Christian König wrote:
Am 12.03.24 um 10:30 schrieb Tvrtko Ursulin:
On 12/03/2024 08:59, Christian König wrote:
Am 12.03.24 um 09:51 schrieb Tvrtko Ursulin:
Hi Maira,
On 11/03/2024 10:05, Maíra Canal wrote:
For some applications, such as using huge pages, we might
On 11/03/2024 19:27, Lucas De Marchi wrote:
On Mon, Mar 11, 2024 at 05:43:00PM +, Tvrtko Ursulin wrote:
On 06/03/2024 19:36, Lucas De Marchi wrote:
Remove platforms that never had their PCI IDs added to the driver and
are of course marked as requiring force_probe. Note that most of
On 11/03/2024 14:48, Tvrtko Ursulin wrote:
Hi Felix,
On 06/12/2023 21:23, Felix Kuehling wrote:
Executive Summary: We need to add CRIU support to DRM render nodes in
order to maintain CRIU support for ROCm applications once they start
relying on render nodes for more GPU memory management
On 12/03/2024 08:59, Christian König wrote:
Am 12.03.24 um 09:51 schrieb Tvrtko Ursulin:
Hi Maira,
On 11/03/2024 10:05, Maíra Canal wrote:
For some applications, such as those using huge pages, we might want to have a
different mountpoint, for which we pass in mount flags that better match
our
Hi,
On 11/03/2024 10:06, Maíra Canal wrote:
Create a separate "tmpfs" kernel mount for V3D. This will allow us to
move away from the shmemfs `shm_mnt` and gives the flexibility to do
things like set our own mount options. Here, the interest is to use
"huge=", which should allow us to enable th
Hi Maira,
On 11/03/2024 10:05, Maíra Canal wrote:
For some applications, such as those using huge pages, we might want to have a
different mountpoint, for which we pass in mount flags that better match
our usecase.
Therefore, add a new parameter to drm_gem_object_init() that allow us to
define the
On 06/03/2024 19:36, Lucas De Marchi wrote:
Remove platforms that never had their PCI IDs added to the driver and
are of course marked as requiring force_probe. Note that most of the
code for those platforms is actually used by subsequent ones, so it's
not a huge amount of code being removed.
Hi Felix,
On 06/12/2023 21:23, Felix Kuehling wrote:
Executive Summary: We need to add CRIU support to DRM render nodes in
order to maintain CRIU support for ROCm applications once they start
relying on render nodes for more GPU memory management. In this email
I'm providing some background w
On 06/03/2024 01:56, Adrián Larumbe wrote:
Debugfs isn't always available in production builds that try to squeeze
every single byte out of the kernel image, but we still need a way to
toggle the timestamp and cycle counter registers so that jobs can be
profiled for fdinfo's drm engine and cycl
From: Tvrtko Ursulin
I will lose access to my @.*intel.com e-mail addresses soon so let me
adjust the maintainers entry and update the mailmap too.
While at it consolidate a few other of my old emails to point to the
main one.
Signed-off-by: Tvrtko Ursulin
Cc: Daniel Vetter
Cc: Dave Airlie
.
Regards,
Tvrtko
drm-intel-gt-next-2024-02-28:
Driver Changes:
Fixes:
- Add some boring kerneldoc (Tvrtko Ursulin)
- Check before removing mm notifier (Nirmoy
The following changes since commit eb927f01dfb6309c8a184593c2c0618c4000c481:
drm/i915/gt: Restart the heartbeat timer when forcing a
On 27/02/2024 09:26, Nirmoy Das wrote:
Hi Tvrtko,
On 2/27/2024 10:04 AM, Tvrtko Ursulin wrote:
On 21/02/2024 11:52, Nirmoy Das wrote:
Merged it to drm-intel-gt-next with s/check/Check
Shouldn't this have had:
Fixes: ed29c2691188 ("drm/i915: Fix userptr so we do not have to wo
/sushmave/mesa/-/commits/compute_hint
v2: Rename flags as per review suggestions (Rodrigo, Tvrtko).
Also, use flag bits in intel_context as it allows finer control for
toggling per engine if needed (Tvrtko).
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Cc: Sushma Venkatesh Reddy
Signed-off-by: Vinay Belgaumkar
On 21/02/2024 11:52, Nirmoy Das wrote:
Merged it to drm-intel-gt-next with s/check/Check
Shouldn't this have had:
Fixes: ed29c2691188 ("drm/i915: Fix userptr so we do not have to worry about
obj->mm.lock, v7.")
Cc: # v5.13+
?
Regards,
Tvrtko
On 2/19/2024 1:50 PM, Nirmoy Das wrote:
E
On 22/02/2024 21:07, Rodrigo Vivi wrote:
On Wed, Feb 21, 2024 at 02:22:45PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
When debugging GPU hangs Mesa developers are finding it useful to replay
the captured error state against the simulator. But due to various simulator
limitations which
On 26/02/2024 08:47, Tvrtko Ursulin wrote:
On 23/02/2024 19:25, Rodrigo Vivi wrote:
On Fri, Feb 23, 2024 at 10:31:41AM -0800, Belgaumkar, Vinay wrote:
On 2/23/2024 12:51 AM, Tvrtko Ursulin wrote:
On 22/02/2024 23:31, Belgaumkar, Vinay wrote:
On 2/22/2024 7:32 AM, Tvrtko Ursulin wrote
On 23/02/2024 19:25, Rodrigo Vivi wrote:
On Fri, Feb 23, 2024 at 10:31:41AM -0800, Belgaumkar, Vinay wrote:
On 2/23/2024 12:51 AM, Tvrtko Ursulin wrote:
On 22/02/2024 23:31, Belgaumkar, Vinay wrote:
On 2/22/2024 7:32 AM, Tvrtko Ursulin wrote:
On 21/02/2024 21:28, Rodrigo Vivi wrote
On 22/02/2024 23:31, Belgaumkar, Vinay wrote:
On 2/22/2024 7:32 AM, Tvrtko Ursulin wrote:
On 21/02/2024 21:28, Rodrigo Vivi wrote:
On Wed, Feb 21, 2024 at 09:42:34AM +, Tvrtko Ursulin wrote:
On 21/02/2024 00:14, Vinay Belgaumkar wrote:
Allow user to provide a context hint. When this
On 21/02/2024 21:28, Rodrigo Vivi wrote:
On Wed, Feb 21, 2024 at 09:42:34AM +, Tvrtko Ursulin wrote:
On 21/02/2024 00:14, Vinay Belgaumkar wrote:
Allow user to provide a context hint. When this is set, KMD will
send a hint to GuC which results in special handling for this
context. SLPC
re and how exactly TBD.
Regards,
Tvrtko
On 16.02.2024 17:43, Tvrtko Ursulin wrote:
On 16/02/2024 16:57, Daniel Vetter wrote:
On Wed, Feb 14, 2024 at 01:52:05PM +, Steven Price wrote:
Hi Adrián,
On 14/02/2024 12:14, Adrián Larumbe wrote:
A driver user expressed interest in being able to a
From: Tvrtko Ursulin
To enable overriding the default engine context image, let us start
shadowing the per engine state in the context.
Signed-off-by: Tvrtko Ursulin
Cc: Lionel Landwerlin
Cc: Carlos Santa
Cc: Rodrigo Vivi
---
drivers/gpu/drm/i915/gt/intel_context_types.h | 2
From: Tvrtko Ursulin
When debugging GPU hangs Mesa developers are finding it useful to replay
the captured error state against the simulator. But due to various simulator
limitations which prevent replicating all hangs, one step further is being
able to replay against a real GPU.
This is almost
From: Tvrtko Ursulin
Please see 2/2 for explanation and rationale.
v2:
* Extracted shadowing of default state into a leading patch.
Tvrtko Ursulin (2):
drm/i915: Shadow default engine context image in the context
drm/i915: Support replaying GPU hangs with captured context image
drivers
On 21/02/2024 12:08, Tvrtko Ursulin wrote:
On 21/02/2024 11:19, Andi Shyti wrote:
Hi Tvrtko,
On Wed, Feb 21, 2024 at 08:19:34AM +, Tvrtko Ursulin wrote:
On 21/02/2024 00:14, Andi Shyti wrote:
On Tue, Feb 20, 2024 at 02:48:31PM +, Tvrtko Ursulin wrote:
On 20/02/2024 14:35, Andi
On 21/02/2024 11:19, Andi Shyti wrote:
Hi Tvrtko,
On Wed, Feb 21, 2024 at 08:19:34AM +, Tvrtko Ursulin wrote:
On 21/02/2024 00:14, Andi Shyti wrote:
On Tue, Feb 20, 2024 at 02:48:31PM +, Tvrtko Ursulin wrote:
On 20/02/2024 14:35, Andi Shyti wrote:
Enable only one CCS engine by
On 21/02/2024 00:14, Vinay Belgaumkar wrote:
Allow user to provide a context hint. When this is set, KMD will
send a hint to GuC which results in special handling for this
context. SLPC will ramp the GT frequency aggressively every time
it switches to this context. The down freq threshold will
On 20/02/2024 22:50, Rodrigo Vivi wrote:
On Tue, Feb 13, 2024 at 01:14:34PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
When debugging GPU hangs Mesa developers are finding it useful to replay
the captured error state against the simulator. But due to various simulator
limitations which
On 21/02/2024 00:14, Andi Shyti wrote:
Hi Tvrtko,
On Tue, Feb 20, 2024 at 02:48:31PM +, Tvrtko Ursulin wrote:
On 20/02/2024 14:35, Andi Shyti wrote:
Enable only one CCS engine by default with all the compute sices
slices
Thanks!
diff --git a/drivers/gpu/drm/i915/gt
On 20/02/2024 14:35, Andi Shyti wrote:
Enable only one CCS engine by default with all the compute sices
slices
allocated to it.
While generating the list of UABI engines to be exposed to the
user, exclude any additional CCS engines beyond the first
instance.
This change can be tested with
On 20/02/2024 14:20, Andi Shyti wrote:
Since CCS automatic load balancing is disabled, we will impose a
fixed balancing policy that involves setting all the CCS engines
to work together on the same load.
Erm *all* CCS engines work together..
Simultaneously, the user will see only 1 CCS rath
On 20/02/2024 10:36, Maxime Ripard wrote:
On Tue, Feb 20, 2024 at 09:16:43AM +, Tvrtko Ursulin wrote:
On 19/02/2024 20:02, Rodrigo Vivi wrote:
On Mon, Feb 19, 2024 at 01:14:23PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Request can be NULL if no guilty request was identified
On 20/02/2024 10:11, Andi Shyti wrote:
Hi Tvrtko,
On Mon, Feb 19, 2024 at 12:51:44PM +, Tvrtko Ursulin wrote:
On 19/02/2024 11:16, Tvrtko Ursulin wrote:
On 15/02/2024 13:59, Andi Shyti wrote:
...
+/*
+ * Exclude unavailable engines.
+ *
+ * Only the first CCS engine is utilized due
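The exclusion discussed above, exposing only the first CCS instance in the UABI engine list, can be sketched as a simple in-place filter. The enum and struct here are illustrative stand-ins, not i915's actual engine representation:

```c
#include <assert.h>

enum engine_class { CLASS_RENDER, CLASS_COPY, CLASS_COMPUTE };

struct engine {
	enum engine_class class;
	unsigned int instance;
};

/*
 * Compact the engine array in place, dropping every CCS (compute)
 * instance beyond the first, and return the new count.
 */
static unsigned int filter_uabi_engines(struct engine *e, unsigned int n)
{
	unsigned int i, out = 0;

	for (i = 0; i < n; i++) {
		if (e[i].class == CLASS_COMPUTE && e[i].instance > 0)
			continue;
		e[out++] = e[i];
	}
	return out;
}
```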
On 19/02/2024 20:02, Rodrigo Vivi wrote:
On Mon, Feb 19, 2024 at 01:14:23PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Request can be NULL if no guilty request was identified so simply use
engine->i915 instead.
Signed-off-by: Tvrtko Ursulin
Fixes: d50892a9554c ("drm/i915
From: Tvrtko Ursulin
Tooling appears very strict so let's pacify it by adding some comments,
even if fields are completely self-explanatory.
Signed-off-by: Tvrtko Ursulin
Fixes: b11236486749 ("drm/i915: Add GuC submission interface version query")
Reported-by: Stephen Rothwell
Cc:
From: Tvrtko Ursulin
Request can be NULL if no guilty request was identified so simply use
engine->i915 instead.
Signed-off-by: Tvrtko Ursulin
Fixes: d50892a9554c ("drm/i915: switch from drm_debug_printer() to device
specific drm_dbg_printer()")
Reported-by: Dan Carpenter
Cc: Ja
On 19/02/2024 11:16, Tvrtko Ursulin wrote:
On 15/02/2024 13:59, Andi Shyti wrote:
Since CCS automatic load balancing is disabled, we will impose a
fixed balancing policy that involves setting all the CCS engines
to work together on the same load.
Simultaneously, the user will see only 1 CCS
On 15/02/2024 13:59, Andi Shyti wrote:
Since CCS automatic load balancing is disabled, we will impose a
fixed balancing policy that involves setting all the CCS engines
to work together on the same load.
Simultaneously, the user will see only 1 CCS rather than the
actual number. As of now, thi
On 16/02/2024 16:57, Daniel Vetter wrote:
On Wed, Feb 14, 2024 at 01:52:05PM +, Steven Price wrote:
Hi Adrián,
On 14/02/2024 12:14, Adrián Larumbe wrote:
A driver user expressed interest in being able to access engine usage stats
through fdinfo when debugfs is not built into their kernel
submission version query which Mesa
wants for implementing Vulkan async compute queues.
Regards,
Tvrtko
drm-intel-gt-next-2024-02-15:
UAPI Changes:
- Add GuC submission interface version query (Tvrtko Ursulin)
Driver Changes:
Fixes/improvements/new stuff:
- Atomically invalidate userptr on mmu
From: Tvrtko Ursulin
When debugging GPU hangs Mesa developers are finding it useful to replay
the captured error state against the simulator. But due to various simulator
limitations which prevent replicating all hangs, one step further is being
able to replay against a real GPU.
This is almost
On 08/02/2024 18:13, Erick Archer wrote:
The "struct i915_syncmap" uses a dynamically sized set of trailing
elements. It can use an "u32" array or a "struct i915_syncmap *"
array.
So, use the preferred way in the kernel declaring flexible arrays [1].
Because there are two possibilities for the
On 08/02/2024 17:55, Souza, Jose wrote:
On Thu, 2024-02-08 at 07:19 -0800, José Roberto de Souza wrote:
On Thu, 2024-02-08 at 14:59 +, Tvrtko Ursulin wrote:
On 08/02/2024 14:30, Souza, Jose wrote:
On Thu, 2024-02-08 at 08:25 +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Add a new
On 08/02/2024 14:30, Souza, Jose wrote:
On Thu, 2024-02-08 at 08:25 +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Add a new query to the GuC submission interface version.
Mesa intends to use this information to check for old firmware versions
with a known bug where using the render and
On 07/02/2024 19:34, John Harrison wrote:
On 2/7/2024 10:49, Tvrtko Ursulin wrote:
On 07/02/2024 18:12, John Harrison wrote:
On 2/7/2024 03:56, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Add a new query to the GuC submission interface version.
Mesa intends to use this information to
From: Tvrtko Ursulin
Add a new query to the GuC submission interface version.
Mesa intends to use this information to check for old firmware versions
with a known bug where using the render and compute command streamers
simultaneously can cause GPU hangs due to issues in firmware scheduling
On 07/02/2024 18:12, John Harrison wrote:
On 2/7/2024 03:56, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Add a new query to the GuC submission interface version.
Mesa intends to use this information to check for old firmware versions
with a known bug where using the render and compute
From: Tvrtko Ursulin
Add a new query to the GuC submission interface version.
Mesa intends to use this information to check for old firmware versions
with a known bug where using the render and compute command streamers
simultaneously can cause GPU hangs due to issues in firmware scheduling
Hi,
On 06/02/2024 16:45, Nikita Zhandarovich wrote:
After falling through the switch statement to default case 'repr' is
initialized with NULL, which will lead to an incorrect dereference of
'!repr[n]' in the following loop.
Fix it with the help of an additional check for NULL.
Found by Linux V
stats[id].shared += sz;
else
stats[id].private += sz;
Reviewed-by: Tvrtko Ursulin
Good that you remembered this story, I completely forgot!
Regards,
Tvrtko
ts(obj)) {
status.shared += obj->size;
} else {
status.private += obj->size;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko