On 31/05/2023 18:10, fei.y...@intel.com wrote:
From: Fei Yang
This series introduces a new extension for GEM_CREATE:
1. End support for the set caching ioctl [PATCH 1/2]
2. Add a set_pat extension for gem_create [PATCH 2/2]
v2: drop one patch that was merged separately
commit 341ad0e8e254
On 24/05/2023 21:02, fei.y...@intel.com wrote:
From: Fei Yang
This series introduces a new extension for GEM_CREATE:
1. End support for the set caching ioctl [PATCH 1/2]
2. Add a set_pat extension for gem_create [PATCH 2/2]
v2: drop one patch that was merged separately
commit 341ad0e8e254
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non-default RPS up/down thresholds.
Expose these tunables via sysfs, allowing users to achieve the best
performance when running games and the best power efficiency elsewhere.
Note
From: Tvrtko Ursulin
Now that we allow them to be modified, let's include them in the error
state so it is visible during GPU hang triage whether they have been modified.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
Cc: Andi Shyti
---
drivers/gpu/drm/i915/i915_gpu_error.c | 5 +
drivers/gpu
From: Tvrtko Ursulin
Record the default values as preparation for exposing the sysfs controls.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
Reviewed-by: Rodrigo Vivi
Reviewed-by: Andi Shyti
---
drivers/gpu/drm/i915/gt/intel_gt_types.h | 3 +++
drivers/gpu/drm/i915/gt/intel_rps.c | 2
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
v2:
* Force sw and hw re-programming on threshold change.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
Reviewed-by: Rodrigo Vivi
Reviewed-by: Andi Shyti
---
drivers/gpu/drm/i915/gt
From: Tvrtko Ursulin
From patch 4:
User feedback indicates significant performance gains are possible in
specific games with non default RPS up/down thresholds.
Expose these tunables via sysfs which will allow users to achieve best
performance when running games and best po
From: Tvrtko Ursulin
Since 36d516be867c ("drm/i915/gt: Switch to manual evaluation of RPS")
the thresholds are invariant, so let's move their setting to init time.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
Reviewed-by: Rodrigo Vivi
Reviewed-by: Andi Shyti
---
drivers/gpu/d
in this
case and save CPU cycles/power.
v2: Instead of turning freq bits off, return false, since no counters will
run after this change when GT is parked (Tvrtko)
Signed-off-by: Ashutosh Dixit
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/i915_pmu.c | 12 +---
1 file
] (Clint Taylor)
- Meteorlake PXP enablement (Alan Previn)
- Do not enable render power-gating on MTL (Andrzej Hajda)
- Add MTL performance tuning changes (Radhakrishna Sripada)
- Extend Wa_16014892111 to MTL A-step (Radhakrishna Sripada)
- PMU multi-tile support (Tvrtko Ursulin)
- End support for set
On 24/05/2023 18:38, Dixit, Ashutosh wrote:
On Wed, 24 May 2023 04:38:18 -0700, Tvrtko Ursulin wrote:
Hi Tvrtko,
On 23/05/2023 16:19, Ashutosh Dixit wrote:
No functional changes, but we can remove some unsightly index computation
and read/write functions if we convert the PMU sample
On 24/05/2023 13:30, Andi Shyti wrote:
Hi again,
finally... pushed in drm-intel-gt-next! :)
I had to revert this (uapi commit only) by force pushing; luckily it was the
top commit.
OK, sorry!
1)
IGT is not merged yet.
if igt is merged without the kernel it would fail, though.
can
On 24/05/2023 13:19, Andi Shyti wrote:
Hi Tvrtko,
finally... pushed in drm-intel-gt-next! :)
I had to revert this (uapi commit only) by force pushing; luckily it was the
top commit.
OK, sorry!
1)
IGT is not merged yet.
if igt is merged without the kernel it would fail, though.
On 24/05/2023 12:56, Tvrtko Ursulin wrote:
On 23/05/2023 09:37, Andi Shyti wrote:
Hi Fei,
finally... pushed in drm-intel-gt-next! :)
I had to revert this (uapi commit only) by force pushing; luckily it was
the top commit.
1)
IGT is not merged yet.
2)
The tools/include/uapi/drm
On 23/05/2023 09:37, Andi Shyti wrote:
Hi Fei,
finally... pushed in drm-intel-gt-next! :)
I had to revert this (uapi commit only) by force pushing; luckily it was
the top commit.
1)
IGT is not merged yet.
2)
The tools/include/uapi/drm/i915_drm.h part of the patch was not removed.
On 23/05/2023 16:19, Ashutosh Dixit wrote:
No functional changes, but we can remove some unsightly index computation
and read/write functions if we convert the PMU sample array from a
one-dimensional to a two-dimensional array.
Suggested-by: Tvrtko Ursulin
Signed-off-by: Ashutosh Dixit
On 23/05/2023 00:09, Andi Shyti wrote:
Hi Tvrtko,
On Mon, May 22, 2023 at 12:59:27PM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
v2:
* Force sw and hw re-programming on threshold change.
Signed-off
On 19/05/2023 21:56, Rodrigo Vivi wrote:
On Fri, Apr 28, 2023 at 09:44:53AM +0100, Tvrtko Ursulin wrote:
On 28/04/2023 09:14, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non default RPS up/down
On 19/05/2023 21:56, Rodrigo Vivi wrote:
On Fri, Apr 28, 2023 at 09:14:56AM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_rps.c | 36
From: Tvrtko Ursulin
Record the default values as preparation for exposing the sysfs controls.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
---
drivers/gpu/drm/i915/gt/intel_gt_types.h | 3 +++
drivers/gpu/drm/i915/gt/intel_rps.c | 2 ++
2 files changed, 5 insertions(+)
diff --git
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non-default RPS up/down thresholds.
Expose these tunables via sysfs, allowing users to achieve the best
performance when running games and the best power efficiency elsewhere.
Note
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
v2:
* Force sw and hw re-programming on threshold change.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
---
drivers/gpu/drm/i915/gt/intel_rps.c | 54 +
drivers
From: Tvrtko Ursulin
Since 36d516be867c ("drm/i915/gt: Switch to manual evaluation of RPS")
the thresholds are invariant, so let's move their setting to init time.
Signed-off-by: Tvrtko Ursulin
Cc: Rodrigo Vivi
---
drivers/gpu/drm/i915/gt/intel_rps.c | 27 -
rnel.org/r/87sfeita1p@intel.com
Signed-off-by: Tetsuo Handa
Cc: Tvrtko Ursulin
Cc: Jani Nikula
Cc: Ville Syrjälä
---
Changes in v4:
Refreshed using drm-tip.git.
Changes in v3:
Refreshed using drm-tip.git, for commit 40053823baad ("drm/i915/display:
move modeset probe/r
In case you were waiting for me looking at the rest of the series, there
was this reply from the previous round I can expand on.
On 02/05/2023 08:50, Tvrtko Ursulin wrote:
On 01/05/2023 17:58, Rob Clark wrote:
On Fri, Apr 28, 2023 at 4:05 AM Tvrtko Ursulin
wrote:
On 27/04/2023 18:53
On 16/05/2023 19:11, fei.y...@intel.com wrote:
From: Fei Yang
To comply with the design that buffer objects shall have an immutable
cache setting throughout their life cycle, the {set, get}_caching ioctls
are no longer supported from MTL onward. With that change, caching
policy can only be set at
From: Tvrtko Ursulin
Having it as u64 was a confusing (but harmless) mistake.
Also add some asserts to make sure the internal field does not overflow
in the future.
Signed-off-by: Tvrtko Ursulin
Cc: Ashutosh Dixit
Cc: Umesh Nerlige Ramappa
---
I am not entirely sure the __builtin_constant_p
/**
* @vm_ops:
*
* Virtual memory operations used with mmap.
*
* This is optional but necessary for mmap support.
*/
const struct vm_operations_struct *vm_ops;
};
With the u64 stats:
Acked-by: Tvrtko Ursulin
Regards,
Tvrtko
On 13/05/2023 00:28, fei.y...@intel.com wrote:
From: Fei Yang
To comply with the design that buffer objects shall have an immutable
cache setting throughout their life cycle, the {set, get}_caching ioctls
are no longer supported from MTL onward. With that change, caching
policy can only be set at
On 12/05/2023 20:54, Jordan Justen wrote:
On 2023-05-10 15:14:16, Andi Shyti wrote:
Hi,
On Tue, May 09, 2023 at 09:59:42AM -0700, fei.y...@intel.com wrote:
From: Fei Yang
To comply with the design that buffer objects shall have an immutable
cache setting throughout their life cycle, the {set,
config_mask(I915_PMU_REQUESTED_FREQUENCY) |
+ ENGINE_SAMPLE_MASK);
+ }
/*
* Also there is software busyness tracking available we do not
* need the timer for I915_SAMPLE_BUSY counter.
LGTM.
Reviewed-by: Tvrtko Ursulin
Or maybe it is possible to simpl
On 09/05/2023 18:12, Yang, Fei wrote:
> On 09/05/2023 00:48, fei.y...@intel.com wrote:
>> From: Fei Yang
>>
>> Currently the KMD is using enum i915_cache_level to set caching policy for
>> buffer objects. This is flaky because the PAT index which really controls
>> the caching
On 10/05/2023 19:46, Tejun Heo wrote:
Hello,
On Wed, May 10, 2023 at 04:59:01PM +0200, Maarten Lankhorst wrote:
The misc controller is not granular enough. A single computer may have any
number of
graphics cards, some of them with multiple regions of vram inside a single card.
Extending
On 09/05/2023 00:48, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 04/05/2023 17:06, Yang, Fei wrote:
> On 04/05/2023 00:02, fei.y...@intel.com wrote:
>> From: Fei Yang
>>
>> Currently the KMD is using enum i915_cache_level to set caching policy for
>> buffer objects. This is flaky because the PAT index which really controls
>> the caching
On 03/05/2023 21:39, Yang, Fei wrote:
[...]
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c
b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 8c70a0ec7d2f..27c948350b5b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -54,6
On 04/05/2023 00:02, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 03/05/2023 09:34, Maarten Lankhorst wrote:
Based roughly on the rdma and misc cgroup controllers, with a lot of
the accounting code borrowed from rdma.
The interface is simple:
- populate drmcgroup_device->regions[..] name and size for each active
region.
- Call
On 02/05/2023 05:11, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 28/04/2023 15:45, Rob Clark wrote:
On Fri, Apr 28, 2023 at 3:56 AM Tvrtko Ursulin
wrote:
On 27/04/2023 18:53, Rob Clark wrote:
From: Rob Clark
Add support to dump GEM stats to fdinfo.
v2: Fix typos, change size units to match docs, use div_u64
v3: Do it in core
v4: more kerneldoc
On 01/05/2023 17:58, Rob Clark wrote:
On Fri, Apr 28, 2023 at 4:05 AM Tvrtko Ursulin
wrote:
On 27/04/2023 18:53, Rob Clark wrote:
From: Rob Clark
These are useful in particular for VM scenarios where the process which
has opened the drm device file is just a proxy for the real user
On 27/04/2023 18:53, Rob Clark wrote:
From: Rob Clark
These are useful in particular for VM scenarios where the process which
has opened the drm device file is just a proxy for the real user in a VM
guest.
Signed-off-by: Rob Clark
---
Documentation/gpu/drm-usage-stats.rst | 18
On 27/04/2023 18:53, Rob Clark wrote:
From: Rob Clark
Add support to dump GEM stats to fdinfo.
v2: Fix typos, change size units to match docs, use div_u64
v3: Do it in core
v4: more kerneldoc
Signed-off-by: Rob Clark
Reviewed-by: Emil Velikov
Reviewed-by: Daniel Vetter
---
On 28/04/2023 06:47, fei.y...@intel.com wrote:
From: Fei Yang
To comply with the design that buffer objects shall have an immutable
cache setting throughout their life cycle, the {set, get}_caching ioctls
are no longer supported from MTL onward. With that change, caching
policy can only be set at
On 27/04/2023 17:07, Yang, Fei wrote:
> On 26/04/2023 16:41, Yang, Fei wrote:
>>> On 26/04/2023 07:24, fei.y...@intel.com wrote:
From: Fei Yang
The first three patches in this series are taken from
https://patchwork.freedesktop.org/series/116868/
These patches
On 28/04/2023 09:14, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non default RPS up/down thresholds.
Expose these tunables via sysfs which will allow users to achieve best
performance when running games
From: Tvrtko Ursulin
Since 36d516be867c ("drm/i915/gt: Switch to manual evaluation of RPS")
the thresholds are invariant, so let's move their setting to init time.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_rps.c | 27 ---
1 file changed, 16
From: Tvrtko Ursulin
Record the default values as preparation for exposing the sysfs controls.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_gt_types.h | 3 +++
drivers/gpu/drm/i915/gt/intel_rps.c | 2 ++
2 files changed, 5 insertions(+)
diff --git a/drivers/gpu/drm
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non-default RPS up/down thresholds.
Expose these tunables via sysfs, allowing users to achieve the best
performance when running games and the best power efficiency elsewhere.
Note
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_rps.c | 36 +
drivers/gpu/drm/i915/gt/intel_rps.h | 4
2 files changed, 40 insertions(+)
diff
From: Tvrtko Ursulin
From patch 4:
User feedback indicates significant performance gains are possible in
specific games with non default RPS up/down thresholds.
Expose these tunables via sysfs which will allow users to achieve best
performance when running games and best po
On 26/04/2023 16:41, Yang, Fei wrote:
> On 26/04/2023 07:24, fei.y...@intel.com wrote:
>> From: Fei Yang
>>
>> The first three patches in this series are taken from
>> https://patchwork.freedesktop.org/series/116868/
>> These patches are included here because the last patch
>> has
From: Tvrtko Ursulin
User feedback indicates significant performance gains are possible in
specific games with non-default RPS up/down thresholds.
Expose these tunables via sysfs, allowing users to achieve the best
performance when running games and the best power efficiency elsewhere.
Note
From: Tvrtko Ursulin
In preparation for exposing via sysfs add helpers for managing rps
thresholds.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_rps.c | 36 +
drivers/gpu/drm/i915/gt/intel_rps.h | 4
2 files changed, 40 insertions(+)
diff
From: Tvrtko Ursulin
Record the default values as preparation for exposing the sysfs controls.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_gt_types.h | 3 +++
drivers/gpu/drm/i915/gt/intel_rps.c | 2 ++
2 files changed, 5 insertions(+)
diff --git a/drivers/gpu/drm
From: Tvrtko Ursulin
Since 36d516be867c ("drm/i915/gt: Switch to manual evaluation of RPS")
the thresholds are invariant, so let's move their setting to init time.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_rps.c | 27 ---
1 file changed, 16
From: Tvrtko Ursulin
From patch 4:
User feedback indicates significant performance gains are possible in
specific games with non default RPS up/down thresholds.
Expose these tunables via sysfs which will allow users to achieve best
performance when running games and best po
On 26/04/2023 07:24, fei.y...@intel.com wrote:
From: Fei Yang
The first three patches in this series are taken from
https://patchwork.freedesktop.org/series/116868/
These patches are included here because the last patch
has dependency on the pat_index refactor.
This series is focusing on
On 23/04/2023 07:52, Yang, Fei wrote:
On 20/04/2023 00:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index which really controls
the caching behavior in PTE has far more
[fixed mailing lists addresses]
On 24/04/2023 09:36, Jordan Justen wrote:
On 2023-04-23 00:05:06, Yang, Fei wrote:
On 2023-04-20 09:11:18, Yang, Fei wrote:
On 20/04/2023 12:39, Andi Shyti wrote:
Hi Fei,
because this is an API change, we need some more information here.
First of all you
On 23/04/2023 07:12, Yang, Fei wrote:
On 20/04/2023 00:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy
for buffer objects. This is flaky because the PAT index which really
controls the caching behavior in PTE has far more
On 21/04/2023 12:45, Emil Velikov wrote:
On Fri, 21 Apr 2023 at 12:23, Tvrtko Ursulin
wrote:
On 21/04/2023 11:26, Emil Velikov wrote:
On Wed, 12 Apr 2023 at 23:43, Rob Clark wrote:
+/**
+ * enum drm_gem_object_status - bitmask of object state for fdinfo reporting
On 20/04/2023 00:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 21/04/2023 11:26, Emil Velikov wrote:
On Wed, 12 Apr 2023 at 23:43, Rob Clark wrote:
+/**
+ * enum drm_gem_object_status - bitmask of object state for fdinfo reporting
+ * @DRM_GEM_OBJECT_RESIDENT: object is resident in memory (ie. not unpinned)
+ * @DRM_GEM_OBJECT_PURGEABLE: object
On 20/04/2023 00:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 20/04/2023 00:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy for
buffer objects. This is flaky because the PAT index, which really controls
the caching behavior in the PTE, has far more levels than what's defined in
the enum.
On 19/04/2023 15:32, Rob Clark wrote:
On Wed, Apr 19, 2023 at 6:16 AM Tvrtko Ursulin
wrote:
On 18/04/2023 18:18, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
For drivers who only wish to show one memory region called 'system',
and only
On 19/04/2023 15:38, Rob Clark wrote:
On Wed, Apr 19, 2023 at 7:06 AM Tvrtko Ursulin
wrote:
On 18/04/2023 17:08, Rob Clark wrote:
On Tue, Apr 18, 2023 at 7:58 AM Tvrtko Ursulin
wrote:
On 18/04/2023 15:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From
On 20/04/2023 12:39, Andi Shyti wrote:
Hi Fei,
To comply with the design that buffer objects shall have an immutable
cache setting throughout their life cycle, the {set, get}_caching ioctls
are no longer supported from MTL onward. With that change, caching
policy can only be set at object creation
On 20/04/2023 11:13, Andrzej Hajda wrote:
On 20.04.2023 01:00, fei.y...@intel.com wrote:
From: Fei Yang
Currently the KMD is using enum i915_cache_level to set caching policy
for
buffer objects. This is flaky because the PAT index which really controls
the caching behavior in PTE has far
On 19/04/2023 23:10, Dixit, Ashutosh wrote:
On Wed, 19 Apr 2023 06:21:27 -0700, Tvrtko Ursulin wrote:
Hi Tvrtko,
On 10/04/2023 23:35, Ashutosh Dixit wrote:
Instead of erroring out when GuC reset is in progress, block waiting for
GuC reset to complete which is a more reasonable uapi
On 18/04/2023 17:08, Rob Clark wrote:
On Tue, Apr 18, 2023 at 7:58 AM Tvrtko Ursulin
wrote:
On 18/04/2023 15:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Show how more driver specific set of memory stats could be shown,
more
On 18/04/2023 15:56, Rob Clark wrote:
On Tue, Apr 18, 2023 at 1:53 AM Tvrtko Ursulin
wrote:
On 17/04/2023 21:12, Rob Clark wrote:
From: Rob Clark
Normally this would be the same information that can be obtained in
other ways. But in some cases the process opening the drm fd is merely
On 10/04/2023 23:35, Ashutosh Dixit wrote:
Instead of erroring out when GuC reset is in progress, block waiting for
GuC reset to complete which is a more reasonable uapi behavior.
v2: Avoid race between wake_up_all and waiting for wakeup (Rodrigo)
Signed-off-by: Ashutosh Dixit
---
On 18/04/2023 18:18, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
For drivers who only wish to show one memory region called 'system',
and only account the GEM buffer object handles under it.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu
On 18/04/2023 17:13, Rob Clark wrote:
On Tue, Apr 18, 2023 at 7:46 AM Tvrtko Ursulin
wrote:
On 18/04/2023 15:36, Rob Clark wrote:
On Tue, Apr 18, 2023 at 7:19 AM Tvrtko Ursulin
wrote:
On 18/04/2023 14:49, Rob Clark wrote:
On Tue, Apr 18, 2023 at 2:00 AM Tvrtko Ursulin
wrote:
On 17
On 18/04/2023 15:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Show how more driver specific set of memory stats could be shown,
more specifically where object can reside in multiple regions, showing all
the supported stats, and where
On 18/04/2023 15:36, Rob Clark wrote:
On Tue, Apr 18, 2023 at 7:19 AM Tvrtko Ursulin
wrote:
On 18/04/2023 14:49, Rob Clark wrote:
On Tue, Apr 18, 2023 at 2:00 AM Tvrtko Ursulin
wrote:
On 17/04/2023 20:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From
On 18/04/2023 14:49, Rob Clark wrote:
On Tue, Apr 18, 2023 at 2:00 AM Tvrtko Ursulin
wrote:
On 17/04/2023 20:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Add support to dump GEM stats to fdinfo.
Signed-off-by: Tvrtko Ursulin
On 17/04/2023 17:20, Christian König wrote:
Am 17.04.23 um 17:56 schrieb Tvrtko Ursulin:
From: Tvrtko Ursulin
Add support to dump GEM stats to fdinfo.
Signed-off-by: Tvrtko Ursulin
---
Documentation/gpu/drm-usage-stats.rst | 12 +++
drivers/gpu/drm/drm_file.c | 52
On 17/04/2023 20:39, Rob Clark wrote:
On Mon, Apr 17, 2023 at 8:56 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Add support to dump GEM stats to fdinfo.
Signed-off-by: Tvrtko Ursulin
---
Documentation/gpu/drm-usage-stats.rst | 12 +++
drivers/gpu/drm/drm_file.c| 52
On 17/04/2023 21:12, Rob Clark wrote:
From: Rob Clark
Normally this would be the same information that can be obtained in
other ways. But in some cases the process opening the drm fd is merely
a sort of proxy for the actual process using the GPU. This is the case
for guest VM processes
On 17/04/2023 21:12, Rob Clark wrote:
From: Rob Clark
Make it work in terms of ctx so that it can be re-used for fdinfo.
Signed-off-by: Rob Clark
---
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 ++--
drivers/gpu/drm/msm/msm_drv.c | 2 ++
drivers/gpu/drm/msm/msm_gpu.c
reserved characters or whitespace.
+- - String excluding any above defined reserved characters or whitespace.
+- - String.
Makes sense, I think. At least I can't remember having a special reason
to word it as strictly as it was. Let's give it some time to marinate, so
for later:
Acked-by: Tvrtko
From: Tvrtko Ursulin
Use DRM helpers for implementing basic memory stats.
Signed-off-by: Tvrtko Ursulin
Cc: Rob Clark
---
drivers/gpu/drm/msm/msm_drv.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 060c7689a739
From: Tvrtko Ursulin
Show how more driver specific set of memory stats could be shown,
more specifically where object can reside in multiple regions, showing all
the supported stats, and where there is more to show than just user visible
objects.
WIP...
Signed-off-by: Tvrtko Ursulin
From: Tvrtko Ursulin
Use the common fdinfo helper for printing the basics. Remove now unused
client id allocation code.
Signed-off-by: Tvrtko Ursulin
Cc: Rob Clark
---
drivers/gpu/drm/i915/i915_driver.c | 6 +--
drivers/gpu/drm/i915/i915_drm_client.c | 65
From: Tvrtko Ursulin
For drivers who only wish to show one memory region called 'system',
and only account the GEM buffer object handles under it.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/drm_file.c | 45 ++
include/drm/drm_file.h | 6 +
2
From: Tvrtko Ursulin
Add support to dump GEM stats to fdinfo.
Signed-off-by: Tvrtko Ursulin
---
Documentation/gpu/drm-usage-stats.rst | 12 +++
drivers/gpu/drm/drm_file.c| 52 +++
include/drm/drm_drv.h | 7
include/drm/drm_file.h
From: Rob Clark
Handle a bit of the boiler-plate in a single case, and make it easier to
add some core tracked stats.
v2: Update drm-usage-stats.rst, 64b client-id, rename drm_show_fdinfo
Reviewed-by: Daniel Vetter
Signed-off-by: Rob Clark
---
Documentation/gpu/drm-usage-stats.rst | 10
From: Tvrtko Ursulin
As discussed in Rob's thread, here is a slightly alternative idea on what to
expose and how.
DRM core is still defining a list of common memory categories but it is now up
to drivers to fill in the data and opt into the feature.
There is also no aggregated category
On 17/04/2023 14:42, Rob Clark wrote:
On Mon, Apr 17, 2023 at 4:10 AM Tvrtko Ursulin
wrote:
On 16/04/2023 08:48, Daniel Vetter wrote:
On Fri, Apr 14, 2023 at 06:40:27AM -0700, Rob Clark wrote:
On Fri, Apr 14, 2023 at 1:57 AM Tvrtko Ursulin
wrote:
On 13/04/2023 21:05, Daniel Vetter
On 14/04/2023 11:45, Zhao Liu wrote:
Hi Tvrtko,
On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
[snip]
[snip]
However I am unsure if disabling pagefaulting is needed or not. Thomas,
Matt, being the last to touch this area, perhaps you could have a look?
Because I notice
On 16/04/2023 08:48, Daniel Vetter wrote:
On Fri, Apr 14, 2023 at 06:40:27AM -0700, Rob Clark wrote:
On Fri, Apr 14, 2023 at 1:57 AM Tvrtko Ursulin
wrote:
On 13/04/2023 21:05, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 05:40:21PM +0100, Tvrtko Ursulin wrote:
On 13/04/2023 14:27
On 14/04/2023 10:07, Daniel Vetter wrote:
On Fri, 14 Apr 2023 at 10:57, Tvrtko Ursulin
wrote:
On 13/04/2023 21:05, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 05:40:21PM +0100, Tvrtko Ursulin wrote:
On 13/04/2023 14:27, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 01:58:34PM +0100
On 13/04/2023 21:05, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 05:40:21PM +0100, Tvrtko Ursulin wrote:
On 13/04/2023 14:27, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 01:58:34PM +0100, Tvrtko Ursulin wrote:
On 12/04/2023 20:18, Daniel Vetter wrote:
On Wed, Apr 12, 2023 at 11:42:07AM
On 13/04/2023 14:27, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 01:58:34PM +0100, Tvrtko Ursulin wrote:
On 12/04/2023 20:18, Daniel Vetter wrote:
On Wed, Apr 12, 2023 at 11:42:07AM -0700, Rob Clark wrote:
On Wed, Apr 12, 2023 at 11:17 AM Daniel Vetter wrote:
On Wed, Apr 12, 2023 at 10
On 13/04/2023 14:56, Andi Shyti wrote:
Hi Tvrtko,
(I forgot to CC Daniele)
On Thu, Apr 13, 2023 at 11:41:28AM +0100, Tvrtko Ursulin wrote:
On 13/04/2023 10:20, Andi Shyti wrote:
From: Paulo Zanoni
In multitile systems IRQ need to be reset and enabled per GT.
Although in MTL the GUnit
On 12/04/2023 23:42, Rob Clark wrote:
From: Rob Clark
There is more do to here to remove my client->id fully (would now be
dead code) so maybe easiest if you drop this patch and I do it after you
land this and it propagates to our branches? I'd like to avoid pain with
conflicts if
On 13/04/2023 09:46, Daniel Vetter wrote:
On Thu, Apr 13, 2023 at 10:07:11AM +0200, Christian König wrote:
Am 13.04.23 um 00:42 schrieb Rob Clark:
From: Rob Clark
Handle a bit of the boiler-plate in a single case, and make it easier to
add some core tracked stats.
v2: Update