From: Luben Tuikov
Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW.
This mirrors DRM_SCHED_PRIORITY_HIGH and gives the list of DRM scheduler priorities
in ascending order:
DRM_SCHED_PRIORITY_LOW,
DRM_SCHED_PRIORITY_NORMAL,
DRM_SCHED_PRIORITY_HIGH,
DRM_SCHED_PRIORITY_KERNEL.
Cc: Rob
Hi Dave and Daniel,
Here goes another pull-request towards 6.8.
We are likely going to send another one in 2 weeks,
but I'd like to get this in right now so we can
get a clean drm-xe-next on top of drm-next for our
first Xe pull request.
Thanks,
Rodrigo.
drm-intel-next-2023-12-07:
- Improve
Nothing else to be done on this front from the Xe perspective.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
index cfff8a59a876
in
a community consensus.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 64 ++--
1 file changed, 32 insertions(+), 32 deletions(-)
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
index 87dd620aea59..cfff8a59a876 100644
Current drm-xe-next doesn't have any drm/scheduler patch that is not
already accepted in drm-misc-next. This completes the goal, with
consensus on how the drm/scheduler fits the FW scheduling and on
the relationship between drm_gpu_scheduler and drm_sched_entity.
Signed-off-by: Rodrigo Vivi
of 2 separate pull requests, one right after the other.
We're looking forward to moving our work on Xe to the mainline and continuing
to evolve drm together towards a better future.
Matthew Brost (1):
drm/doc/rfc: Mark long running workload as complete.
Rodrigo Vivi (4):
drm/doc/rfc: Mark
As already indicated in this block, the consensus was already
reached and documented as:
The ASYNC VM_BIND document
However, this item was not moved to the completed section.
Let's move it and clean up the WIP block.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 24
that to the 'Completed' section and revisit the long-running
solution as a community after Xe is integrated in DRM.
Signed-off-by: Matthew Brost
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 27 ---
1 file changed, 12 insertions(+), 15 deletions(-)
diff
On Wed, Nov 29, 2023 at 04:20:13PM -0800, Alan Previn wrote:
> If we are at the end of suspend or very early in resume,
> it's possible an async fence signal (via rcu_call) is triggered
> to free_engines, which could lead us to the execution of
> the context destruction worker (after a prior worker
On Tue, Nov 28, 2023 at 04:51:25PM +0100, Thomas Hellström wrote:
> On Mon, 2023-11-27 at 14:36 -0500, Rodrigo Vivi wrote:
> > On Tue, Nov 21, 2023 at 11:40:46AM +0100, Thomas Hellström wrote:
> > > Add the first version of the VM_BIND locking document which is
> > > in
On Tue, Nov 14, 2023 at 07:52:29AM -0800, Alan Previn wrote:
> If we are at the end of suspend or very early in resume,
> it's possible an async fence signal (via rcu_call) is triggered
> to free_engines, which could lead us to the execution of
> the context destruction worker (after a prior worker
> …after the kernel detects the held reference and
> prints a message to abort suspending, instead of hanging
> in the kernel forever, which then requires a serial connection
> or a ramoops dump to debug further.
>
> Signed-off-by: Alan Previn
> Reviewed-by: Rodrigo Vivi
> Tested
On Wed, Nov 22, 2023 at 12:30:03PM -0800, Alan Previn wrote:
> Add missing tag for "Wa_14019159160 - Case 2" (for existing
> PXP code that ensures the run-alone mode bit is set to allow
> PXP decryption).
>
> v2: - Fix WA id number (John Harrison).
> - Improve comments and code to be specific
>
> …functions, eviction and for userptr gpu-vmas. The intention is to use the
> same nomenclature as the drm-vm-bind-async.rst.
>
> v2:
> - s/gvm/gpu_vm/g (Rodrigo Vivi)
> - Clarify the userptr seqlock with a pointer to mm/mmu_notifier.c
> (Rodrigo Vivi)
> - Adjust commit message accordingly.
On Fri, Nov 17, 2023 at 06:21:07PM +0200, Ville Syrjälä wrote:
> On Fri, Nov 17, 2023 at 05:09:27PM +0200, Imre Deak wrote:
> > The current way of calculating the pbn_div value, the link BW per each
> > MTP slot, worked only for DP 1.4 link rates. Fix things up for UHBR
> > rates calculating with
On Sun, Nov 05, 2023 at 05:27:03PM +, Paz Zcharya wrote:
> Fix the value of variable `phys_base` to be the relative offset in
> stolen memory, and not the absolute offset of the GSM.
To me it looks like the other way around: phys_base is the physical
base address of the framebuffer. Setting
On Tue, Nov 14, 2023 at 05:27:18PM +, Tvrtko Ursulin wrote:
>
> On 13/11/2023 17:57, Teres Alexis, Alan Previn wrote:
> > On Wed, 2023-10-25 at 13:58 +0100, Tvrtko Ursulin wrote:
> > > On 04/10/2023 18:59, Teres Alexis, Alan Previn wrote:
> > > > On Thu, 2023-09-28 at 13:46 +0100, Tvrtko
On Mon, Nov 13, 2023 at 11:36:13AM +0800, heminhong wrote:
> Currently, the dewake_scanline variable is defined as unsigned int,
> a type that is always greater than or equal to 0.
> When the _intel_dsb_commit function is called by intel_dsb_commit,
> the dewake_scanline
(cherry picked from commit 56e449603f0ac580700621a356d35d5716a62ce5)
Signed-off-by: Rodrigo Vivi
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 4 +-
drivers/gpu/drm/etnaviv/etnaviv_sched.c| 1 +
drivers/gpu/drm/lima/lima_sched.c | 4 +-
drivers/gpu/drm/msm/msm_rin
On Wed, Nov 01, 2023 at 02:44:46PM -0700, Zhanjun Dong wrote:
> A gt-wedged state could be triggered by a missing GuC firmware file, HW not
> working, etc. Once triggered, it means all GT usage is dead; therefore we
> can't enable PXP under this fatal error condition.
>
> v2: Updated commit message.
>
On Fri, Oct 27, 2023 at 02:18:14PM -0700, john.c.harri...@intel.com wrote:
> From: John Harrison
>
> These w/a's can have significant performance implications for any
> workload which uses both RCS and CCS. On the other hand, the hang
> itself is only seen in one or two very specific workloads.
Hi Dave and Daniel,
Here goes drm-intel-fixes-2023-10-26:
- Determine context valid in OA reports (Umesh)
- Hold GT forcewake during steering operations (Matt Roper)
- Check if PMU is closed before stopping event (Umesh)
Thanks,
Rodrigo.
The following changes since commit
Hi Dave and Daniel,
Here goes drm-intel-fixes-2023-10-19:
- Fix display issue that was blocking S0ix (Khaled)
- Retry gtt fault when out of fence registers (Ville)
Thanks,
Rodrigo.
The following changes since commit 58720809f52779dc0f08e53e54b014209d13eebb:
Linux 6.6-rc6 (2023-10-15
Hi Dave and Daniel,
This is our last pull request towards 6.7.
I'm sending this on behalf of Jani, who was covering this round.
The main reason for this extra PR is to ensure that we get MTL
force_probe removed on 6.7. The platform has a good green picture
in our BAT CI currently and is stable.
On Mon, Oct 16, 2023 at 09:02:38AM +0100, Tvrtko Ursulin wrote:
>
> On 13/10/2023 21:51, Rodrigo Vivi wrote:
> > On Thu, Sep 28, 2023 at 01:48:34PM +0100, Tvrtko Ursulin wrote:
> > >
> > > On 27/09/2023 20:34, Belgaumkar, Vinay wrote:
> > > >
> >
pi?
> > We have asked Carl (cc'd) to post a patch for the same.
>
> Ack.
I'm glad to see that we will have the end-to-end flow of the high-level API.
>
> > > > Cc: Rodrigo Vivi
> > > > Signed-off-by: Vinay Belgaumkar
> > > > ---
On Sun, Oct 08, 2023 at 02:49:40PM -0700, Randy Dunlap wrote:
> Correct typo of "its".
> Add commas for clarity.
> Capitalize L3.
>
> Signed-off-by: Randy Dunlap
> Cc: Jani Nikula
> Cc: Joonas Lahtinen
> Cc: Rodrigo Vivi
> Cc: Tvrtko Ursulin
>
Hi Dave and Daniel,
Here goes drm-intel-fixes-2023-10-05:
- Fix for OpenGL CTS regression on Compute Shaders (Nirmoy)
- Fix for default engines initialization (Mathias)
- Fix TLB invalidation for Multi-GT devices (Chris)
Thanks,
Rodrigo.
The following changes since commit
On Wed, Oct 04, 2023 at 03:54:59PM +0200, Nirmoy Das wrote:
> Hi Rodrigo,
>
> On 10/4/2023 2:44 PM, Rodrigo Vivi wrote:
> > On Wed, Oct 04, 2023 at 02:04:07PM +0200, Nirmoy Das wrote:
> > > Take the mcr lock only when driver needs to write into a mcr based
On Wed, Oct 04, 2023 at 02:04:07PM +0200, Nirmoy Das wrote:
> Take the MCR lock only when the driver needs to write into MCR-based
> TLB registers.
>
> To prevent GT reset interference, employ gt->reset.mutex instead, since
> intel_gt_mcr_multicast_write relies on gt->uncore->lock not being
Hi Dave and Daniel,
Here goes drm-intel-fixes-2023-09-28:
- Fix a panic regression on gen8_ggtt_insert_entries (Matthew Wilcox)
- Fix load issue due to reservation address in ggtt_reserve_guc_top (Javier Pello)
- Fix a possible deadlock with guc busyness worker (Umesh)
Thanks,
Rodrigo.
The
Hi Dave and Daniel,
Here goes drm-intel-fixes-2023-09-21:
- Prevent error pointer dereference (Dan Carpenter)
- Fix PMU busyness values when using GuC mode (Umesh)
Thanks,
Rodrigo.
The following changes since commit ce9ecca0238b140b88f43859b211c9fdfd8e5b70:
Linux 6.6-rc2 (2023-09-17
> Changes in v2:
> - drop the `... - 1` (thanks Kees)
> - Link to v1:
> https://lore.kernel.org/r/20230914-strncpy-drivers-gpu-drm-i915-gem-selftests-mock_context-c-v1-1-c3f92df94...@google.com
> ---
Reviewed-by: Rodrigo Vivi
and pushing it right now to drm-intel-gt-next
> drivers/gpu/drm/i9
ab.freedesktop.org/drm/xe/kernel/-/tree/drm-xe-next
> [2] https://patchwork.freedesktop.org/series/112188/
>
> Cc: David Airlie
> Cc: Daniel Vetter
> Cc: Joonas Lahtinen
> Cc: Rodrigo Vivi
> Cc: Tvrtko Ursulin
> Cc: Lucas De Marchi
Yeap, let's for sure get input
Hi Dave and Daniel,
Only a fix for blank-screen regression on Chromebooks,
targeting stable 6.5.
Here goes drm-intel-fixes-2023-09-14:
- Only check eDP HPD when AUX CH is shared. (Ville)
Thanks,
Rodrigo.
The following changes since commit 0bb80ecc33a8fb5a682236443c1e740d5c917d1d:
Linux
From: Matthew Brost
No DRM scheduler changes required, drivers just return NULL in run_job
vfunc.
Signed-off-by: Matthew Brost
---
Christian, Alex, Danilo, Lina, and others, I'd like to kindly ask for
your attention and probably an ack from you here.
Based on [1] and other discussions that we had
On Thu, Sep 07, 2023 at 02:58:08PM +0200, Andi Shyti wrote:
> From: Tvrtko Ursulin
>
> Walk all GTs when doing the respective bits of drop_caches work.
>
> Signed-off-by: Tvrtko Ursulin
> Signed-off-by: Andi Shyti
Reviewed-by: Rodrigo Vivi
> ---
> Hi,
>
>
On Mon, Sep 04, 2023 at 11:39:55PM +0200, Danilo Krummrich wrote:
> On 8/31/23 21:17, Rodrigo Vivi wrote:
> > On Tue, Aug 29, 2023 at 12:30:04PM -0400, Rodrigo Vivi wrote:
> > > Nouveau has landed the GPU VA helpers, support and documentation
> > > already and Xe is alr
On Mon, Sep 04, 2023 at 11:32:30PM +0200, Danilo Krummrich wrote:
> Hi Rodrigo,
>
> On 8/31/23 21:10, Rodrigo Vivi wrote:
> > On Tue, Aug 29, 2023 at 12:30:03PM -0400, Rodrigo Vivi wrote:
> > > The consensus is for individual drivers VM_BIND uapis with
> > > t
On Mon, Sep 04, 2023 at 08:32:40AM +0200, Andi Shyti wrote:
> Hi Jim,
>
> On Sun, Sep 03, 2023 at 12:46:00PM -0600, Jim Cromie wrote:
> > By at least strong convention, a print-buffer's trailing newline says
> > "message complete, send it". The exception (no TNL, followed by a call
> > to
> > …been finalized and all
> > required firmware blobs are available. Recent CI results have also
> > been healthy, so we're ready to drop the force_probe requirement and
> > enable the platform by default.
> >
> > Cc: Rodrigo Vivi
> > Cc: Tvrtko Ursulin
>
On Fri, Sep 01, 2023 at 02:38:11PM -0400, Rodrigo Vivi wrote:
> On Wed, Aug 30, 2023 at 05:41:37PM -0400, Lyude Paul wrote:
> > Other then the name typo (s/Pual/Paul):
> >
> > Signed-off-by: Lyude Paul (just since I co-authored
> > things~)
>
>
On Thu, Aug 24, 2023 at 04:50:16PM -0400, Gil Dekel wrote:
> Instead of silently giving up when all link-training fallback values are
> exhausted, this patch modifies the fallback's failure branch to reduce
> both max_link_lane_count and max_link_rate to zero (0) and continues to
> emit uevents
On Thu, Aug 24, 2023 at 04:50:20PM -0400, Gil Dekel wrote:
> Before sending a uevent to userspace in order to trigger a corrective
> modeset, we change the failing connector's link-status to BAD. However,
> the downstream MST branch ports are left in their original GOOD state.
>
> This patch
On Wed, Aug 30, 2023 at 05:41:37PM -0400, Lyude Paul wrote:
> Other then the name typo (s/Pual/Paul):
>
> Signed-off-by: Lyude Paul (just since I co-authored
> things~)
I believe having the Co-developed-by: in the patches that you helped
out would be nice.
> Reviewed-by: Lyude Paul
>
> I
Hi Dave and Daniel,
Only a single patch towards -rc1.
I noticed that you already sent this week's PR, but sending
this just in case. Otherwise I believe it could wait for the
regular fixes cycle.
Here goes drm-intel-next-fixes-2023-08-31:
- Mark requests for GuC virtual engines to avoid
On Tue, Aug 29, 2023 at 12:30:04PM -0400, Rodrigo Vivi wrote:
> Nouveau has landed the GPU VA helpers, support and documentation
> already and Xe is already using the upstream GPU VA.
Danilo, although this is more on the Xe side and I wouldn't ask you
to review our code entirely, I'd like
On Tue, Aug 29, 2023 at 12:30:03PM -0400, Rodrigo Vivi wrote:
> The consensus is for individual drivers VM_BIND uapis with
> the GPUVA helpers that are already implemented and merged
> upstream.
>
> The merged GPUVA documentation also establishes some overall
> rules for the lock
On Thu, Aug 31, 2023 at 10:31:07AM +0200, Daniel Vetter wrote:
> On Tue, Aug 29, 2023 at 12:30:01PM -0400, Rodrigo Vivi wrote:
> > Also the uapi should be reviewed and scrutinized before xe
> > is accepted upstream and we shouldn't cause regression.
> >
> > Link:
>
Nouveau has already landed the GPU VA helpers, support and documentation,
and Xe is already using the upstream GPU VA.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git
Also, the uapi should be reviewed and scrutinized before xe
is accepted upstream, and we shouldn't cause regressions.
Link:
https://lore.kernel.org/all/20230630100059.122881-1-thomas.hellst...@linux.intel.com
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 6 --
1 file changed
The consensus is for individual drivers VM_BIND uapis with
the GPUVA helpers that are already implemented and merged
upstream.
The merged GPUVA documentation also establishes some overall
rules for the locking to be followed by the drivers.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
index 3d2181bf3dad..bf60c5c82d0e 100644
--- a/Documentation/gpu/rfc/xe.rst
+++ b
Hi Dave and Daniel,
And this is our fixes targeting 6.5 (rc8?).
I'm again covering for Tvrtko at this round.
Please also notice that we have the drm
patches fixing the HPD polling that I had mentioned
in our next-fixes.
One is the fix itself, and the other is a dependency
to add the
Hi Dave and Daniel,
Here goes our next-fixes targeting 6.6-rc1.
Please notice that we have 2 drm level patches there,
one to fix the display HPD polling and one dependency
introducing a helper to reschedule the poll work.
drm-intel-next-fixes-2023-08-24:
- Fix TLB invalidation (Alan)
- Fix
On Fri, Aug 18, 2023 at 05:06:42PM -0300, André Almeida wrote:
> Create a section that specifies how to deal with DRM device resets for
> kernel and userspace drivers.
>
> Signed-off-by: André Almeida
>
> ---
>
> v7 changes:
> - s/application/graphical API context/ in the robustness part
On Wed, Aug 23, 2023 at 11:41:19AM -0400, Alex Deucher wrote:
> On Wed, Aug 23, 2023 at 11:26 AM Matthew Brost
> wrote:
> >
> > On Wed, Aug 23, 2023 at 09:10:51AM +0200, Christian König wrote:
> > > Am 23.08.23 um 05:27 schrieb Matthew Brost:
> > > > [SNIP]
> > > > > That is exactly what I want
From: Matthew Brost
No DRM scheduler changes required, drivers just return NULL in run_job
vfunc.
Signed-off-by: Matthew Brost
---
Documentation/gpu/rfc/xe.rst | 27 ---
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/Documentation/gpu/rfc/xe.rst
Nouveau has already landed the GPU VA helpers, support and documentation,
and Xe is already aligned with that.
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/Documentation/gpu
Hi Dave and Daniel,
I'm covering for Tvrtko on this week's fixes flow.
These 3 patches were queued since last week, but I had hold
because I had some doubts about the CI results.
I have confirmed those issues were not related to these 3
patches, so, here they are.
drm-intel-fixes-2023-08-17:
-
On Mon, Aug 14, 2023 at 06:12:09PM -0700, Alan Previn wrote:
> If we are at the end of suspend or very early in resume,
> it's possible an async fence signal could lead us to the
> execution of the context destruction worker (after the
> prior worker flush).
>
> Even if checking that the CT is
On Mon, Aug 14, 2023 at 06:12:08PM -0700, Alan Previn wrote:
> When suspending, flush the context-guc-id
> deregistration worker at the final stages of
> intel_gt_suspend_late when we finally call gt_sanitize
> that eventually leads down to __uc_sanitize so that
> the deregistration worker doesn't
> …struct intel_wakeref *wf,
>	"wakeref.work", &wf->work, 0);
> }
>
Please add documentation for this function, making sure you include the following
mentions:
/**
[snip]
* @timeout_ms: Timeout in ms; 0 means never time out.
*
* Returns 0 on success, -ETIMEDO
(1):
drm/i915/display: pre-initialize some values in probe_gmdid_display()
Rodrigo Vivi (1):
Merge drm/drm-next into drm-intel-next
drivers/gpu/drm/i915/display/icl_dsi.c | 5 +-
drivers/gpu/drm/i915/display/intel_cdclk.c | 14 ++--
drivers/gpu/drm/i915/display
On Wed, Aug 02, 2023 at 04:35:01PM -0700, Alan Previn wrote:
> When suspending, add a timeout when calling
> intel_gt_pm_wait_for_idle else if we have a lost
> G2H event that holds a wakeref (which would be
> indicating of a bug elsewhere in the driver), we
> get to complete the suspend-resume
On Wed, Aug 02, 2023 at 04:34:59PM -0700, Alan Previn wrote:
> Suspend is not like reset, it can unroll, so we have to properly
> flush pending context-guc-id deregistrations to complete before
> we return from suspend calls.
But if it 'unrolls', the execution should just continue, no?!
In other
On Fri, Jun 30, 2023 at 06:44:52PM +0200, Thomas Hellström wrote:
> Add the first version of the VM_BIND locking document which is
> intended to be part of the xe driver upstreaming agreement.
>
> The document describes and discuss the locking used during exec-
> functions, evicton and for
> > +The restart state may, for example, be the number of successfully
> > +completed operations.
> > +
> > +Easiest for UMD would of course be if KMD did a full unwind on error
> > +so that no error state needs to be saved.
>
> But does KMD do it? As a UMD person, what should I expect?
it is an open question. I believe we should rewind all the operations
in the same ioctl. Possible? Easy? I don't know, but it would be good
to have UMD input here.
Should KMD rewind everything that succeeded before the error? Or
have the cookie idea and block all the further operations on that
vm unless the cookie information is valid?
>
>
> > diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
> > index 2516fe141db6..0f062e1346d2 100644
> > --- a/Documentation/gpu/rfc/xe.rst
> > +++ b/Documentation/gpu/rfc/xe.rst
> > @@ -138,8 +138,8 @@ memory fences. Ideally with helper support so people
> > don't get it wrong in all
> > possible ways.
> >
> > As a key measurable result, the benefits of ASYNC VM_BIND and a discussion
> > of
> > -various flavors, error handling and a sample API should be documented here
> > or in
> > -a separate document pointed to by this document.
> > +various flavors, error handling and sample API suggestions are documented
> > in
> > +Documentation/gpu/drm-vm-bind-async.rst
> >
> > Userptr integration and vm_bind
> > ---
>
While writing these answers I had to read everything again.
I agree with Danilo on ensuring we explicitly add the 'virtual'
to the gpu_vm description. And with that:
Reviewed-by: Rodrigo Vivi
Hi Dave and Daniel,
Here goes our first pull request of this round.
drm-intel-next-2023-08-03:
- Removing unused declarations (Arnd, Gustavo)
- ICL+ DSI modeset sequence fixes (Ville)
- Improvements on HDCP (Suraj)
- Fixes and clean up on MTL Display (Mika Kahola, Lee, RK, Nirmoy, Chaitanya)
-
> …restore (disable, in this case) efficient
> freq flag before setting the soft min frequency.
>
> v2: Bring the min freq down to RPn when we disable efficient freq (Rodrigo)
> Also made the change to set the min softlimit to RPn at init. Otherwise, we
> were storing
On Fri, Jul 21, 2023 at 01:44:34PM -0700, Belgaumkar, Vinay wrote:
>
> On 7/21/2023 1:41 PM, Rodrigo Vivi wrote:
> > On Fri, Jul 21, 2023 at 11:03:49AM -0700, Vinay Belgaumkar wrote:
> > > This should be done before the soft min/max frequencies are restored.
> …restore (disable, in this case) efficient
> freq flag before setting the soft min frequency.
That's strange. So GuC is returning the RPe when we request the min freq
during the soft config?
We could alternatively change the soft config to actually get the min
and not be tricked by this.
But also
On Thu, Jun 29, 2023 at 02:10:58PM -0700, Welty, Brian wrote:
>
> Hi Christian / Thomas,
>
> Wanted to ask if you have explored or thought about adding support in TTM
> such that a ttm_bo could have more than one underlying backing store segment
> (that is, to have a tree of ttm_resources)?
> We
ernal fence
>* that may fail catastrophically, then we want to avoid using
> - * sempahores as they bypass the fence signaling metadata, and we
> + * semaphores as they bypass the fence signaling metadata, and we
>* lose the fence->error propagation.
>
> Signed-off-by: Tvrtko Ursulin
> Fixes: 9275277d5324 ("drm/i915: use pat_index instead of cache_level")
> Cc: Fei Yang
> Cc: Andi Shyti
> Cc: Matt Roper
Reviewed-by: Rodrigo Vivi
> ---
> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 3 ---
> 1 file cha
On Mon, May 22, 2023 at 01:16:03PM -0700, Kees Cook wrote:
> On Mon, May 22, 2023 at 03:52:28PM +, Azeem Shaikh wrote:
> > strlcpy() reads the entire source buffer first.
> > This read may exceed the destination size limit.
> > This is both inefficient and can lead to linear read
> > overflows
Brost
Cc: Thomas Hellström
Cc: Maarten Lankhorst
Cc: Lucas De Marchi
Cc: Mauro Carvalho Chehab
Signed-off-by: Rodrigo Vivi
---
Documentation/gpu/rfc/xe.rst | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
On Mon, May 22, 2023 at 12:59:27PM +0100, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin
>
> In preparation for exposing via sysfs add helpers for managing rps
> thresholds.
>
> v2:
> * Force sw and hw re-programming on threshold change.
It makes sense now.
Revi
On Mon, May 22, 2023 at 12:59:26PM +0100, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin
>
> Record the default values as preparation for exposing the sysfs controls.
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Rodrigo Vivi
Reviewed-by: Rodrigo Vivi
> ---
>
On Mon, May 22, 2023 at 12:59:25PM +0100, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin
>
> Since 36d516be867c ("drm/i915/gt: Switch to manual evaluation of RPS")
> thresholds are invariant so lets move their setting to init time.
>
> Signed-off-by: Tvrtko Ursulin
On Sat, May 20, 2023 at 02:07:51AM +0300, Dmitry Baryshkov wrote:
> On 20/05/2023 00:16, Rodrigo Vivi wrote:
> > On Fri, May 19, 2023 at 07:55:47PM +0300, Dmitry Baryshkov wrote:
> > > On 19/04/2023 18:43, Mark Yacoub wrote:
> > > > Hi all,
> > > > Th
On Fri, May 19, 2023 at 07:55:47PM +0300, Dmitry Baryshkov wrote:
> On 19/04/2023 18:43, Mark Yacoub wrote:
> > Hi all,
> > This is v10 of the HDCP patches. The patches are authored by Sean Paul.
> > I rebased and addressed the review comments in v6-v10.
> >
> > Main change in v10 is handling the
On Wed, May 17, 2023 at 01:02:03PM +0800, Cong Liu wrote:
> Be sure to properly free the allocated memory before exiting
> the live_nop_switch function.
>
> Signed-off-by: Cong Liu
> Suggested-by: Rodrigo Vivi
Pushed, thanks for the patch.
> ---
> .../gpu/d
rm/intel/-/issues/8389#note_1890428 for
> details.
>
In general we should always try to reduce the knobs, especially with a
register that doesn't work with the new platforms, with FW in control of all
these variations.
But this is a compelling argument.
Acked-by: Rodrigo Vivi
(if
On Fri, Apr 28, 2023 at 09:14:56AM +0100, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin
>
> In preparation for exposing via sysfs add helpers for managing rps
> thresholds.
>
> Signed-off-by: Tvrtko Ursulin
> ---
> drivers/gpu/drm/i915/gt/intel_rps.c | 36 +
>
On Fri, Apr 28, 2023 at 09:44:53AM +0100, Tvrtko Ursulin wrote:
>
> On 28/04/2023 09:14, Tvrtko Ursulin wrote:
> > From: Tvrtko Ursulin
> >
> > User feedback indicates significant performance gains are possible in
> > specific games with non default RPS up/down thresholds.
> >
> > Expose these
On Tue, May 16, 2023 at 12:21:05PM -0700, John Harrison wrote:
>On 5/16/2023 12:17, Belgaumkar, Vinay wrote:
>
>
>
>> On 4/18/2023 11:17 AM, [1]john.c.harri...@intel.com
On Mon, May 08, 2023 at 04:50:15PM +0800, Cong Liu wrote:
> Be sure to properly free the allocated memory before exiting
> the live_nop_switch function.
>
> Signed-off-by: Cong Liu
> ---
> drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 4 +++-
> 1 file changed, 3 insertions(+), 1
ons I am glad to discuss them.
> > Meanwhile I send original patchset with addressed remaining comments.
> >
> > To: Jani Nikula
> > To: Joonas Lahtinen
> > To: Rodrigo Vivi
> > To: Tvrtko Ursulin
> > To: David Airlie
> > To: Daniel Vetter
> &
0 ("drm/i915/dp: Compute DSC pipe config in atomic check")
> Signed-off-by: Nikita Zhandarovich
Reviewed-by: Rodrigo Vivi
and pushed.
Thanks for the patch and sorry for the delay.
> ---
> drivers/gpu/drm/i915/display/intel_dp.c | 5 +
> 1 file changed, 5 insertions(+)
On Tue, May 02, 2023 at 07:57:02AM +, Matthew Brost wrote:
> On Thu, Apr 27, 2023 at 10:28:13AM +0200, Thomas Hellström wrote:
> >
> > On 4/26/23 22:57, Rodrigo Vivi wrote:
> > > The goal is to use devcoredump infrastructure to report error states
> >
On Tue, May 02, 2023 at 10:55:14AM +0300, Jani Nikula wrote:
> On Wed, 26 Apr 2023, Rodrigo Vivi wrote:
> > + drm_info(>drm, "Check your
> > /sys/class/drm/card/device/devcoredump/data\n");
>
> Drive-by comment, could use %d and xe->drm.primary->
On Tue, May 02, 2023 at 03:40:50PM +, Matthew Brost wrote:
> On Wed, Apr 26, 2023 at 04:57:02PM -0400, Rodrigo Vivi wrote:
> > Unfortunately devcoredump infrastructure does not provide and
> > interface for us to force the device removal upon the pci_remove
> &g
On Thu, Apr 27, 2023 at 11:56:22AM +1000, Dave Airlie wrote:
> On Thu, 20 Apr 2023 at 05:19, Rodrigo Vivi wrote:
> >
> > Let’s establish a merge plan for Xe, by writing down clear pre-merge goals,
> > in
> > order to avoid unnecessary delays.
>
> LGTM,