Re: [PATCH] drm: atmel-hlcdc: Support inverting the pixel clock polarity

2023-08-07 Thread Miquel Raynal
Hi Sam,

s...@ravnborg.org wrote on Mon, 7 Aug 2023 18:52:45 +0200:

> Hi Miquel,
> 
> On Mon, Aug 07, 2023 at 11:12:46AM +0200, Miquel Raynal wrote:
> > Hi Sam,
> > 
> > s...@ravnborg.org wrote on Sat, 10 Jun 2023 22:05:15 +0200:
> >   
> > > On Fri, Jun 09, 2023 at 04:48:43PM +0200, Miquel Raynal wrote:  
> > > > On the SoC host controller, the pixel clock can be:
> > > > * standard: data is launched on the rising edge
> > > > * inverted: data is launched on the falling edge
> > > > 
> > > > Some panels may need the inverted option to be used so let's support
> > > > this DRM flag.
> > > > 
> > > > Signed-off-by: Miquel Raynal 
> > > 
> > > Hi Miquel,
> > > 
> > > the patch is:
> > > Reviewed-by: Sam Ravnborg 
> > > 
> > > I hope someone else can pick it up and apply it to drm-misc as
> > > my drm-misc setup is hopelessly outdated atm.  
> > 
> > I haven't seen this patch get picked up; is your tree still
> > outdated, or can you take care of it?  
> 
> I am still hopelessly behind on stuff.

No problem.

> I copied a few people on this mail that I hope can help.

Nice, thanks a lot!

> Link to the original patch:
> https://lore.kernel.org/dri-devel/20230609144843.851327-1-miquel.ray...@bootlin.com/
> 
>   Sam

Let me know in case it's easier if I re-send it.

Thanks,
Miquèl


Re: [PATCH v4 44/48] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}

2023-08-07 Thread Qi Zheng

Hi Dave,

On 2023/8/8 10:12, Dave Chinner wrote:

On Mon, Aug 07, 2023 at 07:09:32PM +0800, Qi Zheng wrote:

Currently, we maintain two linear arrays per node per memcg, which are
shrinker_info::map and shrinker_info::nr_deferred. And we need to resize
them when the shrinker_nr_max is exceeded, that is, allocate a new array,
and then copy the old array to the new array, and finally free the old
array by RCU.

For shrinker_info::map, we do set_bit() under the RCU lock, so we may set
the value into the old map which is about to be freed. This may cause the
value set to be lost. The current solution is not to copy the old map when
resizing, but to set all the corresponding bits in the new map to 1. This
solves the data loss problem, but brings the overhead of more pointless
loops while doing memcg slab shrink.

For shrinker_info::nr_deferred, we will only modify it under the read lock
of shrinker_rwsem, so it will not run concurrently with the resizing. But
after we make memcg slab shrink lockless, there will be the same data loss
problem as shrinker_info::map, and we can't work around it like the map.

For such resizable arrays, the most straightforward idea is to change
them to an xarray, like we did for list_lru [1]. We would need to do
xa_store() in list_lru_add()-->set_shrinker_bit(), but this causes
memory allocation, and list_lru_add() doesn't accept failure. A
possible solution is to pre-allocate, but the right place for the
pre-allocation is hard to determine.


So you implemented a two level array that preallocates leaf
nodes to work around it? It's remarkably complex for what it does,


Yes, here I have implemented a two level array like the following:

+---------------+--------+--------+-----+
| shrinker_info | unit 0 | unit 1 | ... | (secondary array)
+---------------+--------+--------+-----+
                    ^
                    |
    +---------------+-----+
    | nr_deferred[] | map | (leaf array)
    +---------------+-----+
     (shrinker_info_unit)

The leaf array is never freed unless the memcg is destroyed. The
secondary array will be resized every time the shrinker id exceeds
shrinker_nr_max.


I can't help but think a radix tree using a special holder for
nr_deferred values of zero would end up being simpler...


I tried. If the shrinker uses list_lru, then we can preallocate the
xa node where the list_lru_one is pre-allocated. But for other types of
shrinkers, the location of the pre-allocation is not easy to determine
(such as deferred_split_shrinker). And we can't force all memcg-aware
shrinkers to use list_lru, so I gave up on using xarray and implemented
the above two-level array.





Therefore, this commit chooses to introduce a secondary array for
shrinker_info::{map, nr_deferred}, so that we only need to copy this
secondary array every time the size is resized. Then even if we get the
old secondary array under the RCU lock, the map and nr_deferred found
through it are still valid, so no data is lost.


I don't understand what you are trying to describe here. If we get
the old array, then don't we get either a stale nr_deferred value,
or the update we do gets lost because the next shrinker lookup will
find the new array and so the deferred value stored to the old one
is never seen again?


As shown above, the leaf array will not be freed when shrinker_info is
expanded, so the shrinker_info_unit can be indexed from both the old
and the new shrinker_info->unit[x]. So the updated nr_deferred and map
will not be lost.





[1]. 
https://lore.kernel.org/all/20220228122126.37293-13-songmuc...@bytedance.com/

Signed-off-by: Qi Zheng 
Reviewed-by: Muchun Song 
---

.

diff --git a/mm/shrinker.c b/mm/shrinker.c
index a27779ed3798..1911c06b8af5 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -12,15 +12,50 @@ DECLARE_RWSEM(shrinker_rwsem);
  #ifdef CONFIG_MEMCG
  static int shrinker_nr_max;
  
-/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
-static inline int shrinker_map_size(int nr_items)
+static inline int shrinker_unit_size(int nr_items)
  {
-   return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+   return (DIV_ROUND_UP(nr_items, SHRINKER_UNIT_BITS) * sizeof(struct shrinker_info_unit *));
  }
  
-static inline int shrinker_defer_size(int nr_items)
+static inline void shrinker_unit_free(struct shrinker_info *info, int start)
  {
-   return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+   struct shrinker_info_unit **unit;
+   int nr, i;
+
+   if (!info)
+   return;
+
+   unit = info->unit;
+   nr = DIV_ROUND_UP(info->map_nr_max, SHRINKER_UNIT_BITS);
+
+   for (i = start; i < nr; i++) {
+   if (!unit[i])
+   break;
+
+   kvfree(unit[i]);
+   unit[i] = NULL;
+   }
+}
+
+static inline int shrinker_unit_alloc(struct shrinker_info *new,
+  struct shrinker_info *old, int 

Re: [PATCH drm-misc-next v9 06/11] drm/nouveau: fence: separate fence alloc and emit

2023-08-07 Thread Christian König




Am 07.08.23 um 20:54 schrieb Danilo Krummrich:

Hi Christian,

On 8/7/23 20:07, Christian König wrote:

Am 03.08.23 um 18:52 schrieb Danilo Krummrich:

The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU schedulers run_job() 
callback) we

need to separate fence allocation and fence emitting.


At least from the description, that sounds like it might be illegal.
Daniel, can you take a look as well?


What exactly are you doing here?


I'm basically doing exactly the same as amdgpu_fence_emit() does in 
amdgpu_ib_schedule() called by amdgpu_job_run().


The difference - and this is what this patch is for - is that I 
separate the fence allocation from emitting the fence, such that the 
fence structure is allocated before the job is submitted to the GPU 
scheduler. amdgpu solves this with GFP_ATOMIC within 
amdgpu_fence_emit() to allocate the fence structure in this case.


Yeah, that use case is perfectly valid. Maybe update the commit message 
a bit to better describe that.


Something like "Separate fence allocation and emitting to avoid 
allocation within DMA fence signalling critical sections inside the DRM 
scheduler. This helps implementing the new UAPI".


Regards,
Christian.



- Danilo



Regards,
Christian.



Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/nouveau/dispnv04/crtc.c |  9 -
  drivers/gpu/drm/nouveau/nouveau_bo.c    | 52 +++--
  drivers/gpu/drm/nouveau/nouveau_chan.c  |  6 ++-
  drivers/gpu/drm/nouveau/nouveau_dmem.c  |  9 +++--
  drivers/gpu/drm/nouveau/nouveau_fence.c | 16 +++-
  drivers/gpu/drm/nouveau/nouveau_fence.h |  3 +-
  drivers/gpu/drm/nouveau/nouveau_gem.c   |  5 ++-
  7 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
index a6f2e681bde9..a34924523133 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -1122,11 +1122,18 @@ nv04_page_flip_emit(struct nouveau_channel *chan,
  PUSH_NVSQ(push, NV_SW, NV_SW_PAGE_FLIP, 0x);
  PUSH_KICK(push);
-    ret = nouveau_fence_new(chan, false, pfence);
+    ret = nouveau_fence_new(pfence);
  if (ret)
  goto fail;
+    ret = nouveau_fence_emit(*pfence, chan);
+    if (ret)
+    goto fail_fence_unref;
+
  return 0;
+
+fail_fence_unref:
+    nouveau_fence_unref(pfence);
  fail:
  spin_lock_irqsave(&dev->event_lock, flags);
  list_del(&s->head);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 057bc995f19b..e9cbbf594e6f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -820,29 +820,39 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict,
  mutex_lock(&cli->mutex);
  else
  mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
+
   ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, ctx->interruptible);
-    if (ret == 0) {
-    ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
-    if (ret == 0) {
-    ret = nouveau_fence_new(chan, false, &fence);
-    if (ret == 0) {
-    /* TODO: figure out a better solution here
- *
- * wait on the fence here explicitly as going through
- * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
- *
- * Without this the operation can timeout and we'll fallback to a
- * software copy, which might take several minutes to finish.
- */
-    nouveau_fence_wait(fence, false, false);
-    ret = ttm_bo_move_accel_cleanup(bo,
-    &fence->base,
-    evict, false,
-    new_reg);
-    nouveau_fence_unref(&fence);
-    }
-    }
+    if (ret)
+    goto out_unlock;
+
+    ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
+    if (ret)
+    goto out_unlock;
+
+    ret = nouveau_fence_new(&fence);
+    if (ret)
+    goto out_unlock;
+
+    ret = nouveau_fence_emit(fence, chan);
+    if (ret) {
+    nouveau_fence_unref(&fence);
+    goto out_unlock;
  }
+
+    /* TODO: figure out a better solution here
+ *
+ * wait on the fence here explicitly as going through
+ * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
+ *
+ * Without this the operation can timeout and we'll fallback to a
+ * software copy, which might take several minutes to finish.
+ */
+    nouveau_fence_wait(fence, false, false);
+    ret = ttm_bo_move_accel_cleanup(bo, &fence->base, evict, false,
+    new_reg);
+    nouveau_fence_unref(&fence);
+
+out_unlock:
  mutex_unlock(&cli->mutex);
  return r

RE: [Patch v2 2/3] drm/mst: Refactor the flow for payload allocation/removement

2023-08-07 Thread Lin, Wayne
[AMD Official Use Only - General]

Thanks for your time, Lyude!

Regards,
Wayne

> -Original Message-
> From: Lyude Paul 
> Sent: Tuesday, August 8, 2023 4:23 AM
> To: Lin, Wayne ; dri-devel@lists.freedesktop.org;
> amd-...@lists.freedesktop.org
> Cc: jani.nik...@intel.com; ville.syrj...@linux.intel.com; imre.d...@intel.com;
> Wentland, Harry ; Zuo, Jerry
> 
> Subject: Re: [Patch v2 2/3] drm/mst: Refactor the flow for payload
> allocation/removement
>
> Oo! This is a wonderful idea so far - keeping track of the status of
> allocations like this solves a lot of problems, especially with regards to 
> the fact
> this actually seems to make it possible for us to have much better handling of
> payload failures in drivers - especially in situations like suspend/resume. 
> The
> naming changes here are awesome too.
>
> I think this patch is good as far as I can tell review-wise! I haven't been 
> able to
> test it quite yet but I'll do it asap.
>
> On Mon, 2023-08-07 at 10:56 +0800, Wayne Lin wrote:
> > [Why]
> > Today, the allocation/deallocation steps and status are a bit unclear.
> >
> > For instance, payload->vc_start_slot = -1 stands for "the failure of
> > updating DPCD payload ID table" and can also represent as "payload is
> > not allocated yet". These two cases should be handled differently and
> > hence better to distinguish them for better understanding.
> >
> > [How]
> > Define enumeration - ALLOCATION_LOCAL, ALLOCATION_DFP and
> > ALLOCATION_REMOTE to distinguish different allocation status. Adjust
> > the code to handle different status accordingly for better
> > understanding the sequence of payload allocation and payload
> removement.
> >
> > For payload creation, the procedure should look like this:
> > DRM part 1:
> > * step 1 - update sw mst mgr variables to add a new payload
> > * step 2 - add payload at immediate DFP DPCD payload table
> >
> > Driver:
> > * Add new payload in HW and sync up with DFP by sending ACT
> >
> > DRM Part 2:
> > * Send ALLOCATE_PAYLOAD sideband message to allocate bandwidth along
> the
> >   virtual channel.
> >
> > And as for payload removement, the procedure should look like this:
> > DRM part 1:
> > * step 1 - Send ALLOCATE_PAYLOAD sideband message to release bandwidth
> >along the virtual channel
> > * step 2 - Clear payload allocation at immediate DFP DPCD payload
> > table
> >
> > Driver:
> > * Remove the payload in HW and sync up with DFP by sending ACT
> >
> > DRM part 2:
> > * update sw mst mgr variables to remove the payload
> >
> > Note that it's fine to fail when communicating with the branch device
> > connected at the immediate downstream-facing port, but updating the
> > variables of the SW mst mgr and the HW configuration should be conducted
> > anyway. That's because it's under commit_tail and we need to complete
> > the HW programming.
>
> yay!
>
> >
> > Changes since v1:
> > * Remove the set but not use variable 'old_payload' in function
> >   'nv50_msto_prepare'. Caught by kernel test robot 
> >
> > Signed-off-by: Wayne Lin 
> > ---
> >  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c |  20 ++-
> >  drivers/gpu/drm/display/drm_dp_mst_topology.c | 159 +++---
> >  drivers/gpu/drm/i915/display/intel_dp_mst.c   |  18 +-
> >  drivers/gpu/drm/nouveau/dispnv50/disp.c   |  21 +--
> >  include/drm/display/drm_dp_mst_helper.h   |  23 ++-
> >  5 files changed, 153 insertions(+), 88 deletions(-)
> >
> > diff --git
> a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > index d9a482908380..9ad509279b0a 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > @@ -219,7 +219,7 @@ static void dm_helpers_construct_old_payload(
> > /* Set correct time_slots/PBN of old payload.
> >  * other fields (delete & dsc_enabled) in
> >  * struct drm_dp_mst_atomic_payload are don't care fields
> > -* while calling drm_dp_remove_payload()
> > +* while calling drm_dp_remove_payload_part2()
> >  */
> > for (i = 0; i < current_link_table.stream_count; i++) {
> > dc_alloc =
> > @@ -262,13 +262,12 @@ bool
> > dm_helpers_dp_mst_write_payload_allocation_table(
> >
> > mst_mgr = &aconnector->mst_root->mst_mgr;
> > mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
> > -
> > -   /* It's OK for this to fail */
> > new_payload = drm_atomic_get_mst_payload_state(mst_state,
> > aconnector->mst_output_port);
> >
> > if (enable) {
> > target_payload = new_payload;
> >
> > +   /* It's OK for this to fail */
> > drm_dp_add_payload_part1(mst_mgr, mst_state,
> new_payload);
> > } else {
> > /* construct old payload by VCPI*/
> > @@ -276,7 +275,7 @@ bool
> dm_helpers_dp_mst_write_payload_allocation_table(
> > new_payload, &old_payload);
> > t

RE: [PATCH 3/3] drm/mst: adjust the function drm_dp_remove_payload_part2()

2023-08-07 Thread Lin, Wayne
[AMD Official Use Only - General]

> -Original Message-
> From: Imre Deak 
> Sent: Tuesday, August 8, 2023 12:00 AM
> To: Lin, Wayne 
> Cc: dri-devel@lists.freedesktop.org; amd-...@lists.freedesktop.org;
> ly...@redhat.com; jani.nik...@intel.com; ville.syrj...@linux.intel.com;
> Wentland, Harry ; Zuo, Jerry
> 
> Subject: Re: [PATCH 3/3] drm/mst: adjust the function
> drm_dp_remove_payload_part2()
>
> On Mon, Aug 07, 2023 at 02:43:02AM +, Lin, Wayne wrote:
> > [AMD Official Use Only - General]
> >
> > > -Original Message-
> > > From: Imre Deak 
> > > Sent: Friday, August 4, 2023 11:32 PM
> > > To: Lin, Wayne 
> > > Cc: dri-devel@lists.freedesktop.org; amd-...@lists.freedesktop.org;
> > > ly...@redhat.com; jani.nik...@intel.com;
> > > ville.syrj...@linux.intel.com; Wentland, Harry
> > > ; Zuo, Jerry 
> > > Subject: Re: [PATCH 3/3] drm/mst: adjust the function
> > > drm_dp_remove_payload_part2()
> > >
> > > On Fri, Aug 04, 2023 at 02:20:29PM +0800, Wayne Lin wrote:
> > > > [...]
> > > > diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > > > index e04f87ff755a..4270178f95f6 100644
> > > > --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > > > +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > > > @@ -3382,8 +3382,7 @@ EXPORT_SYMBOL(drm_dp_remove_payload_part1);
> > > >   * drm_dp_remove_payload_part2() - Remove an MST payload locally
> > > >   * @mgr: Manager to use.
> > > >   * @mst_state: The MST atomic state
> > > > - * @old_payload: The payload with its old state
> > > > - * @new_payload: The payload with its latest state
> > > > + * @payload: The payload with its latest state
> > > >   *
> > > >   * Updates the starting time slots of all other payloads which would
> > > >   * have been shifted towards the start of the payload ID table as a
> > > >   * result of removing a payload. Driver should call this
> > > > @@ -3392,25 +3391,36 @@ EXPORT_SYMBOL(drm_dp_remove_payload_part1);
> > > >   */
> > > >  void drm_dp_remove_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
> > > >  struct drm_dp_mst_topology_state *mst_state,
> > > > -const struct drm_dp_mst_atomic_payload *old_payload,
> > > > -struct drm_dp_mst_atomic_payload *new_payload)
> > > > +struct drm_dp_mst_atomic_payload *payload)
> > > >  {
> > > > struct drm_dp_mst_atomic_payload *pos;
> > > > +   u8 time_slots_to_remove;
> > > > +   u8 next_payload_vc_start = mgr->next_start_slot;
> > > > +
> > > > +   /* Find the current allocated time slot number of the payload */
> > > > +   list_for_each_entry(pos, &mst_state->payloads, next) {
> > > > +   if (pos != payload &&
> > > > +   pos->vc_start_slot > payload->vc_start_slot &&
> > > > +   pos->vc_start_slot < next_payload_vc_start)
> > > > +   next_payload_vc_start = pos->vc_start_slot;
> > > > +   }
> > > > +
> > > > +   time_slots_to_remove = next_payload_vc_start - payload->vc_start_slot;
> > >
> > > Imo, the intuitive way would be to pass the old payload state to
> > > this function - which already contains the required time_slots param
> > > - and refactor things instead moving vc_start_slot from the payload
> > > state to mgr suggested by Ville earlier.
> > >
> > > --Imre
> >
> > Hi Imre,
> > Thanks for your feedback!
> >
> > I understand it's functionally correct. But IMHO, it's still a bit
> > conceptually different between the time slot in old state and the time
> > slot in current payload table. My thought is the time slot at the
> > moment when we are removing the payload would be a better choice.
>
> Yes, they are different. The old state contains the time slot the payload was
> added with in a preceding commit and so the time slot value which should be
> used when removing the same payload in the current commit.
>
> The new state contains a time slot value with which the payload will be added
> in the current commit and can be different than the one in the old state if 
> the
> current commit has changed the payload size (meaning that the same atomic
> commit will first remove the payload using the time slot value in the old 
> state
> and afterwards will add back the same payload using the time slot value in the
> new state).
>
Appreciate your time, Imre!

Yes, I understand, so I'm not using the number of time slots from the new
state. I'm referring to the start slot instead, which is updated during every
allocation and removal in the current commit.

Like you said, the current commit could be a mix of allocations and removals
of payloads. My thought is that, conceptually, looking up the latest number
of time slots is a better choice than the one in the old state. It's
relatively intuitive to me since we are removing the payload from the current
payload table and wh

Re: [PATCH v2 0/6] Adds support for ConfigFS to VKMS!

2023-08-07 Thread Brandon Ross Pollack
Some of these comments have been sitting for a while. Would it be ok if 
yi...@chromium.org and I picked these up and did an iteration, so we 
could also get 
https://patchwork.kernel.org/project/dri-devel/patch/20230711013148.3155572-1-br...@chromium.org/ 
submitted? These will enable a lot of virtual multi-display testing on 
Linux! :)


On 6/24/23 07:23, Jim Shargo wrote:

Intro
=

At long last, we're back!

This patchset adds basic ConfigFS support to VKMS, allowing users to
build new DRM devices with user-defined DRM objects and object
relationships by creating, writing, and symlinking files.

Usage
=

After installing these patches, you can create a VKMS device with two
displays and a movable overlay like so (this is documented in the
patches):

   $ modprobe vkms enable_overlay=1 enable_cursor=1 enable_writeback=1
   $ mkdir -p /config/
   $ mount -t configfs none /config

   $ export DRM_PLANE_TYPE_PRIMARY=1
   $ export DRM_PLANE_TYPE_CURSOR=2
   $ export DRM_PLANE_TYPE_OVERLAY=0

   $ mkdir /config/vkms/test

   $ mkdir /config/vkms/test/planes/primary
   $ echo $DRM_PLANE_TYPE_PRIMARY > /config/vkms/test/planes/primary/type

   $ mkdir /config/vkms/test/planes/other_primary
   $ echo $DRM_PLANE_TYPE_PRIMARY > /config/vkms/test/planes/other_primary/type

   $ mkdir /config/vkms/test/planes/cursor
   $ echo $DRM_PLANE_TYPE_CURSOR > /config/vkms/test/planes/cursor/type

   $ mkdir /config/vkms/test/planes/overlay
   $ echo $DRM_PLANE_TYPE_OVERLAY > /config/vkms/test/planes/overlay/type

   $ mkdir /config/vkms/test/crtcs/crtc
   $ mkdir /config/vkms/test/crtcs/crtc_other
   $ mkdir /config/vkms/test/encoders/encoder
   $ mkdir /config/vkms/test/connectors/connector

   $ ln -s /config/vkms/test/encoders/encoder 
/config/vkms/test/connectors/connector/possible_encoders
   $ ln -s /config/vkms/test/crtcs/crtc 
/config/vkms/test/encoders/encoder/possible_crtcs/
   $ ln -s /config/vkms/test/crtcs/crtc 
/config/vkms/test/planes/primary/possible_crtcs/
   $ ln -s /config/vkms/test/crtcs/crtc 
/config/vkms/test/planes/cursor/possible_crtcs/
   $ ln -s /config/vkms/test/crtcs/crtc 
/config/vkms/test/planes/overlay/possible_crtcs/
   $ ln -s /config/vkms/test/crtcs/crtc_other 
/config/vkms/test/planes/overlay/possible_crtcs/
   $ ln -s /config/vkms/test/crtcs/crtc_other 
/config/vkms/test/planes/other_primary/possible_crtcs/

   $ echo 1 > /config/vkms/test/enabled

Changes within core VKMS


This introduces a few important changes to the overall structure of
VKMS:

   - Devices are now memory managed!
   - Support for multiple CRTCs and other objects has been added

Since v1


   - Added DRMM memory management to automatically clean up resources
   - Added a param to disable the default device
   - Renamed "cards" to "devices" to improve legibility
   - Added a lock for the configfs setup handler
   - Moved all the new docs into the relevant .c file
   - Addressed as many of s...@poorly.run as possible

Testing
===

   - New IGT tests (see
 gitlab.freedesktop.org/jshargo/igt-gpu-tools/-/merge_requests/1)
   - Existing IGT tests (excluding .*suspend.*, including .*kms_flip.*
 .*kms_writeback.* .*kms_cursor_crc.*, .*kms_plane.*)

Outro
=

I'm excited to share these changes; it's still my first kernel patch
and I've been putting a lot of love into these.

Jim Shargo (6):
   drm/vkms: Back VKMS with DRM memory management instead of static
 objects
   drm/vkms: Support multiple DRM objects (crtcs, etc.) per VKMS device
   drm/vkms: Provide platform data when creating VKMS devices
   drm/vkms: Add ConfigFS scaffolding to VKMS
   drm/vkms: Support enabling ConfigFS devices
   drm/vkms: Add a module param to enable/disable the default device

  Documentation/gpu/vkms.rst|  17 +-
  drivers/gpu/drm/Kconfig   |   1 +
  drivers/gpu/drm/vkms/Makefile |   1 +
  drivers/gpu/drm/vkms/vkms_composer.c  |  28 +-
  drivers/gpu/drm/vkms/vkms_configfs.c  | 657 ++
  drivers/gpu/drm/vkms/vkms_crtc.c  |  97 ++--
  drivers/gpu/drm/vkms/vkms_drv.c   | 208 +---
  drivers/gpu/drm/vkms/vkms_drv.h   | 166 +--
  drivers/gpu/drm/vkms/vkms_output.c| 299 ++--
  drivers/gpu/drm/vkms/vkms_plane.c |  44 +-
  drivers/gpu/drm/vkms/vkms_writeback.c |  26 +-
  11 files changed, 1312 insertions(+), 232 deletions(-)
  create mode 100644 drivers/gpu/drm/vkms/vkms_configfs.c


Re: [PATCH v4 46/48] mm: shrinker: make memcg slab shrink lockless

2023-08-07 Thread Dave Chinner
On Mon, Aug 07, 2023 at 07:09:34PM +0800, Qi Zheng wrote:
> Like global slab shrink, this commit also uses refcount+RCU method to make
> memcg slab shrink lockless.

This patch does random code cleanups amongst the actual RCU changes.
Can you please move the cleanups to a separate patch to reduce the
noise in this one?

> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index d318f5621862..fee6f62904fb 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -107,6 +107,12 @@ static struct shrinker_info 
> *shrinker_info_protected(struct mem_cgroup *memcg,
>lockdep_is_held(&shrinker_rwsem));
>  }
>  
> +static struct shrinker_info *shrinker_info_rcu(struct mem_cgroup *memcg,
> +int nid)
> +{
> + return rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
> +}

This helper doesn't add value. It doesn't tell me that
rcu_read_lock() needs to be held when it is called, for one

>  static int expand_one_shrinker_info(struct mem_cgroup *memcg, int new_size,
>   int old_size, int new_nr_max)
>  {
> @@ -198,7 +204,7 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, 
> int shrinker_id)
>   struct shrinker_info_unit *unit;
>  
>   rcu_read_lock();
> - info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
> + info = shrinker_info_rcu(memcg, nid);

... whilst the original code here was obviously correct.

>   unit = info->unit[shriner_id_to_index(shrinker_id)];
>   if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) {
>   /* Pairs with smp mb in shrink_slab() */
> @@ -211,7 +217,7 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, 
> int shrinker_id)
>  
>  static DEFINE_IDR(shrinker_idr);
>  
> -static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> +static int shrinker_memcg_alloc(struct shrinker *shrinker)

Cleanups in a separate patch.

> @@ -253,10 +258,15 @@ static long xchg_nr_deferred_memcg(int nid, struct 
> shrinker *shrinker,
>  {
>   struct shrinker_info *info;
>   struct shrinker_info_unit *unit;
> + long nr_deferred;
>  
> - info = shrinker_info_protected(memcg, nid);
> + rcu_read_lock();
> + info = shrinker_info_rcu(memcg, nid);
>   unit = info->unit[shriner_id_to_index(shrinker->id)];
> - return 
> atomic_long_xchg(&unit->nr_deferred[shriner_id_to_offset(shrinker->id)], 0);
> + nr_deferred = 
> atomic_long_xchg(&unit->nr_deferred[shriner_id_to_offset(shrinker->id)], 0);
> + rcu_read_unlock();
> +
> + return nr_deferred;
>  }

This adds two rcu_read_lock() sections to every call to
do_shrink_slab(). It's not at all clear from any of the other code
that do_shrink_slab() now has internal rcu_read_lock() sections

> @@ -464,18 +480,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, 
> int nid,
>   if (!mem_cgroup_online(memcg))
>   return 0;
>  
> - if (!down_read_trylock(&shrinker_rwsem))
> - return 0;
> -
> - info = shrinker_info_protected(memcg, nid);
> +again:
> + rcu_read_lock();
> + info = shrinker_info_rcu(memcg, nid);
>   if (unlikely(!info))
>   goto unlock;
>  
> - for (; index < shriner_id_to_index(info->map_nr_max); index++) {
> + if (index < shriner_id_to_index(info->map_nr_max)) {
>   struct shrinker_info_unit *unit;
>  
>   unit = info->unit[index];
>  
> + /*
> +  * The shrinker_info_unit will not be freed, so we can
> +  * safely release the RCU lock here.
> +  */
> + rcu_read_unlock();

Why - what guarantees that the shrinker_info_unit exists at this
point? We hold no reference to it, we hold no reference to any
shrinker, etc. What provides this existence guarantee?

> +
>   for_each_set_bit(offset, unit->map, SHRINKER_UNIT_BITS) {
>   struct shrink_control sc = {
>   .gfp_mask = gfp_mask,
> @@ -485,12 +506,14 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, 
> int nid,
>   struct shrinker *shrinker;
>   int shrinker_id = calc_shrinker_id(index, offset);
>  
> + rcu_read_lock();
>   shrinker = idr_find(&shrinker_idr, shrinker_id);
> - if (unlikely(!shrinker || !(shrinker->flags & 
> SHRINKER_REGISTERED))) {
> - if (!shrinker)
> - clear_bit(offset, unit->map);
> + if (unlikely(!shrinker || !shrinker_try_get(shrinker))) 
> {
> + clear_bit(offset, unit->map);
> + rcu_read_unlock();
>   continue;
>   }
> + rcu_read_unlock();
>  
>   /* Call non-slab shri

[PATCH] Initial backport of vkms changes from 6.4, including jshargo and brpols configs changes

2023-08-07 Thread Brandon Pollack
WIP: Need to run all tast criticals and test the multidisplay tests that
are WIP.

BUG=b:283357160
TEST=Booted on a betty-arc-r device and ran autologin.py -a
Change-Id: I13cef8cf019744813f51cfffed3d7ccb987834e8

Change-Id: Iae7d788bc4725dfdca044204fa1af27a5a1ec5a8
---
 drivers/gpu/drm/vkms/Makefile |   1 +
 drivers/gpu/drm/vkms/vkms_composer.c  |  74 ++-
 drivers/gpu/drm/vkms/vkms_configfs.c  | 719 ++
 drivers/gpu/drm/vkms/vkms_crtc.c  |  98 ++--
 drivers/gpu/drm/vkms/vkms_drv.c   | 227 
 drivers/gpu/drm/vkms/vkms_drv.h   | 191 +--
 drivers/gpu/drm/vkms/vkms_formats.c   | 298 +--
 drivers/gpu/drm/vkms/vkms_formats.h   |   4 +-
 drivers/gpu/drm/vkms/vkms_output.c| 351 +++--
 drivers/gpu/drm/vkms/vkms_plane.c | 102 ++--
 drivers/gpu/drm/vkms/vkms_writeback.c |  33 +-
 include/drm/drm_fixed.h   |   7 +
 12 files changed, 1638 insertions(+), 467 deletions(-)
 create mode 100644 drivers/gpu/drm/vkms/vkms_configfs.c

diff --git a/drivers/gpu/drm/vkms/Makefile b/drivers/gpu/drm/vkms/Makefile
index 1b28a6a32948..6b83907ad554 100644
--- a/drivers/gpu/drm/vkms/Makefile
+++ b/drivers/gpu/drm/vkms/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 vkms-y := \
+   vkms_configfs.o \
vkms_drv.o \
vkms_plane.o \
vkms_output.o \
diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
index 8e53fa80742b..61061a277fca 100644
--- a/drivers/gpu/drm/vkms/vkms_composer.c
+++ b/drivers/gpu/drm/vkms/vkms_composer.c
@@ -4,6 +4,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -22,7 +23,7 @@ static u16 pre_mul_blend_channel(u16 src, u16 dst, u16 alpha)
 
 /**
  * pre_mul_alpha_blend - alpha blending equation
- * @src_frame_info: source framebuffer's metadata
+ * @frame_info: Source framebuffer's metadata
  * @stage_buffer: The line with the pixels from src_plane
  * @output_buffer: A line buffer that receives all the blends output
  *
@@ -53,10 +54,30 @@ static void pre_mul_alpha_blend(struct vkms_frame_info *frame_info,
}
 }
 
-static bool check_y_limit(struct vkms_frame_info *frame_info, int y)
+static int get_y_pos(struct vkms_frame_info *frame_info, int y)
 {
-   if (y >= frame_info->dst.y1 && y < frame_info->dst.y2)
-   return true;
+   if (frame_info->rotation & DRM_MODE_REFLECT_Y)
+   return drm_rect_height(&frame_info->rotated) - y - 1;
+
+   switch (frame_info->rotation & DRM_MODE_ROTATE_MASK) {
+   case DRM_MODE_ROTATE_90:
+   return frame_info->rotated.x2 - y - 1;
+   case DRM_MODE_ROTATE_270:
+   return y + frame_info->rotated.x1;
+   default:
+   return y;
+   }
+}
+
+static bool check_limit(struct vkms_frame_info *frame_info, int pos)
+{
+   if (drm_rotation_90_or_270(frame_info->rotation)) {
+   if (pos >= 0 && pos < drm_rect_width(&frame_info->rotated))
+   return true;
+   } else {
+   if (pos >= frame_info->rotated.y1 && pos < 
frame_info->rotated.y2)
+   return true;
+   }
 
return false;
 }
@@ -69,11 +90,13 @@ static void fill_background(const struct pixel_argb_u16 
*background_color,
 }
 
 /**
- * @wb_frame_info: The writeback frame buffer metadata
+ * blend - blend the pixels from all planes and compute crc
+ * @wb: The writeback frame buffer metadata
  * @crtc_state: The crtc state
  * @crc32: The crc output of the final frame
  * @output_buffer: A buffer of a row that will receive the result of the 
blend(s)
  * @stage_buffer: The line with the pixels from plane being blend to the output
+ * @row_size: The size, in bytes, of a single row
  *
  * This function blends the pixels (Using the `pre_mul_alpha_blend`)
  * from all planes, calculates the crc32 of the output from the former step,
@@ -86,6 +109,7 @@ static void blend(struct vkms_writeback_job *wb,
 {
struct vkms_plane_state **plane = crtc_state->active_planes;
u32 n_active_planes = crtc_state->num_active_planes;
+   int y_pos;
 
const struct pixel_argb_u16 background_color = { .a = 0x };
 
@@ -96,10 +120,12 @@ static void blend(struct vkms_writeback_job *wb,
 
/* The active planes are composed associatively in z-order. */
for (size_t i = 0; i < n_active_planes; i++) {
-   if (!check_y_limit(plane[i]->frame_info, y))
+   y_pos = get_y_pos(plane[i]->frame_info, y);
+
+   if (!check_limit(plane[i]->frame_info, y_pos))
continue;
 
-   plane[i]->plane_read(stage_buffer, 
plane[i]->frame_info, y);
+   vkms_compose_row(stage_buffer, plane[i], y_pos);
pre_mul_alpha_blend(plane[i]->frame_info, stage_buffer,
output_buffer);
 

[no subject]

2023-08-07 Thread Brandon Pollack
Any progress on this?  Is it ok if yi...@chromium.org and I do the
followups on this patch so that we can also submit the Hotplug patch I
wrote (that's now archived?).




Re: [PATCH drm-misc-next 0/5] Nouveau VM_BIND uAPI Fixes

2023-08-07 Thread Dave Airlie
For the series:

Reviewed-by: Dave Airlie 

On Tue, 8 Aug 2023 at 02:32, Danilo Krummrich  wrote:
>
> The patch series provides a few fixes for the recently merged VM_BIND uAPI
> mostly addressing a couple of warnings.
>
> It also contains one patch to slightly reduce the memory footprint of
> struct nouveau_uvma.
>
> Danilo Krummrich (5):
>   nouveau/dmem: fix copy-paste error in nouveau_dmem_migrate_chunk()
>   drm/nouveau: nvkm: vmm: silence warning from cast
>   drm/nouveau: remove incorrect __user annotations
>   drm/nouveau: uvmm: remove incorrect calls to mas_unlock()
>   drm/nouveau: uvmm: remove dedicated VM pointer from VMAs
>
>  drivers/gpu/drm/nouveau/nouveau_dmem.c|  2 +-
>  drivers/gpu/drm/nouveau/nouveau_exec.c|  6 ++---
>  drivers/gpu/drm/nouveau/nouveau_exec.h|  2 +-
>  drivers/gpu/drm/nouveau/nouveau_uvmm.c| 23 ---
>  drivers/gpu/drm/nouveau/nouveau_uvmm.h| 14 +--
>  .../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c|  5 ++--
>  6 files changed, 24 insertions(+), 28 deletions(-)
>
>
> base-commit: 82d750e9d2f5d0594c8f7057ce59127e701af781
> --
> 2.41.0
>


Re: [PATCH v4 45/48] mm: shrinker: make global slab shrink lockless

2023-08-07 Thread Dave Chinner
On Mon, Aug 07, 2023 at 07:09:33PM +0800, Qi Zheng wrote:
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index eb342994675a..f06225f18531 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -4,6 +4,8 @@
>  
>  #include 
>  #include 
> +#include 
> +#include 
>  
>  #define SHRINKER_UNIT_BITS   BITS_PER_LONG
>  
> @@ -87,6 +89,10 @@ struct shrinker {
>   int seeks;  /* seeks to recreate an obj */
>   unsigned flags;
>  
> + refcount_t refcount;
> + struct completion done;
> + struct rcu_head rcu;

Documentation, please. What does the refcount protect, what does the
completion provide, etc.

> +
>   void *private_data;
>  
>   /* These are for internal use */
> @@ -120,6 +126,17 @@ struct shrinker *shrinker_alloc(unsigned int flags, 
> const char *fmt, ...);
>  void shrinker_register(struct shrinker *shrinker);
>  void shrinker_free(struct shrinker *shrinker);
>  
> +static inline bool shrinker_try_get(struct shrinker *shrinker)
> +{
> + return refcount_inc_not_zero(&shrinker->refcount);
> +}
> +
> +static inline void shrinker_put(struct shrinker *shrinker)
> +{
> + if (refcount_dec_and_test(&shrinker->refcount))
> + complete(&shrinker->done);
> +}
> +
>  #ifdef CONFIG_SHRINKER_DEBUG
>  extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
> const char *fmt, ...);
> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index 1911c06b8af5..d318f5621862 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -2,6 +2,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  
>  #include "internal.h"
> @@ -577,33 +578,42 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, 
> struct mem_cgroup *memcg,
>   if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
>   return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>  
> - if (!down_read_trylock(&shrinker_rwsem))
> - goto out;
> -
> - list_for_each_entry(shrinker, &shrinker_list, list) {
> + rcu_read_lock();
> + list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
>   struct shrink_control sc = {
>   .gfp_mask = gfp_mask,
>   .nid = nid,
>   .memcg = memcg,
>   };
>  
> + if (!shrinker_try_get(shrinker))
> + continue;
> +
> + /*
> +  * We can safely unlock the RCU lock here since we already
> +  * hold the refcount of the shrinker.
> +  */
> + rcu_read_unlock();
> +
>   ret = do_shrink_slab(&sc, shrinker, priority);
>   if (ret == SHRINK_EMPTY)
>   ret = 0;
>   freed += ret;
> +
>   /*
> -  * Bail out if someone want to register a new shrinker to
> -  * prevent the registration from being stalled for long periods
> -  * by parallel ongoing shrinking.
> +  * This shrinker may be deleted from shrinker_list and freed
> +  * after the shrinker_put() below, but this shrinker is still
> +  * used for the next traversal. So it is necessary to hold the
> +  * RCU lock first to prevent this shrinker from being freed,
> +  * which also ensures that the next shrinker that is traversed
> +  * will not be freed (even if it is deleted from shrinker_list
> +  * at the same time).
>*/

This needs to be moved to the head of the function, and document
the whole list walk, get, put and completion parts of the algorithm
that make it safe. There's more to this than "we hold a reference
count", especially the tricky "we might see the shrinker before it
is fully initialised" case


.
>  void shrinker_free(struct shrinker *shrinker)
>  {
>   struct dentry *debugfs_entry = NULL;
> @@ -686,9 +712,18 @@ void shrinker_free(struct shrinker *shrinker)
>   if (!shrinker)
>   return;
>  
> + if (shrinker->flags & SHRINKER_REGISTERED) {
> + shrinker_put(shrinker);
> + wait_for_completion(&shrinker->done);
> + }

Needs a comment explaining why we need to wait here...
> +
>   down_write(&shrinker_rwsem);
>   if (shrinker->flags & SHRINKER_REGISTERED) {
> - list_del(&shrinker->list);
> + /*
> +  * Lookups on the shrinker are over and will fail in the future,
> +  * so we can now remove it from the lists and free it.
> +  */

 rather than here after the wait has been done and provided the
guarantee that no shrinker is running or will run again...

-Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: [PATCH v4 44/48] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}

2023-08-07 Thread Dave Chinner
On Mon, Aug 07, 2023 at 07:09:32PM +0800, Qi Zheng wrote:
> Currently, we maintain two linear arrays per node per memcg, which are
> shrinker_info::map and shrinker_info::nr_deferred. And we need to resize
> them when the shrinker_nr_max is exceeded, that is, allocate a new array,
> and then copy the old array to the new array, and finally free the old
> array by RCU.
> 
> For shrinker_info::map, we do set_bit() under the RCU lock, so we may set
> the value into the old map which is about to be freed. This may cause the
> value set to be lost. The current solution is not to copy the old map when
> resizing, but to set all the corresponding bits in the new map to 1. This
> solves the data loss problem, but brings the overhead of more pointless
> loops while doing memcg slab shrink.
> 
> For shrinker_info::nr_deferred, we will only modify it under the read lock
> of shrinker_rwsem, so it will not run concurrently with the resizing. But
> after we make memcg slab shrink lockless, there will be the same data loss
> problem as shrinker_info::map, and we can't work around it like the map.
> 
> For such resizable arrays, the most straightforward idea is to change it
> to xarray, like we did for list_lru [1]. We need to do xa_store() in the
> list_lru_add()-->set_shrinker_bit(), but this will cause memory
> allocation, and the list_lru_add() doesn't accept failure. A possible
> solution is to pre-allocate, but the location of pre-allocation is not
> well determined.

So you implemented a two-level array that preallocates leaf
nodes to work around it? It's remarkably complex for what it does,
I can't help but think a radix tree using a special holder for
nr_deferred values of zero would end up being simpler...

> Therefore, this commit chooses to introduce a secondary array for
> shrinker_info::{map, nr_deferred}, so that we only need to copy this
> secondary array every time the size is resized. Then even if we get the
> old secondary array under the RCU lock, the found map and nr_deferred are
> also true, so no data is lost.

I don't understand what you are trying to describe here. If we get
the old array, then don't we get either a stale nr_deferred value,
or the update we do gets lost because the next shrinker lookup will
find the new array and so the deferred value stored to the old one
is never seen again?

> 
> [1]. 
> https://lore.kernel.org/all/20220228122126.37293-13-songmuc...@bytedance.com/
> 
> Signed-off-by: Qi Zheng 
> Reviewed-by: Muchun Song 
> ---
.
> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index a27779ed3798..1911c06b8af5 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -12,15 +12,50 @@ DECLARE_RWSEM(shrinker_rwsem);
>  #ifdef CONFIG_MEMCG
>  static int shrinker_nr_max;
>  
> -/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
> -static inline int shrinker_map_size(int nr_items)
> +static inline int shrinker_unit_size(int nr_items)
>  {
> - return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
> + return (DIV_ROUND_UP(nr_items, SHRINKER_UNIT_BITS) * sizeof(struct 
> shrinker_info_unit *));
>  }
>  
> -static inline int shrinker_defer_size(int nr_items)
> +static inline void shrinker_unit_free(struct shrinker_info *info, int start)
>  {
> - return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
> + struct shrinker_info_unit **unit;
> + int nr, i;
> +
> + if (!info)
> + return;
> +
> + unit = info->unit;
> + nr = DIV_ROUND_UP(info->map_nr_max, SHRINKER_UNIT_BITS);
> +
> + for (i = start; i < nr; i++) {
> + if (!unit[i])
> + break;
> +
> + kvfree(unit[i]);
> + unit[i] = NULL;
> + }
> +}
> +
> +static inline int shrinker_unit_alloc(struct shrinker_info *new,
> +struct shrinker_info *old, int nid)
> +{
> + struct shrinker_info_unit *unit;
> + int nr = DIV_ROUND_UP(new->map_nr_max, SHRINKER_UNIT_BITS);
> + int start = old ? DIV_ROUND_UP(old->map_nr_max, SHRINKER_UNIT_BITS) : 0;
> + int i;
> +
> + for (i = start; i < nr; i++) {
> + unit = kvzalloc_node(sizeof(*unit), GFP_KERNEL, nid);

A unit is 576 bytes. Why is this using kvzalloc_node()?

> + if (!unit) {
> + shrinker_unit_free(new, start);
> + return -ENOMEM;
> + }
> +
> + new->unit[i] = unit;
> + }
> +
> + return 0;
>  }
>  
>  void free_shrinker_info(struct mem_cgroup *memcg)
> @@ -32,6 +67,7 @@ void free_shrinker_info(struct mem_cgroup *memcg)
>   for_each_node(nid) {
>   pn = memcg->nodeinfo[nid];
>   info = rcu_dereference_protected(pn->shrinker_info, true);
> + shrinker_unit_free(info, 0);
>   kvfree(info);
>   rcu_assign_pointer(pn->shrinker_info, NULL);
>   }

Why is this safe? The info and maps are looked up by RCU, so why is
freeing them without an RCU grace period?

Re: [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-07 Thread Dmitry Baryshkov



On 8 August 2023 00:41:07 GMT+03:00, Jessica Zhang  
wrote:
>
>
>On 8/4/2023 6:27 AM, Dmitry Baryshkov wrote:
>> On Fri, 28 Jul 2023 at 20:03, Jessica Zhang  
>> wrote:
>>> 
>>> Document and add support for solid_fill property to drm_plane. In
>>> addition, add support for setting and getting the values for solid_fill.
>>> 
>>> To enable solid fill planes, userspace must assign a property blob to
>>> the "solid_fill" plane property containing the following information:
>>> 
>>> struct drm_mode_solid_fill {
>>>  u32 version;
>>>  u32 r, g, b;
>>> };
>>> 
>>> Signed-off-by: Jessica Zhang 
>>> ---
>>>   drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
>>>   drivers/gpu/drm/drm_atomic_uapi.c | 55 
>>> +++
>>>   drivers/gpu/drm/drm_blend.c   | 30 +
>>>   include/drm/drm_blend.h   |  1 +
>>>   include/drm/drm_plane.h   | 35 
>>>   include/uapi/drm/drm_mode.h   | 24 ++
>>>   6 files changed, 154 insertions(+)
>>> 
>> 
>> [skipped most of the patch]
>> 
>>> diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
>>> index 43691058d28f..53c8efa5ad7f 100644
>>> --- a/include/uapi/drm/drm_mode.h
>>> +++ b/include/uapi/drm/drm_mode.h
>>> @@ -259,6 +259,30 @@ struct drm_mode_modeinfo {
>>>  char name[DRM_DISPLAY_MODE_LEN];
>>>   };
>>> 
>>> +/**
>>> + * struct drm_mode_solid_fill - User info for solid fill planes
>>> + *
>>> + * This is the userspace API solid fill information structure.
>>> + *
>>> + * Userspace can enable solid fill planes by assigning the plane 
>>> "solid_fill"
>>> + * property to a blob containing a single drm_mode_solid_fill struct 
>>> populated with an RGB323232
>>> + * color and setting the pixel source to "SOLID_FILL".
>>> + *
>>> + * For information on the plane property, see 
>>> drm_plane_create_solid_fill_property()
>>> + *
>>> + * @version: Version of the blob. Currently, there is only support for 
>>> version == 1
>>> + * @r: Red color value of single pixel
>>> + * @g: Green color value of single pixel
>>> + * @b: Blue color value of single pixel
>>> + */
>>> +struct drm_mode_solid_fill {
>>> +   __u32 version;
>>> +   __u32 r;
>>> +   __u32 g;
>>> +   __u32 b;
>> 
>> Another thought about the drm_mode_solid_fill uABI. I still think we
>> should add alpha here. The reason is the following:
>> 
>> It is true that we have  drm_plane_state::alpha and the plane's
>> "alpha" property. However it is documented as "the plane-wide opacity
>> [...] It can be combined with pixel alpha. The pixel values in the
>> framebuffers are expected to not be pre-multiplied by the global alpha
>> associated to the plane.".
>> 
>> I can imagine a use case, when a user might want to enable plane-wide
>> opacity, set "pixel blend mode" to "Coverage" and then switch between
>> partially opaque framebuffer and partially opaque solid-fill without
>> touching the plane's alpha value.
>
>Hi Dmitry,
>
>I don't really agree that adding a solid fill alpha would be a good idea. 
>Since the intent behind solid fill is to have a single color for the entire 
>plane, I think it makes more sense to have solid fill rely on the global plane 
>alpha.
>
>As stated in earlier discussions, I think having both a solid_fill.alpha and a 
>plane_state.alpha would be redundant and serve to confuse the user as to which 
>one to set.

That depends on the blending mode: in Coverage mode one has independent plane 
and contents alpha values. And I consider the alpha value to be part of the 
colour in the rgba/bgra modes.


>
>Thanks,
>
>Jessica Zhang
>
>> 
>> -- 
>> With best wishes
>> Dmitry

-- 
With best wishes
Dmitry


Re: [PATCH] drm/nouveau/sched: Don't pass user flags to drm_syncobj_find_fence()

2023-08-07 Thread Danilo Krummrich
On Mon, Aug 07, 2023 at 06:41:44PM -0500, Faith Ekstrand wrote:
> The flags field in drm_syncobj_find_fence() takes SYNCOBJ_WAIT flags
> from the syncobj UAPI whereas sync->flags is from the nouveau UAPI. What
> we actually want is 0 flags which tells it to just try to find the
> fence and then return without waiting.

Good catch!

Reviewed-by: Danilo Krummrich 

> 
> Signed-off-by: Faith Ekstrand 
> Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
> Cc: Danilo Krummrich 
> Cc: Dave Airlie 
> ---
>  drivers/gpu/drm/nouveau/nouveau_sched.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c 
> b/drivers/gpu/drm/nouveau/nouveau_sched.c
> index b3b59fbec291..3424a1bf6af3 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> @@ -142,7 +142,7 @@ sync_find_fence(struct nouveau_job *job,
>  
>   ret = drm_syncobj_find_fence(job->file_priv,
>sync->handle, point,
> -  sync->flags, fence);
> +  0 /* flags */, fence);
>   if (ret)
>   return ret;
>  
> -- 
> 2.41.0
> 



Re: [Freedreno] [PATCH 1/2] drm/msm/dpu: move writeback's atomic_check to dpu_writeback.c

2023-08-07 Thread Abhinav Kumar




On 5/18/2023 7:30 PM, Dmitry Baryshkov wrote:

dpu_encoder_phys_wb is the only user of encoder's atomic_check callback.
Move corresponding checks to drm_writeback_connector's implementation
and drop the dpu_encoder_phys_wb_atomic_check() function.

Signed-off-by: Dmitry Baryshkov 
---


I don't think this is correct even though I can make writeback work with 
these. The issue is that, in the recent changes which I was holding back 
posting till I reviewed this, I use the API 
drm_atomic_helper_check_wb_encoder_state() to check the supported 
formats in writeback (something which should have been present from the 
beginning).


It seems incorrect to call this from the connector's atomic_check.

And I checked the writeback job validation across other vendor drivers 
and the validation seems to be in encoder's atomic_check and not the 
connector's.


I don't want to break that pattern for MSM alone.


  .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c   | 54 --
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  4 +-
  drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c | 57 ++-
  drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.h |  3 +-
  4 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index e14646c0501c..e73d5284eb2a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -225,59 +225,6 @@ static void dpu_encoder_phys_wb_setup_cdp(struct 
dpu_encoder_phys *phys_enc)
}
  }
  
-/**

- * dpu_encoder_phys_wb_atomic_check - verify and fixup given atomic states
- * @phys_enc:  Pointer to physical encoder
- * @crtc_state:Pointer to CRTC atomic state
- * @conn_state:Pointer to connector atomic state
- */
-static int dpu_encoder_phys_wb_atomic_check(
-   struct dpu_encoder_phys *phys_enc,
-   struct drm_crtc_state *crtc_state,
-   struct drm_connector_state *conn_state)
-{
-   struct drm_framebuffer *fb;
-   const struct drm_display_mode *mode = &crtc_state->mode;
-
-   DPU_DEBUG("[atomic_check:%d, \"%s\",%d,%d]\n",
-   phys_enc->hw_wb->idx, mode->name, mode->hdisplay, 
mode->vdisplay);
-
-   if (!conn_state || !conn_state->connector) {
-   DPU_ERROR("invalid connector state\n");
-   return -EINVAL;
-   } else if (conn_state->connector->status !=
-   connector_status_connected) {
-   DPU_ERROR("connector not connected %d\n",
-   conn_state->connector->status);
-   return -EINVAL;
-   }
-
-   if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
-   return 0;
-
-   fb = conn_state->writeback_job->fb;
-
-   DPU_DEBUG("[fb_id:%u][fb:%u,%u]\n", fb->base.id,
-   fb->width, fb->height);
-
-   if (fb->width != mode->hdisplay) {
-   DPU_ERROR("invalid fb w=%d, mode w=%d\n", fb->width,
-   mode->hdisplay);
-   return -EINVAL;
-   } else if (fb->height != mode->vdisplay) {
-   DPU_ERROR("invalid fb h=%d, mode h=%d\n", fb->height,
- mode->vdisplay);
-   return -EINVAL;
-   } else if (fb->width > phys_enc->hw_wb->caps->maxlinewidth) {
-   DPU_ERROR("invalid fb w=%d, maxlinewidth=%u\n",
- fb->width, 
phys_enc->hw_wb->caps->maxlinewidth);
-   return -EINVAL;
-   }
-
-   return 0;
-}
-
-
  /**
   * _dpu_encoder_phys_wb_update_flush - flush hardware update
   * @phys_enc: Pointer to physical encoder
@@ -652,7 +599,6 @@ static void dpu_encoder_phys_wb_init_ops(struct 
dpu_encoder_phys_ops *ops)
ops->enable = dpu_encoder_phys_wb_enable;
ops->disable = dpu_encoder_phys_wb_disable;
ops->destroy = dpu_encoder_phys_wb_destroy;
-   ops->atomic_check = dpu_encoder_phys_wb_atomic_check;
ops->wait_for_commit_done = dpu_encoder_phys_wb_wait_for_commit_done;
ops->prepare_for_kickoff = dpu_encoder_phys_wb_prepare_for_kickoff;
ops->handle_post_kickoff = dpu_encoder_phys_wb_handle_post_kickoff;
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 10bd0fd4ff48..78b8e7fc1de8 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -661,8 +661,8 @@ static int _dpu_kms_initialize_writeback(struct drm_device 
*dev,
return PTR_ERR(encoder);
}
  
-	rc = dpu_writeback_init(dev, encoder, wb_formats,

-   n_formats);
+   rc = dpu_writeback_init(dev, encoder, wb_formats, n_formats,
+   dpu_rm_get_wb(&dpu_kms->rm, 
info.h_tile_instance[0])->caps->maxlinewidth);
if (rc) {
DPU_ER

Re: [PATCH 3/3] drm/msm/dpu: drop dpu_encoder_phys_ops.atomic_mode_set

2023-08-07 Thread Abhinav Kumar




On 6/4/2023 7:45 AM, Dmitry Baryshkov wrote:

The atomic_mode_set() callback only sets the phys_enc's IRQ data. As the
INTF and WB are statically allocated to each encoder/phys_enc, drop the
atomic_mode_set callback and set the IRQs during encoder init.

For the CMD panel usecase some of IRQ indexes depend on the selected
resources. Move setting them to the irq_enable() callback.



The irq_enable() callback is called from the 
dpu_encoder_virt_atomic_enable() after the phys layer's enable.


That's late.

So let's consider the case where a command mode panel's clock is powered 
from the bootloader (quite common).


Now, as soon as the tearcheck is configured and the interface is ON from the 
phys's enable(), nothing prevents / should prevent the interrupt from 
firing.


So I feel / think mode_set is the correct location to assign these.

I can ack patches 1 and 2 but I think you did those mainly for this one, 
so I would like to get some clarity on this part.



Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c   |  2 --
  .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h  |  5 ---
  .../drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c  | 32 ---
  .../drm/msm/disp/dpu1/dpu_encoder_phys_vid.c  | 13 ++--
  .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c   | 11 +--
  5 files changed, 17 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index cc61f0cf059d..6b5c80dc5967 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1148,8 +1148,6 @@ static void dpu_encoder_virt_atomic_mode_set(struct 
drm_encoder *drm_enc,
phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]);
  
  		phys->cached_mode = crtc_state->adjusted_mode;

-   if (phys->ops.atomic_mode_set)
-   phys->ops.atomic_mode_set(phys, crtc_state, conn_state);
}
  }
  
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h

index faf033cd086e..24dbc28be4f8 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
@@ -67,8 +67,6 @@ struct dpu_encoder_phys;
   * @is_master:Whether this phys_enc is the current 
master
   *encoder. Can be switched at enable time. Based
   *on split_role and current mode (CMD/VID).
- * @atomic_mode_set:   DRM Call. Set a DRM mode.
- * This likely caches the mode, for use at enable.
   * @enable:   DRM Call. Enable a DRM mode.
   * @disable:  DRM Call. Disable mode.
   * @atomic_check: DRM Call. Atomic check new DRM state.
@@ -95,9 +93,6 @@ struct dpu_encoder_phys;
  struct dpu_encoder_phys_ops {
void (*prepare_commit)(struct dpu_encoder_phys *encoder);
bool (*is_master)(struct dpu_encoder_phys *encoder);
-   void (*atomic_mode_set)(struct dpu_encoder_phys *encoder,
-   struct drm_crtc_state *crtc_state,
-   struct drm_connector_state *conn_state);
void (*enable)(struct dpu_encoder_phys *encoder);
void (*disable)(struct dpu_encoder_phys *encoder);
int (*atomic_check)(struct dpu_encoder_phys *encoder,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
index 3422b49f23c2..a0b7d8803e94 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
@@ -140,23 +140,6 @@ static void dpu_encoder_phys_cmd_underrun_irq(void *arg, 
int irq_idx)
dpu_encoder_underrun_callback(phys_enc->parent, phys_enc);
  }
  
-static void dpu_encoder_phys_cmd_atomic_mode_set(

-   struct dpu_encoder_phys *phys_enc,
-   struct drm_crtc_state *crtc_state,
-   struct drm_connector_state *conn_state)
-{
-   phys_enc->irq[INTR_IDX_CTL_START] = phys_enc->hw_ctl->caps->intr_start;
-
-   phys_enc->irq[INTR_IDX_PINGPONG] = phys_enc->hw_pp->caps->intr_done;
-
-   if (phys_enc->has_intf_te)
-   phys_enc->irq[INTR_IDX_RDPTR] = 
phys_enc->hw_intf->cap->intr_tear_rd_ptr;
-   else
-   phys_enc->irq[INTR_IDX_RDPTR] = 
phys_enc->hw_pp->caps->intr_rdptr;
-
-   phys_enc->irq[INTR_IDX_UNDERRUN] = 
phys_enc->hw_intf->cap->intr_underrun;
-}
-
  static int _dpu_encoder_phys_cmd_handle_ppdone_timeout(
struct dpu_encoder_phys *phys_enc)
  {
@@ -287,6 +270,14 @@ static void dpu_encoder_phys_cmd_irq_enable(struct 
dpu_encoder_phys *phys_enc)
true,

atomic_read(&phys_enc->vblank_refcount));
  
+	phys_enc->irq[INTR_IDX_CTL_START] = phys_enc->hw_ctl->caps->intr_start;

+   phys_enc->irq[INTR_IDX_PING

[PATCH] drm/nouveau/sched: Don't pass user flags to drm_syncobj_find_fence()

2023-08-07 Thread Faith Ekstrand
The flags field in drm_syncobj_find_fence() takes SYNCOBJ_WAIT flags
from the syncobj UAPI whereas sync->flags is from the nouveau UAPI. What
we actually want is 0 flags which tells it to just try to find the
fence and then return without waiting.

Signed-off-by: Faith Ekstrand 
Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Cc: Danilo Krummrich 
Cc: Dave Airlie 
---
 drivers/gpu/drm/nouveau/nouveau_sched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c 
b/drivers/gpu/drm/nouveau/nouveau_sched.c
index b3b59fbec291..3424a1bf6af3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -142,7 +142,7 @@ sync_find_fence(struct nouveau_job *job,
 
ret = drm_syncobj_find_fence(job->file_priv,
 sync->handle, point,
-sync->flags, fence);
+0 /* flags */, fence);
if (ret)
return ret;
 
-- 
2.41.0



Re: [PATCH v4 45/48] mm: shrinker: make global slab shrink lockless

2023-08-07 Thread Dave Chinner
On Mon, Aug 07, 2023 at 07:09:33PM +0800, Qi Zheng wrote:
> The shrinker_rwsem is a global read-write lock in shrinkers subsystem,
> which protects most operations such as slab shrink, registration and
> unregistration of shrinkers, etc. This can easily cause problems in the
> following cases.

> This commit uses the refcount+RCU method [5] proposed by Dave Chinner
> to re-implement the lockless global slab shrink. The memcg slab shrink is
> handled in the subsequent patch.

> ---
>  include/linux/shrinker.h | 17 ++
>  mm/shrinker.c| 70 +---
>  2 files changed, 68 insertions(+), 19 deletions(-)

There's no documentation in the code explaining how the lockless
shrinker algorithm works. It's left to the reader to work out how
this all goes together

> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index eb342994675a..f06225f18531 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -4,6 +4,8 @@
>  
>  #include 
>  #include 
> +#include 
> +#include 
>  
>  #define SHRINKER_UNIT_BITS   BITS_PER_LONG
>  
> @@ -87,6 +89,10 @@ struct shrinker {
>   int seeks;  /* seeks to recreate an obj */
>   unsigned flags;
>  
> + refcount_t refcount;
> + struct completion done;
> + struct rcu_head rcu;

What does the refcount protect, why do we need the completion, etc?

> +
>   void *private_data;
>  
>   /* These are for internal use */
> @@ -120,6 +126,17 @@ struct shrinker *shrinker_alloc(unsigned int flags, 
> const char *fmt, ...);
>  void shrinker_register(struct shrinker *shrinker);
>  void shrinker_free(struct shrinker *shrinker);
>  
> +static inline bool shrinker_try_get(struct shrinker *shrinker)
> +{
> + return refcount_inc_not_zero(&shrinker->refcount);
> +}
> +
> +static inline void shrinker_put(struct shrinker *shrinker)
> +{
> + if (refcount_dec_and_test(&shrinker->refcount))
> + complete(&shrinker->done);
> +}
> +
>  #ifdef CONFIG_SHRINKER_DEBUG
>  extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
> const char *fmt, ...);
> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index 1911c06b8af5..d318f5621862 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -2,6 +2,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  
>  #include "internal.h"
> @@ -577,33 +578,42 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, 
> struct mem_cgroup *memcg,
>   if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
>   return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>  
> - if (!down_read_trylock(&shrinker_rwsem))
> - goto out;
> -
> - list_for_each_entry(shrinker, &shrinker_list, list) {
> + rcu_read_lock();
> + list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
>   struct shrink_control sc = {
>   .gfp_mask = gfp_mask,
>   .nid = nid,
>   .memcg = memcg,
>   };
>  
> + if (!shrinker_try_get(shrinker))
> + continue;
> +
> + /*
> +  * We can safely unlock the RCU lock here since we already
> +  * hold the refcount of the shrinker.
> +  */
> + rcu_read_unlock();
> +
>   ret = do_shrink_slab(&sc, shrinker, priority);
>   if (ret == SHRINK_EMPTY)
>   ret = 0;
>   freed += ret;
> +
>   /*
> -  * Bail out if someone want to register a new shrinker to
> -  * prevent the registration from being stalled for long periods
> -  * by parallel ongoing shrinking.
> +  * This shrinker may be deleted from shrinker_list and freed
> +  * after the shrinker_put() below, but this shrinker is still
> +  * used for the next traversal. So it is necessary to hold the
> +  * RCU lock first to prevent this shrinker from being freed,
> +  * which also ensures that the next shrinker that is traversed
> +  * will not be freed (even if it is deleted from shrinker_list
> +  * at the same time).
>*/

This comment really should be at the head of the function,
describing the algorithm used within the function itself. i.e. how
reference counts are used w.r.t. the rcu_read_lock() usage to
guarantee existence of the shrinker and the validity of the list
walk.

I'm not going to remember all these little details when I look at
this code in another 6 months time, and having to work it out from
first principles every time I look at the code will waste of a lot
of time...

-Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: [PATCH v3 1/4] drm/msm/dpu: Move DPU encoder wide_bus_en setting

2023-08-07 Thread Jessica Zhang




On 8/2/2023 12:32 PM, Marijn Suijten wrote:

I find this title very undescriptive; it doesn't really explain from/to
where this move is happening, nor why.

On 2023-08-02 11:08:48, Jessica Zhang wrote:

Move the setting of dpu_enc.wide_bus_en to
dpu_encoder_virt_atomic_enable() so that it mirrors the setting of
dpu_enc.dsc.


mirroring "the setting of dpu_enc.dsc" very much sounds like you are
mirroring _its value_, but that is not the case.  You are moving the
initialization (or just setting, because it could also be overwriting?)
to _the same place_ where .dsc is assigned.


Hi Marijn,

Hmm.. got it. Will reword it to "mirror how dpu_enc.dsc is being set" if 
that makes it clearer.




I am pretty sure that this has a runtime impact which we discussed
before (hotplug...?) but the commit message omits that.  This is
mandatory.


I'm assuming the prior discussion you're referring to is with Kuogee on 
his DSC fix [1]. Unlike DSC, both DSI and DP know if wide bus is enabled 
upon initialization.


The main reasons the setting of the wide_bus_en flag was moved here were

1) to mirror how dpu_enc.dsc was being set (as stated in the commit 
message), as wide bus is related to DSC,


and 2) to account for the possibility of DSC for DSI being set during 
runtime in the future.


Thanks,

Jessica Zhang

[1] https://patchwork.freedesktop.org/patch/543867/



- Marijn



Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 11 +++
  1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index d34e684a4178..3dcd37c48aac 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1194,11 +1194,18 @@ static void dpu_encoder_virt_atomic_enable(struct 
drm_encoder *drm_enc,
struct dpu_encoder_virt *dpu_enc = NULL;
int ret = 0;
struct drm_display_mode *cur_mode = NULL;
+   struct msm_drm_private *priv = drm_enc->dev->dev_private;
+   struct msm_display_info *disp_info;
  
  	dpu_enc = to_dpu_encoder_virt(drm_enc);

+   disp_info = &dpu_enc->disp_info;
  
  	dpu_enc->dsc = dpu_encoder_get_dsc_config(drm_enc);
  
+	if (disp_info->intf_type == INTF_DP)

+   dpu_enc->wide_bus_en = msm_dp_wide_bus_available(
+   priv->dp[disp_info->h_tile_instance[0]]);
+
mutex_lock(&dpu_enc->enc_lock);
cur_mode = &dpu_enc->base.crtc->state->adjusted_mode;
  
@@ -2383,10 +2390,6 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,

timer_setup(&dpu_enc->frame_done_timer,
dpu_encoder_frame_done_timeout, 0);
  
-	if (disp_info->intf_type == INTF_DP)

-   dpu_enc->wide_bus_en = msm_dp_wide_bus_available(
-   priv->dp[disp_info->h_tile_instance[0]]);
-
INIT_DELAYED_WORK(&dpu_enc->delayed_off_work,
dpu_encoder_off_work);
dpu_enc->idle_timeout = IDLE_TIMEOUT;

--
2.41.0



Re: [PATCH v3 2/4] drm/msm/dpu: Enable widebus for DSI INTF

2023-08-07 Thread Jessica Zhang




On 8/2/2023 12:39 PM, Marijn Suijten wrote:

On 2023-08-02 11:08:49, Jessica Zhang wrote:

DPU supports a data-bus widen mode for DSI INTF.

Enable this mode for all supported chipsets if widebus is enabled for DSI.

Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c  | 11 ---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c |  4 +++-
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c  |  3 +++
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h  |  1 +
  drivers/gpu/drm/msm/msm_drv.h|  6 +-
  5 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 3dcd37c48aac..de08aad39e15 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1196,15 +1196,20 @@ static void dpu_encoder_virt_atomic_enable(struct 
drm_encoder *drm_enc,
struct drm_display_mode *cur_mode = NULL;
struct msm_drm_private *priv = drm_enc->dev->dev_private;
struct msm_display_info *disp_info;
+   int index;
  
  	dpu_enc = to_dpu_encoder_virt(drm_enc);

disp_info = &dpu_enc->disp_info;
  
+	disp_info = &dpu_enc->disp_info;

+   index = disp_info->h_tile_instance[0];
+
dpu_enc->dsc = dpu_encoder_get_dsc_config(drm_enc);
  
-	if (disp_info->intf_type == INTF_DP)

-   dpu_enc->wide_bus_en = msm_dp_wide_bus_available(
-   priv->dp[disp_info->h_tile_instance[0]]);
+   if (disp_info->intf_type == INTF_DSI)
+   dpu_enc->wide_bus_en = 
msm_dsi_is_widebus_enabled(priv->dsi[index]);
+   else if (disp_info->intf_type == INTF_DP)
+   dpu_enc->wide_bus_en = 
msm_dp_wide_bus_available(priv->dp[index]);


This inconsistency really is killing.  wide_bus vs widebus, and one
function has an is_ while the other does not.


Hi Marijn,

Acked. Will change the DSI function name to match DP.



  
  	mutex_lock(&dpu_enc->enc_lock);

cur_mode = &dpu_enc->base.crtc->state->adjusted_mode;
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
index df88358e7037..dace6168be2d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
@@ -69,8 +69,10 @@ static void _dpu_encoder_phys_cmd_update_intf_cfg(
phys_enc->hw_intf,
phys_enc->hw_pp->idx);
  
-	if (intf_cfg.dsc != 0)

+   if (intf_cfg.dsc != 0) {
cmd_mode_cfg.data_compress = true;
+   cmd_mode_cfg.wide_bus_en = 
dpu_encoder_is_widebus_enabled(phys_enc->parent);
+   }
  
  	if (phys_enc->hw_intf->ops.program_intf_cmd_cfg)

phys_enc->hw_intf->ops.program_intf_cmd_cfg(phys_enc->hw_intf, 
&cmd_mode_cfg);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
index 8ec6505d9e78..dc6f3febb574 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
@@ -521,6 +521,9 @@ static void dpu_hw_intf_program_intf_cmd_cfg(struct 
dpu_hw_intf *ctx,
if (cmd_mode_cfg->data_compress)
intf_cfg2 |= INTF_CFG2_DCE_DATA_COMPRESS;
  
+	if (cmd_mode_cfg->wide_bus_en)

+   intf_cfg2 |= INTF_CFG2_DATABUS_WIDEN;
+
DPU_REG_WRITE(&ctx->hw, INTF_CONFIG2, intf_cfg2);
  }
  
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h

index 77f80531782b..c539025c418b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
@@ -50,6 +50,7 @@ struct dpu_hw_intf_status {
  
  struct dpu_hw_intf_cmd_mode_cfg {

u8 data_compress;   /* enable data compress between dpu and dsi */
+   u8 wide_bus_en; /* enable databus widen mode */


Any clue why these weren't just bool types?  These suffix-comments also
aren't adhering to the kerneldoc format, or is there a different
variant?


It seems that the `u8` declaration and comment docs were meant to mirror 
the other dpu_hw_intf_* structs [1]


[1] 
https://elixir.bootlin.com/linux/v6.5-rc5/source/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h#L44





  };
  
  /**

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9d9d5e009163..e4f706b16aad 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -344,6 +344,7 @@ void msm_dsi_snapshot(struct msm_disp_state *disp_state, 
struct msm_dsi *msm_dsi
  bool msm_dsi_is_cmd_mode(struct msm_dsi *msm_dsi);
  bool msm_dsi_is_bonded_dsi(struct msm_dsi *msm_dsi);
  bool msm_dsi_is_master_dsi(struct msm_dsi *msm_dsi);
+bool msm_dsi_is_widebus_enabled(struct msm_dsi *msm_dsi);
  struct drm_dsc_config *msm_dsi_get_dsc_config(struct msm_dsi *msm_dsi);
  #else
 

Re: [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-07 Thread Jessica Zhang




On 8/4/2023 6:27 AM, Dmitry Baryshkov wrote:

On Fri, 28 Jul 2023 at 20:03, Jessica Zhang  wrote:


Document and add support for solid_fill property to drm_plane. In
addition, add support for setting and getting the values for solid_fill.

To enable solid fill planes, userspace must assign a property blob to
the "solid_fill" plane property containing the following information:

struct drm_mode_solid_fill {
 u32 version;
 u32 r, g, b;
};

Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
  drivers/gpu/drm/drm_atomic_uapi.c | 55 +++
  drivers/gpu/drm/drm_blend.c   | 30 +
  include/drm/drm_blend.h   |  1 +
  include/drm/drm_plane.h   | 35 
  include/uapi/drm/drm_mode.h   | 24 ++
  6 files changed, 154 insertions(+)



[skipped most of the patch]


diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
index 43691058d28f..53c8efa5ad7f 100644
--- a/include/uapi/drm/drm_mode.h
+++ b/include/uapi/drm/drm_mode.h
@@ -259,6 +259,30 @@ struct drm_mode_modeinfo {
 char name[DRM_DISPLAY_MODE_LEN];
  };

+/**
+ * struct drm_mode_solid_fill - User info for solid fill planes
+ *
+ * This is the userspace API solid fill information structure.
+ *
+ * Userspace can enable solid fill planes by assigning the plane "solid_fill"
+ * property to a blob containing a single drm_mode_solid_fill struct populated 
with an RGB323232
+ * color and setting the pixel source to "SOLID_FILL".
+ *
+ * For information on the plane property, see 
drm_plane_create_solid_fill_property()
+ *
+ * @version: Version of the blob. Currently, there is only support for version 
== 1
+ * @r: Red color value of single pixel
+ * @g: Green color value of single pixel
+ * @b: Blue color value of single pixel
+ */
+struct drm_mode_solid_fill {
+   __u32 version;
+   __u32 r;
+   __u32 g;
+   __u32 b;


Another thought about the drm_mode_solid_fill uABI. I still think we
should add alpha here. The reason is the following:

It is true that we have  drm_plane_state::alpha and the plane's
"alpha" property. However it is documented as "the plane-wide opacity
[...] It can be combined with pixel alpha. The pixel values in the
framebuffers are expected to not be pre-multiplied by the global alpha
associated to the plane.".

I can imagine a use case, when a user might want to enable plane-wide
opacity, set "pixel blend mode" to "Coverage" and then switch between
partially opaque framebuffer and partially opaque solid-fill without
touching the plane's alpha value.


Hi Dmitry,

I don't really agree that adding a solid fill alpha would be a good 
idea. Since the intent behind solid fill is to have a single color for 
the entire plane, I think it makes more sense to have solid fill rely on 
the global plane alpha.


As stated in earlier discussions, I think having both a solid_fill.alpha 
and a plane_state.alpha would be redundant and serve to confuse the user 
as to which one to set.


Thanks,

Jessica Zhang



--
With best wishes
Dmitry
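For reference, the drm_mode_solid_fill blob discussed in this thread is just four __u32 values in order (version, r, g, b). A hypothetical userspace sketch of packing one follows; the property-setting ioctl plumbing is omitted, and the sample color value is only an assumption about RGB323232 encoding:

```python
import struct

def solid_fill_blob(r, g, b, version=1):
    """Pack a drm_mode_solid_fill payload: version, r, g, b as __u32 each."""
    return struct.pack("<4I", version, r, g, b)

# Example: a fully saturated red (assumed RGB323232 encoding).
blob = solid_fill_blob(0xFFFFFFFF, 0, 0)
print(len(blob))   # 16
```

The packed bytes would then be handed to the kernel as a property blob and assigned to the plane's "solid_fill" property.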


Re: [PATCH v3 2/4] drm/msm/dpu: Enable widebus for DSI INTF

2023-08-07 Thread Jessica Zhang




On 8/2/2023 11:20 AM, Dmitry Baryshkov wrote:

On Wed, 2 Aug 2023 at 21:09, Jessica Zhang  wrote:


DPU supports a data-bus widen mode for DSI INTF.

Enable this mode for all supported chipsets if widebus is enabled for DSI.

Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c  | 11 ---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c |  4 +++-
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c  |  3 +++
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h  |  1 +
  drivers/gpu/drm/msm/msm_drv.h|  6 +-
  5 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 3dcd37c48aac..de08aad39e15 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1196,15 +1196,20 @@ static void dpu_encoder_virt_atomic_enable(struct 
drm_encoder *drm_enc,
 struct drm_display_mode *cur_mode = NULL;
 struct msm_drm_private *priv = drm_enc->dev->dev_private;
 struct msm_display_info *disp_info;
+   int index;

 dpu_enc = to_dpu_encoder_virt(drm_enc);
 disp_info = &dpu_enc->disp_info;

+   disp_info = &dpu_enc->disp_info;
+   index = disp_info->h_tile_instance[0];
+
 dpu_enc->dsc = dpu_encoder_get_dsc_config(drm_enc);

-   if (disp_info->intf_type == INTF_DP)
-   dpu_enc->wide_bus_en = msm_dp_wide_bus_available(
-   priv->dp[disp_info->h_tile_instance[0]]);
+   if (disp_info->intf_type == INTF_DSI)
+   dpu_enc->wide_bus_en = 
msm_dsi_is_widebus_enabled(priv->dsi[index]);
+   else if (disp_info->intf_type == INTF_DP)
+   dpu_enc->wide_bus_en = 
msm_dp_wide_bus_available(priv->dp[index]);


If you change the order, you won't have to touch DP lines.


Hi Dmitry,

Acked.





 mutex_lock(&dpu_enc->enc_lock);
 cur_mode = &dpu_enc->base.crtc->state->adjusted_mode;
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
index df88358e7037..dace6168be2d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
@@ -69,8 +69,10 @@ static void _dpu_encoder_phys_cmd_update_intf_cfg(
 phys_enc->hw_intf,
 phys_enc->hw_pp->idx);

-   if (intf_cfg.dsc != 0)
+   if (intf_cfg.dsc != 0) {
 cmd_mode_cfg.data_compress = true;
+   cmd_mode_cfg.wide_bus_en = 
dpu_encoder_is_widebus_enabled(phys_enc->parent);
+   }


This embeds the knowledge that a wide bus can only be enabled when DSC
is in use. Please move the wide_bus_en assignment out of conditional
code.


Wide bus for DSI will only be enabled if DSC is enabled, so this is 
technically not wrong, as DP will use the video mode path.






 if (phys_enc->hw_intf->ops.program_intf_cmd_cfg)
 phys_enc->hw_intf->ops.program_intf_cmd_cfg(phys_enc->hw_intf, 
&cmd_mode_cfg);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
index 8ec6505d9e78..dc6f3febb574 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
@@ -521,6 +521,9 @@ static void dpu_hw_intf_program_intf_cmd_cfg(struct 
dpu_hw_intf *ctx,


This function is only enabled for DPU >= 7.0, while IIRC wide bus can
be enabled even for some of the earlier chipsets.


The command mode path is only called for DSI, which only supports wide 
bus for DPU 7.0+.





 if (cmd_mode_cfg->data_compress)
 intf_cfg2 |= INTF_CFG2_DCE_DATA_COMPRESS;

+   if (cmd_mode_cfg->wide_bus_en)
+   intf_cfg2 |= INTF_CFG2_DATABUS_WIDEN;
+
 DPU_REG_WRITE(&ctx->hw, INTF_CONFIG2, intf_cfg2);
  }

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
index 77f80531782b..c539025c418b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
@@ -50,6 +50,7 @@ struct dpu_hw_intf_status {

  struct dpu_hw_intf_cmd_mode_cfg {
 u8 data_compress;   /* enable data compress between dpu and dsi */
+   u8 wide_bus_en; /* enable databus widen mode */
  };

  /**
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9d9d5e009163..e4f706b16aad 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -344,6 +344,7 @@ void msm_dsi_snapshot(struct msm_disp_state *disp_state, 
struct msm_dsi *msm_dsi
  bool msm_dsi_is_cmd_mode(struct msm_dsi *msm_dsi);
  bool msm_dsi_is_bonded_dsi(struct msm_dsi *msm_dsi);
  bool msm_dsi_is_master_dsi(struct msm_dsi *msm_dsi);
+bool msm_dsi_is_widebus_enabled(struct msm_dsi *msm_dsi);
  struct drm_ds

[PATCH v2 2/2] drm/v3d: Expose the total GPU usage stats on sysfs

2023-08-07 Thread Maíra Canal
The previous patch exposed the accumulated amount of active time per
client for each V3D queue. But this doesn't provide a global notion of
the GPU usage.

Therefore, provide the accumulated amount of active time for each V3D
queue (BIN, RENDER, CSD, TFU and CACHE_CLEAN), considering all the jobs
submitted to the queue, independent of the client.

This data is exposed through the sysfs interface, so that, if the
interface is queried at two different points in time, the usage
percentage of each of the queues can be calculated.
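The two-sample calculation described above amounts to a delta of busy time over a delta of wall time; a sketch (hypothetical helper, not part of the patch):

```python
def queue_usage_percent(t0_ns, busy0_ns, t1_ns, busy1_ns):
    """Busy percentage of one queue between two sysfs samples.

    (t, busy) pairs are the sample timestamp and the queue's accumulated
    enabled_ns at that moment.
    """
    elapsed = t1_ns - t0_ns
    if elapsed <= 0:
        return 0.0
    return 100.0 * (busy1_ns - busy0_ns) / elapsed

# 250 ms of queue activity observed over a 1 s sampling window:
print(queue_usage_percent(0, 0, 1_000_000_000, 250_000_000))  # 25.0
```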

Co-developed-by: Jose Maria Casanova Crespo 
Signed-off-by: Jose Maria Casanova Crespo 
Signed-off-by: Maíra Canal 
---
 drivers/gpu/drm/v3d/Makefile|   3 +-
 drivers/gpu/drm/v3d/v3d_drv.c   |   9 +++
 drivers/gpu/drm/v3d/v3d_drv.h   |   7 +++
 drivers/gpu/drm/v3d/v3d_gem.c   |   5 +-
 drivers/gpu/drm/v3d/v3d_irq.c   |  24 ++--
 drivers/gpu/drm/v3d/v3d_sched.c |  13 +++-
 drivers/gpu/drm/v3d/v3d_sysfs.c | 101 
 7 files changed, 155 insertions(+), 7 deletions(-)
 create mode 100644 drivers/gpu/drm/v3d/v3d_sysfs.c

diff --git a/drivers/gpu/drm/v3d/Makefile b/drivers/gpu/drm/v3d/Makefile
index e8b314137020..4b21b20e4998 100644
--- a/drivers/gpu/drm/v3d/Makefile
+++ b/drivers/gpu/drm/v3d/Makefile
@@ -11,7 +11,8 @@ v3d-y := \
v3d_mmu.o \
v3d_perfmon.o \
v3d_trace_points.o \
-   v3d_sched.o
+   v3d_sched.o \
+   v3d_sysfs.o
 
 v3d-$(CONFIG_DEBUG_FS) += v3d_debugfs.o
 
diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
index ca65c707da03..7fc84a2525ca 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.c
+++ b/drivers/gpu/drm/v3d/v3d_drv.c
@@ -309,8 +309,14 @@ static int v3d_platform_drm_probe(struct platform_device 
*pdev)
if (ret)
goto irq_disable;
 
+   ret = v3d_sysfs_init(dev);
+   if (ret)
+   goto drm_unregister;
+
return 0;
 
+drm_unregister:
+   drm_dev_unregister(drm);
 irq_disable:
v3d_irq_disable(v3d);
 gem_destroy:
@@ -324,6 +330,9 @@ static void v3d_platform_drm_remove(struct platform_device 
*pdev)
 {
struct drm_device *drm = platform_get_drvdata(pdev);
struct v3d_dev *v3d = to_v3d_dev(drm);
+   struct device *dev = &pdev->dev;
+
+   v3d_sysfs_destroy(dev);
 
drm_dev_unregister(drm);
 
diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
index 7f2897e5b2cb..c8f95a91af46 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.h
+++ b/drivers/gpu/drm/v3d/v3d_drv.h
@@ -38,6 +38,9 @@ struct v3d_queue_state {
 
u64 fence_context;
u64 emit_seqno;
+
+   u64 start_ns;
+   u64 enabled_ns;
 };
 
 /* Performance monitor object. The perform lifetime is controlled by userspace
@@ -441,3 +444,7 @@ int v3d_perfmon_destroy_ioctl(struct drm_device *dev, void 
*data,
  struct drm_file *file_priv);
 int v3d_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
 struct drm_file *file_priv);
+
+/* v3d_sysfs.c */
+int v3d_sysfs_init(struct device *dev);
+void v3d_sysfs_destroy(struct device *dev);
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index 40ed0c7c3fad..630ea2db8f8f 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -1014,8 +1014,11 @@ v3d_gem_init(struct drm_device *dev)
u32 pt_size = 4096 * 1024;
int ret, i;
 
-   for (i = 0; i < V3D_MAX_QUEUES; i++)
+   for (i = 0; i < V3D_MAX_QUEUES; i++) {
v3d->queue[i].fence_context = dma_fence_context_alloc(1);
+   v3d->queue[i].start_ns = 0;
+   v3d->queue[i].enabled_ns = 0;
+   }
 
spin_lock_init(&v3d->mm_lock);
spin_lock_init(&v3d->job_lock);
diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
index c898800ae9c2..be4ff7559309 100644
--- a/drivers/gpu/drm/v3d/v3d_irq.c
+++ b/drivers/gpu/drm/v3d/v3d_irq.c
@@ -102,9 +102,13 @@ v3d_irq(int irq, void *arg)
struct v3d_fence *fence =
to_v3d_fence(v3d->bin_job->base.irq_fence);
struct v3d_file_priv *file = 
v3d->bin_job->base.file->driver_priv;
+   u64 runtime = local_clock() - file->start_ns[V3D_BIN];
 
-   file->enabled_ns[V3D_BIN] += local_clock() - 
file->start_ns[V3D_BIN];
file->start_ns[V3D_BIN] = 0;
+   v3d->queue[V3D_BIN].start_ns = 0;
+
+   file->enabled_ns[V3D_BIN] += runtime;
+   v3d->queue[V3D_BIN].enabled_ns += runtime;
 
trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
dma_fence_signal(&fence->base);
@@ -115,9 +119,13 @@ v3d_irq(int irq, void *arg)
struct v3d_fence *fence =
to_v3d_fence(v3d->render_job->base.irq_fence);
struct v3d_file_priv *file = 
v3d->render_job->base.file->driver_priv;
+   u64 runtime = local_clock() - file

[PATCH v2 1/2] drm/v3d: Implement show_fdinfo() callback for GPU usage stats

2023-08-07 Thread Maíra Canal
This patch exposes the accumulated amount of active time per client
through the fdinfo infrastructure. The amount of active time is exposed
for each V3D queue: BIN, RENDER, CSD, TFU and CACHE_CLEAN.

In order to calculate the amount of active time per client, a CPU clock
is used through the function local_clock(). The point where a job has
started is marked and is later compared with the time when the job
finished.
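The start/accumulate bookkeeping described here can be modeled in a few lines (a toy illustration of the start_ns/enabled_ns fields, not the driver code):

```python
class QueueStats:
    """Per-queue active-time accounting: mark job start, accumulate on done."""

    def __init__(self):
        self.start_ns = 0     # 0 means "no job currently active"
        self.enabled_ns = 0   # total accumulated active time

    def job_start(self, now_ns):
        self.start_ns = now_ns

    def job_done(self, now_ns):
        self.enabled_ns += now_ns - self.start_ns
        self.start_ns = 0

q = QueueStats()
q.job_start(100)
q.job_done(350)
print(q.enabled_ns)   # 250
```

When start_ns is non-zero at query time, the in-flight runtime (now minus start_ns) is added on top of enabled_ns, which is what the fdinfo callback in the patch does.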

Moreover, the number of jobs submitted to each queue is also exposed on
fdinfo through the identifier "v3d-jobs-<queue>".

Co-developed-by: Jose Maria Casanova Crespo 
Signed-off-by: Jose Maria Casanova Crespo 
Signed-off-by: Maíra Canal 
---
 drivers/gpu/drm/v3d/v3d_drv.c   | 30 +-
 drivers/gpu/drm/v3d/v3d_drv.h   | 23 +++
 drivers/gpu/drm/v3d/v3d_gem.c   |  1 +
 drivers/gpu/drm/v3d/v3d_irq.c   | 17 +
 drivers/gpu/drm/v3d/v3d_sched.c | 24 
 5 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
index ffbbe9d527d3..ca65c707da03 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.c
+++ b/drivers/gpu/drm/v3d/v3d_drv.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -111,6 +112,10 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
v3d_priv->v3d = v3d;
 
for (i = 0; i < V3D_MAX_QUEUES; i++) {
+   v3d_priv->enabled_ns[i] = 0;
+   v3d_priv->start_ns[i] = 0;
+   v3d_priv->jobs_sent[i] = 0;
+
sched = &v3d->queue[i].sched;
drm_sched_entity_init(&v3d_priv->sched_entity[i],
  DRM_SCHED_PRIORITY_NORMAL, &sched,
@@ -136,7 +141,29 @@ v3d_postclose(struct drm_device *dev, struct drm_file 
*file)
kfree(v3d_priv);
 }
 
-DEFINE_DRM_GEM_FOPS(v3d_drm_fops);
+static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file)
+{
+   struct v3d_file_priv *file_priv = file->driver_priv;
+   u64 timestamp = local_clock();
+   enum v3d_queue queue;
+
+   for (queue = 0; queue < V3D_MAX_QUEUES; queue++) {
+   drm_printf(p, "drm-engine-%s: \t%llu ns\n",
+  v3d_queue_to_string(queue),
+  file_priv->start_ns[queue] ? 
file_priv->enabled_ns[queue]
+ + timestamp - 
file_priv->start_ns[queue]
+ : 
file_priv->enabled_ns[queue]);
+
+   drm_printf(p, "v3d-jobs-%s: \t%llu jobs\n",
+  v3d_queue_to_string(queue), 
file_priv->jobs_sent[queue]);
+   }
+}
+
+static const struct file_operations v3d_drm_fops = {
+   .owner = THIS_MODULE,
+   DRM_GEM_FOPS,
+   .show_fdinfo = drm_show_fdinfo,
+};
 
 /* DRM_AUTH is required on SUBMIT_CL for now, while we don't have GMP
  * protection between clients.  Note that render nodes would be
@@ -176,6 +203,7 @@ static const struct drm_driver v3d_drm_driver = {
.ioctls = v3d_drm_ioctls,
.num_ioctls = ARRAY_SIZE(v3d_drm_ioctls),
.fops = &v3d_drm_fops,
+   .show_fdinfo = v3d_show_fdinfo,
 
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
index 7f664a4b2a75..7f2897e5b2cb 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.h
+++ b/drivers/gpu/drm/v3d/v3d_drv.h
@@ -21,6 +21,18 @@ struct reset_control;
 
 #define V3D_MAX_QUEUES (V3D_CACHE_CLEAN + 1)
 
+static inline char *v3d_queue_to_string(enum v3d_queue queue)
+{
+   switch (queue) {
+   case V3D_BIN: return "bin";
+   case V3D_RENDER: return "render";
+   case V3D_TFU: return "tfu";
+   case V3D_CSD: return "csd";
+   case V3D_CACHE_CLEAN: return "cache_clean";
+   }
+   return "UNKNOWN";
+}
+
 struct v3d_queue_state {
struct drm_gpu_scheduler sched;
 
@@ -167,6 +179,12 @@ struct v3d_file_priv {
} perfmon;
 
struct drm_sched_entity sched_entity[V3D_MAX_QUEUES];
+
+   u64 start_ns[V3D_MAX_QUEUES];
+
+   u64 enabled_ns[V3D_MAX_QUEUES];
+
+   u64 jobs_sent[V3D_MAX_QUEUES];
 };
 
 struct v3d_bo {
@@ -238,6 +256,11 @@ struct v3d_job {
 */
struct v3d_perfmon *perfmon;
 
+   /* File descriptor of the process that submitted the job that could be 
used
+* for collecting stats by process of GPU usage.
+*/
+   struct drm_file *file;
+
/* Callback for the freeing of the job on refcount going to 0. */
void (*free)(struct kref *ref);
 };
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index 2e94ce788c71..40ed0c7c3fad 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -415,6 +415,7 @@ v3d_job_init(struct v3d_dev *v3d, struct drm_file 
*file_priv,
job = *container;
job->v3d = v3d;
job->free = free;
+   job->f

[PATCH v2 0/2] drm/v3d: Expose GPU usage stats

2023-08-07 Thread Maíra Canal
This patchset exposes GPU usage stats both globally and per-file
descriptor.

The first patch exposes the accumulated amount of active time per client
through the fdinfo infrastructure. The amount of active time is exposed
for each V3D queue. Moreover, it exposes the number of jobs submitted to
each queue.

The second patch exposes the accumulated amount of active time for each
V3D queue, independent of the client. This data is exposed through the
sysfs interface.

With these patches, it is possible to calculate the GPU usage percentage
per queue globally and per-file descriptor.

* Example fdinfo output:

$ cat /proc/1140/fdinfo/4
pos:0
flags:  0242
mnt_id: 24
ino:209
drm-driver: v3d
drm-client-id:  44
drm-engine-bin: 1661076898 ns
v3d-jobs-bin:   19576 jobs
drm-engine-render:  31469427170 ns
v3d-jobs-render:19575 jobs
drm-engine-tfu: 5002964 ns
v3d-jobs-tfu:   13 jobs
drm-engine-csd: 188038329691 ns
v3d-jobs-csd:   250393 jobs
drm-engine-cache_clean: 27736024038 ns
v3d-jobs-cache_clean:   250392 jobs
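Two fdinfo samples like the one above are enough to derive per-queue utilization; a sketch of pulling the busy times out of the drm-engine-* lines (hypothetical parser, not part of the patchset):

```python
def parse_engine_times(fdinfo_text):
    """Extract per-queue busy time (ns) from drm-engine-* fdinfo lines."""
    stats = {}
    for line in fdinfo_text.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key.startswith("drm-engine-"):
            stats[key[len("drm-engine-"):]] = int(value.split()[0])
    return stats

sample = "drm-engine-bin:\t1661076898 ns\ndrm-engine-render:\t31469427170 ns"
print(parse_engine_times(sample))
# {'bin': 1661076898, 'render': 31469427170}
```

Diffing two such dicts over a known interval gives the per-queue percentages that gputop renders below.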

* Example gputop output:

DRM minor 128
 PID bin   render   tfucsd  
  cache_clean NAME
1140 |▎||██▋   || ||█▍  
 ||█▋   | computecloth
1158 |▍||▉ || ||
 || | gears
1002 |▏||█▎|| ||
 || | chromium-browse

Best Regards,
- Maíra
---

v1 -> v2: 
https://lore.kernel.org/dri-devel/20230727142929.1275149-1-mca...@igalia.com/T/

* Use sysfs to expose global GPU stats (Tvrtko Ursulin)

Maíra Canal (2):
  drm/v3d: Implement show_fdinfo() callback for GPU usage stats
  drm/v3d: Expose the total GPU usage stats on sysfs

 drivers/gpu/drm/v3d/Makefile|   3 +-
 drivers/gpu/drm/v3d/v3d_drv.c   |  39 +++-
 drivers/gpu/drm/v3d/v3d_drv.h   |  30 ++
 drivers/gpu/drm/v3d/v3d_gem.c   |   6 +-
 drivers/gpu/drm/v3d/v3d_irq.c   |  33 +++
 drivers/gpu/drm/v3d/v3d_sched.c |  35 +++
 drivers/gpu/drm/v3d/v3d_sysfs.c | 101 
 7 files changed, 244 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/v3d/v3d_sysfs.c

--
2.41.0



Re: [PATCH -next] drm/mcde: remove redundant of_match_ptr

2023-08-07 Thread Linus Walleij
On Mon, Jul 31, 2023 at 3:19 PM Zhu Wang  wrote:

> The driver depends on CONFIG_OF, so it is not necessary to use
> of_match_ptr here.
>
> Even for drivers that do not depend on CONFIG_OF, it's almost always
> better to leave out the of_match_ptr(), since the only thing it can
> possibly do is to save a few bytes of .text if a driver can be used both
> with and without it. Hence we remove of_match_ptr.
>
> Signed-off-by: Zhu Wang 

Patch applied!

Yours,
Linus Walleij


Re: [PATCH -next] drm/tve200: remove redundant of_match_ptr

2023-08-07 Thread Linus Walleij
On Mon, Jul 31, 2023 at 2:43 PM Zhu Wang  wrote:

> The driver depends on CONFIG_OF, so it is not necessary to use
> of_match_ptr here.
>
> Even for drivers that do not depend on CONFIG_OF, it's almost always
> better to leave out the of_match_ptr(), since the only thing it can
> possibly do is to save a few bytes of .text if a driver can be used both
> with and without it. Hence we remove of_match_ptr.
>
> Signed-off-by: Zhu Wang 

Patch applied!

Yours,
Linus Walleij


Re: [PATCH] drm/nouveau/disp: Revert a NULL check inside nouveau_connector_get_modes

2023-08-07 Thread Lyude Paul
Ugh, thanks for catching this!

Reviewed-by: Lyude Paul 

On Sat, 2023-08-05 at 12:18 +0200, Karol Herbst wrote:
> The original commit adding that check tried to protect the kernel against
> a potential invalid NULL pointer access.
> 
> However we call nouveau_connector_detect_depth once without a native_mode
> set on purpose for non-LVDS connectors, and this broke DP support in a few
> cases.
> 
> Cc: Olaf Skibbe 
> Cc: Lyude Paul 
> Closes: https://gitlab.freedesktop.org/drm/nouveau/-/issues/238
> Closes: https://gitlab.freedesktop.org/drm/nouveau/-/issues/245
> Fixes: 20a2ce87fbaf8 ("drm/nouveau/dp: check for NULL 
> nv_connector->native_mode")
> Signed-off-by: Karol Herbst 
> ---
>  drivers/gpu/drm/nouveau/nouveau_connector.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c 
> b/drivers/gpu/drm/nouveau/nouveau_connector.c
> index f75c6f09dd2af..a2e0033e8a260 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_connector.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
> @@ -967,7 +967,7 @@ nouveau_connector_get_modes(struct drm_connector 
> *connector)
>   /* Determine display colour depth for everything except LVDS now,
>* DP requires this before mode_valid() is called.
>*/
> - if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && 
> nv_connector->native_mode)
> + if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
>   nouveau_connector_detect_depth(connector);
>  
>   /* Find the native mode if this is a digital panel, if we didn't

-- 
Cheers,
 Lyude Paul (she/her)
 Software Engineer at Red Hat



Re: [Patch v2 2/3] drm/mst: Refactor the flow for payload allocation/removement

2023-08-07 Thread Lyude Paul
Oo! This is a wonderful idea so far - keeping track of the status of
allocations like this solves a lot of problems, especially with regards to the
fact this actually seems to make it possible for us to have much better
handling of payload failures in drivers - especially in situations like
suspend/resume. The naming changes here are awesome too.

I think this patch is good as far as I can tell review-wise! I haven't been
able to test it quite yet but I'll do it asap.

On Mon, 2023-08-07 at 10:56 +0800, Wayne Lin wrote:
> [Why]
> Today, the allocation/deallocation steps and status are a bit unclear.
> 
> For instance, payload->vc_start_slot = -1 stands for "the failure of
> updating the DPCD payload ID table" but can also mean "payload is not
> allocated yet". These two cases should be handled differently, hence it
> is better to distinguish them for clarity.
> 
> [How]
> Define enumeration - ALLOCATION_LOCAL, ALLOCATION_DFP and ALLOCATION_REMOTE
> to distinguish different allocation status. Adjust the code to handle
> different status accordingly for better understanding the sequence of
> payload allocation and payload removement.
> 
> For payload creation, the procedure should look like this:
> DRM part 1:
> * step 1 - update sw mst mgr variables to add a new payload
> * step 2 - add payload at immediate DFP DPCD payload table
> 
> Driver:
> * Add new payload in HW and sync up with DFP by sending ACT
> 
> DRM Part 2:
> * Send ALLOCATE_PAYLOAD sideband message to allocate bandwidth along the
>   virtual channel.
> 
> And as for payload removal, the procedure should look like this:
> DRM part 1:
> * step 1 - Send ALLOCATE_PAYLOAD sideband message to release bandwidth
>along the virtual channel
> * step 2 - Clear payload allocation at immediate DFP DPCD payload table
> 
> Driver:
> * Remove the payload in HW and sync up with DFP by sending ACT
> 
> DRM part 2:
> * update sw mst mgr variables to remove the payload
> 
> Note that it's fine to fail when communicate with the branch device
> connected at the immediate downstream-facing port, but updating variables of
> SW mst mgr and HW configuration should be conducted anyway. That's because
> it's under commit_tail and we need to complete the HW programming.

yay!
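The creation sequence quoted above maps naturally onto the proposed ALLOCATION_* states; a toy model of the step-by-step progression (illustrative Python only, not the kernel code):

```python
from enum import Enum

class PayloadState(Enum):
    """Mirrors the proposed ALLOCATION_* enumeration (illustrative)."""
    NONE = 0      # nothing allocated yet
    LOCAL = 1     # sw mst mgr variables updated
    DFP = 2       # DPCD payload table written at the immediate DFP
    REMOTE = 3    # ALLOCATE_PAYLOAD sideband message sent

def allocate(step_results):
    """Advance through the creation sequence, stopping at the first failure.

    step_results: success/failure of (local update, DFP DPCD write,
    sideband allocation), in order.
    """
    state = PayloadState.NONE
    for nxt, ok in zip((PayloadState.LOCAL, PayloadState.DFP,
                        PayloadState.REMOTE), step_results):
        if not ok:
            break
        state = nxt
    return state

assert allocate([True, True, True]) is PayloadState.REMOTE
# Failing to talk to the DFP still leaves the local allocation recorded:
assert allocate([True, False, True]) is PayloadState.LOCAL
```

Tracking how far allocation got is what lets removal run the steps in reverse and skip the ones that never happened.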

> 
> Changes since v1:
> * Remove the set-but-unused variable 'old_payload' in function
>   'nv50_msto_prepare'. Caught by kernel test robot 
> 
> Signed-off-by: Wayne Lin 
> ---
>  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c |  20 ++-
>  drivers/gpu/drm/display/drm_dp_mst_topology.c | 159 +++---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c   |  18 +-
>  drivers/gpu/drm/nouveau/dispnv50/disp.c   |  21 +--
>  include/drm/display/drm_dp_mst_helper.h   |  23 ++-
>  5 files changed, 153 insertions(+), 88 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> index d9a482908380..9ad509279b0a 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> @@ -219,7 +219,7 @@ static void dm_helpers_construct_old_payload(
>   /* Set correct time_slots/PBN of old payload.
>* other fields (delete & dsc_enabled) in
>* struct drm_dp_mst_atomic_payload are don't care fields
> -  * while calling drm_dp_remove_payload()
> +  * while calling drm_dp_remove_payload_part2()
>*/
>   for (i = 0; i < current_link_table.stream_count; i++) {
>   dc_alloc =
> @@ -262,13 +262,12 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
>  
>   mst_mgr = &aconnector->mst_root->mst_mgr;
>   mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
> -
> - /* It's OK for this to fail */
>   new_payload = drm_atomic_get_mst_payload_state(mst_state, 
> aconnector->mst_output_port);
>  
>   if (enable) {
>   target_payload = new_payload;
>  
> + /* It's OK for this to fail */
>   drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload);
>   } else {
>   /* construct old payload by VCPI*/
> @@ -276,7 +275,7 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
>   new_payload, &old_payload);
>   target_payload = &old_payload;
>  
> - drm_dp_remove_payload(mst_mgr, mst_state, &old_payload, 
> new_payload);
> + drm_dp_remove_payload_part1(mst_mgr, mst_state, new_payload);
>   }
>  
>   /* mst_mgr->->payloads are VC payload notify MST branch using DPCD or
> @@ -342,7 +341,7 @@ bool dm_helpers_dp_mst_send_payload_allocation(
>   struct amdgpu_dm_connector *aconnector;
>   struct drm_dp_mst_topology_state *mst_state;
>   struct drm_dp_mst_topology_mgr *mst_mgr;
> - struct drm_dp_mst_atomic_payload *payload;
> + struct drm_dp_mst_atomic_payload *new_payload, *old_payload;
> 

Re: [PATCH -next] drm: bridge: dw_hdmi: clean up some inconsistent indentings

2023-08-07 Thread Robert Foss
On Mon, Aug 7, 2023 at 2:43 AM Yang Li  wrote:
>
> drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c:332 dw_hdmi_cec_suspend() warn: 
> inconsistent indenting
>
> Reported-by: Abaci Robot 
> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6101
> Signed-off-by: Yang Li 
> ---
>  drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c 
> b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c
> index be21c11de1f2..14640b219dfa 100644
> --- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c
> +++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi-cec.c
> @@ -329,9 +329,9 @@ static int __maybe_unused dw_hdmi_cec_suspend(struct 
> device *dev)
> struct dw_hdmi_cec *cec = dev_get_drvdata(dev);
>
> /* store interrupt status/mask registers */
> -cec->regs_polarity = dw_hdmi_read(cec, HDMI_CEC_POLARITY);
> -cec->regs_mask = dw_hdmi_read(cec, HDMI_CEC_MASK);
> -cec->regs_mute_stat0 = dw_hdmi_read(cec, HDMI_IH_MUTE_CEC_STAT0);
> +   cec->regs_polarity = dw_hdmi_read(cec, HDMI_CEC_POLARITY);
> +   cec->regs_mask = dw_hdmi_read(cec, HDMI_CEC_MASK);
> +   cec->regs_mute_stat0 = dw_hdmi_read(cec, HDMI_IH_MUTE_CEC_STAT0);
>
> return 0;
>  }

NAK

The value of maintaining the git blame history is higher than
that of fixing the inconsistent whitespace.


Re: [Intel-gfx] [PATCH] drm/i915/guc: Fix potential null pointer deref in GuC 'steal id' test

2023-08-07 Thread John Harrison

On 8/3/2023 06:28, Andi Shyti wrote:

Hi John,

On Wed, Aug 02, 2023 at 11:49:40AM -0700, john.c.harri...@intel.com wrote:

From: John Harrison 

It was noticed that if the very first 'stealing' request failed to
create for some reason then the 'steal all ids' loop would immediately
exit with 'last' still being NULL. The test would attempt to continue
but using a null pointer. Fix that by aborting the test if it fails to
create any requests at all.

Signed-off-by: John Harrison 
---
  drivers/gpu/drm/i915/gt/uc/selftest_guc.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c 
b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
index 1fd760539f77b..bfb72143566f6 100644
--- a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
@@ -204,9 +204,9 @@ static int intel_guc_steal_guc_ids(void *arg)
if (IS_ERR(rq)) {
ret = PTR_ERR(rq);
rq = NULL;
-   if (ret != -EAGAIN) {
-   guc_err(guc, "Failed to create request %d: 
%pe\n",
-   context_index, ERR_PTR(ret));
+   if ((ret != -EAGAIN) || !last) {

isn't last always NULL here?

Andi
No, only on the first pass around the loop. When a request is 
successfully created, the else clause below assigns last to that new 
request. So if the failure to create only happens on pass 2 or later, 
last will be non-null, which is the whole point of the code. It keeps 
creating all the contexts/requests that it can until it runs out of 
resources and gets an EAGAIN failure, at which point last will be 
pointing to the last successful creation and the test continues to the 
next part of actually stealing an id.


But if the EAGAIN failure happens on the first pass then last will be 
null and it is not safe/valid to proceed so it needs to abort. And if 
anything other than EAGAIN is returned then something has gone wrong and 
it doesn't matter what last is set to, it needs to abort regardless.


John.





+   guc_err(guc, "Failed to create %srequest %d: 
%pe\n",
+   last ? "" : "first ", context_index, 
ERR_PTR(ret));
goto err_spin_rq;
}
} else {
--
2.39.1




Re: [PATCH drm-misc-next v9 06/11] drm/nouveau: fence: separate fence alloc and emit

2023-08-07 Thread Danilo Krummrich

Hi Christian,

On 8/7/23 20:07, Christian König wrote:

Am 03.08.23 um 18:52 schrieb Danilo Krummrich:

The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU schedulers run_job() callback) we
need to separate fence allocation and fence emitting.


At least from the description that sounds like it might be illegal. 
Daniel, can you take a look as well?


What exactly are you doing here?


I'm basically doing exactly the same as amdgpu_fence_emit() does in 
amdgpu_ib_schedule() called by amdgpu_job_run().


The difference - and this is what this patch is for - is that I separate 
the fence allocation from emitting the fence, such that the fence 
structure is allocated before the job is submitted to the GPU scheduler. 
amdgpu solves this with GFP_ATOMIC within amdgpu_fence_emit() to 
allocate the fence structure in this case.


- Danilo



Regards,
Christian.



Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/nouveau/dispnv04/crtc.c |  9 -
  drivers/gpu/drm/nouveau/nouveau_bo.c    | 52 +++--
  drivers/gpu/drm/nouveau/nouveau_chan.c  |  6 ++-
  drivers/gpu/drm/nouveau/nouveau_dmem.c  |  9 +++--
  drivers/gpu/drm/nouveau/nouveau_fence.c | 16 +++-
  drivers/gpu/drm/nouveau/nouveau_fence.h |  3 +-
  drivers/gpu/drm/nouveau/nouveau_gem.c   |  5 ++-
  7 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c 
b/drivers/gpu/drm/nouveau/dispnv04/crtc.c

index a6f2e681bde9..a34924523133 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -1122,11 +1122,18 @@ nv04_page_flip_emit(struct nouveau_channel *chan,
  PUSH_NVSQ(push, NV_SW, NV_SW_PAGE_FLIP, 0x);
  PUSH_KICK(push);
-    ret = nouveau_fence_new(chan, false, pfence);
+    ret = nouveau_fence_new(pfence);
  if (ret)
  goto fail;
+    ret = nouveau_fence_emit(*pfence, chan);
+    if (ret)
+    goto fail_fence_unref;
+
  return 0;
+
+fail_fence_unref:
+    nouveau_fence_unref(pfence);
  fail:
  spin_lock_irqsave(&dev->event_lock, flags);
  list_del(&s->head);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c

index 057bc995f19b..e9cbbf594e6f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -820,29 +820,39 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object 
*bo, int evict,

  mutex_lock(&cli->mutex);
  else
  mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
+
  ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, 
ctx->interruptible);

-    if (ret == 0) {
-    ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
-    if (ret == 0) {
-    ret = nouveau_fence_new(chan, false, &fence);
-    if (ret == 0) {
-    /* TODO: figure out a better solution here
- *
- * wait on the fence here explicitly as going through
- * ttm_bo_move_accel_cleanup somehow doesn't seem to 
do it.

- *
- * Without this the operation can timeout and we'll 
fallback to a
- * software copy, which might take several minutes to 
finish.

- */
-    nouveau_fence_wait(fence, false, false);
-    ret = ttm_bo_move_accel_cleanup(bo,
-    &fence->base,
-    evict, false,
-    new_reg);
-    nouveau_fence_unref(&fence);
-    }
-    }
+    if (ret)
+    goto out_unlock;
+
+    ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
+    if (ret)
+    goto out_unlock;
+
+    ret = nouveau_fence_new(&fence);
+    if (ret)
+    goto out_unlock;
+
+    ret = nouveau_fence_emit(fence, chan);
+    if (ret) {
+    nouveau_fence_unref(&fence);
+    goto out_unlock;
  }
+
+    /* TODO: figure out a better solution here
+ *
+ * wait on the fence here explicitly as going through
+ * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
+ *
+ * Without this the operation can timeout and we'll fallback to a
+ * software copy, which might take several minutes to finish.
+ */
+    nouveau_fence_wait(fence, false, false);
+    ret = ttm_bo_move_accel_cleanup(bo, &fence->base, evict, false,
+    new_reg);
+    nouveau_fence_unref(&fence);
+
+out_unlock:
  mutex_unlock(&cli->mutex);
  return ret;
  }
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c 
b/drivers/gpu/drm/nouveau/nouveau_chan.c

index 6d639314250a..f69be4c8f9f2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -62,9 +62,11 @@ nouveau_channel_idle(struct nouveau_channel *chan)
  struct nouveau_fence *fence = NULL;
  int ret;
-    ret

Re: [PATCH drm-misc-next v9 06/11] drm/nouveau: fence: separate fence alloc and emit

2023-08-07 Thread Christian König

Am 03.08.23 um 18:52 schrieb Danilo Krummrich:

The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU schedulers run_job() callback) we
need to separate fence allocation and fence emitting.


At least from the description that sounds like it might be illegal. 
Daniel, can you take a look as well?


What exactly are you doing here?

Regards,
Christian.



Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/nouveau/dispnv04/crtc.c |  9 -
  drivers/gpu/drm/nouveau/nouveau_bo.c| 52 +++--
  drivers/gpu/drm/nouveau/nouveau_chan.c  |  6 ++-
  drivers/gpu/drm/nouveau/nouveau_dmem.c  |  9 +++--
  drivers/gpu/drm/nouveau/nouveau_fence.c | 16 +++-
  drivers/gpu/drm/nouveau/nouveau_fence.h |  3 +-
  drivers/gpu/drm/nouveau/nouveau_gem.c   |  5 ++-
  7 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c 
b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
index a6f2e681bde9..a34924523133 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -1122,11 +1122,18 @@ nv04_page_flip_emit(struct nouveau_channel *chan,
PUSH_NVSQ(push, NV_SW, NV_SW_PAGE_FLIP, 0x);
PUSH_KICK(push);
  
-	ret = nouveau_fence_new(chan, false, pfence);

+   ret = nouveau_fence_new(pfence);
if (ret)
goto fail;
  
+	ret = nouveau_fence_emit(*pfence, chan);

+   if (ret)
+   goto fail_fence_unref;
+
return 0;
+
+fail_fence_unref:
+   nouveau_fence_unref(pfence);
  fail:
spin_lock_irqsave(&dev->event_lock, flags);
list_del(&s->head);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 057bc995f19b..e9cbbf594e6f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -820,29 +820,39 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int 
evict,
mutex_lock(&cli->mutex);
else
mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
+
ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, 
ctx->interruptible);
-   if (ret == 0) {
-   ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
-   if (ret == 0) {
-   ret = nouveau_fence_new(chan, false, &fence);
-   if (ret == 0) {
-   /* TODO: figure out a better solution here
-*
-* wait on the fence here explicitly as going 
through
-* ttm_bo_move_accel_cleanup somehow doesn't 
seem to do it.
-*
-* Without this the operation can timeout and 
we'll fallback to a
-* software copy, which might take several 
minutes to finish.
-*/
-   nouveau_fence_wait(fence, false, false);
-   ret = ttm_bo_move_accel_cleanup(bo,
-   &fence->base,
-   evict, false,
-   new_reg);
-   nouveau_fence_unref(&fence);
-   }
-   }
+   if (ret)
+   goto out_unlock;
+
+   ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
+   if (ret)
+   goto out_unlock;
+
+   ret = nouveau_fence_new(&fence);
+   if (ret)
+   goto out_unlock;
+
+   ret = nouveau_fence_emit(fence, chan);
+   if (ret) {
+   nouveau_fence_unref(&fence);
+   goto out_unlock;
}
+
+   /* TODO: figure out a better solution here
+*
+* wait on the fence here explicitly as going through
+* ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
+*
+* Without this the operation can timeout and we'll fallback to a
+* software copy, which might take several minutes to finish.
+*/
+   nouveau_fence_wait(fence, false, false);
+   ret = ttm_bo_move_accel_cleanup(bo, &fence->base, evict, false,
+   new_reg);
+   nouveau_fence_unref(&fence);
+
+out_unlock:
mutex_unlock(&cli->mutex);
return ret;
  }
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c 
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index 6d639314250a..f69be4c8f9f2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -62,9 +62,11 @@ nouveau_channel_idle(struct nouveau_channel *chan)
struct nouveau_fence *fence = NULL;
int ret;
  
-		ret = nouveau_fence_new(chan, fals

Re: [Intel-gfx] [PATCH v1 3/3] drm/i915/gt: Timeout when waiting for idle in suspending

2023-08-07 Thread Rodrigo Vivi
On Wed, Aug 02, 2023 at 04:35:01PM -0700, Alan Previn wrote:
> When suspending, add a timeout when calling
> intel_gt_pm_wait_for_idle; otherwise, if we have a lost
> G2H event that holds a wakeref (which would
> indicate a bug elsewhere in the driver), we
> get to complete the suspend-resume cycle, albeit
> without all the low-power hw counters hitting
> their targets, instead of hanging in the kernel.
> 
> Signed-off-by: Alan Previn 
> ---
>  drivers/gpu/drm/i915/gt/intel_engine_cs.c |  2 +-
>  drivers/gpu/drm/i915/gt/intel_gt_pm.c |  7 ++-
>  drivers/gpu/drm/i915/gt/intel_gt_pm.h |  7 ++-
>  drivers/gpu/drm/i915/intel_wakeref.c  | 14 ++
>  drivers/gpu/drm/i915/intel_wakeref.h  |  5 +++--
>  5 files changed, 26 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
> b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> index ee15486fed0d..090438eb8682 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
> @@ -688,7 +688,7 @@ void intel_engines_release(struct intel_gt *gt)
>   if (!engine->release)
>   continue;
>  
> - intel_wakeref_wait_for_idle(&engine->wakeref);
> + intel_wakeref_wait_for_idle(&engine->wakeref, 0);
>   GEM_BUG_ON(intel_engine_pm_is_awake(engine));
>  
>   engine->release(engine);
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
> b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> index 3162d859ed68..dfe77eb3efd1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> @@ -289,6 +289,8 @@ int intel_gt_resume(struct intel_gt *gt)
>  
>  static void wait_for_suspend(struct intel_gt *gt)
>  {
> + int timeout_ms = CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT ? : 1;
> +
>   if (!intel_gt_pm_is_awake(gt)) {
>   intel_uc_suspend_prepare(>->uc);
>   return;
> @@ -305,7 +307,10 @@ static void wait_for_suspend(struct intel_gt *gt)
>   intel_uc_suspend_prepare(>->uc);
>   }
>  
> - intel_gt_pm_wait_for_idle(gt);
> + /* we are suspending, so we shouldn't be waiting forever */
> + if (intel_gt_pm_wait_timeout_for_idle(gt, timeout_ms) == -ETIME)
> + drm_warn(>->i915->drm, "Bailing from %s after %d milisec 
> timeout\n",
> +  __func__, timeout_ms);
>  }
>  
>  void intel_gt_suspend_prepare(struct intel_gt *gt)
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h 
> b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
> index 6c9a46452364..5358acc2b5b1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
> @@ -68,7 +68,12 @@ static inline void intel_gt_pm_might_put(struct intel_gt 
> *gt)
>  
>  static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
>  {
> - return intel_wakeref_wait_for_idle(>->wakeref);
> + return intel_wakeref_wait_for_idle(>->wakeref, 0);
> +}
> +
> +static inline int intel_gt_pm_wait_timeout_for_idle(struct intel_gt *gt, int 
> timeout_ms)
> +{
> + return intel_wakeref_wait_for_idle(>->wakeref, timeout_ms);
>  }
>  
>  void intel_gt_pm_init_early(struct intel_gt *gt);
> diff --git a/drivers/gpu/drm/i915/intel_wakeref.c 
> b/drivers/gpu/drm/i915/intel_wakeref.c
> index 718f2f1b6174..7e01d4cc300c 100644
> --- a/drivers/gpu/drm/i915/intel_wakeref.c
> +++ b/drivers/gpu/drm/i915/intel_wakeref.c
> @@ -111,14 +111,20 @@ void __intel_wakeref_init(struct intel_wakeref *wf,
>"wakeref.work", &key->work, 0);
>  }
>  
> -int intel_wakeref_wait_for_idle(struct intel_wakeref *wf)
> +int intel_wakeref_wait_for_idle(struct intel_wakeref *wf, int timeout_ms)
>  {
> - int err;
> + int err = 0;
>  
>   might_sleep();
>  
> - err = wait_var_event_killable(&wf->wakeref,
> -   !intel_wakeref_is_active(wf));
> + if (!timeout_ms)
> + err = wait_var_event_killable(&wf->wakeref,
> +   !intel_wakeref_is_active(wf));
> + else if (wait_var_event_timeout(&wf->wakeref,
> + !intel_wakeref_is_active(wf),
> + msecs_to_jiffies(timeout_ms)) < 1)
> + err = -ETIME;

it looks to me that -ETIMEDOUT would be a better error.

> +
>   if (err)
>   return err;
>  
> diff --git a/drivers/gpu/drm/i915/intel_wakeref.h 
> b/drivers/gpu/drm/i915/intel_wakeref.h
> index ec881b097368..6fbb7a2fb6ea 100644
> --- a/drivers/gpu/drm/i915/intel_wakeref.h
> +++ b/drivers/gpu/drm/i915/intel_wakeref.h
> @@ -251,15 +251,16 @@ __intel_wakeref_defer_park(struct intel_wakeref *wf)
>  /**
>   * intel_wakeref_wait_for_idle: Wait until the wakeref is idle
>   * @wf: the wakeref
> + * @timeout_ms: timeout to wait in milisecs, zero means forever
>   *
>   * Wait for the earlier asynchronous release of the wakeref. Note
>   * this will wait for a

Re: [PATCH RFC v5 01/10] drm: Introduce pixel_source DRM plane property

2023-08-07 Thread Jessica Zhang




On 8/4/2023 6:15 AM, Sebastian Wick wrote:

On Fri, Jul 28, 2023 at 7:03 PM Jessica Zhang  wrote:


Add support for pixel_source property to drm_plane and related
documentation. In addition, force pixel_source to
DRM_PLANE_PIXEL_SOURCE_FB in DRM_IOCTL_MODE_SETPLANE as to not break
legacy userspace.

This enum property will allow user to specify a pixel source for the
plane. Possible pixel sources will be defined in the
drm_plane_pixel_source enum.

The current possible pixel sources are DRM_PLANE_PIXEL_SOURCE_NONE and
DRM_PLANE_PIXEL_SOURCE_FB with *_PIXEL_SOURCE_FB being the default value.

Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/drm_atomic_state_helper.c |  1 +
  drivers/gpu/drm/drm_atomic_uapi.c |  4 ++
  drivers/gpu/drm/drm_blend.c   | 85 +++
  drivers/gpu/drm/drm_plane.c   |  3 ++
  include/drm/drm_blend.h   |  2 +
  include/drm/drm_plane.h   | 21 
  6 files changed, 116 insertions(+)

diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c 
b/drivers/gpu/drm/drm_atomic_state_helper.c
index 784e63d70a42..01638c51ce0a 100644
--- a/drivers/gpu/drm/drm_atomic_state_helper.c
+++ b/drivers/gpu/drm/drm_atomic_state_helper.c
@@ -252,6 +252,7 @@ void __drm_atomic_helper_plane_state_reset(struct 
drm_plane_state *plane_state,

 plane_state->alpha = DRM_BLEND_ALPHA_OPAQUE;
 plane_state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
+   plane_state->pixel_source = DRM_PLANE_PIXEL_SOURCE_FB;

 if (plane->color_encoding_property) {
 if (!drm_object_property_get_default_value(&plane->base,
diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
b/drivers/gpu/drm/drm_atomic_uapi.c
index d867e7f9f2cd..454f980e16c9 100644
--- a/drivers/gpu/drm/drm_atomic_uapi.c
+++ b/drivers/gpu/drm/drm_atomic_uapi.c
@@ -544,6 +544,8 @@ static int drm_atomic_plane_set_property(struct drm_plane 
*plane,
 state->src_w = val;
 } else if (property == config->prop_src_h) {
 state->src_h = val;
+   } else if (property == plane->pixel_source_property) {
+   state->pixel_source = val;
 } else if (property == plane->alpha_property) {
 state->alpha = val;
 } else if (property == plane->blend_mode_property) {
@@ -616,6 +618,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
 *val = state->src_w;
 } else if (property == config->prop_src_h) {
 *val = state->src_h;
+   } else if (property == plane->pixel_source_property) {
+   *val = state->pixel_source;
 } else if (property == plane->alpha_property) {
 *val = state->alpha;
 } else if (property == plane->blend_mode_property) {
diff --git a/drivers/gpu/drm/drm_blend.c b/drivers/gpu/drm/drm_blend.c
index 6e74de833466..c500310a3d09 100644
--- a/drivers/gpu/drm/drm_blend.c
+++ b/drivers/gpu/drm/drm_blend.c
@@ -185,6 +185,21 @@
   *  plane does not expose the "alpha" property, then this is
   *  assumed to be 1.0
   *
+ * pixel_source:
+ * pixel_source is set up with drm_plane_create_pixel_source_property().
+ * It is used to toggle the active source of pixel data for the plane.
+ * The plane will only display data from the set pixel_source -- any
+ * data from other sources will be ignored.
+ *
+ * Possible values:
+ *
+ * "NONE":
+ * No active pixel source.
+ * Committing with a NONE pixel source will disable the plane.
+ *
+ * "FB":
+ * Framebuffer source set by the "FB_ID" property.
+ *
   * Note that all the property extensions described here apply either to the
   * plane or the CRTC (e.g. for the background color, which currently is not
   * exposed and assumed to be black).
@@ -615,3 +630,73 @@ int drm_plane_create_blend_mode_property(struct drm_plane 
*plane,
 return 0;
  }
  EXPORT_SYMBOL(drm_plane_create_blend_mode_property);
+
+/**
+ * drm_plane_create_pixel_source_property - create a new pixel source property
+ * @plane: DRM plane
+ * @extra_sources: Bitmask of additional supported pixel_sources for the 
driver.
+ *DRM_PLANE_PIXEL_SOURCE_FB always be enabled as a supported
+ *source.
+ *
+ * This creates a new property describing the current source of pixel data for 
the
+ * plane. The pixel_source will be initialized as DRM_PLANE_PIXEL_SOURCE_FB by 
default.
+ *
+ * Drivers can set a custom default source by overriding the pixel_source 
value in
+ * drm_plane_funcs.reset()
+ *
+ * The property is exposed to userspace as an enumeration property called
+ * "pixel_source" and has the following enumeration values:
+ *
+ * "NONE":
+ *  No active pixel source
+ *
+ * "FB":
+ * Framebuffer pixel source
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_plane_create_pixel_source_property(struct drm_plane *pla

Re: [PATCH v1 1/3] drm/i915/guc: Flush context destruction worker at suspend

2023-08-07 Thread Rodrigo Vivi
On Wed, Aug 02, 2023 at 04:34:59PM -0700, Alan Previn wrote:
> Suspend is not like reset, it can unroll, so we have to properly
> flush pending context-guc-id deregistrations to complete before
> we return from suspend calls.

But if it 'unrolls', the execution should just continue, no?!
In other words, why is this flush needed? What happens if we
don't flush but resume doesn't proceed? In which resume case
are you thinking that it returns without having flushed?

> 
> Signed-off-by: Alan Previn 
> ---
>  drivers/gpu/drm/i915/gt/intel_gt_pm.c | 6 +-
>  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 5 +
>  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h | 2 ++
>  drivers/gpu/drm/i915/gt/uc/intel_uc.c | 7 +++
>  drivers/gpu/drm/i915/gt/uc/intel_uc.h | 1 +
>  5 files changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
> b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> index 5a942af0a14e..3162d859ed68 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
> @@ -289,8 +289,10 @@ int intel_gt_resume(struct intel_gt *gt)
>  
>  static void wait_for_suspend(struct intel_gt *gt)
>  {
> - if (!intel_gt_pm_is_awake(gt))
> + if (!intel_gt_pm_is_awake(gt)) {
> + intel_uc_suspend_prepare(>->uc);

why only on idle?

Well, I know, if we are in idle it is because all the requests had
already ended and gt will be wedged, but why do we need to do anything
if we are in idle?

And why here and not some upper layer? like in prepare

>   return;
> + }
>  
>   if (intel_gt_wait_for_idle(gt, I915_GT_SUSPEND_IDLE_TIMEOUT) == -ETIME) 
> {
>   /*
> @@ -299,6 +301,8 @@ static void wait_for_suspend(struct intel_gt *gt)
>*/
>   intel_gt_set_wedged(gt);
>   intel_gt_retire_requests(gt);
> + } else {
> + intel_uc_suspend_prepare(>->uc);
>   }
>  
>   intel_gt_pm_wait_for_idle(gt);
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index a0e3ef1c65d2..dc7735a19a5a 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -1578,6 +1578,11 @@ static void guc_flush_submissions(struct intel_guc 
> *guc)
>   spin_unlock_irqrestore(&sched_engine->lock, flags);
>  }
>  
> +void intel_guc_submission_suspend_prepare(struct intel_guc *guc)
> +{
> + flush_work(&guc->submission_state.destroyed_worker);
> +}
> +
>  static void guc_flush_destroyed_contexts(struct intel_guc *guc);
>  
>  void intel_guc_submission_reset_prepare(struct intel_guc *guc)
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> index c57b29cdb1a6..7f0705ece74b 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> @@ -38,6 +38,8 @@ int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>  bool interruptible,
>  long timeout);
>  
> +void intel_guc_submission_suspend_prepare(struct intel_guc *guc);
> +
>  static inline bool intel_guc_submission_is_supported(struct intel_guc *guc)
>  {
>   return guc->submission_supported;
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
> b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> index 18250fb64bd8..468d7b397927 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
> @@ -679,6 +679,13 @@ void intel_uc_runtime_suspend(struct intel_uc *uc)
>   guc_disable_communication(guc);
>  }
>  
> +void intel_uc_suspend_prepare(struct intel_uc *uc)
> +{
> + struct intel_guc *guc = &uc->guc;
> +
> + intel_guc_submission_suspend_prepare(guc);
> +}
> +
>  void intel_uc_suspend(struct intel_uc *uc)
>  {
>   struct intel_guc *guc = &uc->guc;
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h 
> b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> index 014bb7d83689..036877a07261 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
> @@ -49,6 +49,7 @@ void intel_uc_reset_prepare(struct intel_uc *uc);
>  void intel_uc_reset(struct intel_uc *uc, intel_engine_mask_t stalled);
>  void intel_uc_reset_finish(struct intel_uc *uc);
>  void intel_uc_cancel_requests(struct intel_uc *uc);
> +void intel_uc_suspend_prepare(struct intel_uc *uc);
>  void intel_uc_suspend(struct intel_uc *uc);
>  void intel_uc_runtime_suspend(struct intel_uc *uc);
>  int intel_uc_resume(struct intel_uc *uc);
> -- 
> 2.39.0
> 


Re: [PATCH] drm/amdgpu: Clean up errors in vcn_v3_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:53 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required before the open brace '{'
> ERROR: "foo * bar" should be "foo *bar"
> ERROR: space required before the open parenthesis '('
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 11 +--
>  1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c 
> b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> index b76ba21b5a89..1e7613bb80ae 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> @@ -1105,7 +1105,7 @@ static int vcn_v3_0_start(struct amdgpu_device *adev)
> if (adev->vcn.harvest_config & (1 << i))
> continue;
>
> -   if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG){
> +   if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
> r = vcn_v3_0_start_dpg_mode(adev, i, 
> adev->vcn.indirect_sram);
> continue;
> }
> @@ -1789,7 +1789,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, 
> struct amdgpu_job *job,
> struct amdgpu_bo *bo;
> uint64_t start, end;
> unsigned int i;
> -   void * ptr;
> +   void *ptr;
> int r;
>
> addr &= AMDGPU_GMC_HOLE_MASK;
> @@ -2129,7 +2129,7 @@ static int vcn_v3_0_set_powergating_state(void *handle,
> return 0;
> }
>
> -   if(state == adev->vcn.cur_state)
> +   if (state == adev->vcn.cur_state)
> return 0;
>
> if (state == AMD_PG_STATE_GATE)
> @@ -2137,7 +2137,7 @@ static int vcn_v3_0_set_powergating_state(void *handle,
> else
> ret = vcn_v3_0_start(adev);
>
> -   if(!ret)
> +   if (!ret)
> adev->vcn.cur_state = state;
>
> return ret;
> @@ -2228,8 +2228,7 @@ static const struct amd_ip_funcs vcn_v3_0_ip_funcs = {
> .set_powergating_state = vcn_v3_0_set_powergating_state,
>  };
>
> -const struct amdgpu_ip_block_version vcn_v3_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version vcn_v3_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCN,
> .major = 3,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in tonga_ih.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:50 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/tonga_ih.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c 
> b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
> index b08905d1c00f..917707bba7f3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
> +++ b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
> @@ -493,8 +493,7 @@ static void tonga_ih_set_interrupt_funcs(struct 
> amdgpu_device *adev)
> adev->irq.ih_funcs = &tonga_ih_funcs;
>  }
>
> -const struct amdgpu_ip_block_version tonga_ih_ip_block =
> -{
> +const struct amdgpu_ip_block_version tonga_ih_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_IH,
> .major = 3,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in gfx_v7_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:49 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
> ERROR: trailing statements should be on next line
> ERROR: open brace '{' following struct go on the same line
> ERROR: space prohibited before that '++' (ctx:WxB)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 28 +++
>  1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> index 8c174c11eaee..90b034b173c1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> @@ -90,8 +90,7 @@ MODULE_FIRMWARE("amdgpu/mullins_ce.bin");
>  MODULE_FIRMWARE("amdgpu/mullins_rlc.bin");
>  MODULE_FIRMWARE("amdgpu/mullins_mec.bin");
>
> -static const struct amdgpu_gds_reg_offset amdgpu_gds_reg_offset[] =
> -{
> +static const struct amdgpu_gds_reg_offset amdgpu_gds_reg_offset[] = {
> {mmGDS_VMID0_BASE, mmGDS_VMID0_SIZE, mmGDS_GWS_VMID0, mmGDS_OA_VMID0},
> {mmGDS_VMID1_BASE, mmGDS_VMID1_SIZE, mmGDS_GWS_VMID1, mmGDS_OA_VMID1},
> {mmGDS_VMID2_BASE, mmGDS_VMID2_SIZE, mmGDS_GWS_VMID2, mmGDS_OA_VMID2},
> @@ -110,8 +109,7 @@ static const struct amdgpu_gds_reg_offset 
> amdgpu_gds_reg_offset[] =
> {mmGDS_VMID15_BASE, mmGDS_VMID15_SIZE, mmGDS_GWS_VMID15, 
> mmGDS_OA_VMID15}
>  };
>
> -static const u32 spectre_rlc_save_restore_register_list[] =
> -{
> +static const u32 spectre_rlc_save_restore_register_list[] = {
> (0x0e00 << 16) | (0xc12c >> 2),
> 0x,
> (0x0e00 << 16) | (0xc140 >> 2),
> @@ -557,8 +555,7 @@ static const u32 spectre_rlc_save_restore_register_list[] 
> =
> (0x0e00 << 16) | (0x9600 >> 2),
>  };
>
> -static const u32 kalindi_rlc_save_restore_register_list[] =
> -{
> +static const u32 kalindi_rlc_save_restore_register_list[] = {
> (0x0e00 << 16) | (0xc12c >> 2),
> 0x,
> (0x0e00 << 16) | (0xc140 >> 2),
> @@ -933,7 +930,8 @@ static int gfx_v7_0_init_microcode(struct amdgpu_device 
> *adev)
> case CHIP_MULLINS:
> chip_name = "mullins";
> break;
> -   default: BUG();
> +   default:
> +   BUG();
> }
>
> snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_pfp.bin", chip_name);
> @@ -2759,8 +2757,7 @@ static int gfx_v7_0_mec_init(struct amdgpu_device *adev)
> return 0;
>  }
>
> -struct hqd_registers
> -{
> +struct hqd_registers {
> u32 cp_mqd_base_addr;
> u32 cp_mqd_base_addr_hi;
> u32 cp_hqd_active;
> @@ -5124,11 +5121,11 @@ static void gfx_v7_0_get_cu_info(struct amdgpu_device 
> *adev)
> bitmap = gfx_v7_0_get_cu_active_bitmap(adev);
> cu_info->bitmap[i][j] = bitmap;
>
> -   for (k = 0; k < adev->gfx.config.max_cu_per_sh; k ++) 
> {
> +   for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
> if (bitmap & mask) {
> if (counter < ao_cu_num)
> ao_bitmap |= mask;
> -   counter ++;
> +   counter++;
> }
> mask <<= 1;
> }
> @@ -5150,8 +5147,7 @@ static void gfx_v7_0_get_cu_info(struct amdgpu_device 
> *adev)
> cu_info->lds_size = 64;
>  }
>
> -const struct amdgpu_ip_block_version gfx_v7_1_ip_block =
> -{
> +const struct amdgpu_ip_block_version gfx_v7_1_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_GFX,
> .major = 7,
> .minor = 1,
> @@ -5159,8 +5155,7 @@ const struct amdgpu_ip_block_version gfx_v7_1_ip_block =
> .funcs = &gfx_v7_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version gfx_v7_2_ip_block =
> -{
> +const struct amdgpu_ip_block_version gfx_v7_2_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_GFX,
> .major = 7,
> .minor = 2,
> @@ -5168,8 +5163,7 @@ const struct amdgpu_ip_block_version gfx_v7_2_ip_block =
> .funcs = &gfx_v7_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version gfx_v7_3_ip_block =
> -{
> +const struct amdgpu_ip_block_version gfx_v7_3_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_GFX,
> .major = 7,
> .minor = 3,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in vcn_v4_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:43 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> spaces required around that '==' (ctx:VxV)
> ERROR: space required before the open parenthesis '('
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 11 +--
>  1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c 
> b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> index 6089c7deba8a..ef5b16061e96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> @@ -1139,11 +1139,11 @@ static int vcn_v4_0_start(struct amdgpu_device *adev)
> if (status & 2)
> break;
> mdelay(10);
> -   if (amdgpu_emu_mode==1)
> +   if (amdgpu_emu_mode == 1)
> msleep(1);
> }
>
> -   if (amdgpu_emu_mode==1) {
> +   if (amdgpu_emu_mode == 1) {
> r = -1;
> if (status & 2) {
> r = 0;
> @@ -1959,7 +1959,7 @@ static int vcn_v4_0_set_powergating_state(void *handle, 
> enum amd_powergating_sta
> return 0;
> }
>
> -   if(state == adev->vcn.cur_state)
> +   if (state == adev->vcn.cur_state)
> return 0;
>
> if (state == AMD_PG_STATE_GATE)
> @@ -1967,7 +1967,7 @@ static int vcn_v4_0_set_powergating_state(void *handle, 
> enum amd_powergating_sta
> else
> ret = vcn_v4_0_start(adev);
>
> -   if(!ret)
> +   if (!ret)
> adev->vcn.cur_state = state;
>
> return ret;
> @@ -2101,8 +2101,7 @@ static const struct amd_ip_funcs vcn_v4_0_ip_funcs = {
> .set_powergating_state = vcn_v4_0_set_powergating_state,
>  };
>
> -const struct amdgpu_ip_block_version vcn_v4_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version vcn_v4_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCN,
> .major = 4,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in uvd_v3_1.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:37 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c 
> b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> index 0fef925b6602..5534c769b655 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> @@ -815,8 +815,7 @@ static const struct amd_ip_funcs uvd_v3_1_ip_funcs = {
> .set_powergating_state = uvd_v3_1_set_powergating_state,
>  };
>
> -const struct amdgpu_ip_block_version uvd_v3_1_ip_block =
> -{
> +const struct amdgpu_ip_block_version uvd_v3_1_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_UVD,
> .major = 3,
> .minor = 1,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in mxgpu_vi.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:35 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that '-=' (ctx:WxV)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c 
> b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> index 288c414babdf..59f53c743362 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> @@ -334,7 +334,7 @@ static void xgpu_vi_mailbox_send_ack(struct amdgpu_device 
> *adev)
> break;
> }
> mdelay(1);
> -   timeout -=1;
> +   timeout -= 1;
>
> reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
> }
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in nv.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:34 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/nv.c | 48 +++--
>  1 file changed, 16 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
> index 51523b27a186..414c3c85172d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/nv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/nv.c
> @@ -67,21 +67,18 @@
>  static const struct amd_ip_funcs nv_common_ip_funcs;
>
>  /* Navi */
> -static const struct amdgpu_video_codec_info nv_video_codecs_encode_array[] =
> -{
> +static const struct amdgpu_video_codec_info nv_video_codecs_encode_array[] = 
> {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 2304, 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codecs nv_video_codecs_encode =
> -{
> +static const struct amdgpu_video_codecs nv_video_codecs_encode = {
> .codec_count = ARRAY_SIZE(nv_video_codecs_encode_array),
> .codec_array = nv_video_codecs_encode_array,
>  };
>
>  /* Navi1x */
> -static const struct amdgpu_video_codec_info nv_video_codecs_decode_array[] =
> -{
> +static const struct amdgpu_video_codec_info nv_video_codecs_decode_array[] = 
> {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 
> 3)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 
> 5)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 4096, 52)},
> @@ -91,8 +88,7 @@ static const struct amdgpu_video_codec_info 
> nv_video_codecs_decode_array[] =
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codecs nv_video_codecs_decode =
> -{
> +static const struct amdgpu_video_codecs nv_video_codecs_decode = {
> .codec_count = ARRAY_SIZE(nv_video_codecs_decode_array),
> .codec_array = nv_video_codecs_decode_array,
>  };
> @@ -108,8 +104,7 @@ static const struct amdgpu_video_codecs 
> sc_video_codecs_encode = {
> .codec_array = sc_video_codecs_encode_array,
>  };
>
> -static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn0[] =
> -{
> +static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn0[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 
> 3)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 
> 5)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 4096, 52)},
> @@ -120,8 +115,7 @@ static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn0[]
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn1[] =
> -{
> +static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn1[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 
> 3)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 
> 5)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 4096, 52)},
> @@ -131,27 +125,23 @@ static const struct amdgpu_video_codec_info 
> sc_video_codecs_decode_array_vcn1[]
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn0 =
> -{
> +static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn0 = {
> .codec_count = ARRAY_SIZE(sc_video_codecs_decode_array_vcn0),
> .codec_array = sc_video_codecs_decode_array_vcn0,
>  };
>
> -static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn1 =
> -{
> +static const struct amdgpu_video_codecs sc_video_codecs_decode_vcn1 = {
> .codec_count = ARRAY_SIZE(sc_video_codecs_decode_array_vcn1),
> .codec_array = sc_video_codecs_decode_array_vcn1,
>  };
>
>  /* SRIOV Sienna Cichlid, not const since data is controlled by host */
> -static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] =
> -{
> +static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] = 
> {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 2160, 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 7680, 4352, 
> 0)},
>  };
>
> -static struct amdgpu_video_codec_info 
> sriov_sc_video_codecs_decode_array_vcn0[] =
> -{
> +static struct amdgpu_video_codec_info 
> sriov_sc_video_codecs_decode_array_vcn0[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 
> 3)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MP

Re: [PATCH] drm/amdgpu: Clean up errors in amdgpu_virt.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:31 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required before the open parenthesis '('
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> index ec044f711eb9..96857ae7fb5b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> @@ -520,7 +520,7 @@ static int amdgpu_virt_read_pf2vf_data(struct 
> amdgpu_device *adev)
> tmp = ((struct amd_sriov_msg_pf2vf_info 
> *)pf2vf_info)->mm_bw_management[i].encode_max_frame_pixels;
> adev->virt.encode_max_frame_pixels = max(tmp, 
> adev->virt.encode_max_frame_pixels);
> }
> -   if((adev->virt.decode_max_dimension_pixels > 0) || 
> (adev->virt.encode_max_dimension_pixels > 0))
> +   if ((adev->virt.decode_max_dimension_pixels > 0) || 
> (adev->virt.encode_max_dimension_pixels > 0))
> adev->virt.is_mm_bw_enabled = true;
>
> adev->unique_id =
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in amdgpu_ring.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:28 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that ':' (ctx:VxW)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 028ff075db51..e2ab303ad270 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -389,7 +389,7 @@ static inline void amdgpu_ring_write_multiple(struct 
> amdgpu_ring *ring,
> occupied = ring->wptr & ring->buf_mask;
> dst = (void *)&ring->ring[occupied];
> chunk1 = ring->buf_mask + 1 - occupied;
> -   chunk1 = (chunk1 >= count_dw) ? count_dw: chunk1;
> +   chunk1 = (chunk1 >= count_dw) ? count_dw : chunk1;
> chunk2 = count_dw - chunk1;
> chunk1 <<= 2;
> chunk2 <<= 2;
> --
> 2.17.1
>


Re: [PATCH v5 1/8] drm/msm/dpu: fix the irq index in dpu_encoder_phys_wb_wait_for_commit_done

2023-08-07 Thread Abhinav Kumar




On 8/2/2023 3:04 AM, Dmitry Baryshkov wrote:

Since commit 1e7ac595fa46 ("drm/msm/dpu: pass irq to
dpu_encoder_helper_wait_for_irq()") the
dpu_encoder_phys_wb_wait_for_commit_done expects the IRQ index rather
than the index in the phys_enc->intr table; however, writeback got the
older invocation in place. This was unnoticed for several releases, but
now it's time to fix it.



The reason it went unnoticed is because the IRQ index is used within 
dpu_encoder_helper_wait_for_irq() only for cases when the interrupt did 
not fire (in other words not the *working* or common cases). Its used 
only for the trace in dpu_encoder_helper_wait_event_timeout(). So this 
was not really breaking writeback as such because the encoder kickoff / 
wait mechanism largely relies on the kickoff_cnt increment/decrement.


Nonetheless, the patch LGTM and works fine, hence

Reviewed-by: Abhinav Kumar 


Fixes: d7d0e73f7de3 ("drm/msm/dpu: introduce the dpu_encoder_phys_* for 
writeback")
Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index a466ff70a4d6..78037a697633 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -446,7 +446,8 @@ static int dpu_encoder_phys_wb_wait_for_commit_done(
wait_info.atomic_cnt = &phys_enc->pending_kickoff_cnt;
wait_info.timeout_ms = KICKOFF_TIMEOUT_MS;
  
-	ret = dpu_encoder_helper_wait_for_irq(phys_enc, INTR_IDX_WB_DONE,

+   ret = dpu_encoder_helper_wait_for_irq(phys_enc,
+   phys_enc->irq[INTR_IDX_WB_DONE],
dpu_encoder_phys_wb_done_irq, &wait_info);
if (ret == -ETIMEDOUT)
_dpu_encoder_phys_wb_handle_wbdone_timeout(phys_enc);


Re: [PATCH] drm/amdgpu: Clean up errors in amdgpu_trace.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:26 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required after that ',' (ctx:VxV)
> ERROR: "foo* bar" should be "foo *bar"
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> index 525dffbe046a..2fd1bfb35916 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
> @@ -432,7 +432,7 @@ TRACE_EVENT(amdgpu_vm_flush,
>),
> TP_printk("ring=%s, id=%u, hub=%u, pd_addr=%010Lx",
>   __get_str(ring), __entry->vmid,
> - __entry->vm_hub,__entry->pd_addr)
> + __entry->vm_hub, __entry->pd_addr)
>  );
>
>  DECLARE_EVENT_CLASS(amdgpu_pasid,
> @@ -494,7 +494,7 @@ TRACE_EVENT(amdgpu_cs_bo_status,
>  );
>
>  TRACE_EVENT(amdgpu_bo_move,
> -   TP_PROTO(struct amdgpu_bo* bo, uint32_t new_placement, uint32_t 
> old_placement),
> +   TP_PROTO(struct amdgpu_bo *bo, uint32_t new_placement, uint32_t 
> old_placement),
> TP_ARGS(bo, new_placement, old_placement),
> TP_STRUCT__entry(
> __field(struct amdgpu_bo *, bo)
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in mes_v11_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:24 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: else should follow close brace '}'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c 
> b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> index 11fda318064f..6827d547042e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> @@ -788,8 +788,7 @@ static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
> DOORBELL_SOURCE, 0);
> tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
> DOORBELL_HIT, 0);
> -   }
> -   else
> +   } else
> tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
> DOORBELL_EN, 0);
> mqd->cp_hqd_pq_doorbell_control = tmp;
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in amdgpu_atombios.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:23 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h | 9 +++--
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> index b639a80ee3fc..0811474e8fd3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> @@ -89,8 +89,7 @@ struct atom_memory_info {
>
>  #define MAX_AC_TIMING_ENTRIES 16
>
> -struct atom_memory_clock_range_table
> -{
> +struct atom_memory_clock_range_table {
> u8 num_entries;
> u8 rsv[3];
> u32 mclk[MAX_AC_TIMING_ENTRIES];
> @@ -118,14 +117,12 @@ struct atom_mc_reg_table {
>
>  #define MAX_VOLTAGE_ENTRIES 32
>
> -struct atom_voltage_table_entry
> -{
> +struct atom_voltage_table_entry {
> u16 value;
> u32 smio_low;
>  };
>
> -struct atom_voltage_table
> -{
> +struct atom_voltage_table {
> u32 count;
> u32 mask_low;
> u32 phase_delay;
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in soc21.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:21 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/soc21.c | 30 ++
>  1 file changed, 10 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c 
> b/drivers/gpu/drm/amd/amdgpu/soc21.c
> index e5e5d68a4d70..4f3ecd66eb6b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/soc21.c
> +++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
> @@ -48,33 +48,28 @@
>  static const struct amd_ip_funcs soc21_common_ip_funcs;
>
>  /* SOC21 */
> -static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_encode_array_vcn0[] =
> -{
> +static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_encode_array_vcn0[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 2304, 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 
> 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_encode_array_vcn1[] =
> -{
> +static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_encode_array_vcn1[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 2304, 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn0 =
> -{
> +static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn0 = 
> {
> .codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_encode_array_vcn0),
> .codec_array = vcn_4_0_0_video_codecs_encode_array_vcn0,
>  };
>
> -static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn1 =
> -{
> +static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn1 = 
> {
> .codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_encode_array_vcn1),
> .codec_array = vcn_4_0_0_video_codecs_encode_array_vcn1,
>  };
>
> -static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_decode_array_vcn0[] =
> -{
> +static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_decode_array_vcn0[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 4096, 52)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 
> 186)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 
> 0)},
> @@ -82,22 +77,19 @@ static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_decode_array_
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_decode_array_vcn1[] =
> -{
> +static const struct amdgpu_video_codec_info 
> vcn_4_0_0_video_codecs_decode_array_vcn1[] = {
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 
> 4096, 52)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 
> 186)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 
> 0)},
> {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 
> 0)},
>  };
>
> -static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn0 =
> -{
> +static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn0 = 
> {
> .codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_decode_array_vcn0),
> .codec_array = vcn_4_0_0_video_codecs_decode_array_vcn0,
>  };
>
> -static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn1 =
> -{
> +static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn1 = 
> {
> .codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_decode_array_vcn1),
> .codec_array = vcn_4_0_0_video_codecs_decode_array_vcn1,
>  };
> @@ -445,8 +437,7 @@ static void soc21_program_aspm(struct amdgpu_device *adev)
> adev->nbio.funcs->program_aspm(adev);
>  }
>
> -const struct amdgpu_ip_block_version soc21_common_ip_block =
> -{
> +const struct amdgpu_ip_block_version soc21_common_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_COMMON,
> .major = 1,
> .minor = 0,
> @@ -547,8 +538,7 @@ static int soc21_update_umd_stable_pstate(struct 
> amdgpu_device *adev,
> return 0;
>  }
>
> -static const struct amdgpu_asic_funcs soc21_asic_funcs =
> -{
> +static const struct amdgpu_asic_funcs soc21_asic_funcs = {
> .read_disabled_bios = &soc21_read_disabled_bios,
> .read_bios_from_rom = &amdgpu_soc15_read_bios_from_rom,
> .read_register = &soc21_read_register,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in dce_v8_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:18 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
> ERROR: code indent should use tabs where possible
> ERROR: space required before the open brace '{'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c | 37 ++-
>  1 file changed, 14 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> index d421a268c9ff..f2b3cb5ed6be 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> @@ -53,8 +53,7 @@
>  static void dce_v8_0_set_display_funcs(struct amdgpu_device *adev);
>  static void dce_v8_0_set_irq_funcs(struct amdgpu_device *adev);
>
> -static const u32 crtc_offsets[6] =
> -{
> +static const u32 crtc_offsets[6] = {
> CRTC0_REGISTER_OFFSET,
> CRTC1_REGISTER_OFFSET,
> CRTC2_REGISTER_OFFSET,
> @@ -63,8 +62,7 @@ static const u32 crtc_offsets[6] =
> CRTC5_REGISTER_OFFSET
>  };
>
> -static const u32 hpd_offsets[] =
> -{
> +static const u32 hpd_offsets[] = {
> HPD0_REGISTER_OFFSET,
> HPD1_REGISTER_OFFSET,
> HPD2_REGISTER_OFFSET,
> @@ -1345,9 +1343,9 @@ static void dce_v8_0_audio_write_sad_regs(struct 
> drm_encoder *encoder)
> if (sad->channels > max_channels) {
> value = (sad->channels <<
>  
> AZALIA_F0_CODEC_PIN_CONTROL_AUDIO_DESCRIPTOR0__MAX_CHANNELS__SHIFT) |
> -   (sad->byte2 <<
> +   (sad->byte2 <<
>  
> AZALIA_F0_CODEC_PIN_CONTROL_AUDIO_DESCRIPTOR0__DESCRIPTOR_BYTE_2__SHIFT) |
> -   (sad->freq <<
> +   (sad->freq <<
>  
> AZALIA_F0_CODEC_PIN_CONTROL_AUDIO_DESCRIPTOR0__SUPPORTED_FREQUENCIES__SHIFT);
> max_channels = sad->channels;
> }
> @@ -1379,8 +1377,7 @@ static void dce_v8_0_audio_enable(struct amdgpu_device 
> *adev,
> enable ? 
> AZALIA_F0_CODEC_PIN_CONTROL_HOT_PLUG_CONTROL__AUDIO_ENABLED_MASK : 0);
>  }
>
> -static const u32 pin_offsets[7] =
> -{
> +static const u32 pin_offsets[7] = {
> (0x1780 - 0x1780),
> (0x1786 - 0x1780),
> (0x178c - 0x1780),
> @@ -1740,8 +1737,7 @@ static void dce_v8_0_afmt_fini(struct amdgpu_device 
> *adev)
> }
>  }
>
> -static const u32 vga_control_regs[6] =
> -{
> +static const u32 vga_control_regs[6] = {
> mmD1VGA_CONTROL,
> mmD2VGA_CONTROL,
> mmD3VGA_CONTROL,
> @@ -1895,9 +1891,9 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc 
> *crtc,
> case DRM_FORMAT_XBGR:
> case DRM_FORMAT_ABGR:
> fb_format = ((GRPH_DEPTH_32BPP << 
> GRPH_CONTROL__GRPH_DEPTH__SHIFT) |
> -(GRPH_FORMAT_ARGB << 
> GRPH_CONTROL__GRPH_FORMAT__SHIFT));
> +   (GRPH_FORMAT_ARGB << 
> GRPH_CONTROL__GRPH_FORMAT__SHIFT));
> fb_swap = ((GRPH_RED_SEL_B << 
> GRPH_SWAP_CNTL__GRPH_RED_CROSSBAR__SHIFT) |
> -  (GRPH_BLUE_SEL_R << 
> GRPH_SWAP_CNTL__GRPH_BLUE_CROSSBAR__SHIFT));
> +   (GRPH_BLUE_SEL_R << 
> GRPH_SWAP_CNTL__GRPH_BLUE_CROSSBAR__SHIFT));
>  #ifdef __BIG_ENDIAN
> fb_swap |= (GRPH_ENDIAN_8IN32 << 
> GRPH_SWAP_CNTL__GRPH_ENDIAN_SWAP__SHIFT);
>  #endif
> @@ -3151,7 +3147,7 @@ static int dce_v8_0_pageflip_irq(struct amdgpu_device 
> *adev,
>
> spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
> works = amdgpu_crtc->pflip_works;
> -   if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED){
> +   if (amdgpu_crtc->pflip_status != AMDGPU_FLIP_SUBMITTED) {
> DRM_DEBUG_DRIVER("amdgpu_crtc->pflip_status = %d != "
> "AMDGPU_FLIP_SUBMITTED(%d)\n",
> amdgpu_crtc->pflip_status,
> @@ -3544,8 +3540,7 @@ static void dce_v8_0_set_irq_funcs(struct amdgpu_device 
> *adev)
> adev->hpd_irq.funcs = &dce_v8_0_hpd_irq_funcs;
>  }
>
> -const struct amdgpu_ip_block_version dce_v8_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version dce_v8_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_DCE,
> .major = 8,
> .minor = 0,
> @@ -3553,8 +3548,7 @@ const struct amdgpu_ip_block_version dce_v8_0_ip_block =
> .funcs = &dce_v8_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version dce_v8_1_ip_block =
> -{
> +const struct amdgpu_ip_block_version dce_v8_1_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_DCE

Re: [PATCH] drm/amdgpu/jpeg: Clean up errors in vcn_v1_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:12 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required before the open parenthesis '('
> ERROR: space prohibited after that '~' (ctx:WxW)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c 
> b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> index 16feb491adf5..25ba27151ac0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> @@ -473,7 +473,7 @@ static void vcn_v1_0_disable_clock_gating(struct 
> amdgpu_device *adev)
> if (adev->cg_flags & AMD_CG_SUPPORT_VCN_MGCG)
> data |= 1 << UVD_CGC_CTRL__DYN_CLOCK_MODE__SHIFT;
> else
> -   data &= ~ UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
> +   data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
>
> data |= 1 << UVD_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT;
> data |= 4 << UVD_CGC_CTRL__CLK_OFF_DELAY__SHIFT;
> @@ -1772,7 +1772,7 @@ static int vcn_v1_0_set_powergating_state(void *handle,
> int ret;
> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>
> -   if(state == adev->vcn.cur_state)
> +   if (state == adev->vcn.cur_state)
> return 0;
>
> if (state == AMD_PG_STATE_GATE)
> @@ -1780,7 +1780,7 @@ static int vcn_v1_0_set_powergating_state(void *handle,
> else
> ret = vcn_v1_0_start(adev);
>
> -   if(!ret)
> +   if (!ret)
> adev->vcn.cur_state = state;
> return ret;
>  }
> @@ -2065,8 +2065,7 @@ static void vcn_v1_0_set_irq_funcs(struct amdgpu_device 
> *adev)
> adev->vcn.inst->irq.funcs = &vcn_v1_0_irq_funcs;
>  }
>
> -const struct amdgpu_ip_block_version vcn_v1_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version vcn_v1_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCN,
> .major = 1,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in mxgpu_nv.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:06 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: else should follow close brace '}'
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c 
> b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
> index cae1aaa4ddb6..6a68ee946f1c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
> @@ -183,12 +183,10 @@ static int xgpu_nv_send_access_requests(struct 
> amdgpu_device *adev,
> if (req != IDH_REQ_GPU_INIT_DATA) {
> pr_err("Doesn't get msg:%d from pf, 
> error=%d\n", event, r);
> return r;
> -   }
> -   else /* host doesn't support REQ_GPU_INIT_DATA 
> handshake */
> +   } else /* host doesn't support REQ_GPU_INIT_DATA 
> handshake */
> adev->virt.req_init_data_ver = 0;
> } else {
> -   if (req == IDH_REQ_GPU_INIT_DATA)
> -   {
> +   if (req == IDH_REQ_GPU_INIT_DATA) {
> adev->virt.req_init_data_ver =
> 
> RREG32_NO_KIQ(mmMAILBOX_MSGBUF_RCV_DW1);
>
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in dce_v10_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:04 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c | 30 +-
>  1 file changed, 10 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> index 9a24ed463abd..584cd5277f92 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> @@ -52,8 +52,7 @@
>  static void dce_v10_0_set_display_funcs(struct amdgpu_device *adev);
>  static void dce_v10_0_set_irq_funcs(struct amdgpu_device *adev);
>
> -static const u32 crtc_offsets[] =
> -{
> +static const u32 crtc_offsets[] = {
> CRTC0_REGISTER_OFFSET,
> CRTC1_REGISTER_OFFSET,
> CRTC2_REGISTER_OFFSET,
> @@ -63,8 +62,7 @@ static const u32 crtc_offsets[] =
> CRTC6_REGISTER_OFFSET
>  };
>
> -static const u32 hpd_offsets[] =
> -{
> +static const u32 hpd_offsets[] = {
> HPD0_REGISTER_OFFSET,
> HPD1_REGISTER_OFFSET,
> HPD2_REGISTER_OFFSET,
> @@ -121,30 +119,26 @@ static const struct {
> .hpd = DISP_INTERRUPT_STATUS_CONTINUE5__DC_HPD6_INTERRUPT_MASK
>  } };
>
> -static const u32 golden_settings_tonga_a11[] =
> -{
> +static const u32 golden_settings_tonga_a11[] = {
> mmDCI_CLK_CNTL, 0x0080, 0x,
> mmFBC_DEBUG_COMP, 0x00f0, 0x0070,
> mmFBC_MISC, 0x1f311fff, 0x1230,
> mmHDMI_CONTROL, 0x31000111, 0x0011,
>  };
>
> -static const u32 tonga_mgcg_cgcg_init[] =
> -{
> +static const u32 tonga_mgcg_cgcg_init[] = {
> mmXDMA_CLOCK_GATING_CNTL, 0x, 0x0100,
> mmXDMA_MEM_POWER_CNTL, 0x0101, 0x,
>  };
>
> -static const u32 golden_settings_fiji_a10[] =
> -{
> +static const u32 golden_settings_fiji_a10[] = {
> mmDCI_CLK_CNTL, 0x0080, 0x,
> mmFBC_DEBUG_COMP, 0x00f0, 0x0070,
> mmFBC_MISC, 0x1f311fff, 0x1230,
> mmHDMI_CONTROL, 0x31000111, 0x0011,
>  };
>
> -static const u32 fiji_mgcg_cgcg_init[] =
> -{
> +static const u32 fiji_mgcg_cgcg_init[] = {
> mmXDMA_CLOCK_GATING_CNTL, 0x, 0x0100,
> mmXDMA_MEM_POWER_CNTL, 0x0101, 0x,
>  };
> @@ -1425,8 +1419,7 @@ static void dce_v10_0_audio_enable(struct amdgpu_device 
> *adev,
>enable ? 
> AZALIA_F0_CODEC_PIN_CONTROL_HOT_PLUG_CONTROL__AUDIO_ENABLED_MASK : 0);
>  }
>
> -static const u32 pin_offsets[] =
> -{
> +static const u32 pin_offsets[] = {
> AUD0_REGISTER_OFFSET,
> AUD1_REGISTER_OFFSET,
> AUD2_REGISTER_OFFSET,
> @@ -1811,8 +1804,7 @@ static void dce_v10_0_afmt_fini(struct amdgpu_device 
> *adev)
> }
>  }
>
> -static const u32 vga_control_regs[6] =
> -{
> +static const u32 vga_control_regs[6] = {
> mmD1VGA_CONTROL,
> mmD2VGA_CONTROL,
> mmD3VGA_CONTROL,
> @@ -3651,8 +3643,7 @@ static void dce_v10_0_set_irq_funcs(struct 
> amdgpu_device *adev)
> adev->hpd_irq.funcs = &dce_v10_0_hpd_irq_funcs;
>  }
>
> -const struct amdgpu_ip_block_version dce_v10_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version dce_v10_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_DCE,
> .major = 10,
> .minor = 0,
> @@ -3660,8 +3651,7 @@ const struct amdgpu_ip_block_version dce_v10_0_ip_block 
> =
> .funcs = &dce_v10_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version dce_v10_1_ip_block =
> -{
> +const struct amdgpu_ip_block_version dce_v10_1_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_DCE,
> .major = 10,
> .minor = 1,
> --
> 2.17.1
>


Re: [PATCH] drm/jpeg: Clean up errors in jpeg_v2_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 3:00 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c 
> b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> index c25d4a07350b..1c8116d75f63 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> @@ -807,8 +807,7 @@ static void jpeg_v2_0_set_irq_funcs(struct amdgpu_device 
> *adev)
> adev->jpeg.inst->irq.funcs = &jpeg_v2_0_irq_funcs;
>  }
>
> -const struct amdgpu_ip_block_version jpeg_v2_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version jpeg_v2_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_JPEG,
> .major = 2,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in uvd_v7_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:58 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that ':' (ctx:VxE)
> that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c 
> b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index abaa4463e906..86d1d46e1e5e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -679,11 +679,11 @@ static void uvd_v7_0_mc_resume(struct amdgpu_device 
> *adev)
> if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
> WREG32_SOC15(UVD, i, 
> mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
> i == 0 ?
> -   
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].tmr_mc_addr_lo:
> +   
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].tmr_mc_addr_lo :
> 
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD1].tmr_mc_addr_lo);
> WREG32_SOC15(UVD, i, 
> mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
> i == 0 ?
> -   
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].tmr_mc_addr_hi:
> +   
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].tmr_mc_addr_hi :
> 
> adev->firmware.ucode[AMDGPU_UCODE_ID_UVD1].tmr_mc_addr_hi);
> WREG32_SOC15(UVD, i, mmUVD_VCPU_CACHE_OFFSET0, 0);
> offset = 0;
> @@ -1908,8 +1908,7 @@ static void uvd_v7_0_set_irq_funcs(struct amdgpu_device 
> *adev)
> }
>  }
>
> -const struct amdgpu_ip_block_version uvd_v7_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version uvd_v7_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_UVD,
> .major = 7,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amd: Clean up errors in amdgpu_cgs.c

2023-08-07 Thread Alex Deucher
Already fixed.

On Wed, Aug 2, 2023 at 2:55 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: switch and case should be at the same indent
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 64 -
>  1 file changed, 32 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
> index 456e385333b6..fafe7057a8c9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
> @@ -163,38 +163,38 @@ static uint16_t amdgpu_get_firmware_version(struct 
> cgs_device *cgs_device,
> uint16_t fw_version = 0;
>
> switch (type) {
> -   case CGS_UCODE_ID_SDMA0:
> -   fw_version = adev->sdma.instance[0].fw_version;
> -   break;
> -   case CGS_UCODE_ID_SDMA1:
> -   fw_version = adev->sdma.instance[1].fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_CE:
> -   fw_version = adev->gfx.ce_fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_PFP:
> -   fw_version = adev->gfx.pfp_fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_ME:
> -   fw_version = adev->gfx.me_fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_MEC:
> -   fw_version = adev->gfx.mec_fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_MEC_JT1:
> -   fw_version = adev->gfx.mec_fw_version;
> -   break;
> -   case CGS_UCODE_ID_CP_MEC_JT2:
> -   fw_version = adev->gfx.mec_fw_version;
> -   break;
> -   case CGS_UCODE_ID_RLC_G:
> -   fw_version = adev->gfx.rlc_fw_version;
> -   break;
> -   case CGS_UCODE_ID_STORAGE:
> -   break;
> -   default:
> -   DRM_ERROR("firmware type %d do not have version\n", 
> type);
> -   break;
> +   case CGS_UCODE_ID_SDMA0:
> +   fw_version = adev->sdma.instance[0].fw_version;
> +   break;
> +   case CGS_UCODE_ID_SDMA1:
> +   fw_version = adev->sdma.instance[1].fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_CE:
> +   fw_version = adev->gfx.ce_fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_PFP:
> +   fw_version = adev->gfx.pfp_fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_ME:
> +   fw_version = adev->gfx.me_fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_MEC:
> +   fw_version = adev->gfx.mec_fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_MEC_JT1:
> +   fw_version = adev->gfx.mec_fw_version;
> +   break;
> +   case CGS_UCODE_ID_CP_MEC_JT2:
> +   fw_version = adev->gfx.mec_fw_version;
> +   break;
> +   case CGS_UCODE_ID_RLC_G:
> +   fw_version = adev->gfx.rlc_fw_version;
> +   break;
> +   case CGS_UCODE_ID_STORAGE:
> +   break;
> +   default:
> +   DRM_ERROR("firmware type %d do not have version\n", type);
> +   break;
> }
> return fw_version;
>  }
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu/atomfirmware: Clean up errors in amdgpu_atomfirmware.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:51 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that '>=' (ctx:WxV)
> ERROR: spaces required around that '!=' (ctx:WxV)
> ERROR: code indent should use tabs where possible
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> index 0b7f4c4d58e5..835980e94b9e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> @@ -58,7 +58,7 @@ uint32_t 
> amdgpu_atomfirmware_query_firmware_capability(struct amdgpu_device *ade
> if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context,
> index, &size, &frev, &crev, &data_offset)) {
> /* support firmware_info 3.1 + */
> -   if ((frev == 3 && crev >=1) || (frev > 3)) {
> +   if ((frev == 3 && crev >= 1) || (frev > 3)) {
> firmware_info = (union firmware_info *)
> (mode_info->atom_context->bios + data_offset);
> fw_cap = 
> le32_to_cpu(firmware_info->v31.firmware_capability);
> @@ -597,7 +597,7 @@ bool amdgpu_atomfirmware_ras_rom_addr(struct 
> amdgpu_device *adev,
>   index, &size, &frev, &crev,
>   &data_offset)) {
> /* support firmware_info 3.4 + */
> -   if ((frev == 3 && crev >=4) || (frev > 3)) {
> +   if ((frev == 3 && crev >= 4) || (frev > 3)) {
> firmware_info = (union firmware_info *)
> (mode_info->atom_context->bios + data_offset);
> /* The ras_rom_i2c_slave_addr should ideally
> @@ -850,7 +850,7 @@ int amdgpu_atomfirmware_get_fw_reserved_fb_size(struct 
> amdgpu_device *adev)
>
> firmware_info = (union firmware_info *)(ctx->bios + data_offset);
>
> -   if (frev !=3)
> +   if (frev != 3)
> return -EINVAL;
>
> switch (crev) {
> @@ -909,7 +909,7 @@ int amdgpu_atomfirmware_asic_init(struct amdgpu_device 
> *adev, bool fb_reset)
> }
>
> index = 
> get_index_into_master_table(atom_master_list_of_command_functions_v2_1,
> -asic_init);
> +   asic_init);
> if (amdgpu_atom_parse_cmd_header(mode_info->atom_context, index, 
> &frev, &crev)) {
> if (frev == 2 && crev >= 1) {
> memset(&asic_init_ps_v2_1, 0, 
> sizeof(asic_init_ps_v2_1));
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in mmhub_v9_4.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:48 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: code indent should use tabs where possible
> ERROR: space required before the open parenthesis '('
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c 
> b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
> index e790f890aec6..5718e4d40e66 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
> @@ -108,7 +108,7 @@ static void mmhub_v9_4_setup_vm_pt_regs(struct 
> amdgpu_device *adev, uint32_t vmi
>  }
>
>  static void mmhub_v9_4_init_system_aperture_regs(struct amdgpu_device *adev,
> -int hubid)
> +   int hubid)
>  {
> uint64_t value;
> uint32_t tmp;
> @@ -1568,7 +1568,7 @@ static int mmhub_v9_4_get_ras_error_count(struct 
> amdgpu_device *adev,
> uint32_t sec_cnt, ded_cnt;
>
> for (i = 0; i < ARRAY_SIZE(mmhub_v9_4_ras_fields); i++) {
> -   if(mmhub_v9_4_ras_fields[i].reg_offset != reg->reg_offset)
> +   if (mmhub_v9_4_ras_fields[i].reg_offset != reg->reg_offset)
> continue;
>
> sec_cnt = (value &
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in vega20_ih.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:46 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: trailing statements should be on next line
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/vega20_ih.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c 
> b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
> index 544ee55a22da..dbc99536440f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
> @@ -500,7 +500,8 @@ static int vega20_ih_self_irq(struct amdgpu_device *adev,
> case 2:
> schedule_work(&adev->irq.ih2_work);
> break;
> -   default: break;
> +   default:
> +   break;
> }
> return 0;
>  }
> @@ -710,8 +711,7 @@ static void vega20_ih_set_interrupt_funcs(struct 
> amdgpu_device *adev)
> adev->irq.ih_funcs = &vega20_ih_funcs;
>  }
>
> -const struct amdgpu_ip_block_version vega20_ih_ip_block =
> -{
> +const struct amdgpu_ip_block_version vega20_ih_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_IH,
> .major = 4,
> .minor = 2,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in ih_v6_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:43 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: trailing statements should be on next line
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/ih_v6_0.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c 
> b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
> index 980b24120080..ec0c8f8b465a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
> @@ -494,7 +494,8 @@ static int ih_v6_0_self_irq(struct amdgpu_device *adev,
> *adev->irq.ih1.wptr_cpu = wptr;
> schedule_work(&adev->irq.ih1_work);
> break;
> -   default: break;
> +   default:
> +   break;
> }
> return 0;
>  }
> @@ -759,8 +760,7 @@ static void ih_v6_0_set_interrupt_funcs(struct 
> amdgpu_device *adev)
> adev->irq.ih_funcs = &ih_v6_0_funcs;
>  }
>
> -const struct amdgpu_ip_block_version ih_v6_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version ih_v6_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_IH,
> .major = 6,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in amdgpu_psp.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:40 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
> ERROR: open brace '{' following enum go on the same line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h | 12 
>  1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> index c3203de4a007..feef988bf0c1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> @@ -78,8 +78,7 @@ enum psp_bootloader_cmd {
> PSP_BL__LOAD_TOS_SPL_TABLE  = 0x1000,
>  };
>
> -enum psp_ring_type
> -{
> +enum psp_ring_type {
> PSP_RING_TYPE__INVALID = 0,
> /*
>  * These values map to the way the PSP kernel identifies the
> @@ -89,8 +88,7 @@ enum psp_ring_type
> PSP_RING_TYPE__KM = 2  /* Kernel mode ring (formerly called GPCOM) */
>  };
>
> -struct psp_ring
> -{
> +struct psp_ring {
> enum psp_ring_type  ring_type;
> struct psp_gfx_rb_frame *ring_mem;
> uint64_tring_mem_mc_addr;
> @@ -107,8 +105,7 @@ enum psp_reg_prog_id {
> PSP_REG_LAST
>  };
>
> -struct psp_funcs
> -{
> +struct psp_funcs {
> int (*init_microcode)(struct psp_context *psp);
> int (*bootloader_load_kdb)(struct psp_context *psp);
> int (*bootloader_load_spl)(struct psp_context *psp);
> @@ -307,8 +304,7 @@ struct psp_runtime_scpm_entry {
> enum psp_runtime_scpm_authentication scpm_status;
>  };
>
> -struct psp_context
> -{
> +struct psp_context {
> struct amdgpu_device*adev;
> struct psp_ring km_ring;
> struct psp_gfx_cmd_resp *cmd;
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in vce_v3_0.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:38 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 9 +++--
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c 
> b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> index 8def62c83ffd..18f6e62af339 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> @@ -998,8 +998,7 @@ static void vce_v3_0_set_irq_funcs(struct amdgpu_device 
> *adev)
> adev->vce.irq.funcs = &vce_v3_0_irq_funcs;
>  };
>
> -const struct amdgpu_ip_block_version vce_v3_0_ip_block =
> -{
> +const struct amdgpu_ip_block_version vce_v3_0_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCE,
> .major = 3,
> .minor = 0,
> @@ -1007,8 +1006,7 @@ const struct amdgpu_ip_block_version vce_v3_0_ip_block =
> .funcs = &vce_v3_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version vce_v3_1_ip_block =
> -{
> +const struct amdgpu_ip_block_version vce_v3_1_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCE,
> .major = 3,
> .minor = 1,
> @@ -1016,8 +1014,7 @@ const struct amdgpu_ip_block_version vce_v3_1_ip_block =
> .funcs = &vce_v3_0_ip_funcs,
>  };
>
> -const struct amdgpu_ip_block_version vce_v3_4_ip_block =
> -{
> +const struct amdgpu_ip_block_version vce_v3_4_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_VCE,
> .major = 3,
> .minor = 4,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu: Clean up errors in cik_ih.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:35 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/amdgpu/cik_ih.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_ih.c 
> b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> index df385ffc9768..6f7c031dd197 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> @@ -442,8 +442,7 @@ static void cik_ih_set_interrupt_funcs(struct 
> amdgpu_device *adev)
> adev->irq.ih_funcs = &cik_ih_funcs;
>  }
>
> -const struct amdgpu_ip_block_version cik_ih_ip_block =
> -{
> +const struct amdgpu_ip_block_version cik_ih_ip_block = {
> .type = AMD_IP_BLOCK_TYPE_IH,
> .major = 2,
> .minor = 0,
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dce_clk_mgr.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:26 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that '?' (ctx:VxE)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c 
> b/drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c
> index 07359eb89efc..e7acd6eec1fd 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c
> @@ -640,7 +640,7 @@ static void dce11_pplib_apply_display_requirements(
>  * on power saving.
>  *
>  */
> -   pp_display_cfg->min_dcfclock_khz = (context->stream_count > 4)?
> +   pp_display_cfg->min_dcfclock_khz = (context->stream_count > 4) ?
> pp_display_cfg->min_engine_clock_khz : 0;
>
> pp_display_cfg->min_engine_clock_deep_sleep_khz
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in display_mode_vba_30.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:20 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: else should follow close brace '}'
>
> Signed-off-by: Ran Sun 
> ---
>  .../gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c  | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c 
> b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
> index 9af1a43c042b..ad741a723c0e 100644
> --- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
> +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
> @@ -784,8 +784,7 @@ static unsigned int dscComputeDelay(enum 
> output_format_class pixelFormat, enum o
> Delay = Delay + 1;
> //   sft
> Delay = Delay + 1;
> -   }
> -   else {
> +   } else {
> //   sfr
> Delay = Delay + 2;
> //   dsccif
> @@ -3489,8 +3488,7 @@ static double TruncToValidBPP(
> if (Format == dm_n422) {
> MinDSCBPP = 7;
> MaxDSCBPP = 2 * DSCInputBitPerComponent - 1.0 / 16.0;
> -   }
> -   else {
> +   } else {
> MinDSCBPP = 8;
> MaxDSCBPP = 3 * DSCInputBitPerComponent - 1.0 / 16.0;
> }
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dcn10_dpp_dscl.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:16 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: else should follow close brace '}'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c 
> b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
> index 7e140c35a0ce..5ca9ab8a76e8 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
> @@ -197,8 +197,7 @@ static void dpp1_dscl_set_lb(
> DITHER_EN, 0, /* Dithering enable: Disabled */
> INTERLEAVE_EN, lb_params->interleave_en, /* 
> Interleave source enable */
> LB_DATA_FORMAT__ALPHA_EN, lb_params->alpha_en); /* 
> Alpha enable */
> -   }
> -   else {
> +   } else {
> /* DSCL caps: pixel data processed in float format */
> REG_SET_2(LB_DATA_FORMAT, 0,
> INTERLEAVE_EN, lb_params->interleave_en, /* 
> Interleave source enable */
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dc_stream.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Wed, Aug 2, 2023 at 2:14 AM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/core/dc_stream.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
> index ea3d4b328e8e..05bb23bc122d 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
> @@ -71,8 +71,7 @@ static bool dc_stream_construct(struct dc_stream_state 
> *stream,
>
> /* Copy audio modes */
> /* TODO - Remove this translation */
> -   for (i = 0; i < (dc_sink_data->edid_caps.audio_mode_count); i++)
> -   {
> +   for (i = 0; i < (dc_sink_data->edid_caps.audio_mode_count); i++) {
> stream->audio_info.modes[i].channel_count = 
> dc_sink_data->edid_caps.audio_modes[i].channel_count;
> stream->audio_info.modes[i].format_code = 
> dc_sink_data->edid_caps.audio_modes[i].format_code;
> stream->audio_info.modes[i].sample_rates.all = 
> dc_sink_data->edid_caps.audio_modes[i].sample_rate;
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in bios_parser2.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

As a follow up patch, care to drop the break statements after a return?

On Tue, Aug 1, 2023 at 11:23 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: switch and case should be at the same indent
> ERROR: code indent should use tabs where possible
>
> Signed-off-by: Ran Sun 
> ---
>  .../drm/amd/display/dc/bios/bios_parser2.c| 32 +--
>  1 file changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c 
> b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> index 540d19efad8f..033ce2638eb2 100644
> --- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> @@ -772,20 +772,20 @@ static enum bp_result bios_parser_get_device_tag(
> return BP_RESULT_BADINPUT;
>
> switch (bp->object_info_tbl.revision.minor) {
> -   case 4:
> -   default:
> +   case 4:
> +   default:
> /* getBiosObject will return MXM object */
> -   object = get_bios_object(bp, connector_object_id);
> +   object = get_bios_object(bp, connector_object_id);
>
> if (!object) {
> BREAK_TO_DEBUGGER(); /* Invalid object id */
> return BP_RESULT_BADINPUT;
> }
>
> -   info->acpi_device = 0; /* BIOS no longer provides this */
> -   info->dev_id = device_type_from_device_id(object->device_tag);
> -   break;
> -   case 5:
> +   info->acpi_device = 0; /* BIOS no longer provides this */
> +   info->dev_id = device_type_from_device_id(object->device_tag);
> +   break;
> +   case 5:
> object_path_v3 = get_bios_object_from_path_v3(bp, 
> connector_object_id);
>
> if (!object_path_v3) {
> @@ -1580,13 +1580,13 @@ static bool bios_parser_is_device_id_supported(
> uint32_t mask = get_support_mask_for_device_id(id);
>
> switch (bp->object_info_tbl.revision.minor) {
> -   case 4:
> -   default:
> -   return 
> (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) & mask) != 0;
> -   break;
> -   case 5:
> -   return 
> (le16_to_cpu(bp->object_info_tbl.v1_5->supporteddevices) & mask) != 0;
> -   break;
> +   case 4:
> +   default:
> +   return 
> (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) & mask) != 0;
> +   break;
> +   case 5:
> +   return 
> (le16_to_cpu(bp->object_info_tbl.v1_5->supporteddevices) & mask) != 0;
> +   break;
> }
>
> return false;
> @@ -1755,7 +1755,7 @@ static enum bp_result bios_parser_get_firmware_info(
> case 2:
> case 3:
> result = get_firmware_info_v3_2(bp, info);
> -break;
> +   break;
> case 4:
> result = get_firmware_info_v3_4(bp, info);
> break;
> @@ -2225,7 +2225,7 @@ static enum bp_result 
> bios_parser_get_disp_connector_caps_info(
> return BP_RESULT_BADINPUT;
>
> switch (bp->object_info_tbl.revision.minor) {
> -   case 4:
> +   case 4:
> default:
> object = get_bios_object(bp, object_id);
>
> --
> 2.17.1
>


[PATCH v4 6/9] interconnect: Fix locking for runpm vs reclaim

2023-08-07 Thread Rob Clark
From: Rob Clark 

For cases where icc_set_bw() can be called in callpaths that could
deadlock against shrinker/reclaim, such as runpm resume, we need to
decouple the icc locking.  Introduce a new icc_bw_lock for cases where
we need to serialize bw aggregation and update, decoupling that from
paths that require memory allocation such as node/link creation and
destruction.

Fixes this lockdep splat:

   ==
   WARNING: possible circular locking dependency detected
   6.2.0-rc8-debug+ #554 Not tainted
   --
   ring0/132 is trying to acquire lock:
   ff80871916d0 (&gmu->lock){+.+.}-{3:3}, at: a6xx_pm_resume+0xf0/0x234

   but task is already holding lock:
   ffdb5aee57e8 (dma_fence_map){}-{0:0}, at: msm_job_run+0x68/0x150

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #4 (dma_fence_map){}-{0:0}:
  __dma_fence_might_wait+0x74/0xc0
  dma_resv_lockdep+0x1f4/0x2f4
  do_one_initcall+0x104/0x2bc
  kernel_init_freeable+0x344/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #3 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
  fs_reclaim_acquire+0x80/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  __kmalloc+0xd8/0x100
  topology_parse_cpu_capacity+0x8c/0x178
  get_cpu_for_node+0x88/0xc4
  parse_cluster+0x1b0/0x28c
  parse_cluster+0x8c/0x28c
  init_cpu_topology+0x168/0x188
  smp_prepare_cpus+0x24/0xf8
  kernel_init_freeable+0x18c/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #2 (fs_reclaim){+.+.}-{0:0}:
  __fs_reclaim_acquire+0x3c/0x48
  fs_reclaim_acquire+0x54/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  __kmalloc+0xd8/0x100
  kzalloc.constprop.0+0x14/0x20
  icc_node_create_nolock+0x4c/0xc4
  icc_node_create+0x38/0x58
  qcom_icc_rpmh_probe+0x1b8/0x248
  platform_probe+0x70/0xc4
  really_probe+0x158/0x290
  __driver_probe_device+0xc8/0xe0
  driver_probe_device+0x44/0x100
  __driver_attach+0xf8/0x108
  bus_for_each_dev+0x78/0xc4
  driver_attach+0x2c/0x38
  bus_add_driver+0xd0/0x1d8
  driver_register+0xbc/0xf8
  __platform_driver_register+0x30/0x3c
  qnoc_driver_init+0x24/0x30
  do_one_initcall+0x104/0x2bc
  kernel_init_freeable+0x344/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #1 (icc_lock){+.+.}-{3:3}:
  __mutex_lock+0xcc/0x3c8
  mutex_lock_nested+0x30/0x44
  icc_set_bw+0x88/0x2b4
  _set_opp_bw+0x8c/0xd8
  _set_opp+0x19c/0x300
  dev_pm_opp_set_opp+0x84/0x94
  a6xx_gmu_resume+0x18c/0x804
  a6xx_pm_resume+0xf8/0x234
  adreno_runtime_resume+0x2c/0x38
  pm_generic_runtime_resume+0x30/0x44
  __rpm_callback+0x15c/0x174
  rpm_callback+0x78/0x7c
  rpm_resume+0x318/0x524
  __pm_runtime_resume+0x78/0xbc
  adreno_load_gpu+0xc4/0x17c
  msm_open+0x50/0x120
  drm_file_alloc+0x17c/0x228
  drm_open_helper+0x74/0x118
  drm_open+0xa0/0x144
  drm_stub_open+0xd4/0xe4
  chrdev_open+0x1b8/0x1e4
  do_dentry_open+0x2f8/0x38c
  vfs_open+0x34/0x40
  path_openat+0x64c/0x7b4
  do_filp_open+0x54/0xc4
  do_sys_openat2+0x9c/0x100
  do_sys_open+0x50/0x7c
  __arm64_sys_openat+0x28/0x34
  invoke_syscall+0x8c/0x128
  el0_svc_common.constprop.0+0xa0/0x11c
  do_el0_svc+0xac/0xbc
  el0_svc+0x48/0xa0
  el0t_64_sync_handler+0xac/0x13c
  el0t_64_sync+0x190/0x194

   -> #0 (&gmu->lock){+.+.}-{3:3}:
  __lock_acquire+0xe00/0x1060
  lock_acquire+0x1e0/0x2f8
  __mutex_lock+0xcc/0x3c8
  mutex_lock_nested+0x30/0x44
  a6xx_pm_resume+0xf0/0x234
  adreno_runtime_resume+0x2c/0x38
  pm_generic_runtime_resume+0x30/0x44
  __rpm_callback+0x15c/0x174
  rpm_callback+0x78/0x7c
  rpm_resume+0x318/0x524
  __pm_runtime_resume+0x78/0xbc
  pm_runtime_get_sync.isra.0+0x14/0x20
  msm_gpu_submit+0x58/0x178
  msm_job_run+0x78/0x150
  drm_sched_main+0x290/0x370
  kthread+0xf0/0x100
  ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
 &gmu->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map

Possible unsafe locking scenario:

  CPU0CPU1
  
 lock(dma_fence_map);
  lock(mmu_notif

[PATCH v4 9/9] drm/msm: Enable fence signalling annotations

2023-08-07 Thread Rob Clark
From: Rob Clark 

Now that the runpm/qos/interconnect lockdep vs reclaim issues are
solved, we can enable the fence signalling annotations without lockdep
making its immediate displeasure known.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_ringbuffer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c 
b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 7f5e0a961bba..cb9cf41bcb9b 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -97,6 +97,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu 
*gpu, int id,
 /* currently managing hangcheck ourselves: */
sched_timeout = MAX_SCHEDULE_TIMEOUT;
 
+   ring->sched.fence_signalling = true;
ret = drm_sched_init(&ring->sched, &msm_sched_ops,
num_hw_submissions, 0, sched_timeout,
NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
-- 
2.41.0



[PATCH v4 8/9] drm/sched: Add (optional) fence signaling annotation

2023-08-07 Thread Rob Clark
From: Rob Clark 

Based on
https://lore.kernel.org/dri-devel/20200604081224.863494-10-daniel.vet...@ffwll.ch/
but made to be optional.

Signed-off-by: Rob Clark 
Reviewed-by: Luben Tuikov 
---
 drivers/gpu/drm/scheduler/sched_main.c | 9 +
 include/drm/gpu_scheduler.h| 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index 7b2bfc10c1a5..b0368b815ff5 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1005,10 +1005,15 @@ static bool drm_sched_blocked(struct drm_gpu_scheduler 
*sched)
 static int drm_sched_main(void *param)
 {
struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
+   const bool fence_signalling = sched->fence_signalling;
+   bool fence_cookie;
int r;
 
sched_set_fifo_low(current);
 
+   if (fence_signalling)
+   fence_cookie = dma_fence_begin_signalling();
+
while (!kthread_should_stop()) {
struct drm_sched_entity *entity = NULL;
struct drm_sched_fence *s_fence;
@@ -1064,6 +1069,10 @@ static int drm_sched_main(void *param)
 
wake_up(&sched->job_scheduled);
}
+
+   if (fence_signalling)
+   dma_fence_end_signalling(fence_cookie);
+
return 0;
 }
 
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index e95b4837e5a3..58d958ad31a1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -493,6 +493,7 @@ struct drm_sched_backend_ops {
  * @ready: marks if the underlying HW is ready to work
  * @free_guilty: A hit to time out handler to free the guilty job.
  * @dev: system &struct device
+ * @fence_signalling: Opt in to fence signalling annotations
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -517,6 +518,7 @@ struct drm_gpu_scheduler {
boolready;
boolfree_guilty;
struct device   *dev;
+   boolfence_signalling;
 };
 
 int drm_sched_init(struct drm_gpu_scheduler *sched,
-- 
2.41.0



[PATCH v4 7/9] interconnect: Teach lockdep about icc_bw_lock order

2023-08-07 Thread Rob Clark
From: Rob Clark 

Teach lockdep that icc_bw_lock is needed in code paths that could
deadlock if they trigger reclaim.

Signed-off-by: Rob Clark 
---
 drivers/interconnect/core.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index e15a92a79df1..1afbc4f7c6e7 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -1041,13 +1041,21 @@ void icc_sync_state(struct device *dev)
}
}
}
+   mutex_unlock(&icc_bw_lock);
mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_sync_state);
 
 static int __init icc_init(void)
 {
-   struct device_node *root = of_find_node_by_path("/");
+   struct device_node *root;
+
+   /* Teach lockdep about lock ordering wrt. shrinker: */
+   fs_reclaim_acquire(GFP_KERNEL);
+   might_lock(&icc_bw_lock);
+   fs_reclaim_release(GFP_KERNEL);
+
+   root = of_find_node_by_path("/");
 
providers_count = of_count_icc_providers(root);
of_node_put(root);
-- 
2.41.0



[PATCH v4 5/9] PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order

2023-08-07 Thread Rob Clark
From: Rob Clark 

Annotate dev_pm_qos_mtx to teach lockdep to scream about allocations
that could trigger reclaim under dev_pm_qos_mtx.

Signed-off-by: Rob Clark 
---
 drivers/base/power/qos.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 5ec06585b6d1..63cb1086f195 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -1017,3 +1017,14 @@ void dev_pm_qos_hide_latency_tolerance(struct device 
*dev)
pm_runtime_put(dev);
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance);
+
+static int __init dev_pm_qos_init(void)
+{
+   /* Teach lockdep about lock ordering wrt. shrinker: */
+   fs_reclaim_acquire(GFP_KERNEL);
+   might_lock(&dev_pm_qos_mtx);
+   fs_reclaim_release(GFP_KERNEL);
+
+   return 0;
+}
+early_initcall(dev_pm_qos_init);
-- 
2.41.0



Re: [PATCH] drm/amd/display: Clean up errors in dcn316_smu.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 11:03 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
> ERROR: code indent should use tabs where possible
>
> Signed-off-by: Ran Sun 
> ---
>  .../amd/display/dc/clk_mgr/dcn316/dcn316_smu.c | 18 --
>  1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c 
> b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
> index 457a9254ae1c..3ed19197a755 100644
> --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
> +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
> @@ -34,23 +34,21 @@
>  #define MAX_INSTANCE7
>  #define MAX_SEGMENT 6
>
> -struct IP_BASE_INSTANCE
> -{
> +struct IP_BASE_INSTANCE {
>  unsigned int segment[MAX_SEGMENT];
>  };
>
> -struct IP_BASE
> -{
> +struct IP_BASE {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
>  };
>
>  static const struct IP_BASE MP0_BASE = { { { { 0x00016000, 0x00DC, 
> 0x00E0, 0x00E4, 0x0243FC00, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } } } };
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } } } };
>
>  #define REG(reg_name) \
> (MP0_BASE.instance[0].segment[reg ## reg_name ## _BASE_IDX] + reg ## 
> reg_name)
> --
> 2.17.1
>


[PATCH v4 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-07 Thread Rob Clark
From: Rob Clark 

In the process of adding lockdep annotation for drm GPU scheduler's
job_run() to detect potential deadlock against shrinker/reclaim, I hit
this lockdep splat:

   ==
   WARNING: possible circular locking dependency detected
   6.2.0-rc8-debug+ #558 Tainted: GW
   --
   ring0/125 is trying to acquire lock:
   ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
dev_pm_qos_update_request+0x38/0x68

   but task is already holding lock:
   ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
msm_gpu_submit+0xec/0x178

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #4 (&gpu->active_lock){+.+.}-{3:3}:
  __mutex_lock+0xcc/0x3c8
  mutex_lock_nested+0x30/0x44
  msm_gpu_submit+0xec/0x178
  msm_job_run+0x78/0x150
  drm_sched_main+0x290/0x370
  kthread+0xf0/0x100
  ret_from_fork+0x10/0x20

   -> #3 (dma_fence_map){}-{0:0}:
  __dma_fence_might_wait+0x74/0xc0
  dma_resv_lockdep+0x1f4/0x2f4
  do_one_initcall+0x104/0x2bc
  kernel_init_freeable+0x344/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
  fs_reclaim_acquire+0x80/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  __kmalloc+0xd8/0x100
  topology_parse_cpu_capacity+0x8c/0x178
  get_cpu_for_node+0x88/0xc4
  parse_cluster+0x1b0/0x28c
  parse_cluster+0x8c/0x28c
  init_cpu_topology+0x168/0x188
  smp_prepare_cpus+0x24/0xf8
  kernel_init_freeable+0x18c/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #1 (fs_reclaim){+.+.}-{0:0}:
  __fs_reclaim_acquire+0x3c/0x48
  fs_reclaim_acquire+0x54/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  kmalloc_trace+0x50/0xa8
  dev_pm_qos_constraints_allocate+0x38/0x100
  __dev_pm_qos_add_request+0xb0/0x1e8
  dev_pm_qos_add_request+0x58/0x80
  dev_pm_qos_expose_latency_limit+0x60/0x13c
  register_cpu+0x12c/0x130
  topology_init+0xac/0xbc
  do_one_initcall+0x104/0x2bc
  kernel_init_freeable+0x344/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
  __lock_acquire+0xe00/0x1060
  lock_acquire+0x1e0/0x2f8
  __mutex_lock+0xcc/0x3c8
  mutex_lock_nested+0x30/0x44
  dev_pm_qos_update_request+0x38/0x68
  msm_devfreq_boost+0x40/0x70
  msm_devfreq_active+0xc0/0xf0
  msm_gpu_submit+0x10c/0x178
  msm_job_run+0x78/0x150
  drm_sched_main+0x290/0x370
  kthread+0xf0/0x100
  ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
 dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock

Possible unsafe locking scenario:

  CPU0CPU1
  
 lock(&gpu->active_lock);
  lock(dma_fence_map);
  lock(&gpu->active_lock);
 lock(dev_pm_qos_mtx);

*** DEADLOCK ***

   3 locks held by ring0/123:
#0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
#1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: msm_job_run+0x68/0x150
#2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: 
msm_gpu_submit+0xec/0x178

   stack backtrace:
   CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
   Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
   Call trace:
dump_backtrace.part.0+0xb4/0xf8
show_stack+0x20/0x38
dump_stack_lvl+0x9c/0xd0
dump_stack+0x18/0x34
print_circular_bug+0x1b4/0x1f0
check_noncircular+0x78/0xac
__lock_acquire+0xe00/0x1060
lock_acquire+0x1e0/0x2f8
__mutex_lock+0xcc/0x3c8
mutex_lock_nested+0x30/0x44
dev_pm_qos_update_request+0x38/0x68
msm_devfreq_boost+0x40/0x70
msm_devfreq_active+0xc0/0xf0
msm_gpu_submit+0x10c/0x178
msm_job_run+0x78/0x150
drm_sched_main+0x290/0x370
kthread+0xf0/0x100
ret_from_fork+0x10/0x20

The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or
freq change) path, but it is also held across allocations that could
recurse into shrinker.

Solve this by changing dev_pm_qos_constraints_allocate() into a function
that can be called unconditionally before the device qos object is
needed and before acquiring dev_pm_qos_mtx.  This way the allocations can
be done without holding the mutex.  In the case that we raced with
another thread to allocate the qos object, detect this *after* acquiring
the dev_pm_qos_mtx and simply free the

[PATCH v4 4/9] PM / QoS: Decouple request alloc from dev_pm_qos_mtx

2023-08-07 Thread Rob Clark
From: Rob Clark 

Similar to the previous patch, move the allocation out from under
dev_pm_qos_mtx, by speculatively doing the allocation and handling
any race after acquiring dev_pm_qos_mtx by freeing the redundant
allocation.

Signed-off-by: Rob Clark 
---
 drivers/base/power/qos.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 7e95760d16dc..5ec06585b6d1 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -930,8 +930,12 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device 
*dev)
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 {
struct dev_pm_qos *qos = dev_pm_qos_constraints_allocate(dev);
+   struct dev_pm_qos_request *req = NULL;
int ret = 0;
 
+   if (!dev->power.qos->latency_tolerance_req)
+   req = kzalloc(sizeof(*req), GFP_KERNEL);
+
mutex_lock(&dev_pm_qos_mtx);
 
dev_pm_qos_constraints_set(dev, qos);
@@ -945,8 +949,6 @@ int dev_pm_qos_update_user_latency_tolerance(struct device 
*dev, s32 val)
goto out;
 
if (!dev->power.qos->latency_tolerance_req) {
-   struct dev_pm_qos_request *req;
-
if (val < 0) {
if (val == PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT)
ret = 0;
@@ -954,17 +956,15 @@ int dev_pm_qos_update_user_latency_tolerance(struct 
device *dev, s32 val)
ret = -EINVAL;
goto out;
}
-   req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto out;
}
ret = __dev_pm_qos_add_request(dev, req, 
DEV_PM_QOS_LATENCY_TOLERANCE, val);
-   if (ret < 0) {
-   kfree(req);
+   if (ret < 0)
goto out;
-   }
dev->power.qos->latency_tolerance_req = req;
+   req = NULL;
} else {
if (val < 0) {
__dev_pm_qos_drop_user_request(dev, 
DEV_PM_QOS_LATENCY_TOLERANCE);
@@ -976,6 +976,7 @@ int dev_pm_qos_update_user_latency_tolerance(struct device 
*dev, s32 val)
 
  out:
mutex_unlock(&dev_pm_qos_mtx);
+   kfree(req);
return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_update_user_latency_tolerance);
-- 
2.41.0



[PATCH v4 1/9] PM / devfreq: Drop unneed locking to appease lockdep

2023-08-07 Thread Rob Clark
From: Rob Clark 

In the process of adding lockdep annotation for GPU job_run() path to
catch potential deadlocks against the shrinker/reclaim path, I turned
up this lockdep splat:

   ==
   WARNING: possible circular locking dependency detected
   6.2.0-rc8-debug+ #556 Not tainted
   --
   ring0/123 is trying to acquire lock:
   ff8087219078 (&devfreq->lock){+.+.}-{3:3}, at: 
devfreq_monitor_resume+0x3c/0xf0

   but task is already holding lock:
   ffd6f64e57e8 (dma_fence_map){}-{0:0}, at: msm_job_run+0x68/0x150

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #3 (dma_fence_map){}-{0:0}:
  __dma_fence_might_wait+0x74/0xc0
  dma_resv_lockdep+0x1f4/0x2f4
  do_one_initcall+0x104/0x2bc
  kernel_init_freeable+0x344/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
  fs_reclaim_acquire+0x80/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  __kmalloc+0xd8/0x100
  topology_parse_cpu_capacity+0x8c/0x178
  get_cpu_for_node+0x88/0xc4
  parse_cluster+0x1b0/0x28c
  parse_cluster+0x8c/0x28c
  init_cpu_topology+0x168/0x188
  smp_prepare_cpus+0x24/0xf8
  kernel_init_freeable+0x18c/0x34c
  kernel_init+0x30/0x134
  ret_from_fork+0x10/0x20

   -> #1 (fs_reclaim){+.+.}-{0:0}:
  __fs_reclaim_acquire+0x3c/0x48
  fs_reclaim_acquire+0x54/0xa8
  slab_pre_alloc_hook.constprop.0+0x40/0x25c
  __kmem_cache_alloc_node+0x60/0x1cc
  __kmalloc_node_track_caller+0xb8/0xe0
  kstrdup+0x70/0x90
  kstrdup_const+0x38/0x48
  kvasprintf_const+0x48/0xbc
  kobject_set_name_vargs+0x40/0xb0
  dev_set_name+0x64/0x8c
  devfreq_add_device+0x31c/0x55c
  devm_devfreq_add_device+0x6c/0xb8
  msm_devfreq_init+0xa8/0x16c
  msm_gpu_init+0x38c/0x570
  adreno_gpu_init+0x1b4/0x2b4
  a6xx_gpu_init+0x15c/0x3e4
  adreno_bind+0x218/0x254
  component_bind_all+0x114/0x1ec
  msm_drm_bind+0x2b8/0x608
  try_to_bring_up_aggregate_device+0x88/0x1a4
  __component_add+0xec/0x13c
  component_add+0x1c/0x28
  dsi_dev_attach+0x28/0x34
  dsi_host_attach+0xdc/0x124
  mipi_dsi_attach+0x30/0x44
  devm_mipi_dsi_attach+0x2c/0x70
  ti_sn_bridge_probe+0x298/0x2c4
  auxiliary_bus_probe+0x7c/0x94
  really_probe+0x158/0x290
  __driver_probe_device+0xc8/0xe0
  driver_probe_device+0x44/0x100
  __device_attach_driver+0x64/0xdc
  bus_for_each_drv+0xa0/0xc8
  __device_attach+0xd8/0x168
  device_initial_probe+0x1c/0x28
  bus_probe_device+0x38/0xa0
  deferred_probe_work_func+0xc8/0xe0
  process_one_work+0x2d8/0x478
  process_scheduled_works+0x4c/0x50
  worker_thread+0x218/0x274
  kthread+0xf0/0x100
  ret_from_fork+0x10/0x20

   -> #0 (&devfreq->lock){+.+.}-{3:3}:
  __lock_acquire+0xe00/0x1060
  lock_acquire+0x1e0/0x2f8
  __mutex_lock+0xcc/0x3c8
  mutex_lock_nested+0x30/0x44
  devfreq_monitor_resume+0x3c/0xf0
  devfreq_simple_ondemand_handler+0x54/0x7c
  devfreq_resume_device+0xa4/0xe8
  msm_devfreq_resume+0x78/0xa8
  a6xx_pm_resume+0x110/0x234
  adreno_runtime_resume+0x2c/0x38
  pm_generic_runtime_resume+0x30/0x44
  __rpm_callback+0x15c/0x174
  rpm_callback+0x78/0x7c
  rpm_resume+0x318/0x524
  __pm_runtime_resume+0x78/0xbc
  pm_runtime_get_sync.isra.0+0x14/0x20
  msm_gpu_submit+0x58/0x178
  msm_job_run+0x78/0x150
  drm_sched_main+0x290/0x370
  kthread+0xf0/0x100
  ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
 &devfreq->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map

Possible unsafe locking scenario:

  CPU0CPU1
  
 lock(dma_fence_map);
  lock(mmu_notifier_invalidate_range_start);
  lock(dma_fence_map);
 lock(&devfreq->lock);

*** DEADLOCK ***

   2 locks held by ring0/123:
#0: ff8087201170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
#1: ffd6f64e57e8 (dma_fence_map){}-{0:0}, at: msm_job_run+0x68/0x150

   stack backtrace:
   CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #556
   Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
   Call trace:
dump_backtrace.part.0+0xb4/0xf8
show_stack+0x20/0x38
dump_stack_lvl+0

[PATCH v4 2/9] PM / devfreq: Teach lockdep about locking order

2023-08-07 Thread Rob Clark
From: Rob Clark 

This will make it easier to catch places doing allocations that can
trigger reclaim under devfreq->lock.

Because devfreq->lock is held over various devfreq_dev_profile
callbacks, there might be some fallout if those callbacks do allocations
that can trigger reclaim, but I've looked through the various callback
implementations and don't see anything obvious.  If it does trigger any
lockdep splats, those should be fixed.

Signed-off-by: Rob Clark 
---
 drivers/devfreq/devfreq.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index e5558ec68ce8..81add6064406 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -817,6 +817,12 @@ struct devfreq *devfreq_add_device(struct device *dev,
}
 
mutex_init(&devfreq->lock);
+
+   /* Teach lockdep about lock ordering wrt. shrinker: */
+   fs_reclaim_acquire(GFP_KERNEL);
+   might_lock(&devfreq->lock);
+   fs_reclaim_release(GFP_KERNEL);
+
devfreq->dev.parent = dev;
devfreq->dev.class = devfreq_class;
devfreq->dev.release = devfreq_dev_release;
-- 
2.41.0



[PATCH v4 0/9] drm/msm+PM+icc: Make job_run() reclaim-safe

2023-08-07 Thread Rob Clark
From: Rob Clark 

Inspired by 
https://lore.kernel.org/dri-devel/20200604081224.863494-10-daniel.vet...@ffwll.ch/
it seemed like a good idea to get rid of memory allocation in job_run()
fence signaling path, and use lockdep annotations to yell at us about
anything that could deadlock against shrinker/reclaim.  Anything that
can trigger reclaim, or block on any other thread that has triggered
reclaim, can block the GPU shrinker from releasing memory if it is
waiting for the job to complete, causing deadlock.
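[Editorial note] The inversion the series guards against can be sketched schematically (hypothetical names, not real kernel code):

```
/* Schematic only -- not real kernel code. */

  reclaim / GPU shrinker                 job_run() (signals the fence)
  ----------------------                 -----------------------------
  shrinker_scan()                        kzalloc(GFP_KERNEL)
    -> dma_fence_wait(job_fence)           -> direct reclaim
       (waits for job_run() to                -> blocks behind the
        signal the fence)                        shrinker above

  Neither side can make progress: the shrinker waits for the fence,
  and the fence cannot be signalled until the allocation -- and hence
  reclaim, and hence the shrinker -- completes.
```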

The first two patches decouple allocation from devfreq->lock, and teach
lockdep that devfreq->lock can be acquired in paths that the shrinker
indirectly depends on.

The next three patches do the same for PM QoS.  And the next two do a
similar thing for interconnect.

And then finally the last two patches enable the lockdep fence-
signalling annotations.


v2: Switch from embedding hw_fence in submit/job object to preallocating
the hw_fence.  Rework "fenced unpin" locking to drop obj lock from
fence signaling path (ie. the part that was still WIP in the first
iteration of the patchset).  Adds the final patch to enable fence
signaling annotations now that job_run() and job_free() are safe.
The PM devfreq/QoS and interconnect patches are unchanged.

v3: Mostly unchanged, but series is much smaller now that drm changes
have landed, mostly consisting of the remaining devfreq/qos/
interconnect fixes.

v4: Re-work PM / QoS patch based on Rafael's suggestion

Rob Clark (9):
  PM / devfreq: Drop unneed locking to appease lockdep
  PM / devfreq: Teach lockdep about locking order
  PM / QoS: Fix constraints alloc vs reclaim locking
  PM / QoS: Decouple request alloc from dev_pm_qos_mtx
  PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order
  interconnect: Fix locking for runpm vs reclaim
  interconnect: Teach lockdep about icc_bw_lock order
  drm/sched: Add (optional) fence signaling annotation
  drm/msm: Enable fence signalling annotations

 drivers/base/power/qos.c   | 98 +++---
 drivers/devfreq/devfreq.c  | 52 +++---
 drivers/gpu/drm/msm/msm_ringbuffer.c   |  1 +
 drivers/gpu/drm/scheduler/sched_main.c |  9 +++
 drivers/interconnect/core.c| 18 -
 include/drm/gpu_scheduler.h|  2 +
 6 files changed, 127 insertions(+), 53 deletions(-)

-- 
2.41.0



Re: [PATCH] drm/amd/display: Clean up errors in dcn316_clk_mgr.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 11:01 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
>
> Signed-off-by: Ran Sun 
> ---
>  .../gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c  | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c 
> b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
> index 0349631991b8..09151cc56ce4 100644
> --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
> +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
> @@ -45,13 +45,11 @@
>  #define MAX_INSTANCE7
>  #define MAX_SEGMENT 6
>
> -struct IP_BASE_INSTANCE
> -{
> +struct IP_BASE_INSTANCE {
>  unsigned int segment[MAX_SEGMENT];
>  };
>
> -struct IP_BASE
> -{
> +struct IP_BASE {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
>  };
>
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dcn315_smu.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:58 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
> ERROR: code indent should use tabs where possible
>
> Signed-off-by: Ran Sun 
> ---
>  .../display/dc/clk_mgr/dcn315/dcn315_smu.c| 26 +--
>  1 file changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c 
> b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
> index 925d6e13620e..3e0da873cf4c 100644
> --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
> +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
> @@ -33,28 +33,26 @@
>  #define MAX_INSTANCE6
>  #define MAX_SEGMENT 6
>
> -struct IP_BASE_INSTANCE
> -{
> +struct IP_BASE_INSTANCE {
>  unsigned int segment[MAX_SEGMENT];
>  };
>
> -struct IP_BASE
> -{
> +struct IP_BASE {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
>  };
>
>  static const struct IP_BASE MP0_BASE = { { { { 0x00016000, 0x00DC, 
> 0x00E0, 0x00E4, 0x0243FC00, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } } } };
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } } } };
>  static const struct IP_BASE NBIO_BASE = { { { { 0x, 0x0014, 
> 0x0D20, 0x00010400, 0x0241B000, 0x0404 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } },
> -{ { 0, 0, 0, 0, 0, 0 } } } };
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } },
> +   { { 0, 0, 0, 0, 0, 0 } } } };
>
>  #define regBIF_BX_PF2_RSMU_INDEX 
>0x
>  #define regBIF_BX_PF2_RSMU_INDEX_BASE_IDX
>1
> --
> 2.17.1
>


Re: [PATCH 00/11] fbdev/sbus: Initializers for struct fb_ops

2023-08-07 Thread Sam Ravnborg
Hi Thomas,

On Sun, Aug 06, 2023 at 01:58:51PM +0200, Thomas Zimmermann wrote:
> Add initializer macros for struct fb_ops of drivers that operate
> on SBUS-based framebuffers. Also add a Kconfig token to select the
> correct dependencies.
> 
> All drivers for SBUS-based framebuffers use the regular helpers
> for framebuffers in I/O memory (fb_io_*() and cfb_*()). Each driver
> provides its own implementation of mmap and ioctls around common
> helpers from sbuslib.o. Patches 1 to 3 clean up the code a bit and
> add initializer macros that set up struct fb_ops correctly.
> 
> Patches 4 to 11 convert the drivers. Each patch slightly renames
> the driver's mmap and ioctl functions so that it matches the name
> pattern of sbuslib.o.
> 
> Like the other fbdev initializer macros, the SBUS helpers are
> easily grep-able. In a later patch, they can be left to empty values
> if the rsp. functionality, such as file I/O or console, has been
> disabled.
> 
> There are no functional changes. The helpers set the defaults that
> the drivers already use. The fb_io_*() functions that the initializer
> macro sets are the defaults if struct fb_ops.fb_read or .fb_write are
> NULL. After all drivers have been updated to set them explicitly, the
> defaults can be dropped and the functions can be made optional.

I have looked through it all and it looks good.
I ran it through my sparc32 build setup - also OK.

cg6 and ffb use their own imageblit and friends, but this is nicely
handled in the patches.
I also like how you managed to handle the compat case.

All are:
Reviewed-by: Sam Ravnborg 

Sam


Re: [PATCH] drm/amd/display: Clean up errors in dce112_hw_sequencer.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:54 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required before the open brace '{'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c 
> b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
> index 690caaaff019..0ef9ebb3c1e2 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
> @@ -127,7 +127,7 @@ static bool dce112_enable_display_power_gating(
> else
> cntl = ASIC_PIPE_DISABLE;
>
> -   if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0){
> +   if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0) {
>
> bp_result = dcb->funcs->enable_disp_power_gating(
> dcb, controller_id + 1, cntl);
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dce110_hw_sequencer.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:53 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required before the open brace '{'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> index 20d4d08a6a2f..7f306d979c63 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
> @@ -219,7 +219,7 @@ static bool dce110_enable_display_power_gating(
> if (controller_id == underlay_idx)
> controller_id = CONTROLLER_ID_UNDERLAY0 - 1;
>
> -   if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0){
> +   if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0) {
>
> bp_result = dcb->funcs->enable_disp_power_gating(
> dcb, controller_id + 1, cntl);
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in dce110_timing_generator.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:52 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: spaces required around that '=' (ctx:WxV)
>
> Signed-off-by: Ran Sun 
> ---
>  .../gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c   | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c 
> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c
> index 27cbb5b42c7e..6424e7f279dc 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c
> @@ -288,7 +288,7 @@ bool dce110_timing_generator_program_timing_generator(
>
> uint32_t vsync_offset = dc_crtc_timing->v_border_bottom +
> dc_crtc_timing->v_front_porch;
> -   uint32_t v_sync_start =dc_crtc_timing->v_addressable + vsync_offset;
> +   uint32_t v_sync_start = dc_crtc_timing->v_addressable + vsync_offset;
>
> uint32_t hsync_offset = dc_crtc_timing->h_border_right +
> dc_crtc_timing->h_front_porch;
> @@ -603,7 +603,7 @@ void dce110_timing_generator_program_blanking(
>  {
> uint32_t vsync_offset = timing->v_border_bottom +
> timing->v_front_porch;
> -   uint32_t v_sync_start =timing->v_addressable + vsync_offset;
> +   uint32_t v_sync_start = timing->v_addressable + vsync_offset;
>
> uint32_t hsync_offset = timing->h_border_right +
> timing->h_front_porch;
> --
> 2.17.1
>


Re: [PATCH] drm/amd/dc: Clean up errors in hpd_regs.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:47 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required after that ',' (ctx:VxV)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/dc/gpio/hpd_regs.h | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hpd_regs.h 
> b/drivers/gpu/drm/amd/display/dc/gpio/hpd_regs.h
> index dcfdd71b2304..debb363cfcf4 100644
> --- a/drivers/gpu/drm/amd/display/dc/gpio/hpd_regs.h
> +++ b/drivers/gpu/drm/amd/display/dc/gpio/hpd_regs.h
> @@ -36,17 +36,17 @@
>  #define ONE_MORE_5 6
>
>
> -#define HPD_GPIO_REG_LIST_ENTRY(type,cd,id) \
> +#define HPD_GPIO_REG_LIST_ENTRY(type, cd, id) \
> .type ## _reg =  REG(DC_GPIO_HPD_## type),\
> .type ## _mask =  DC_GPIO_HPD_ ## type ## __DC_GPIO_HPD ## id ## _ ## 
> type ## _MASK,\
> .type ## _shift = DC_GPIO_HPD_ ## type ## __DC_GPIO_HPD ## id ## _ ## 
> type ## __SHIFT
>
>  #define HPD_GPIO_REG_LIST(id) \
> {\
> -   HPD_GPIO_REG_LIST_ENTRY(MASK,cd,id),\
> -   HPD_GPIO_REG_LIST_ENTRY(A,cd,id),\
> -   HPD_GPIO_REG_LIST_ENTRY(EN,cd,id),\
> -   HPD_GPIO_REG_LIST_ENTRY(Y,cd,id)\
> +   HPD_GPIO_REG_LIST_ENTRY(MASK, cd, id),\
> +   HPD_GPIO_REG_LIST_ENTRY(A, cd, id),\
> +   HPD_GPIO_REG_LIST_ENTRY(EN, cd, id),\
> +   HPD_GPIO_REG_LIST_ENTRY(Y, cd, id)\
> }
>
>  #define HPD_REG_LIST(id) \
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in ddc_regs.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:44 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space required after that ',' (ctx:VxV)
>
> Signed-off-by: Ran Sun 
> ---
>  .../gpu/drm/amd/display/dc/gpio/ddc_regs.h| 40 +--
>  1 file changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h 
> b/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
> index 59884ef651b3..4a2bf81286d8 100644
> --- a/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
> +++ b/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
> @@ -31,21 +31,21 @@
>  /** new register headers */
>  /*** following in header */
>
> -#define DDC_GPIO_REG_LIST_ENTRY(type,cd,id) \
> +#define DDC_GPIO_REG_LIST_ENTRY(type, cd, id) \
> .type ## _reg =   REG(DC_GPIO_DDC ## id ## _ ## type),\
> .type ## _mask =  DC_GPIO_DDC ## id ## _ ## type ## __DC_GPIO_DDC ## 
> id ## cd ## _ ## type ## _MASK,\
> .type ## _shift = DC_GPIO_DDC ## id ## _ ## type ## __DC_GPIO_DDC ## 
> id ## cd ## _ ## type ## __SHIFT
>
> -#define DDC_GPIO_REG_LIST(cd,id) \
> +#define DDC_GPIO_REG_LIST(cd, id) \
> {\
> -   DDC_GPIO_REG_LIST_ENTRY(MASK,cd,id),\
> -   DDC_GPIO_REG_LIST_ENTRY(A,cd,id),\
> -   DDC_GPIO_REG_LIST_ENTRY(EN,cd,id),\
> -   DDC_GPIO_REG_LIST_ENTRY(Y,cd,id)\
> +   DDC_GPIO_REG_LIST_ENTRY(MASK, cd, id),\
> +   DDC_GPIO_REG_LIST_ENTRY(A, cd, id),\
> +   DDC_GPIO_REG_LIST_ENTRY(EN, cd, id),\
> +   DDC_GPIO_REG_LIST_ENTRY(Y, cd, id)\
> }
>
> -#define DDC_REG_LIST(cd,id) \
> -   DDC_GPIO_REG_LIST(cd,id),\
> +#define DDC_REG_LIST(cd, id) \
> +   DDC_GPIO_REG_LIST(cd, id),\
> .ddc_setup = REG(DC_I2C_DDC ## id ## _SETUP)
>
> #define DDC_REG_LIST_DCN2(cd, id) \
> @@ -54,34 +54,34 @@
> .phy_aux_cntl = REG(PHY_AUX_CNTL), \
> .dc_gpio_aux_ctrl_5 = REG(DC_GPIO_AUX_CTRL_5)
>
> -#define DDC_GPIO_VGA_REG_LIST_ENTRY(type,cd)\
> +#define DDC_GPIO_VGA_REG_LIST_ENTRY(type, cd)\
> .type ## _reg =   REG(DC_GPIO_DDCVGA_ ## type),\
> .type ## _mask =  DC_GPIO_DDCVGA_ ## type ## __DC_GPIO_DDCVGA ## cd 
> ## _ ## type ## _MASK,\
> .type ## _shift = DC_GPIO_DDCVGA_ ## type ## __DC_GPIO_DDCVGA ## cd 
> ## _ ## type ## __SHIFT
>
>  #define DDC_GPIO_VGA_REG_LIST(cd) \
> {\
> -   DDC_GPIO_VGA_REG_LIST_ENTRY(MASK,cd),\
> -   DDC_GPIO_VGA_REG_LIST_ENTRY(A,cd),\
> -   DDC_GPIO_VGA_REG_LIST_ENTRY(EN,cd),\
> -   DDC_GPIO_VGA_REG_LIST_ENTRY(Y,cd)\
> +   DDC_GPIO_VGA_REG_LIST_ENTRY(MASK, cd),\
> +   DDC_GPIO_VGA_REG_LIST_ENTRY(A, cd),\
> +   DDC_GPIO_VGA_REG_LIST_ENTRY(EN, cd),\
> +   DDC_GPIO_VGA_REG_LIST_ENTRY(Y, cd)\
> }
>
>  #define DDC_VGA_REG_LIST(cd) \
> DDC_GPIO_VGA_REG_LIST(cd),\
> .ddc_setup = mmDC_I2C_DDCVGA_SETUP
>
> -#define DDC_GPIO_I2C_REG_LIST_ENTRY(type,cd) \
> +#define DDC_GPIO_I2C_REG_LIST_ENTRY(type, cd) \
> .type ## _reg =   REG(DC_GPIO_I2CPAD_ ## type),\
> .type ## _mask =  DC_GPIO_I2CPAD_ ## type ## __DC_GPIO_ ## cd ## _ ## 
> type ## _MASK,\
> .type ## _shift = DC_GPIO_I2CPAD_ ## type ## __DC_GPIO_ ## cd ## _ ## 
> type ## __SHIFT
>
>  #define DDC_GPIO_I2C_REG_LIST(cd) \
> {\
> -   DDC_GPIO_I2C_REG_LIST_ENTRY(MASK,cd),\
> -   DDC_GPIO_I2C_REG_LIST_ENTRY(A,cd),\
> -   DDC_GPIO_I2C_REG_LIST_ENTRY(EN,cd),\
> -   DDC_GPIO_I2C_REG_LIST_ENTRY(Y,cd)\
> +   DDC_GPIO_I2C_REG_LIST_ENTRY(MASK, cd),\
> +   DDC_GPIO_I2C_REG_LIST_ENTRY(A, cd),\
> +   DDC_GPIO_I2C_REG_LIST_ENTRY(EN, cd),\
> +   DDC_GPIO_I2C_REG_LIST_ENTRY(Y, cd)\
> }
>
>  #define DDC_I2C_REG_LIST(cd) \
> @@ -150,12 +150,12 @@ struct ddc_sh_mask {
>
>  #define ddc_data_regs(id) \
>  {\
> -   DDC_REG_LIST(DATA,id)\
> +   DDC_REG_LIST(DATA, id)\
>  }
>
>  #define ddc_clk_regs(id) \
>  {\
> -   DDC_REG_LIST(CLK,id)\
> +   DDC_REG_LIST(CLK, id)\
>  }
>
>  #define ddc_vga_data_regs \
> --
> 2.17.1
>


Re: [PATCH] drm/amd/display: Clean up errors in color_gamma.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:40 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: trailing whitespace
> ERROR: else should follow close brace '}'
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/display/modules/color/color_gamma.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c 
> b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
> index 67a062af3ab0..ff8e5708735d 100644
> --- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
> +++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
> @@ -359,7 +359,7 @@ static struct fixed31_32 translate_from_linear_space(
> scratch_1 = dc_fixpt_add(one, args->a3);
> /* In the first region (first 16 points) and in the
>  * region delimited by START/END we calculate with
> -* full precision to avoid error accumulation.
> +* full precision to avoid error accumulation.
>  */
> if ((cal_buffer->buffer_index >= PRECISE_LUT_REGION_START &&
> cal_buffer->buffer_index <= PRECISE_LUT_REGION_END) ||
> @@ -379,8 +379,7 @@ static struct fixed31_32 translate_from_linear_space(
> scratch_1 = dc_fixpt_sub(scratch_1, args->a2);
>
> return scratch_1;
> -   }
> -   else
> +   } else
> return dc_fixpt_mul(args->arg, args->a1);
>  }
>
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in amdgpu_pm.c

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:31 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
> ERROR: space required before the open parenthesis '('
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/amdgpu_pm.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c 
> b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> index 3922dd274f30..acaab3441030 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> @@ -743,7 +743,7 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device 
> *dev,
> type = PP_OD_EDIT_CCLK_VDDC_TABLE;
> else if (*buf == 'm')
> type = PP_OD_EDIT_MCLK_VDDC_TABLE;
> -   else if(*buf == 'r')
> +   else if (*buf == 'r')
> type = PP_OD_RESTORE_DEFAULT_TABLE;
> else if (*buf == 'c')
> type = PP_OD_COMMIT_DPM_TABLE;
> @@ -3532,7 +3532,8 @@ void amdgpu_pm_sysfs_fini(struct amdgpu_device *adev)
>  #if defined(CONFIG_DEBUG_FS)
>
>  static void amdgpu_debugfs_prints_cpu_info(struct seq_file *m,
> -  struct amdgpu_device *adev) {
> +  struct amdgpu_device *adev)
> +{
> uint16_t *p_val;
> uint32_t size;
> int i;
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in sislands_smc.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:25 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  .../gpu/drm/amd/pm/legacy-dpm/sislands_smc.h  | 63 +++
>  1 file changed, 21 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h 
> b/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> index c7dc117a688c..90ec411c5029 100644
> --- a/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> @@ -29,8 +29,7 @@
>
>  #define SISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE 16
>
> -struct PP_SIslands_Dpm2PerfLevel
> -{
> +struct PP_SIslands_Dpm2PerfLevel {
>  uint8_t MaxPS;
>  uint8_t TgtAct;
>  uint8_t MaxPS_StepInc;
> @@ -47,8 +46,7 @@ struct PP_SIslands_Dpm2PerfLevel
>
>  typedef struct PP_SIslands_Dpm2PerfLevel PP_SIslands_Dpm2PerfLevel;
>
> -struct PP_SIslands_DPM2Status
> -{
> +struct PP_SIslands_DPM2Status {
>  uint32_tdpm2Flags;
>  uint8_t CurrPSkip;
>  uint8_t CurrPSkipPowerShift;
> @@ -68,8 +66,7 @@ struct PP_SIslands_DPM2Status
>
>  typedef struct PP_SIslands_DPM2Status PP_SIslands_DPM2Status;
>
> -struct PP_SIslands_DPM2Parameters
> -{
> +struct PP_SIslands_DPM2Parameters {
>  uint32_tTDPLimit;
>  uint32_tNearTDPLimit;
>  uint32_tSafePowerLimit;
> @@ -78,8 +75,7 @@ struct PP_SIslands_DPM2Parameters
>  };
>  typedef struct PP_SIslands_DPM2Parameters PP_SIslands_DPM2Parameters;
>
> -struct PP_SIslands_PAPMStatus
> -{
> +struct PP_SIslands_PAPMStatus {
>  uint32_tEstimatedDGPU_T;
>  uint32_tEstimatedDGPU_P;
>  uint32_tEstimatedAPU_T;
> @@ -89,8 +85,7 @@ struct PP_SIslands_PAPMStatus
>  };
>  typedef struct PP_SIslands_PAPMStatus PP_SIslands_PAPMStatus;
>
> -struct PP_SIslands_PAPMParameters
> -{
> +struct PP_SIslands_PAPMParameters {
>  uint32_tNearTDPLimitTherm;
>  uint32_tNearTDPLimitPAPM;
>  uint32_tPlatformPowerLimit;
> @@ -100,8 +95,7 @@ struct PP_SIslands_PAPMParameters
>  };
>  typedef struct PP_SIslands_PAPMParameters PP_SIslands_PAPMParameters;
>
> -struct SISLANDS_SMC_SCLK_VALUE
> -{
> +struct SISLANDS_SMC_SCLK_VALUE {
>  uint32_tvCG_SPLL_FUNC_CNTL;
>  uint32_tvCG_SPLL_FUNC_CNTL_2;
>  uint32_tvCG_SPLL_FUNC_CNTL_3;
> @@ -113,8 +107,7 @@ struct SISLANDS_SMC_SCLK_VALUE
>
>  typedef struct SISLANDS_SMC_SCLK_VALUE SISLANDS_SMC_SCLK_VALUE;
>
> -struct SISLANDS_SMC_MCLK_VALUE
> -{
> +struct SISLANDS_SMC_MCLK_VALUE {
>  uint32_tvMPLL_FUNC_CNTL;
>  uint32_tvMPLL_FUNC_CNTL_1;
>  uint32_tvMPLL_FUNC_CNTL_2;
> @@ -129,8 +122,7 @@ struct SISLANDS_SMC_MCLK_VALUE
>
>  typedef struct SISLANDS_SMC_MCLK_VALUE SISLANDS_SMC_MCLK_VALUE;
>
> -struct SISLANDS_SMC_VOLTAGE_VALUE
> -{
> +struct SISLANDS_SMC_VOLTAGE_VALUE {
>  uint16_tvalue;
>  uint8_t index;
>  uint8_t phase_settings;
> @@ -138,8 +130,7 @@ struct SISLANDS_SMC_VOLTAGE_VALUE
>
>  typedef struct SISLANDS_SMC_VOLTAGE_VALUE SISLANDS_SMC_VOLTAGE_VALUE;
>
> -struct SISLANDS_SMC_HW_PERFORMANCE_LEVEL
> -{
> +struct SISLANDS_SMC_HW_PERFORMANCE_LEVEL {
>  uint8_t ACIndex;
>  uint8_t displayWatermark;
>  uint8_t gen2PCIE;
> @@ -180,8 +171,7 @@ struct SISLANDS_SMC_HW_PERFORMANCE_LEVEL
>
>  typedef struct SISLANDS_SMC_HW_PERFORMANCE_LEVEL 
> SISLANDS_SMC_HW_PERFORMANCE_LEVEL;
>
> -struct SISLANDS_SMC_SWSTATE
> -{
> +struct SISLANDS_SMC_SWSTATE {
> uint8_t flags;
> uint8_t levelCount;
> uint8_t padding2;
> @@ -205,8 +195,7 @@ struct SISLANDS_SMC_SWSTATE_SINGLE {
>  #define SISLANDS_SMC_VOLTAGEMASK_VDDC_PHASE_SHEDDING 3
>  #define SISLANDS_SMC_VOLTAGEMASK_MAX   4
>
> -struct SISLANDS_SMC_VOLTAGEMASKTABLE
> -{
> +struct SISLANDS_SMC_VOLTAGEMASKTABLE {
>  uint32_t lowMask[SISLANDS_SMC_VOLTAGEMASK_MAX];
>  };
>
> @@ -214,8 +203,7 @@ typedef struct SISLANDS_SMC_VOLTAGEMASKTABLE 
> SISLANDS_SMC_VOLTAGEMASKTABLE;
>
>  #define SISLANDS_MAX_NO_VREG_STEPS 32
>
> -struct SISLANDS_SMC_STATETABLE
> -{
> +struct SISLANDS_SMC_STATETABLE {
> uint8_t thermalProtectType;
> uint8_t systemFlags;
> uint8_t maxVDDCIndexInPPTable;
> @@ -254,8 +242,7 @@ typedef struct SISLANDS_SMC_STATETABLE 
> SISLANDS_SMC_STATETABLE;
>  #define SI_SMC_SOFT_REGISTER_svi_rework_gpio_id_svd   0x11c
>  #define SI_SMC_SOFT_REGISTER_svi_rework_gpio_id_svc   0x120
>
> -struct PP_SIslands_FanTable
> -{
> +struct PP_SIslands_FanTable {
> uint8_t  fdo_mode;
> uint8_t  padding;
> int16_t  temp_min;
> @@ -285,8 +272,7 @@ typedef struct PP_SIslands_FanTable PP_SIslands_FanTable;
>  #define SMC_SISLAN

Re: [PATCH] drm/amd/pm: Clean up errors in r600_dpm.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:15 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h 
> b/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> index 055321f61ca7..3e7caa715533 100644
> --- a/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> @@ -117,8 +117,7 @@ enum r600_display_watermark {
> R600_DISPLAY_WATERMARK_HIGH = 1,
>  };
>
> -enum r600_display_gap
> -{
> +enum r600_display_gap {
>  R600_PM_DISPLAY_GAP_VBLANK_OR_WM = 0,
>  R600_PM_DISPLAY_GAP_VBLANK   = 1,
>  R600_PM_DISPLAY_GAP_WATERMARK= 2,
> --
> 2.17.1
>


Re: [PATCH] drivers/amd/pm: Clean up errors in smu8_smumgr.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 10:10 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: that open brace { should be on the previous line
> ERROR: space prohibited before that ',' (ctx:WxW)
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c | 48 --
>  1 file changed, 17 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c 
> b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> index 36c831b280ed..5d28c951a319 100644
> --- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> @@ -191,8 +191,7 @@ static void sumo_construct_vid_mapping_table(struct 
> amdgpu_device *adev,
>  }
>
>  #if 0
> -static const struct kv_lcac_config_values sx_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values sx_local_cac_cfg_kv[] = {
> {  0,   4,1},
> {  1,   4,1},
> {  2,   5,1},
> @@ -204,32 +203,27 @@ static const struct kv_lcac_config_values 
> sx_local_cac_cfg_kv[] =
> { 0x }
>  };
>
> -static const struct kv_lcac_config_values mc0_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values mc0_local_cac_cfg_kv[] = {
> {  0,   4,1},
> { 0x }
>  };
>
> -static const struct kv_lcac_config_values mc1_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values mc1_local_cac_cfg_kv[] = {
> {  0,   4,1},
> { 0x }
>  };
>
> -static const struct kv_lcac_config_values mc2_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values mc2_local_cac_cfg_kv[] = {
> {  0,   4,1},
> { 0x }
>  };
>
> -static const struct kv_lcac_config_values mc3_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values mc3_local_cac_cfg_kv[] = {
> {  0,   4,1},
> { 0x }
>  };
>
> -static const struct kv_lcac_config_values cpl_local_cac_cfg_kv[] =
> -{
> +static const struct kv_lcac_config_values cpl_local_cac_cfg_kv[] = {
> {  0,   4,1},
> {  1,   4,1},
> {  2,   5,1},
> @@ -260,39 +254,32 @@ static const struct kv_lcac_config_values 
> cpl_local_cac_cfg_kv[] =
> { 0x }
>  };
>
> -static const struct kv_lcac_config_reg sx0_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg sx0_cac_config_reg[] = {
> { 0xc0400d00, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>
> -static const struct kv_lcac_config_reg mc0_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg mc0_cac_config_reg[] = {
> { 0xc0400d30, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>
> -static const struct kv_lcac_config_reg mc1_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg mc1_cac_config_reg[] = {
> { 0xc0400d3c, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>
> -static const struct kv_lcac_config_reg mc2_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg mc2_cac_config_reg[] = {
> { 0xc0400d48, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>
> -static const struct kv_lcac_config_reg mc3_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg mc3_cac_config_reg[] = {
> { 0xc0400d54, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>
> -static const struct kv_lcac_config_reg cpl_cac_config_reg[] =
> -{
> +static const struct kv_lcac_config_reg cpl_cac_config_reg[] = {
> { 0xc0400d80, 0x003e, 17, 0x3fc0, 22, 0x0001fffe, 1, 
> 0x0001, 0 }
>  };
>  #endif
>
> -static const struct kv_pt_config_reg didt_config_kv[] =
> -{
> +static const struct kv_pt_config_reg didt_config_kv[] = {
> { 0x10, 0x00ff, 0, 0x0, KV_CONFIGREG_DIDT_IND },
> { 0x10, 0xff00, 8, 0x0, KV_CONFIGREG_DIDT_IND },
> { 0x10, 0x00ff, 16, 0x0, KV_CONFIGREG_DIDT_IND },
> @@ -1173,9 +1160,9 @@ static void kv_calculate_dfs_bypass_settings(struct 
> amdgpu_device *adev)
> pi->graphics_level[i].ClkBypassCntl = 
> 2;
> else if 
> (kv_get_clock_difference(table->entries[i].clk, 26600) < 200)
> pi->graphics_level[i].ClkBypassCntl = 
> 7;
> -   else if 
> (kv_get_clock_difference(table->entries[i].clk , 2) < 200)
> +   else if 
> (kv_get_clock_difference(table->entries[i].clk, 2) < 200)
> pi->graphics_level[i].ClkBypassCntl = 
> 6;
> -   else if 
> (kv_get_clock_difference(table->entries[i].clk , 1) < 200)
> +   else if 
> (kv_get

Re: [PATCH] drm/amd/pm: Clean up errors in smu8_smumgr.h

2023-08-07 Thread Alex Deucher
Need to be careful with these changes to make sure we aren't changing
the size calculations somewhere.

Alex

On Tue, Aug 1, 2023 at 10:03 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: Use C99 flexible arrays
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/smumgr/smu8_smumgr.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/smu8_smumgr.h 
> b/drivers/gpu/drm/amd/pm/powerplay/smumgr/smu8_smumgr.h
> index c7b61222d258..475ffcf743d2 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/smu8_smumgr.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/smu8_smumgr.h
> @@ -73,7 +73,7 @@ struct smu8_register_index_data_pair {
>
>  struct smu8_ih_meta_data {
> uint32_t command;
> -   struct smu8_register_index_data_pair register_index_value_pair[1];
> +   struct smu8_register_index_data_pair register_index_value_pair[0];
>  };
>
>  struct smu8_smumgr {
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in smu75.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 9:59 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: space prohibited before open square bracket '['
> ERROR: "foo * bar" should be "foo *bar"
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h 
> b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> index 771523001533..7d5ed7751976 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> @@ -224,8 +224,8 @@ struct SMU7_LocalDpmScoreboard {
> uint8_t  DteClampMode;
> uint8_t  FpsClampMode;
>
> -   uint16_t LevelResidencyCounters [SMU75_MAX_LEVELS_GRAPHICS];
> -   uint16_t LevelSwitchCounters [SMU75_MAX_LEVELS_GRAPHICS];
> +   uint16_t LevelResidencyCounters[SMU75_MAX_LEVELS_GRAPHICS];
> +   uint16_t LevelSwitchCounters[SMU75_MAX_LEVELS_GRAPHICS];
>
> void (*TargetStateCalculator)(uint8_t);
> void (*SavedTargetStateCalculator)(uint8_t);
> @@ -316,7 +316,7 @@ struct SMU7_VoltageScoreboard {
>
> VoltageChangeHandler_t functionLinks[6];
>
> -   uint16_t * VddcFollower1;
> +   uint16_t *VddcFollower1;
> int16_t  Driver_OD_RequestedVidOffset1;
> int16_t  Driver_OD_RequestedVidOffset2;
>  };
> @@ -677,9 +677,9 @@ typedef struct SCS_CELL_t SCS_CELL_t;
>
>  struct VFT_TABLE_t {
> VFT_CELL_tCell[TEMP_RANGE_MAXSTEPS][NUM_VFT_COLUMNS];
> -   uint16_t  AvfsGbv [NUM_VFT_COLUMNS];
> -   uint16_t  BtcGbv  [NUM_VFT_COLUMNS];
> -   int16_t   Temperature [TEMP_RANGE_MAXSTEPS];
> +   uint16_t  AvfsGbv[NUM_VFT_COLUMNS];
> +   uint16_t  BtcGbv[NUM_VFT_COLUMNS];
> +   int16_t   Temperature[TEMP_RANGE_MAXSTEPS];
>
>  #ifdef SMU__FIRMWARE_SCKS_PRESENT__1
> SCS_CELL_tScksCell[TEMP_RANGE_MAXSTEPS][NUM_VFT_COLUMNS];
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in smu73.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 9:56 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
> ERROR: space prohibited before open square bracket '['
> ERROR: "foo * bar" should be "foo *bar"
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h | 45 
>  1 file changed, 17 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h 
> b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> index c6b12a4c00db..cf4b2c3c65bc 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> @@ -37,8 +37,7 @@ enum Poly3rdOrderCoeff {
>  POLY_3RD_ORDER_COUNT
>  };
>
> -struct SMU7_Poly3rdOrder_Data
> -{
> +struct SMU7_Poly3rdOrder_Data {
>  int32_t a;
>  int32_t b;
>  int32_t c;
> @@ -51,8 +50,7 @@ struct SMU7_Poly3rdOrder_Data
>
>  typedef struct SMU7_Poly3rdOrder_Data SMU7_Poly3rdOrder_Data;
>
> -struct Power_Calculator_Data
> -{
> +struct Power_Calculator_Data {
>uint16_t NoLoadVoltage;
>uint16_t LoadVoltage;
>uint16_t Resistance;
> @@ -71,8 +69,7 @@ struct Power_Calculator_Data
>
>  typedef struct Power_Calculator_Data PowerCalculatorData_t;
>
> -struct Gc_Cac_Weight_Data
> -{
> +struct Gc_Cac_Weight_Data {
>uint8_t index;
>uint32_t value;
>  };
> @@ -187,8 +184,7 @@ typedef struct {
>  #define SMU73_THERMAL_CLAMP_MODE_COUNT 8
>
>
> -struct SMU7_HystController_Data
> -{
> +struct SMU7_HystController_Data {
>  uint16_t waterfall_up;
>  uint16_t waterfall_down;
>  uint16_t waterfall_limit;
> @@ -199,8 +195,7 @@ struct SMU7_HystController_Data
>
>  typedef struct SMU7_HystController_Data SMU7_HystController_Data;
>
> -struct SMU73_PIDController
> -{
> +struct SMU73_PIDController {
>  uint32_t Ki;
>  int32_t LFWindupUpperLim;
>  int32_t LFWindupLowerLim;
> @@ -215,8 +210,7 @@ struct SMU73_PIDController
>
>  typedef struct SMU73_PIDController SMU73_PIDController;
>
> -struct SMU7_LocalDpmScoreboard
> -{
> +struct SMU7_LocalDpmScoreboard {
>  uint32_t PercentageBusy;
>
>  int32_t  PIDError;
> @@ -261,8 +255,8 @@ struct SMU7_LocalDpmScoreboard
>  uint8_t  DteClampMode;
>  uint8_t  FpsClampMode;
>
> -uint16_t LevelResidencyCounters [SMU73_MAX_LEVELS_GRAPHICS];
> -uint16_t LevelSwitchCounters [SMU73_MAX_LEVELS_GRAPHICS];
> +uint16_t LevelResidencyCounters[SMU73_MAX_LEVELS_GRAPHICS];
> +uint16_t LevelSwitchCounters[SMU73_MAX_LEVELS_GRAPHICS];
>
>  void (*TargetStateCalculator)(uint8_t);
>  void (*SavedTargetStateCalculator)(uint8_t);
> @@ -315,8 +309,7 @@ typedef uint8_t (*VoltageChangeHandler_t)(uint16_t, 
> uint8_t);
>
>  typedef uint32_t SMU_VoltageLevel;
>
> -struct SMU7_VoltageScoreboard
> -{
> +struct SMU7_VoltageScoreboard {
>  SMU_VoltageLevel TargetVoltage;
>  uint16_t MaxVid;
>  uint8_t  HighestVidOffset;
> @@ -354,7 +347,7 @@ struct SMU7_VoltageScoreboard
>
>  VoltageChangeHandler_t functionLinks[6];
>
> -uint16_t * VddcFollower1;
> +uint16_t *VddcFollower1;
>
>  int16_t  Driver_OD_RequestedVidOffset1;
>  int16_t  Driver_OD_RequestedVidOffset2;
> @@ -366,8 +359,7 @@ typedef struct SMU7_VoltageScoreboard 
> SMU7_VoltageScoreboard;
>  // 
> -
>  #define SMU7_MAX_PCIE_LINK_SPEEDS 3 /* 0:Gen1 1:Gen2 2:Gen3 */
>
> -struct SMU7_PCIeLinkSpeedScoreboard
> -{
> +struct SMU7_PCIeLinkSpeedScoreboard {
>  uint8_t DpmEnable;
>  uint8_t DpmRunning;
>  uint8_t DpmForce;
> @@ -396,8 +388,7 @@ typedef struct SMU7_PCIeLinkSpeedScoreboard 
> SMU7_PCIeLinkSpeedScoreboard;
>  #define SMU7_SCALE_I  7
>  #define SMU7_SCALE_R 12
>
> -struct SMU7_PowerScoreboard
> -{
> +struct SMU7_PowerScoreboard {
>  uint32_t GpuPower;
>
>  uint32_t VddcPower;
> @@ -436,8 +427,7 @@ typedef struct SMU7_PowerScoreboard SMU7_PowerScoreboard;
>  #define SMU7_VCE_SCLK_HANDSHAKE_DISABLE  0x0002
>
>  // All 'soft registers' should be uint32_t.
> -struct SMU73_SoftRegisters
> -{
> +struct SMU73_SoftRegisters {
>  uint32_tRefClockFrequency;
>  uint32_tPmTimerPeriod;
>  uint32_tFeatureEnables;
> @@ -493,8 +483,7 @@ struct SMU73_SoftRegisters
>
>  typedef struct SMU73_SoftRegisters SMU73_SoftRegisters;
>
> -struct SMU73_Firmware_Header
> -{
> +struct SMU73_Firmware_Header {
>  uint32_t Digest[5];
>  uint32_t Version;
>  uint32_t HeaderSize;
> @@ -708,9 +697,9 @@ typedef struct VFT_CELL_t VFT_CELL_t;
>
>  struct VFT_TABLE_t {
>VFT_CELL_tCell[TEMP_RANGE_MAXSTEPS][NUM_VFT_COLUMNS];
> -  uint16_t  AvfsGbv [NUM_VFT_COLUMNS];
> -  uint16_t  BtcGbv  [NUM_VFT_COLUMNS];
> -  uint16_t  Temperature [TEMP_RANGE_MAXSTEPS];
> +  uint16_t  AvfsGbv[NUM_VFT_COLUMNS];
> +  uint16_t  Btc

Re: [PATCH RFC v5 09/10] drm/msm/dpu: Use DRM solid_fill property

2023-08-07 Thread Jessica Zhang




On 7/31/2023 5:52 PM, Dmitry Baryshkov wrote:

On 01/08/2023 03:39, Jessica Zhang wrote:



On 7/30/2023 9:15 PM, Dmitry Baryshkov wrote:

On 28/07/2023 20:02, Jessica Zhang wrote:

Drop DPU_PLANE_COLOR_FILL_FLAG and check the DRM solid_fill property to
determine if the plane is solid fill. In addition drop the DPU plane
color_fill field as we can now use drm_plane_state.solid_fill instead,
and pass in drm_plane_state.alpha to _dpu_plane_color_fill_pipe() to
allow userspace to configure the alpha value for the solid fill color.

Reviewed-by: Dmitry Baryshkov 
Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 24 
++--

  1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c

index 114c803ff99b..95fc0394d13e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -42,7 +42,6 @@
  #define SHARP_SMOOTH_THR_DEFAULT    8
  #define SHARP_NOISE_THR_DEFAULT    2
-#define DPU_PLANE_COLOR_FILL_FLAG    BIT(31)
  #define DPU_ZPOS_MAX 255
  /*
@@ -82,7 +81,6 @@ struct dpu_plane {
  enum dpu_sspp pipe;
-    uint32_t color_fill;
  bool is_error;
  bool is_rt_pipe;
  const struct dpu_mdss_cfg *catalog;
@@ -606,6 +604,20 @@ static void _dpu_plane_color_fill_pipe(struct 
dpu_plane_state *pstate,
  _dpu_plane_setup_scaler(pipe, fmt, true, &pipe_cfg, 
pstate->rotation);

  }
+static uint32_t _dpu_plane_get_bgr_fill_color(struct drm_solid_fill 
solid_fill)


As I commented for v4 (please excuse me for not responding to your 
email at that time), we can return abgr here, taking 
plane->state->alpha into account.


Hi Dmitry,

Since it seems that this comment wasn't resolved, I can drop your R-B 
tag in the next revision.


It's a minor issue, so no need to drop the tag.



 From my previous response, I pointed out that the color parameter 
expects an RGB value [1].


So is the intention here to refactor _dpu_plane_color_fill() to accept 
an ABGR color?


That's what I'm suggesting.


Hi Dmitry,

Got it, sounds good to me.

Thanks,

Jessica Zhang





Thanks,

Jessica Zhang

[1] 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c#L676





+{
+    uint32_t ret = 0;
+    uint8_t b = solid_fill.b >> 24;
+    uint8_t g = solid_fill.g >> 24;
+    uint8_t r = solid_fill.r >> 24;
+
+    ret |= b << 16;
+    ret |= g << 8;
+    ret |= r;
+
+    return ret;
+}
+
  /**
   * _dpu_plane_color_fill - enables color fill on plane
   * @pdpu:   Pointer to DPU plane object
@@ -977,9 +989,9 @@ void dpu_plane_flush(struct drm_plane *plane)
  if (pdpu->is_error)
  /* force white frame with 100% alpha pipe output on error */
  _dpu_plane_color_fill(pdpu, 0xFF, 0xFF);
-    else if (pdpu->color_fill & DPU_PLANE_COLOR_FILL_FLAG)
-    /* force 100% alpha */
-    _dpu_plane_color_fill(pdpu, pdpu->color_fill, 0xFF);
+    else if (drm_plane_solid_fill_enabled(plane->state))
+    _dpu_plane_color_fill(pdpu, 
_dpu_plane_get_bgr_fill_color(plane->state->solid_fill),

+    plane->state->alpha);
  else {
  dpu_plane_flush_csc(pdpu, &pstate->pipe);
  dpu_plane_flush_csc(pdpu, &pstate->r_pipe);
@@ -1024,7 +1036,7 @@ static void dpu_plane_sspp_update_pipe(struct 
drm_plane *plane,

  }
  /* override for color fill */
-    if (pdpu->color_fill & DPU_PLANE_COLOR_FILL_FLAG) {
+    if (drm_plane_solid_fill_enabled(plane->state)) {
  _dpu_plane_set_qos_ctrl(plane, pipe, false);
  /* skip remaining processing on color fill */



--
With best wishes
Dmitry



--
With best wishes
Dmitry



Re: [PATCH] drm/amd/pm: Clean up errors in hwmgr.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 9:48 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
> ERROR: Use C99 flexible arrays
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h | 8 +++-
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h 
> b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> index 612d66aeaab9..81650727a5de 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> @@ -190,8 +190,7 @@ struct phm_vce_clock_voltage_dependency_table {
>  };
>
>
> -enum SMU_ASIC_RESET_MODE
> -{
> +enum SMU_ASIC_RESET_MODE {
>  SMU_ASIC_RESET_MODE_0,
>  SMU_ASIC_RESET_MODE_1,
>  SMU_ASIC_RESET_MODE_2,
> @@ -516,7 +515,7 @@ struct phm_vq_budgeting_record {
>
>  struct phm_vq_budgeting_table {
> uint8_t numEntries;
> -   struct phm_vq_budgeting_record entries[1];
> +   struct phm_vq_budgeting_record entries[0];
>  };
>
>  struct phm_clock_and_voltage_limits {
> @@ -607,8 +606,7 @@ struct phm_ppt_v2_information {
> uint8_t  uc_dcef_dpm_voltage_mode;
>  };
>
> -struct phm_ppt_v3_information
> -{
> +struct phm_ppt_v3_information {
> uint8_t uc_thermal_controller_type;
>
> uint16_t us_small_power_limit1;
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in hardwaremanager.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 9:39 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h 
> b/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> index 01a7d66864f2..f4f9a104d170 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> @@ -44,8 +44,7 @@ struct phm_fan_speed_info {
>  };
>
>  /* Automatic Power State Throttling */
> -enum PHM_AutoThrottleSource
> -{
> +enum PHM_AutoThrottleSource {
>  PHM_AutoThrottleSource_Thermal,
>  PHM_AutoThrottleSource_External
>  };
> --
> 2.17.1
>


Re: [PATCH] drm/amd/pm: Clean up errors in pp_thermal.h

2023-08-07 Thread Alex Deucher
Applied.  Thanks!

On Tue, Aug 1, 2023 at 9:38 PM Ran Sun  wrote:
>
> Fix the following errors reported by checkpatch:
>
> ERROR: open brace '{' following struct go on the same line
>
> Signed-off-by: Ran Sun 
> ---
>  drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h 
> b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> index f7c41185097e..2003acc70ca0 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> @@ -25,14 +25,12 @@
>
>  #include "power_state.h"
>
> -static const struct PP_TemperatureRange __maybe_unused 
> SMU7ThermalWithDelayPolicy[] =
> -{
> +static const struct PP_TemperatureRange __maybe_unused 
> SMU7ThermalWithDelayPolicy[] = {
> {-273150,  99000, 99000, -273150, 99000, 99000, -273150, 99000, 
> 99000},
> { 12, 12, 12, 12, 12, 12, 12, 12, 
> 12},
>  };
>
> -static const struct PP_TemperatureRange __maybe_unused SMU7ThermalPolicy[] =
> -{
> +static const struct PP_TemperatureRange __maybe_unused SMU7ThermalPolicy[] = 
> {
> {-273150,  99000, 99000, -273150, 99000, 99000, -273150, 99000, 
> 99000},
> { 12, 12, 12, 12, 12, 12, 12, 12, 
> 12},
>  };
> --
> 2.17.1
>

