Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-10 Thread Abhinav Kumar




On 6/7/2024 7:45 PM, Abhinav Kumar wrote:



On 6/7/2024 5:57 PM, Dmitry Baryshkov wrote:
On Sat, 8 Jun 2024 at 02:55, Abhinav Kumar wrote:




On 6/7/2024 3:26 PM, Dmitry Baryshkov wrote:
On Sat, 8 Jun 2024 at 00:39, Abhinav Kumar wrote:




On 6/7/2024 2:10 PM, Dmitry Baryshkov wrote:

On Fri, Jun 07, 2024 at 12:22:16PM -0700, Abhinav Kumar wrote:



On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:
Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features. Properly utilizing
all planes requires the attention of the compositor, who should prefer
simpler planes to YUV-supporting ones. Otherwise it is very easy to end
up in a situation when all featureful planes are already allocated for
simple windows, leaving no spare plane for YUV playback.


To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at runtime during the
atomic_check phase the driver selects a backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses the old code path with a preallocated SSPP block for
each plane. To enable virtual planes, pass the
'msm.dpu_use_virtual_planes=1' kernel parameter.
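As a toy illustration of the allocation idea described above (not the driver's actual algorithm; all names here are made up), a check-time allocator can back each requested plane with the least capable pipe that still satisfies it, keeping featureful pipes free for YUV/scaling users:

```python
# Toy model of "prefer simpler planes to YUV-supporting ones".
# requests: list of sets of required features, e.g. {"yuv"}.
# sspps: dict of pipe name -> set of supported features.
def assign_sspps(requests, sspps):
    # Try least-capable pipes first so featureful pipes stay available.
    free = sorted(sspps, key=lambda name: len(sspps[name]))
    result = {}
    for idx, needed in enumerate(requests):
        for name in free:
            if needed <= sspps[name]:
                result[idx] = name
                free.remove(name)
                break
        else:
            return None  # atomic_check would fail here
    return result

sspps = {"dma0": set(), "dma1": set(), "vig0": {"yuv", "scale"}}
# Two simple windows plus one YUV video plane all fit:
print(assign_sspps([set(), set(), {"yuv"}], sspps))
```

With a fixed plane-to-pipe mapping, the same three requests could fail if a simple window had grabbed the only YUV-capable pipe first.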



While posting the next revision, can you please leave a note in the commit 
text on the reason behind picking crtc_id for SSPP allocation and not 
encoder_id?

I recall you mentioned that two rects of the SmartDMA cannot go to two 
LMs. This is true. But the crtc mapping need not be 1:1 with the LM 
mapping; it depends on the topology. I have forgotten the full explanation 
for this aspect of it, hence it will be better to note it in the commit text.





I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c    |  77 
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h    |  28 +++
  7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state *cstate)

	return false;
  }
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state)

+{
+	int total_planes = crtc->dev->mode_config.num_total_plane;
+	struct drm_atomic_state *state = crtc_state->state;
+	struct dpu_global_state *global_state;
+	struct drm_plane_state **states;
+	struct drm_plane *plane;
+	int ret;
+
+	global_state = dpu_kms_get_global_state(crtc_state->state);
+	if (IS_ERR(global_state))
+		return PTR_ERR(global_state);
+
+	dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in
dpu_plane_atomic_update() as well, because in the atomic_check() path
where it's called today, it's invoked only for zpos_changed and
planes_changed, but during disable we must call it for sure.


No, dpu_rm_release_all_sspp() should only be called during check. When
dpu_plane_atomic_update() is called, the state should already be
finalised. The atomic_check() callback is called when a plane is going
to be disabled.



atomic_check() will be called when the plane is disabled, but
dpu_rm_release_all_sspp() may not be called, as it is protected by
zpos_changed and planes_changed. Or you need to add a !visible check
here to call dpu_rm_release_all_sspp() in that case. That's what I wrote
previously.


Unless I miss something, if a plane gets disabled, then obviously
planes_changed is true.

[trimmed]



Do you mean DRM fwk sets planes_changed correctly for this case?

Currently we have

	if (!new_state->visible) {
		_dpu_plane_atomic_disable(plane);
	} else {
		dpu_plane_sspp_atomic_update(plane);
	}

So I wanted to ensure that when plane gets disabled, its SS

Re: [PATCH v4 09/13] drm/msm/dpu: allow using two SSPP blocks for a single plane

2024-06-10 Thread Abhinav Kumar




On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Virtual wide planes give a high amount of flexibility, but it is not
always enough:

In the parallel multirect case only half of the usual width is supported
for tiled formats. Thus the whole width of two tiled multirect
rectangles cannot be greater than max_linewidth, which is not enough
for some platforms/compositors.

Another example is as simple as a wide YUV plane. YUV planes cannot use
multirect, so currently they are limited to max_linewidth too.

Now that the planes are fully virtualized, add support for allocating
two SSPP blocks to drive a single DRM plane. This fixes both mentioned
cases and allows all planes to go up to 2*max_linewidth (at the cost of
making some of the planes unavailable to the user).

Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 172 --
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |   8 +
  2 files changed, 131 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 2961b809ccf3..cde20c1fa90d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -886,6 +886,28 @@ static int dpu_plane_atomic_check_nopipe(struct drm_plane *plane,
	return 0;
  }
  
+static int dpu_plane_is_multirect_parallel_capable(struct dpu_sw_pipe *pipe,
+						   struct dpu_sw_pipe_cfg *pipe_cfg,
+						   const struct dpu_format *fmt,
+						   uint32_t max_linewidth)
+{
+	if (drm_rect_width(&pipe_cfg->src_rect) != drm_rect_width(&pipe_cfg->dst_rect) ||
+	    drm_rect_height(&pipe_cfg->src_rect) != drm_rect_height(&pipe_cfg->dst_rect))
+		return false;
+
+	if (pipe_cfg->rotation & DRM_MODE_ROTATE_90)
+		return false;
+
+	if (DPU_FORMAT_IS_YUV(fmt))
+		return false;
+
+	if (DPU_FORMAT_IS_UBWC(fmt) &&
+	    drm_rect_width(&pipe_cfg->src_rect) > max_linewidth / 2)
+		return false;
+
+	return true;
+}
+


It is a good idea to separate out the multirect checks into a separate API. 
I think we can push this part of the change even today.
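For reference, the eligibility rules the quoted helper encodes can be mirrored in a few lines of standalone Python (a sketch of the conditions, not the kernel code; rects are modeled as plain (width, height) tuples):

```python
# Sketch of the parallel-multirect eligibility rules from the patch.
# src/dst are (width, height) tuples; the flags are plain booleans.
def multirect_parallel_capable(src, dst, rotated_90, is_yuv, is_ubwc,
                               max_linewidth):
    if src != dst:            # no scaling allowed in multirect
        return False
    if rotated_90:            # 90-degree rotation not allowed
        return False
    if is_yuv:                # YUV cannot use multirect
        return False
    # Tiled (UBWC) formats only support half the usual line width.
    if is_ubwc and src[0] > max_linewidth // 2:
        return False
    return True

print(multirect_parallel_capable((960, 540), (960, 540),
                                 False, False, True, 2560))
```

The last check is the one the commit message calls out: for tiled formats each parallel rect may only be half of max_linewidth wide.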



  static int dpu_plane_atomic_check_pipes(struct drm_plane *plane,
					struct drm_atomic_state *state,
					const struct drm_crtc_state *crtc_state)
@@ -899,7 +921,6 @@ static int dpu_plane_atomic_check_pipes(struct drm_plane *plane,
	const struct dpu_format *fmt;
	struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg;
	struct dpu_sw_pipe_cfg *r_pipe_cfg = &pstate->r_pipe_cfg;
-	uint32_t max_linewidth;
	uint32_t supported_rotations;
	const struct dpu_sspp_cfg *pipe_hw_caps;
	const struct dpu_sspp_sub_blks *sblk;
@@ -919,15 +940,8 @@ static int dpu_plane_atomic_check_pipes(struct drm_plane *plane,
				  drm_rect_height(&new_plane_state->dst)))
		return -ERANGE;

-	pipe->multirect_index = DPU_SSPP_RECT_SOLO;
-	pipe->multirect_mode = DPU_SSPP_MULTIRECT_NONE;
-	r_pipe->multirect_index = DPU_SSPP_RECT_SOLO;
-	r_pipe->multirect_mode = DPU_SSPP_MULTIRECT_NONE;
-
	fmt = to_dpu_format(msm_framebuffer_format(new_plane_state->fb));

-	max_linewidth = pdpu->catalog->caps->max_linewidth;
-
	supported_rotations = DRM_MODE_REFLECT_MASK | DRM_MODE_ROTATE_0;

	if (pipe_hw_caps->features & BIT(DPU_SSPP_INLINE_ROTATION))

@@ -943,41 +957,6 @@ static int dpu_plane_atomic_check_pipes(struct drm_plane *plane,
		return ret;

	if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) {
-		/*
-		 * In parallel multirect case only the half of the usual width
-		 * is supported for tiled formats. If we are here, we know that
-		 * full width is more than max_linewidth, thus each rect is
-		 * wider than allowed.
-		 */
-		if (DPU_FORMAT_IS_UBWC(fmt) &&
-		    drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) {
-			DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u, tiled format\n",
-					DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth);
-			return -E2BIG;
-		}
-
-		if (drm_rect_width(&pipe_cfg->src_rect) != drm_rect_width(&pipe_cfg->dst_rect) ||
-		    drm_rect_height(&pipe_cfg->src_rect) != drm_rect_height(&pipe_cfg->dst_rect) ||
-		    (!test_bit(DPU_SSPP_SMART_DMA_V1, &pipe->sspp->cap->features) &&
-		     !test_bit(DPU_SSPP_SMART_DMA_V2, &pipe->sspp->cap->features)) ||
-		    pipe_cfg->rotation & DRM_MODE_ROTATE_90 ||
-		    DPU_FORMAT_IS_YUV(fmt)) {
-			DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u, can't use split 

[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -672,9 +672,14 @@ void request_attach(const llvm::json::Object &request) {
   lldb::SBError error;
   FillResponse(request, response);
   lldb::SBAttachInfo attach_info;
+  const int invalid_port = 0;
   auto arguments = request.getObject("arguments");
   const lldb::pid_t pid =
       GetUnsigned(arguments, "pid", LLDB_INVALID_PROCESS_ID);
+  const auto gdb_remote_port =
+      GetUnsigned(arguments, "gdb-remote-port", invalid_port);
+  llvm::StringRef gdb_remote_hostname =
+      GetString(arguments, "gdb-remote-hostname", "localhost");

santhoshe447 wrote:

Using 127.0.0.1 can provide a slight advantage in terms of consistency and 
compatibility in specific scenarios.
Any suggestions from others?

https://github.com/llvm/llvm-project/pull/91570
___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with the debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+    def __init__(self, prefix):
+        self.name = os.path.join(prefix, "stub_port_number")
+        os.mkfifo(self.name)
+        self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+    def finish_connection(self, timeout):
+        pass
+
+    def read(self, size, timeout):
+        (readers, _, _) = select.select([self._fd], [], [], timeout)
+        if self._fd not in readers:
+            raise TimeoutError
+        return os.read(self._fd, size)
+
+    def close(self):
+        os.close(self._fd)

santhoshe447 wrote:

I apologize for the confusion regarding the "Pipe" class.
There is a conditional declaration of the "Pipe" class, one for Windows and 
another for non-Windows host platforms. My confusion was whether I should move 
both of them or not.
Now I have moved both "Pipe" classes into the common location 
"lldbgdbserverutils.py" and pushed the change.

https://github.com/llvm/llvm-project/pull/91570
___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


Olingo is not able to identify DateTimeOffset.

2024-06-10 Thread Gourav Kumar
Hi,

We are using olingo-4.7.0 to work with OData V4. While working with it we
are facing multiple issues with datatypes. We are deserializing JSON
content which contains the DateTimeOffset datatype, for example:
{
"streetAddress": "abs",
"displayName": "xxx",
"surname": "fff",
"givenName": "xxx",
"price": "1.02",
"id": "1232-3232-1212",
"employeeHireDate": "2024-05-26T00:00:00Z",
"accountEnabled": true
}


While creating the OData entity, Olingo treats 'employeeHireDate' as
Edm.String, which is not correct; this should be Edm.DateTimeOffset. Due to
this, the Olingo client throws the exceptions below.
DateTimeOffset issue:
Error: Error Code: Error while executing
activity operation for update. The Exception is (Request_BadRequest) A
value was encountered that has a type name that is incompatible with
the metadata. The value specified its type as 'Edm.String', but the
type specified in the metadata is 'Edm.DateTimeOffset'. [HTTP/1.1 400
Bad Request], Error Desc: Error while executing activity operation for
update. The Exception is (Request_BadRequest) A value was encountered
that has a type name that is incompatible with the metadata. The value
specified its type as 'Edm.String', but the type specified in the
metadata is 'Edm.DateTimeOffset'. [HTTP/1.1 400 Bad Request] ,Stack
Trace: org.jitterbit.connector.sdk.exceptions.ActivityExecutionException:
Error while executing activity operation for update. The Exception is
(Request_BadRequest) A value was encountered that has a type name that
is incompatible with the metadata. The value specified its type as
'Edm.String', but the type specified in the metadata is
'Edm.DateTimeOffset'. [HTTP/1.1 400 Bad Request]

The same issue occurs for the 'price' field. Caused by: java.lang.Exception: (BadRequest) A
value was encountered that has a type name that is incompatible with the
metadata. The value specified its type as 'Edm.Double', but the type
specified in the metadata is 'Edm.Decimal'. CorrelationId:
3057a9d4-e8a8-4c6d-b3e7-6b740a9a404e. [HTTP/1.1 400 Bad Request]


This is how we are deserializing the JSON content:
client.getBinder().getODataEntity(client.getDeserializer(ContentType.APPLICATION_JSON).toEntity(stream));


Is there any way to avoid this case, or any workaround we can follow to
resolve this issue? Our requests are dynamic in nature, so we can't
iterate over the JSON and set every single field to the appropriate
property.

Looking forward to your quick response.
Thanks and Regards,
Gourav Kumar



[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits

https://github.com/santhoshe447 updated 
https://github.com/llvm/llvm-project/pull/91570

>From 960351c9abf51f42d92604ac6297aa5b76ddfba5 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 17 Nov 2023 15:09:10 +0530
Subject: [PATCH 01/17] [lldb][test] Add the ability to extract the variable
 value out of the summary.

---
 .../Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py   | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
index 9d79872b029a3..0cf9d4fde4948 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
+++ b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
@@ -195,6 +195,9 @@ def collect_console(self, duration):

     def get_local_as_int(self, name, threadId=None):
         value = self.dap_server.get_local_variable_value(name, threadId=threadId)
+        # 'value' may have the variable value and summary.
+        # Extract the variable value since summary can have nonnumeric characters.
+        value = value.split(" ")[0]
         if value.startswith("0x"):
             return int(value, 16)
         elif value.startswith("0"):
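The extraction rule in this hunk can be exercised on its own. A standalone sketch mimicking it (hypothetical helper name, not the lldbsuite code, assuming the usual hex/octal/decimal prefix handling):

```python
# Mimics the patched get_local_as_int(): the string returned by the DAP
# server may carry both the variable value and a summary; keep only the
# first token, then parse it by prefix.
def local_as_int(value):
    value = value.split(" ")[0]
    if value.startswith("0x"):
        return int(value, 16)   # hex, e.g. "0x10"
    if value.startswith("0"):
        return int(value, 8)    # octal-style, e.g. "010"
    return int(value)           # plain decimal

print(local_as_int("0x10 (some summary)"))  # prints 16
```

Splitting first matters because a summary such as "0x10 (NULL-terminated)" would otherwise break int() parsing.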

>From ab44a6991c5bc8ac5764c3f71cbe3acc747b3776 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 3 May 2024 02:47:05 -0700
Subject: [PATCH 02/17] [lldb-dap] Added "port" property to vscode "attach"
 command.

Adding a "port" property to the VS Code "attach" command extends the
debugger configuration to allow attaching to a process using a PID or a
port number.
Currently, the "attach" configuration lets the user specify a pid. We tell the 
user to use the attachCommands property to run "gdb-remote ".
The following rules apply to the "attach" command with "port" and "pid":
we should add a "port" property; if port is specified and pid is not, use that 
port to attach; if both port and pid are specified, return an error saying that 
the user can't specify both pid and port.

Ex - launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "lldb-dap Debug",
            "type": "lldb-dap",
            "request": "attach",
            "port": 1234,
            "program": "${workspaceFolder}/a.out",
            "args": [],
            "stopOnEntry": false,
            "cwd": "${workspaceFolder}",
            "env": []
        }
    ]
}
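The pid/port rules stated in the commit message can be sketched as follows (a hypothetical standalone helper, not the actual lldb-dap code; the constants stand in for the real ones):

```python
LLDB_INVALID_PROCESS_ID = -1  # stand-in for the real lldb constant
INVALID_PORT = 0              # matches the patch's invalid_port default

def validate_attach_args(pid=LLDB_INVALID_PROCESS_ID, port=INVALID_PORT):
    # The user may specify a pid or a gdb-remote port, but not both.
    if pid != LLDB_INVALID_PROCESS_ID and port != INVALID_PORT:
        return "error: both pid and port were specified"
    if port != INVALID_PORT:
        return f"attach via gdb-remote port {port}"
    return f"attach to pid {pid}"

print(validate_attach_args(port=1234))  # attach via gdb-remote port 1234
```

This is the same mutual-exclusion check the C++ request_attach() hunk performs after reading "pid" and "gdb-remote-port" from the request arguments.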
---
 lldb/include/lldb/lldb-defines.h  |   1 +
 .../Python/lldbsuite/test/lldbtest.py |   9 ++
 .../test/tools/lldb-dap/dap_server.py |   6 +
 .../test/tools/lldb-dap/lldbdap_testcase.py   |  20 +++
 .../attach/TestDAP_attachByPortNum.py | 120 ++
 lldb/tools/lldb-dap/lldb-dap.cpp  |  36 +-
 lldb/tools/lldb-dap/package.json  |  11 ++
 7 files changed, 199 insertions(+), 4 deletions(-)
 create mode 100644 
lldb/test/API/tools/lldb-dap/attach/TestDAP_attachByPortNum.py

diff --git a/lldb/include/lldb/lldb-defines.h b/lldb/include/lldb/lldb-defines.h
index c7bd019c5c90e..a1e6ee2ce468c 100644
--- a/lldb/include/lldb/lldb-defines.h
+++ b/lldb/include/lldb/lldb-defines.h
@@ -96,6 +96,7 @@
 #define LLDB_INVALID_QUEUE_ID 0
 #define LLDB_INVALID_CPU_ID UINT32_MAX
 #define LLDB_INVALID_WATCHPOINT_RESOURCE_ID UINT32_MAX
+#define LLDB_INVALID_PORT_NUMBER 0
 
 /// CPU Type definitions
 #define LLDB_ARCH_DEFAULT "systemArch"
diff --git a/lldb/packages/Python/lldbsuite/test/lldbtest.py 
b/lldb/packages/Python/lldbsuite/test/lldbtest.py
index 5fd686c143e9f..fb3cd22959df2 100644
--- a/lldb/packages/Python/lldbsuite/test/lldbtest.py
+++ b/lldb/packages/Python/lldbsuite/test/lldbtest.py
@@ -1572,6 +1572,15 @@ def findBuiltClang(self):
 
 return os.environ["CC"]
 
+    def getBuiltServerTool(self, server_tool):
+        # Tries to find the simulation/lldb-server/gdbserver tool in the
+        # same folder as the lldb executable.
+        lldb_dir = os.path.dirname(lldbtest_config.lldbExec)
+        path = shutil.which(server_tool, path=lldb_dir)
+        if path is not None:
+            return path
+
+        return ""
+
 def yaml2obj(self, yaml_path, obj_path, max_size=None):
 """
 Create an object file at the given path from a yaml file.
diff --git a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
index 5838281bcb1a1..96d312565f953 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
+++ b/lldb/packages/Pyth

[jira] [Updated] (HDDS-9626) [Recon] Disk Usage page with high number of key/bucket/volume

2024-06-10 Thread Devesh Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devesh Kumar Singh updated HDDS-9626:
-
Fix Version/s: 1.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [Recon] Disk Usage page with high number of key/bucket/volume
> -
>
> Key: HDDS-9626
> URL: https://issues.apache.org/jira/browse/HDDS-9626
> Project: Apache Ozone
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Pratyush Bhatt
>Assignee: smita
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.0
>
> Attachments: image-2023-11-03-12-16-00-296.png
>
>
> When the number of keys/volumes/buckets is huge, the current disk usage UI 
> doesn't make much sense. This particular case had approx 4.5k keys inside a 
> bucket, which is pretty normal. 
> !image-2023-11-03-12-16-00-296.png|width=556,height=598!
> Plus the ticks on the y-axis look off the mark, and the bucket path at the 
> top crosses the chart completely horizontally.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org
For additional commands, e-mail: issues-h...@ozone.apache.org



[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with the debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+    def __init__(self, prefix):
+        self.name = os.path.join(prefix, "stub_port_number")
+        os.mkfifo(self.name)
+        self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+    def finish_connection(self, timeout):
+        pass
+
+    def read(self, size, timeout):
+        (readers, _, _) = select.select([self._fd], [], [], timeout)
+        if self._fd not in readers:
+            raise TimeoutError
+        return os.read(self._fd, size)
+
+    def close(self):
+        os.close(self._fd)
+
+
+class TestDAP_attachByPortNum(lldbdap_testcase.DAPTestCaseBase):
+    default_timeout = 20
+
+    def set_and_hit_breakpoint(self, continueToExit=True):
+        source = "main.c"
+        main_source_path = os.path.join(os.getcwd(), source)
+        breakpoint1_line = line_number(main_source_path, "// breakpoint 1")
+        lines = [breakpoint1_line]
+        # Set breakpoint in the thread function so we can step the threads
+        breakpoint_ids = self.set_source_breakpoints(main_source_path, lines)
+        self.assertEqual(
+            len(breakpoint_ids), len(lines), "expect correct number of breakpoints"
+        )
+        self.continue_to_breakpoints(breakpoint_ids)
+        if continueToExit:
+            self.continue_to_exit()
+
+    def get_debug_server_command_line_args(self):
+        args = []
+        if lldbplatformutil.getPlatform() == "linux":
+            args = ["gdbserver"]
+        elif lldbplatformutil.getPlatform() == "macosx":
+            args = ["--listen"]
+        if lldb.remote_platform:
+            args += ["*:0"]
+        else:
+            args += ["localhost:0"]
+        return args
+
+    def get_debug_server_pipe(self):
+        pipe = Pipe(self.getBuildDir())
+        self.addTearDownHook(lambda: pipe.close())
+        pipe.finish_connection(self.default_timeout)
+        return pipe
+
+    @skipIfWindows
+    @skipIfNetBSD
+    def test_by_port(self):
+        """
+        Tests attaching to a process by port.
+        """
+        self.build_and_create_debug_adaptor()
+        program = self.getBuildArtifact("a.out")
+
+        debug_server_tool = self.getBuiltinDebugServerTool()
+
+        pipe = self.get_debug_server_pipe()
+        args = self.get_debug_server_command_line_args()
+        args += [program]
+        args += ["--named-pipe", pipe.name]
+
+        self.process = self.spawnSubprocess(
+            debug_server_tool, args, install_remote=False
+        )
+
+        # Read the port number from the debug server pipe.
+        port = pipe.read(10, self.default_timeout)
+        # Trim null byte, convert to int
+        port = int(port[:-1])
+        self.assertIsNotNone(
+            port, " Failed to read the port number from debug server pipe"
+        )
+
+        self.attach(program=program, gdbRemotePort=port, sourceInitFile=True)
+        self.set_and_hit_breakpoint(continueToExit=True)
+        self.process.terminate()
+
+    @skipIfWindows
+    @skipIfNetBSD
+    def test_by_port_and_pid(self):
+        """
+        Tests attaching to a process by process ID and port number.
+        """
+        self.build_and_create_debug_adaptor()
+        program = self.getBuildArtifact("a.out")
+
+        debug_server_tool = self.getBuiltinDebugServerTool()
+        pipe = self.get_debug_server_pipe()
+        args = self.get_debug_server_command_line_args()
+        args += [program]
+        args += ["--named-pipe", pipe.name]
+
+        self.process = self.spawnSubprocess(
+            debug_server_tool, args, install_remote=False
+        )
+
+        # Read the port number from the debug server pipe.
+        port = pipe.read(10, self.default_timeout)
+        # Trim null byte, convert to int
+        port = int(port[:-1])

santhoshe447 wrote:

Yeah, agreed with your point.

https://github.com/llvm/llvm-project/pull/91570
___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits

https://github.com/santhoshe447 updated 
https://github.com/llvm/llvm-project/pull/91570

>From 960351c9abf51f42d92604ac6297aa5b76ddfba5 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 17 Nov 2023 15:09:10 +0530
Subject: [PATCH 01/16] [lldb][test] Add the ability to extract the variable
 value out of the summary.

---
 .../Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py   | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
index 9d79872b029a3..0cf9d4fde4948 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
+++ b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
@@ -195,6 +195,9 @@ def collect_console(self, duration):
 
     def get_local_as_int(self, name, threadId=None):
         value = self.dap_server.get_local_variable_value(name, threadId=threadId)
+        # 'value' may have the variable value and summary.
+        # Extract the variable value since summary can have nonnumeric characters.
+        value = value.split(" ")[0]
         if value.startswith("0x"):
             return int(value, 16)
         elif value.startswith("0"):
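To make the splitting behavior in the hunk above concrete, here is a standalone sketch of the same idea (the helper name and sample strings are illustrative, not part of the patch; the octal branch is omitted since the hunk is truncated here):

```python
def local_value_to_int(value):
    # 'value' may carry both the raw value and a summary,
    # e.g. "0x2a (unsigned int) answer"; keep only the first token,
    # since the summary can contain nonnumeric characters.
    value = value.split(" ")[0]
    if value.startswith("0x"):
        return int(value, 16)
    return int(value)

print(local_value_to_int("0x2a (unsigned int) answer"))  # -> 42
print(local_value_to_int("42"))  # -> 42
```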

>From ab44a6991c5bc8ac5764c3f71cbe3acc747b3776 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 3 May 2024 02:47:05 -0700
Subject: [PATCH 02/16] [lldb-dap] Added "port" property to vscode "attach"
 command.

Adding a "port" property to the VS Code "attach" command extends the
debugger configuration to allow attaching to a process using a PID or a
port number.
Currently, the "attach" configuration lets the user specify a pid. We tell the
user to use the attachCommands property to run "gdb-remote ".
The "attach" command handles "port" and "pid" as follows: if port is
specified and pid is not, use that port to attach. If both port and pid are
specified, return an error saying that the user can't specify both pid and
port.
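The precedence rules above can be sketched as a small validation helper (the function name and sentinel values are hypothetical illustrations, not the actual lldb-dap code):

```python
LLDB_INVALID_PORT_NUMBER = 0   # mirrors the new define in lldb-defines.h
INVALID_PID = -1               # hypothetical sentinel for this sketch

def select_attach_mode(pid=INVALID_PID, port=LLDB_INVALID_PORT_NUMBER):
    """Decide how to attach, following the rules in the commit message."""
    if pid != INVALID_PID and port != LLDB_INVALID_PORT_NUMBER:
        raise ValueError("can't specify both pid and port")
    if port != LLDB_INVALID_PORT_NUMBER:
        return ("port", port)   # attach via gdb-remote on this port
    if pid != INVALID_PID:
        return ("pid", pid)     # attach to the process by ID
    raise ValueError("either pid or port must be specified")
```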

Ex - launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "lldb-dap Debug",
"type": "lldb-dap",
"request": "attach",
"port":1234,
"program": "${workspaceFolder}/a.out",
"args": [],
"stopOnEntry": false,
"cwd": "${workspaceFolder}",
"env": []
}
]
}
---
 lldb/include/lldb/lldb-defines.h  |   1 +
 .../Python/lldbsuite/test/lldbtest.py |   9 ++
 .../test/tools/lldb-dap/dap_server.py |   6 +
 .../test/tools/lldb-dap/lldbdap_testcase.py   |  20 +++
 .../attach/TestDAP_attachByPortNum.py | 120 ++
 lldb/tools/lldb-dap/lldb-dap.cpp  |  36 +-
 lldb/tools/lldb-dap/package.json  |  11 ++
 7 files changed, 199 insertions(+), 4 deletions(-)
 create mode 100644 
lldb/test/API/tools/lldb-dap/attach/TestDAP_attachByPortNum.py

diff --git a/lldb/include/lldb/lldb-defines.h b/lldb/include/lldb/lldb-defines.h
index c7bd019c5c90e..a1e6ee2ce468c 100644
--- a/lldb/include/lldb/lldb-defines.h
+++ b/lldb/include/lldb/lldb-defines.h
@@ -96,6 +96,7 @@
 #define LLDB_INVALID_QUEUE_ID 0
 #define LLDB_INVALID_CPU_ID UINT32_MAX
 #define LLDB_INVALID_WATCHPOINT_RESOURCE_ID UINT32_MAX
+#define LLDB_INVALID_PORT_NUMBER 0
 
 /// CPU Type definitions
 #define LLDB_ARCH_DEFAULT "systemArch"
diff --git a/lldb/packages/Python/lldbsuite/test/lldbtest.py 
b/lldb/packages/Python/lldbsuite/test/lldbtest.py
index 5fd686c143e9f..fb3cd22959df2 100644
--- a/lldb/packages/Python/lldbsuite/test/lldbtest.py
+++ b/lldb/packages/Python/lldbsuite/test/lldbtest.py
@@ -1572,6 +1572,15 @@ def findBuiltClang(self):
 
 return os.environ["CC"]
 
+    def getBuiltServerTool(self, server_tool):
+        # Tries to find the simulation/lldb-server/gdbserver tool in the same
+        # folder as lldb.
+        lldb_dir = os.path.dirname(lldbtest_config.lldbExec)
+        path = shutil.which(server_tool, path=lldb_dir)
+        if path is not None:
+            return path
+
+        return ""
+
 def yaml2obj(self, yaml_path, obj_path, max_size=None):
 """
 Create an object file at the given path from a yaml file.
diff --git a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
index 5838281bcb1a1..96d312565f953 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
+++ b/lldb/packages/Pyth

[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits

https://github.com/santhoshe447 updated 
https://github.com/llvm/llvm-project/pull/91570


[dspace-community] filter-media method

2024-06-10 Thread SAI KUMAR S
Hi All,

I have a query regarding filter-media. I have uploaded around 1000 books to 
a collection and generated thumbnails for the PDF files using the command 
line *dspace filter-media -f.*

However, when I upload another 1000 files to the same collection, I need to 
generate thumbnails only for the newly uploaded files. I tried using the 
skip mode by creating a *skip-list.txt*, but I am not getting the desired 
result.

Could any of you provide an example of how to correctly use the
skip-list.txt method to generate thumbnails?

Alternatively, is there any other method, such as using a script (e.g., 
Python), to generate the thumbnails for only the newly uploaded files?

Please help me solve this query.

Thanks & Regards
Sai Kumar S

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Community" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dspace-community+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-community/6b4f01c6-8293-4566-82de-8069955414d5n%40googlegroups.com.


RE: [core-for-CI PATCH] Revert "e1000e: move force SMBUS near the end of enable_ulp function"

2024-06-10 Thread Borah, Chaitanya Kumar
Hi Jani,

> -Original Message-
> From: Saarinen, Jani 
> Sent: Monday, June 10, 2024 2:28 PM
> To: Saarinen, Jani ; Borah, Chaitanya Kumar
> ; intel-gfx@lists.freedesktop.org
> Cc: Borah, Chaitanya Kumar 
> Subject: RE: [core-for-CI PATCH] Revert "e1000e: move force SMBUS near the
> end of enable_ulp function"
> 
> Hi,
> > -Original Message-
> > From: Intel-gfx  On Behalf Of
> > Saarinen, Jani
> > Sent: Monday, 10 June 2024 11.23
> > To: Borah, Chaitanya Kumar ; intel-
> > g...@lists.freedesktop.org
> > Cc: Borah, Chaitanya Kumar 
> > Subject: RE: [core-for-CI PATCH] Revert "e1000e: move force SMBUS near
> > the end of enable_ulp function"
> >
> > Hi,
> > > -Original Message-
> > > From: Intel-gfx  On Behalf
> > > Of Chaitanya Kumar Borah
> > > Sent: Monday, 10 June 2024 10.46
> > > To: intel-gfx@lists.freedesktop.org
> > > Cc: Borah, Chaitanya Kumar 
> > > Subject: [core-for-CI PATCH] Revert "e1000e: move force SMBUS near
> > > the end of enable_ulp function"
> > >
> > > This reverts commit bfd546a552e140b0a4c8a21527c39d6d21addb28.
> > >
> > > The commit seems to cause problems in suspend-resume tests
> > >
> > > [212.204897] e1000e :00:1f.6: PM: pci_pm_suspend():
> > > e1000e_pm_suspend [e1000e] returns -2 [212.204928] e1000e
> > :00:1f.6:
> > > PM: dpm_run_callback(): pci_pm_suspend returns -2 [212.204943]
> > > e1000e
> > > :00:1f.6: PM: failed to suspend async: error -2 [212.205092] PM:
> > > suspend of devices aborted after 302.254 msecs
> > >
> > > References:
> > > https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14904/shard-
> > > dg2-4/igt@gem_ccs@suspend-resume@linear-compressed-compfmt0-
> > > lmem0-lmem0.html
> > > References:
> > > https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11305
> > > Signed-off-by: Chaitanya Kumar Borah
> > > 
> >
> > Acked-By: Jani Saarinen 
> >
> > We have already trybot results from revert
> > https://patchwork.freedesktop.org/series/134603/#rev2 /
> > https://intel-gfx-
> > ci.01.org/tree/drm-tip/Trybot_134603v2/index.html?testfilter=suspend
> > So helps on these. Let's get this merged asap.
> When merging please reference
> https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11308
> 

As discussed, we already have 
https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11305 tracking the 
revert :)
We can close this one.

Regards

Chaitanya

> Br,
> Jani
> 
> >
> > Br,
> > Jani
> >
> > > ---
> > >  drivers/net/ethernet/intel/e1000e/ich8lan.c | 22
> > > - drivers/net/ethernet/intel/e1000e/netdev.c  |
> > > 18
> > > +
> > >  2 files changed, 18 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c
> > > b/drivers/net/ethernet/intel/e1000e/ich8lan.c
> > > index 2e98a2a0bead..f9e94be36e97 100644
> > > --- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
> > > +++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
> > > @@ -1225,28 +1225,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw
> > > *hw, bool to_sx)
> > >   }
> > >
> > >  release:
> > > - /* Switching PHY interface always returns MDI error
> > > -  * so disable retry mechanism to avoid wasting time
> > > -  */
> > > - e1000e_disable_phy_retry(hw);
> > > -
> > > - /* Force SMBus mode in PHY */
> > > - ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL,
> > > _reg);
> > > - if (ret_val) {
> > > - e1000e_enable_phy_retry(hw);
> > > - hw->phy.ops.release(hw);
> > > - goto out;
> > > - }
> > > - phy_reg |= CV_SMB_CTRL_FORCE_SMBUS;
> > > - e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
> > > -
> > > - e1000e_enable_phy_retry(hw);
> > > -
> > > - /* Force SMBus mode in MAC */
> > > - mac_reg = er32(CTRL_EXT);
> > > - mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
> > > - ew32(CTRL_EXT, mac_reg);
> > > -
> > >   hw->phy.ops.release(hw);
> > >  out:
> > >   if (ret_val)
> > > diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c
> > > b/drivers/net/ethernet/intel/e1000e/netdev.c
> > > index da5c59daf8ba..220d62fca55d 100644
> > > --- a/drivers/net/ethernet/intel/e1000e/

[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-10 Thread Santhosh Kumar Ellendula via lldb-commits

https://github.com/santhoshe447 updated 
https://github.com/llvm/llvm-project/pull/91570


[core-for-CI PATCH] Revert "e1000e: move force SMBUS near the end of enable_ulp function"

2024-06-10 Thread Chaitanya Kumar Borah
This reverts commit bfd546a552e140b0a4c8a21527c39d6d21addb28.

The commit seems to cause problems in suspend-resume tests

[212.204897] e1000e :00:1f.6: PM: pci_pm_suspend(): e1000e_pm_suspend 
[e1000e] returns -2
[212.204928] e1000e :00:1f.6: PM: dpm_run_callback(): pci_pm_suspend 
returns -2
[212.204943] e1000e :00:1f.6: PM: failed to suspend async: error -2
[212.205092] PM: suspend of devices aborted after 302.254 msecs

References: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_14904/shard-dg2-4/igt@gem_ccs@suspend-res...@linear-compressed-compfmt0-lmem0-lmem0.html
References: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11305
Signed-off-by: Chaitanya Kumar Borah 
---
 drivers/net/ethernet/intel/e1000e/ich8lan.c | 22 -
 drivers/net/ethernet/intel/e1000e/netdev.c  | 18 +
 2 files changed, 18 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c 
b/drivers/net/ethernet/intel/e1000e/ich8lan.c
index 2e98a2a0bead..f9e94be36e97 100644
--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
+++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
@@ -1225,28 +1225,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool 
to_sx)
}
 
 release:
-   /* Switching PHY interface always returns MDI error
-* so disable retry mechanism to avoid wasting time
-*/
-   e1000e_disable_phy_retry(hw);
-
-   /* Force SMBus mode in PHY */
-   ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);
-   if (ret_val) {
-   e1000e_enable_phy_retry(hw);
-   hw->phy.ops.release(hw);
-   goto out;
-   }
-   phy_reg |= CV_SMB_CTRL_FORCE_SMBUS;
-   e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
-
-   e1000e_enable_phy_retry(hw);
-
-   /* Force SMBus mode in MAC */
-   mac_reg = er32(CTRL_EXT);
-   mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
-   ew32(CTRL_EXT, mac_reg);
-
hw->phy.ops.release(hw);
 out:
if (ret_val)
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c 
b/drivers/net/ethernet/intel/e1000e/netdev.c
index da5c59daf8ba..220d62fca55d 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -6623,6 +6623,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool 
runtime)
struct e1000_hw *hw = &adapter->hw;
u32 ctrl, ctrl_ext, rctl, status, wufc;
int retval = 0;
+   u16 smb_ctrl;
 
/* Runtime suspend should only enable wakeup for link changes */
if (runtime)
@@ -6696,6 +6697,23 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool 
runtime)
if (retval)
return retval;
}
+
+   /* Force SMBUS to allow WOL */
+   /* Switching PHY interface always returns MDI error
+* so disable retry mechanism to avoid wasting time
+*/
+   e1000e_disable_phy_retry(hw);
+
+   e1e_rphy(hw, CV_SMB_CTRL, &smb_ctrl);
+   smb_ctrl |= CV_SMB_CTRL_FORCE_SMBUS;
+   e1e_wphy(hw, CV_SMB_CTRL, smb_ctrl);
+
+   e1000e_enable_phy_retry(hw);
+
+   /* Force SMBus mode in MAC */
+   ctrl_ext = er32(CTRL_EXT);
+   ctrl_ext |= E1000_CTRL_EXT_FORCE_SMBUS;
+   ew32(CTRL_EXT, ctrl_ext);
}
 
/* Ensure that the appropriate bits are set in LPI_CTRL
-- 
2.25.1



[marxmail] A ghostly Engels might well guffaw

2024-06-10 Thread hari kumar
https://www.theguardian.com/business/article/2024/jun/09/luxury-penthouse-in-manchester-named-after-friedrich-engels


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30709): https://groups.io/g/marxmail/message/30709
Mute This Topic: https://groups.io/mt/106588472/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




[RESEND V5] ieee1275/ofdisk: retry on open and read failure

2024-06-10 Thread Mukesh Kumar Chaurasiya
Sometimes, when booting from a very busy SAN, access to the
disk can fail and GRUB will eventually drop to the GRUB prompt.
This scenario is more frequent when deploying many machines at
the same time from the same SAN.
This patch makes the ofdisk module retry the open or read
function for network disks after a failure. We use
RETRY_DEFAULT_TIMEOUT, which is 15 seconds, to specify how long
to retry accessing the disk before definitely failing. The timeout can be
changed by setting the environment variable ofdisk_retry_timeout.
If reading the environment variable fails, GRUB falls back to the
default value of 15 seconds.
Signed-off-by: Diego Domingos 
Signed-off-by: Mukesh Kumar Chaurasiya 
---
 docs/grub.texi   |  8 +++
 grub-core/disk/ieee1275/ofdisk.c | 91 ++--
 2 files changed, 96 insertions(+), 3 deletions(-)

diff --git a/docs/grub.texi b/docs/grub.texi
index f3bdc2564..9514271fc 100644
--- a/docs/grub.texi
+++ b/docs/grub.texi
@@ -3308,6 +3308,7 @@ These variables have special meaning to GRUB.
 * net_default_ip::
 * net_default_mac::
 * net_default_server::
+* ofdisk_retry_timeout::
 * pager::
 * prefix::
 * pxe_blksize::
@@ -3738,6 +3739,13 @@ The default is the value of @samp{color_normal} 
(@pxref{color_normal}).
 @xref{Network}.
 
 
+@node ofdisk_retry_timeout
+@subsection ofdisk_retry_timeout
+
+The time in seconds for which GRUB will retry to open or read a disk in
+case of failure. This value defaults to 15 seconds.
+
+
 @node pager
 @subsection pager
 
diff --git a/grub-core/disk/ieee1275/ofdisk.c b/grub-core/disk/ieee1275/ofdisk.c
index c6cba0c8a..d90b9b70b 100644
--- a/grub-core/disk/ieee1275/ofdisk.c
+++ b/grub-core/disk/ieee1275/ofdisk.c
@@ -24,6 +24,9 @@
 #include 
 #include 
 #include 
+#include 
+
+#define RETRY_DEFAULT_TIMEOUT 15
 
 static char *last_devpath;
 static grub_ieee1275_ihandle_t last_ihandle;
@@ -452,7 +455,7 @@ compute_dev_path (const char *name)
 }
 
 static grub_err_t
-grub_ofdisk_open (const char *name, grub_disk_t disk)
+grub_ofdisk_open_real (const char *name, grub_disk_t disk)
 {
   grub_ieee1275_phandle_t dev;
   char *devpath;
@@ -525,6 +528,61 @@ grub_ofdisk_open (const char *name, grub_disk_t disk)
   return 0;
 }
 
+static grub_uint64_t
+grub_ofdisk_disk_timeout (grub_disk_t disk)
+{
+  grub_uint64_t retry = RETRY_DEFAULT_TIMEOUT;
+  const char *timeout = grub_env_get ("ofdisk_retry_timeout");
+  const char *timeout_end;
+
+  if (grub_strstr (disk->name, "fibre-channel") != NULL ||
+  grub_strstr (disk->name, "vfc-client") != NULL)
+{
+  if (timeout == NULL)
+{
+  return retry;
+}
  retry = grub_strtoul (timeout, &timeout_end, 10);
+  /* Ignore all errors and return default timeout */
+  if (grub_errno != GRUB_ERR_NONE ||
+  *timeout == '\0' ||
+  *timeout_end != '\0')
+{
+  return RETRY_DEFAULT_TIMEOUT;
+}
+}
+  else
+return 0;
+
+  return retry;
+}
+
+static grub_err_t
+grub_ofdisk_open (const char *name, grub_disk_t disk)
+{
+  grub_err_t err;
+  grub_uint64_t timeout = grub_get_time_ms () + (grub_ofdisk_disk_timeout 
(disk) * 1000);
+  grub_uint16_t inc = 0;
+
+  do
+{
+  err = grub_ofdisk_open_real (name, disk);
+  if (err == GRUB_ERR_UNKNOWN_DEVICE)
+{
+  grub_dprintf ("ofdisk", "Failed to open disk %s.\n", name);
+}
+  if (grub_get_time_ms () >= timeout)
+break;
+  grub_dprintf ("ofdisk", "Retry to open disk %s.\n", name);
+  /*
+   * Increase in wait time for subsequent requests
+   * Cur time is used as a random number here
+   */
+  grub_millisleep ((32 << ++inc) * (grub_get_time_ms () % 32));
+} while (1);
+  return err;
+}
+
 static void
 grub_ofdisk_close (grub_disk_t disk)
 {
@@ -568,8 +626,8 @@ grub_ofdisk_prepare (grub_disk_t disk, grub_disk_addr_t 
sector)
 }
 
 static grub_err_t
-grub_ofdisk_read (grub_disk_t disk, grub_disk_addr_t sector,
- grub_size_t size, char *buf)
+grub_ofdisk_read_real (grub_disk_t disk, grub_disk_addr_t sector,
+   grub_size_t size, char *buf)
 {
   grub_err_t err;
   grub_ssize_t actual;
@@ -587,6 +645,33 @@ grub_ofdisk_read (grub_disk_t disk, grub_disk_addr_t 
sector,
   return 0;
 }
 
+static grub_err_t
+grub_ofdisk_read (grub_disk_t disk, grub_disk_addr_t sector,
+  grub_size_t size, char *buf)
+{
+  grub_err_t err;
+  grub_uint64_t timeout = grub_get_time_ms () + (grub_ofdisk_disk_timeout 
(disk) * 1000);
+  grub_uint16_t inc = 0;
+
+  do
+{
+  err = grub_ofdisk_read_real (disk, sector, size, buf);
+  if (err == GRUB_ERR_UNKNOWN_DEVICE)
+{
+  grub_dprintf ("ofdisk", "Failed to read disk %s.\n", 
(char*)disk->data);
+}
+  if (grub_get_time_ms

[RESEND V2] ieee1275/ofdisk: vscsi lun handling on lun len

2024-06-10 Thread Mukesh Kumar Chaurasiya
The information about "vscsi-report-luns" data is a list of disk details
with pairs of memory addresses and lengths.

  8 bytes 8 bytes
lun-addr  --->     8 bytes
^|  buf-addr | lun-count| > -
|   |   lun |
||  buf-addr | lun-count| | -
 "len"    | |  ...  |
||...   | | -
| | |   lun |
||  buf-addr | lun-count| | -
V |
  |---> -
|   lun |
-
|  ...  |
-
|   lun |
-
The way the expression (args.table + 4 + 8 * i) is used is incorrect and
confusing. The list of LUNs does not end with a NULL, as the
while (*ptr) check assumes. In practice this loop usually processes no
LUNs at all: the first reported LUN is likely to be 0, so the loop ends
before checking any. The list of LUNs ends based on its length, not on a
NULL value.
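The layout described above — nentries pairs of (buf-addr, lun-count), each buf-addr pointing at an array of 8-byte LUNs — can be modeled in Python with struct (a fabricated illustration; the byte order and addresses are chosen arbitrarily for the sketch):

```python
import struct

ENTRY = struct.Struct("<QQ")  # 8-byte buf-addr + 8-byte lun-count

def parse_lun_table(table_bytes, memory):
    """Walk the (buf-addr, lun-count) pairs; 'memory' maps addr -> bytes."""
    luns = []
    for off in range(0, len(table_bytes), ENTRY.size):
        buf_addr, lun_count = ENTRY.unpack_from(table_bytes, off)
        lun_bytes = memory[buf_addr]
        # The LUN list ends after lun_count entries, not at a NULL:
        # a first LUN of 0 is perfectly valid.
        for j in range(lun_count):
            (lun,) = struct.unpack_from("<Q", lun_bytes, 8 * j)
            luns.append(lun)
    return luns

# Fabricated example: one table entry pointing at three LUNs (0, 5, 7).
memory = {0x1000: struct.pack("<QQQ", 0, 5, 7)}
table = struct.pack("<QQ", 0x1000, 3)
print(parse_lun_table(table, memory))  # -> [0, 5, 7]
```

Note that a NULL-terminated walk over this fabricated buffer would stop immediately at the leading LUN 0, which is exactly the failure mode the patch fixes.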

Signed-off-by: Mukesh Kumar Chaurasiya 
---
 grub-core/disk/ieee1275/ofdisk.c | 27 ---
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/grub-core/disk/ieee1275/ofdisk.c b/grub-core/disk/ieee1275/ofdisk.c
index c6cba0c8a..1618544a8 100644
--- a/grub-core/disk/ieee1275/ofdisk.c
+++ b/grub-core/disk/ieee1275/ofdisk.c
@@ -222,8 +222,12 @@ dev_iterate (const struct grub_ieee1275_devalias *alias)
grub_ieee1275_cell_t table;
   }
   args;
+  struct lun_buf {
+grub_uint64_t *buf_addr;
+grub_uint64_t lun_count;
+  } *tbl;
   char *buf, *bufptr;
-  unsigned i;
+  unsigned int i, j;
 
   if (grub_ieee1275_open (alias->path, ))
return;
@@ -248,17 +252,18 @@ dev_iterate (const struct grub_ieee1275_devalias *alias)
return;
   bufptr = grub_stpcpy (buf, alias->path);
 
+  tbl = (struct lun_buf *) args.table;
   for (i = 0; i < args.nentries; i++)
-   {
- grub_uint64_t *ptr;
-
- ptr = *(grub_uint64_t **) (args.table + 4 + 8 * i);
- while (*ptr)
-   {
- grub_snprintf (bufptr, 32, "/disk@%" PRIxGRUB_UINT64_T, *ptr++);
- dev_iterate_real (buf, buf);
-   }
-   }
+{
+  grub_uint64_t *ptr;
+
+  ptr = (grub_uint64_t *)(grub_addr_t) tbl[i].buf_addr;
+  for (j = 0; j < tbl[i].lun_count; j++)
+   {
+ grub_snprintf (bufptr, 32, "/disk@%" PRIxGRUB_UINT64_T, *ptr++);
+ dev_iterate_real (buf, buf);
+   }
+}
   grub_ieee1275_close (ihandle);
   grub_free (buf);
   return;
-- 
2.45.1


___
Grub-devel mailing list
Grub-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/grub-devel


Re: [efloraofindia:465827] Dendrobium moschatum (Banks) Sw. from Assam KD 22 Jun' 24

2024-06-09 Thread Pankaj Kumar
Yes Dendrobium moschatum.
Thanks for sharing
Pankaj

On Sunday 9 June 2024, J.M. Garg  wrote:

> Thanks, Karuna ji
>
> -- Forwarded message -
> From: 'Karuna Das' via eFloraofIndia 
> Date: Sun, 9 Jun 2024 at 20:25
> Subject: [efloraofindia:465823] Dendrobium moschatum (Banks) Sw. from
> Assam KD 22 Jun' 24
> To: indiantreepix 
>
>
> Dear All,
>Attached images are *Dendrobium moschatum* (Banks) Sw.from Assam*.*
>
> Date : 05.06 .2024
> Location: Assam
> Family : Orchidaceae
> Genus & species : *Dendrobium moschatum* (Banks) Sw
> Habit : Epiphyte
>
> With regards
> Karuna Kanta Das
> Guwahati 781012
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "eFloraofIndia" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to indiantreepix+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit https://groups.google.com/d/
> msgid/indiantreepix/1717944275.S.28449.autosave.drafts.1717944901.18606%
> 40webmail.rediffmail.com
> <https://groups.google.com/d/msgid/indiantreepix/1717944275.S.28449.autosave.drafts.1717944901.18606%40webmail.rediffmail.com?utm_medium=email_source=footer>
> .
>
>
> --
> With regards,
> J.M.Garg
>


-- 

*Pankaj Kumar* MSc, PhD, FLS

IUCN-SSC Red List Authority for Orchids of Asia

IUCN-SSC: Chinese Species Specialist Group, Orchid Specialist Group of
Asia, Global Trade Subgroup, Western Ghats Plant Specialist Group. Hong
Kong Biodiversity Strategy and Action Plan

*Department of Plant and Soil Science, **Texas Tech University, Lubbock, TX
79409 USA*

*email*: sahanipan...@gmail.com; pankaj.ku...@ttu.edu | *Phone*: +1 806 317
7623 (USA); +852 9436 6251 (Hong Kong)

-- 
You received this message because you are subscribed to the Google Groups 
"eFloraofIndia" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to indiantreepix+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/indiantreepix/CABpo8%3D3AVENmWW%2B-o6mbMLVmcY33Ts0cJSUkMLkeHTTeWOSDCw%40mail.gmail.com.


Re: [efloraofindia:465809] Top Posters for May 2024

2024-06-09 Thread Pankaj Kumar
Thanks a lot for sharing sir!!

On Sun, 9 Jun 2024 at 08:01, J.M. Garg  wrote:

> Thanks a lot, Singh ji
>
> -- Forwarded message -
> From: Gurcharan Singh 
> Date: Sat, 8 Jun 2024 at 03:14
> Subject: [efloraofindia:465764] Top Posters for May 2024
> To: efloraofindia 
>
>
>
>   Friends,
> Sorry for the delay in posting the May 2024 data. A tragedy befell our
> family: our younger son Kanwarpreet Singh (42 years) in Canada passed away
> on the 29th of May and we had to rush there. Please pray for his soul.
> Here are top posters of May, 2024
>
>
>
>
> Top Posters of May 2024 (total posts):
>
> 1. Saroj Kasaju - 398
> 2. J. M. Garg - 181
> 3. Sam Kuzhalanattu - 84
> 4. Dinesh Valke - 75
> 5. Gurcharan Singh - 50
> 6. Taffazul Hussain - 31
> 7. Mandru Ramesh Chaudhury - 23
> 8. Pankaj Sahni - 16
> 9. Sawmliana, M - 11
> 10. Prabhat K Das (Pkdasr) - 10
>
>
>
> Dr. Gurcharan Singh
> Retired  Associate Professor
> SGTB Khalsa College, University of Delhi, Delhi-110007
> Res: 932 Anand Kunj, Vikas Puri, New Delhi-110018.
> https://www.gurcharanfamily.com/
>
> --
> You received this message because you are subscribed to the Google Groups
> "eFloraofIndia" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to indiantreepix+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/indiantreepix/CAHiXKpW7WyJctPAbxWV8kLkGD8aG9cmPMJPnE%3DuZo-AZi1f-pQ%40mail.gmail.com
> 
> .
>
>
> --
> With regards,
> J.M.Garg
>

-- 
You received this message because you are subscribed to the Google Groups 
"eFloraofIndia" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to indiantreepix+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/indiantreepix/CABpo8%3D2Fo5odNA8_fPOpvExgdNfE6m%3DMkXymAWfRMHwVmvqD3A%40mail.gmail.com.


[rancid] Rancid - Error

2024-06-09 Thread Sathish Kumar Ippani


HI,
Thanks for your time and patience.

I am trying to install rancid and I am getting the error below. I am a 
non-Linux engineer. I tried searching the internet but could not find an 
exact answer, and the issue remains unresolved.


rancid@network:~$ /usr/lib/rancid/bin/rancid-cvs
/usr/lib/rancid/bin/rancid-cvs: 1: /etc/rancid/rancid.conf: ii#: not found
rancid@network:~$
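A likely cause (an assumption, since I cannot see the file): rancid.conf is
sourced by /bin/sh, so any line that is not a comment or a plain VAR=value
assignment gets executed as a command, and a stray token such as "ii" typed
directly before a "#" comment produces exactly this "ii#: not found" message.
A sketch of how to locate such a line, reproduced on a throwaway copy:

```shell
# Reproduce the failure mode on a throwaway file: a stray "ii" typed
# directly before a "#" comment is neither a comment nor an assignment,
# so a shell sourcing the file would try to run "ii#" as a command.
printf 'ii# stray prefix before a comment\nLIST_OF_GROUPS="routers"\n' \
    > /tmp/rancid.conf.demo

# Locate the offending line by number (on the real system, run this
# against /etc/rancid/rancid.conf instead):
grep -n 'ii#' /tmp/rancid.conf.demo
# -> 1:ii# stray prefix before a comment
```

Deleting the stray token (or the whole malformed line) from
/etc/rancid/rancid.conf should let rancid-cvs source the file cleanly.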

Regards,
Sathish

Disclaimer: This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you have received this email in error, please notify the system 
manager. This message contains confidential information and is intended only 
for the individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail. Please notify the sender 
immediately by e-mail if you have received this e-mail by mistake and delete 
this e-mail from your system. If you are not the intended recipient you are 
notified that disclosing, copying, distributing or taking any action in 
reliance on the contents of this information is strictly prohibited.
___
Rancid-discuss mailing list
Rancid-discuss@www.shrubbery.net
https://www.shrubbery.net/mailman/listinfo/rancid-discuss


Re: [marxmail] On elections in general, and on Modi

2024-06-08 Thread Marla Vijaya kumar via groups.io
 Sorry, I typed that wrongly: Modi's NDA alliance got 285 seats, not 185 seats.
On Saturday, June 8, 2024 at 07:35:56 PM GMT+5:30, Marla Vijaya kumar via 
groups.io  wrote:  
 
  Well, the elections were played out over 6 weeks. There are some points for 
comrades to note.
1. The ECI (Election Commission of India) is a constitutionally independent 
authority meant to oversee the
 election process in an impartial manner. But Modi violated all rules, kicked 
out a Supreme Court Judge and an
 opposition  representative and appointed his cronies. They had done their job 
to the hilt, to the complete
 satisfaction of Modi. But India is such a vast country with so much diversity, 
that manipulation of election results
 is a rather difficult process, unlike in Putin's Russia.

2. One very senior ex-bureaucrat Ms. Sujatha Rao had observed that Modi's BJP 
had won 50 seats with a margin of
 less than 50 votes and 100 seats with less than 500 votes. She points out that 
this is a clear case of manipulation
 of EVMs (Electronic Voting Machines). But the Election Commission does not 
open its mouth on this. In fact,
 without manipulation, Modi would have ended up with less than 80 out of 543 
seats. India has an outdated First-
Past-The-Post system instead of proportional representation system.

3. The Left Parties had joined the India alliance, but also fought Congress in 
Kerala and lost. Total Left seats are
 only 9. One can vote for the Left only if it fields a candidate in that 
 constituency. But in Bengal, they
 contested 32 of the 42 seats, by leaving 10 seats to Congress. They got 
sizeable number of votes in most of the
 seats they contested, though they could not win anywhere in West Bengal.

4. Modi's NDA Alliance had secured 185 seats and the opposition I.N.D.I.A  
Alliance secured 236 seats out of
 543.  The rest have gone to smaller parties, who are not part of any group. 
But Modi does not command a
 majority of his own and has to depend on economically right-wing but 
 religiously tolerant regional
 parties that depend on Muslim votes in their respective states. Even if Modi 
cobbles up a majority and forms a
 government, he will not be able to implement the fascist agenda of his mother 
organization, the RSS.

5. Some smaller groups in Modi's NDA are acting smart and started blackmailing 
him for more cabinet posts
 than they are supposed to get. This has become a headache for Modi and his 
 team. If they refuse the demands,
 these smaller groups may shift their allegiance to the opposition I.N.D.I.A 
Alliance.

6. On the whole, Modi has lost his biting power and CAN'T act like a dictator. 
Any government may not last
 longer than 1 year and we may in all probability see another General Election.
Vijaya Kumar Marla
On Saturday, June 8, 2024 at 01:57:21 AM GMT+5:30, hari kumar 
 wrote:  
 
 I think so far, there has been no comment here on Modi's relative electoral 
losses in India. It bears at least a few thoughts, in my view. 
I put a "gift" free link here from today's NYT (at bottom).

I have a general enquiry as to whether any of the socialists here who oppose 
participation in bourgeois elections would have voted in this election, and 
potentially voted for the Congress?
Was there "any difference" between Modi and the Congress 'babus'? 
If there was, what was the difference? 

I mean I put my view quite plainly that Modi is a fascist, and there was some 
'value' to the Left in even voting for this ratty Congress. 

Thanks for considering, H

https://www.nytimes.com/interactive/2024/06/07/world/asia/india-election-map.html?unlocked_article_code=1.x00.qjvK.wJPibNg5sf17=url-share



-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30682): https://groups.io/g/marxmail/message/30682
Mute This Topic: https://groups.io/mt/106550546/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: [marxmail] On elections in general, and on Modi

2024-06-08 Thread Marla Vijaya kumar via groups.io
 Well, the elections were played out over 6 weeks. There are some points for 
comrades to note.
1. The ECI (Election Commission of India) is a constitutionally independent 
authority meant to oversee the
 election process in an impartial manner. But Modi violated all rules, kicked 
out a Supreme Court Judge and an
 opposition  representative and appointed his cronies. They had done their job 
to the hilt, to the complete
 satisfaction of Modi. But India is such a vast country with so much diversity, 
that manipulation of election results
 is a rather difficult process, unlike in Putin's Russia.

2. One very senior ex-bureaucrat Ms. Sujatha Rao had observed that Modi's BJP 
had won 50 seats with a margin of
 less than 50 votes and 100 seats with less than 500 votes. She points out that 
this is a clear case of manipulation
 of EVMs (Electronic Voting Machines). But the Election Commission does not 
open its mouth on this. In fact,
 without manipulation, Modi would have ended up with less than 80 out of 543 
 seats. India has an outdated First-
Past-The-Post system instead of proportional representation system.

3. The Left Parties had joined the India alliance, but also fought Congress in 
Kerala and lost. Total Left seats are
 only 9. One can vote for the Left only if it fields a candidate in that 
 constituency. But in Bengal, they
 contested 32 of the 42 seats, by leaving 10 seats to Congress. They got 
sizeable number of votes in most of the
 seats they contested, though they could not win anywhere in West Bengal.

4. Modi's NDA Alliance had secured 185 seats and the opposition I.N.D.I.A  
Alliance secured 236 seats out of
 543.  The rest have gone to smaller parties, who are not part of any group. 
But Modi does not command a
 majority of his own and has to depend on economically right-wing but 
 religiously tolerant regional
 parties that depend on Muslim votes in their respective states. Even if Modi 
cobbles up a majority and forms a
 government, he will not be able to implement the fascist agenda of his mother 
organization, the RSS.

5. Some smaller groups in Modi's NDA are acting smart and started blackmailing 
him for more cabinet posts
 than they are supposed to get. This has become a headache for Modi and his 
 team. If they refuse the demands,
 these smaller groups may shift their allegiance to the opposition I.N.D.I.A 
Alliance.

6. On the whole, Modi has lost his biting power and CAN'T act like a dictator. 
Any government may not last
 longer than 1 year and we may in all probability see another General Election.
Vijaya Kumar Marla
On Saturday, June 8, 2024 at 01:57:21 AM GMT+5:30, hari kumar 
 wrote:  
 
 I think so far, there has been no comment here on Modi's relative electoral 
losses in India. It bears at least a few thoughts, in my view. 
I put a "gift" free link here from today's NYT (at bottom).

I have a general enquiry as to whether any of the socialists here who oppose 
participation in bourgeois elections would have voted in this election, and 
potentially voted for the Congress?
Was there "any difference" between Modi and the Congress 'babus'? 
If there was, what was the difference? 

I mean I put my view quite plainly that Modi is a fascist, and there was some 
'value' to the Left in even voting for this ratty Congress. 

Thanks for considering, H

https://www.nytimes.com/interactive/2024/06/07/world/asia/india-election-map.html?unlocked_article_code=1.x00.qjvK.wJPibNg5sf17=url-share
  


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30681): https://groups.io/g/marxmail/message/30681
Mute This Topic: https://groups.io/mt/106550546/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: [VOTE] Release Airflow 2.9.2 from 2.9.2rc1

2024-06-08 Thread Phani Kumar
+1 non binding

On Sat, 8 Jun 2024, 15:02 Hussein Awala,  wrote:

> +1 (binding) checked signatures, checksums, licences and sources.
>
> On Saturday, June 8, 2024, rom sharon  wrote:
>
> > +1 (non-binding)
> >
>


[SR-Users] Re: Setting Key/Value Pairs at Dialogue Level

2024-06-08 Thread Pavan Kumar via sr-users
Thank you! This is exactly what I was looking for. I don’t know how I
missed it.
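
For anyone finding this thread in the archive, a minimal kamailio.cfg sketch
of the dialog-vars approach from the link below. The variable name account_id
and its value are purely illustrative, and the fragment assumes the dialog
module (plus siputils for has_totag() and xlog for logging) is loaded:

```
# In request_route (fragment, not a complete config):
if (is_method("INVITE") && !has_totag()) {
    dlg_manage();                  # start tracking this dialog
    $dlg_var(account_id) = "42";   # key/value pair stored on the dialog
}
if (is_method("BYE")) {
    # the BYE is matched to the same dialog, so the value set at
    # INVITE time is readable here
    xlog("L_INFO", "account for ended call: $dlg_var(account_id)\n");
}
```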

On Sat, 8 Jun 2024 at 4:47 PM, Federico Cabiddu 
wrote:

> Hi,
> you can use dialog vars for that purpose:
> https://kamailio.org/docs/modules/5.8.x/modules/dialog.html#idm1681.
>
> Cheers,
>
> Federico
>
> On Fri, Jun 7, 2024 at 5:50 PM Pavan Kumar via sr-users <
> sr-users@lists.kamailio.org> wrote:
>
>> Hi everyone,
>>
>> I am creating a dialogue when I receive INVITE. Is there a way to set
>> key/value pair like attributes at dialog level, so that, when I receive BYE
>> message I can retrieve that information and do some processing?
>>
>> Thank you,
>> Pavan Kumar
>>
> __
>> Kamailio - Users Mailing List - Non Commercial Discussions
>> To unsubscribe send an email to sr-users-le...@lists.kamailio.org
>> Important: keep the mailing list in the recipients, do not reply only to
>> the sender!
>> Edit mailing list options or unsubscribe:
>>
>
__
Kamailio - Users Mailing List - Non Commercial Discussions
To unsubscribe send an email to sr-users-le...@lists.kamailio.org
Important: keep the mailing list in the recipients, do not reply only to the 
sender!
Edit mailing list options or unsubscribe:


Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-07 Thread Abhinav Kumar




On 6/7/2024 5:57 PM, Dmitry Baryshkov wrote:

On Sat, 8 Jun 2024 at 02:55, Abhinav Kumar  wrote:




On 6/7/2024 3:26 PM, Dmitry Baryshkov wrote:

On Sat, 8 Jun 2024 at 00:39, Abhinav Kumar  wrote:




On 6/7/2024 2:10 PM, Dmitry Baryshkov wrote:

On Fri, Jun 07, 2024 at 12:22:16PM -0700, Abhinav Kumar wrote:



On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
  7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state 
*cstate)
return false;
  }
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct 
drm_crtc_state *crtc_state)
+{
+ int total_planes = crtc->dev->mode_config.num_total_plane;
+ struct drm_atomic_state *state = crtc_state->state;
+ struct dpu_global_state *global_state;
+ struct drm_plane_state **states;
+ struct drm_plane *plane;
+ int ret;
+
+ global_state = dpu_kms_get_global_state(crtc_state->state);
+ if (IS_ERR(global_state))
+ return PTR_ERR(global_state);
+
+ dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in
dpu_plane_atomic_update() as well because in the atomic_check() path where
its called today, its being called only for zpos_changed and planes_changed
but during disable we must call this for sure.


No. the dpu_rm_release_all_sspp() should only be called during check.
When dpu_plane_atomic_update() is called, the state should already be
finalised. The atomic_check() callback is called when a plane is going
to be disabled.



atomic_check() will be called when plane is disabled but
dpu_rm_release_all_sspp() may not be called as it is protected by
zpos_changed and planes_changed. OR you need to add a !visible check
here to call dpu_rm_release_all_sspp() at that time. That's why I wrote
previously.


Unless I miss something, if a plane gets disabled, then obviously
planes_changed is true.

[trimmed]



Do you mean DRM fwk sets planes_changed correctly for this case?

Currently we have

  if (!new_state->visible) {
  _dpu_plane_atomic_disable(plane);
  } else {
  dpu_plane_sspp_atomic_update(plane);
  }

So I wanted to ensure that when plane gets disabled, its SSPP is freed
too. If this is confirmed, I do not have any concerns.


This is the atomic_update() path, not the atomic_check()



Yes, I am aware.

Let me clarify my question here once again.

1) dpu_rm_release_all_sspp() gets called only in atomic_check() when 
either planes_changed or zpos_changed is set
2) But even in _dpu_plane_atomic_disable(), we should call 
dpu_rm_release_all_sspp() unconditionally. So for this, as you wrote, 
the corresponding atomic_check() call of _dpu_plane_atomic_disable() is 
supposed to do this. atomic

Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-07 Thread Abhinav Kumar




On 6/7/2024 3:26 PM, Dmitry Baryshkov wrote:

On Sat, 8 Jun 2024 at 00:39, Abhinav Kumar  wrote:




On 6/7/2024 2:10 PM, Dmitry Baryshkov wrote:

On Fri, Jun 07, 2024 at 12:22:16PM -0700, Abhinav Kumar wrote:



On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
 7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state 
*cstate)
   return false;
 }
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct 
drm_crtc_state *crtc_state)
+{
+ int total_planes = crtc->dev->mode_config.num_total_plane;
+ struct drm_atomic_state *state = crtc_state->state;
+ struct dpu_global_state *global_state;
+ struct drm_plane_state **states;
+ struct drm_plane *plane;
+ int ret;
+
+ global_state = dpu_kms_get_global_state(crtc_state->state);
+ if (IS_ERR(global_state))
+ return PTR_ERR(global_state);
+
+ dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in
dpu_plane_atomic_update() as well because in the atomic_check() path where
its called today, its being called only for zpos_changed and planes_changed
but during disable we must call this for sure.


No. the dpu_rm_release_all_sspp() should only be called during check.
When dpu_plane_atomic_update() is called, the state should already be
finalised. The atomic_check() callback is called when a plane is going
to be disabled.



atomic_check() will be called when plane is disabled but
dpu_rm_release_all_sspp() may not be called as it is protected by
zpos_changed and planes_changed. OR you need to add a !visible check
here to call dpu_rm_release_all_sspp() at that time. That's why I wrote
previously.


Unless I miss something, if a plane gets disabled, then obviously
planes_changed is true.

[trimmed]



Do you mean DRM fwk sets planes_changed correctly for this case?

Currently we have

if (!new_state->visible) {
_dpu_plane_atomic_disable(plane);
} else {
dpu_plane_sspp_atomic_update(plane);
}

So I wanted to ensure that when plane gets disabled, its SSPP is freed 
too. If this is confirmed, I do not have any concerns.





@@ -1486,7 +1593,7 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
   supported_rotations = DRM_MODE_REFLECT_MASK | DRM_MODE_ROTATE_0 | 
DRM_MODE_ROTATE_180;
- if (pipe_hw->cap->features & BIT(DPU_SSPP_INLINE_ROTATION))
+ if (inline_rotation)
   supported_rotations |= DRM_MODE_ROTATE_MASK;
   drm_plane_create_rotation_property(plane,
@@ -1494,10 +1601,81 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
   drm_plane_enable_fb_damage_clips(plane);
- /* success! finalize initialization */
+ DPU_DEBUG("%s creat
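
To restate the point being debated in code form, a hedged sketch with
illustrative names, not the actual dpu_crtc.c: the SSPP release runs from the
CRTC atomic_check under the planes_changed/zpos_changed guard, and because
drm_atomic_set_crtc_for_plane() sets planes_changed on both the old and the
new CRTC state when a plane is detached, a plane being disabled still reaches
the guarded release path without an explicit !visible check:

```c
/*
 * Sketch of the guard under discussion, not the driver code itself.
 * dpu_rm_release_all_sspp() is reached via dpu_crtc_reassign_planes()
 * from the CRTC atomic_check; disabling a plane detaches it from the
 * CRTC, which the atomic core records by setting planes_changed, so
 * the branch below is taken on the disable path as well.
 */
static int crtc_check_sketch(struct drm_crtc *crtc,
			     struct drm_crtc_state *crtc_state)
{
	if (crtc_state->planes_changed || crtc_state->zpos_changed)
		return dpu_crtc_reassign_planes(crtc, crtc_state);

	return 0;
}
```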

Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-07 Thread Abhinav Kumar




On 6/7/2024 3:26 PM, Dmitry Baryshkov wrote:

On Sat, 8 Jun 2024 at 00:39, Abhinav Kumar  wrote:




On 6/7/2024 2:10 PM, Dmitry Baryshkov wrote:

On Fri, Jun 07, 2024 at 12:22:16PM -0700, Abhinav Kumar wrote:



On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
 7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state 
*cstate)
   return false;
 }
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct 
drm_crtc_state *crtc_state)
+{
+ int total_planes = crtc->dev->mode_config.num_total_plane;
+ struct drm_atomic_state *state = crtc_state->state;
+ struct dpu_global_state *global_state;
+ struct drm_plane_state **states;
+ struct drm_plane *plane;
+ int ret;
+
+ global_state = dpu_kms_get_global_state(crtc_state->state);
+ if (IS_ERR(global_state))
+ return PTR_ERR(global_state);
+
+ dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in
dpu_plane_atomic_update() as well, because in the atomic_check() path where
it's called today, it is invoked only for zpos_changed and planes_changed,
but during disable we must call it for sure.


No. the dpu_rm_release_all_sspp() should only be called during check.
When dpu_plane_atomic_update() is called, the state should already be
finalised. The atomic_check() callback is called when a plane is going
to be disabled.



atomic_check() will be called when the plane is disabled, but
dpu_rm_release_all_sspp() may not be called, as it is guarded by
zpos_changed and planes_changed. Or you need to add a !visible check
here to call dpu_rm_release_all_sspp() at that time. That's what I wrote
previously.


Unless I miss something, if a plane gets disabled, then obviously
planes_changed is true.

[trimmed]



Do you mean the DRM framework sets planes_changed correctly for this case?

Currently we have

if (!new_state->visible) {
_dpu_plane_atomic_disable(plane);
} else {
dpu_plane_sspp_atomic_update(plane);
}

So I wanted to ensure that when a plane gets disabled, its SSPP is freed
too. If this is confirmed, I do not have any concerns.
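As a side note, the allocation strategy this series aims at (hand featureful SSPPs to the planes that need them, simple SSPPs to everything else) can be sketched as below. The flags and dict-based structures are purely illustrative, not the driver's, and the driver actually assigns per-plane in zpos order during atomic_check rather than globally:

```python
def assign_ssps(planes, pipes):
    # Greedy sketch: planes that need YUV support are served first and
    # only from YUV-capable pipes; simple planes then prefer pipes
    # without extra features, keeping featureful pipes free.
    free = list(pipes)
    assignment = {}
    for plane in sorted(planes, key=lambda p: not p["needs_yuv"]):
        if plane["needs_yuv"]:
            candidates = [s for s in free if s["yuv"]]
        else:
            candidates = sorted(free, key=lambda s: s["yuv"])
        if not candidates:
            return None  # no suitable SSPP left: allocation fails
        chosen = candidates[0]
        free.remove(chosen)
        assignment[plane["name"]] = chosen["name"]
    return assignment
```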





@@ -1486,7 +1593,7 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
   supported_rotations = DRM_MODE_REFLECT_MASK | DRM_MODE_ROTATE_0 | 
DRM_MODE_ROTATE_180;
- if (pipe_hw->cap->features & BIT(DPU_SSPP_INLINE_ROTATION))
+ if (inline_rotation)
   supported_rotations |= DRM_MODE_ROTATE_MASK;
   drm_plane_create_rotation_property(plane,
@@ -1494,10 +1601,81 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
   drm_plane_enable_fb_damage_clips(plane);
- /* success! finalize initialization */
+ DPU_DEBUG("%s creat
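For reference, the rotation-mask logic in the hunk quoted above can be restated outside the kernel. The bit values below mirror the upstream drm_mode.h flags; the function name is illustrative:

```python
# DRM rotation/reflection property bits, as defined in
# include/uapi/drm/drm_mode.h (single-bit flags).
DRM_MODE_ROTATE_0 = 1 << 0
DRM_MODE_ROTATE_90 = 1 << 1
DRM_MODE_ROTATE_180 = 1 << 2
DRM_MODE_ROTATE_270 = 1 << 3
DRM_MODE_ROTATE_MASK = 0x0F
DRM_MODE_REFLECT_X = 1 << 4
DRM_MODE_REFLECT_Y = 1 << 5
DRM_MODE_REFLECT_MASK = 0x30

def supported_rotations(inline_rotation):
    # Reflection plus 0/180 are always advertised; the full set of
    # angles is only exposed when the SSPP supports inline rotation.
    rotations = DRM_MODE_REFLECT_MASK | DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180
    if inline_rotation:
        rotations |= DRM_MODE_ROTATE_MASK
    return rotations
```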

Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-07 Thread Abhinav Kumar




On 6/7/2024 2:10 PM, Dmitry Baryshkov wrote:

On Fri, Jun 07, 2024 at 12:22:16PM -0700, Abhinav Kumar wrote:



On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state 
*cstate)
return false;
}
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct 
drm_crtc_state *crtc_state)
+{
+   int total_planes = crtc->dev->mode_config.num_total_plane;
+   struct drm_atomic_state *state = crtc_state->state;
+   struct dpu_global_state *global_state;
+   struct drm_plane_state **states;
+   struct drm_plane *plane;
+   int ret;
+
+   global_state = dpu_kms_get_global_state(crtc_state->state);
+   if (IS_ERR(global_state))
+   return PTR_ERR(global_state);
+
+   dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in
dpu_plane_atomic_update() as well, because in the atomic_check() path where
it's called today, it is invoked only for zpos_changed and planes_changed,
but during disable we must call it for sure.


No. the dpu_rm_release_all_sspp() should only be called during check.
When dpu_plane_atomic_update() is called, the state should already be
finalised. The atomic_check() callback is called when a plane is going
to be disabled.



atomic_check() will be called when the plane is disabled, but
dpu_rm_release_all_sspp() may not be called, as it is guarded by
zpos_changed and planes_changed. Or you need to add a !visible check
here to call dpu_rm_release_all_sspp() at that time. That's what I wrote
previously.







+   if (!crtc_state->enable)
+   return 0;
+
+   states = kcalloc(total_planes, sizeof(*states), GFP_KERNEL);
+   if (!states)
+   return -ENOMEM;
+
+   drm_atomic_crtc_state_for_each_plane(plane, crtc_state) {
+   struct drm_plane_state *plane_state =
+   drm_atomic_get_plane_state(state, plane);
+
+   if (IS_ERR(plane_state)) {
+   ret = PTR_ERR(plane_state);
+   goto done;
+   }
+
+   states[plane_state->normalized_zpos] = plane_state;
+   }
+
+   ret = dpu_assign_plane_resources(global_state, state, crtc, states, 
total_planes);
+
+done:
+   kfree(states);
+   return ret;
+
+   return 0;
+}
+
static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
@@ -1183,6 +1226,13 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state);
+   if (dpu_use_virtual_planes &&
+   (crtc_state->planes_change
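The reassignment path quoted above fills a zpos-indexed array of plane states before handing it to dpu_assign_plane_resources(), so SSPP assignment can walk the planes bottom-to-top. A minimal Python sketch of that indexing (the dict-based states are illustrative, not driver structures):

```python
def collect_states_by_zpos(plane_states, total_planes):
    # Index plane states by their normalized zpos, as the loop in
    # dpu_crtc_reassign_planes() does; slots without a plane stay None.
    states = [None] * total_planes
    for state in plane_states:
        states[state["normalized_zpos"]] = state
    return states
```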


[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-07 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+def __init__(self, prefix):
+self.name = os.path.join(prefix, "stub_port_number")
+os.mkfifo(self.name)
+self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+def finish_connection(self, timeout):
+pass
+
+def read(self, size, timeout):
+(readers, _, _) = select.select([self._fd], [], [], timeout)
+if self._fd not in readers:
+raise TimeoutError
+return os.read(self._fd, size)
+
+def close(self):
+os.close(self._fd)
+
+
+class TestDAP_attachByPortNum(lldbdap_testcase.DAPTestCaseBase):
+default_timeout = 20
+
+def set_and_hit_breakpoint(self, continueToExit=True):
+source = "main.c"
+main_source_path = os.path.join(os.getcwd(), source)
+breakpoint1_line = line_number(main_source_path, "// breakpoint 1")
+lines = [breakpoint1_line]
+# Set breakpoint in the thread function so we can step the threads
+breakpoint_ids = self.set_source_breakpoints(main_source_path, lines)
+self.assertEqual(
+len(breakpoint_ids), len(lines), "expect correct number of 
breakpoints"
+)
+self.continue_to_breakpoints(breakpoint_ids)
+if continueToExit:
+self.continue_to_exit()
+
+def get_debug_server_command_line_args(self):
+args = []
+if lldbplatformutil.getPlatform() == "linux":
+args = ["gdbserver"]
+elif lldbplatformutil.getPlatform() == "macosx":
+args = ["--listen"]
+if lldb.remote_platform:
+args += ["*:0"]
+else:
+args += ["localhost:0"]
+return args
+
+def get_debug_server_pipe(self):
+pipe = Pipe(self.getBuildDir())
+self.addTearDownHook(lambda: pipe.close())
+pipe.finish_connection(self.default_timeout)
+return pipe
+
+@skipIfWindows
+@skipIfNetBSD
+def test_by_port(self):
+"""
+Tests attaching to a process by port.
+"""
+self.build_and_create_debug_adaptor()
+program = self.getBuildArtifact("a.out")
+
+debug_server_tool = self.getBuiltinDebugServerTool()
+
+pipe = self.get_debug_server_pipe()
+args = self.get_debug_server_command_line_args()
+args += [program]
+args += ["--named-pipe", pipe.name]
+
+self.process = self.spawnSubprocess(
+debug_server_tool, args, install_remote=False
+)
+
+# Read the port number from the debug server pipe.
+port = pipe.read(10, self.default_timeout)
+# Trim null byte, convert to int
+port = int(port[:-1])
+self.assertIsNotNone(
+port, " Failed to read the port number from debug server pipe"
+)
+
+self.attach(program=program, gdbRemotePort=port, sourceInitFile=True)
+self.set_and_hit_breakpoint(continueToExit=True)
+self.process.terminate()
+
+@skipIfWindows
+@skipIfNetBSD
+def test_by_port_and_pid(self):
+"""
+Tests attaching to a process by process ID and port number.
+"""
+self.build_and_create_debug_adaptor()
+program = self.getBuildArtifact("a.out")
+
+debug_server_tool = self.getBuiltinDebugServerTool()
+pipe = self.get_debug_server_pipe()
+args = self.get_debug_server_command_line_args()
+args += [program]
+args += ["--named-pipe", pipe.name]
+
+self.process = self.spawnSubprocess(
+debug_server_tool, args, install_remote=False
+)
+
+# Read the port number from the debug server pipe.
+port = pipe.read(10, self.default_timeout)
+# Trim null byte, convert to int
+port = int(port[:-1])

santhoshe447 wrote:

Even if I remove that, it does not have much impact.

https://github.com/llvm/llvm-project/pull/91570
___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
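The select()-based read and the port parsing under review above can be exercised in isolation; an anonymous os.pipe() stands in for the named FIFO here, and the helper names are illustrative:

```python
import os
import select

def read_with_timeout(fd, size, timeout):
    # Same pattern as Pipe.read() in the test: wait for the fd to
    # become readable, then perform a single read; raise on timeout.
    readers, _, _ = select.select([fd], [], [], timeout)
    if fd not in readers:
        raise TimeoutError
    return os.read(fd, size)

def parse_port(raw):
    # The debug server writes the port as ASCII digits followed by a
    # NUL byte; trim the trailing byte before converting to int.
    return int(raw[:-1])
```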


Re: [marxmail] On elections in general, and on Modi

2024-06-07 Thread hari kumar
It is a good question Michael.
I do not pretend to actually know.

However, I do suspect that all of the I.N.D.I.A. allies of Congress will feel
that their long, long stint in the 'desert' should 'encourage' them to be
"collegial" to each other. All bets will of course be off once Modi falters
more in the next years, and then we shall see. If he does falter, it will be
interesting to see whether the Advani fortunes are shifted towards other
parties. The Advanis lost huge amounts of capital (yes, likely purely
paper-based 'fictitious' monies, but still) following the relative electoral
losses of the BJP.

But another factor is potentially important also.
As we have seen over the last year in Pakistan, it is the USA who is a major 
decisive player in the Indian sub-continental politics. Modi has been playing 
footsie with both the USA and China, while trying to also project India as an 
independent force a la BRICS. This is over-played of course, and at some stage 
will be corrected by reality.

Be Well,

H


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30665): https://groups.io/g/marxmail/message/30665
Mute This Topic: https://groups.io/mt/106550546/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-07 Thread Santhosh Kumar Ellendula via lldb-commits

https://github.com/santhoshe447 updated 
https://github.com/llvm/llvm-project/pull/91570

>From 960351c9abf51f42d92604ac6297aa5b76ddfba5 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 17 Nov 2023 15:09:10 +0530
Subject: [PATCH 01/14] [lldb][test] Add the ability to extract the variable
 value out of the summary.

---
 .../Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py   | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
index 9d79872b029a3..0cf9d4fde4948 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
+++ b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
@@ -195,6 +195,9 @@ def collect_console(self, duration):
 
 def get_local_as_int(self, name, threadId=None):
 value = self.dap_server.get_local_variable_value(name, 
threadId=threadId)
+# 'value' may have the variable value and summary.
+# Extract the variable value since summary can have nonnumeric 
characters.
+value = value.split(" ")[0]
 if value.startswith("0x"):
 return int(value, 16)
 elif value.startswith("0"):
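The value-extraction logic in this hunk can be checked standalone; the plain-Python restatement below (hypothetical helper name, not testsuite code) mirrors the patched get_local_as_int():

```python
def parse_local_value(value):
    # 'value' may carry both the raw value and a type summary, e.g.
    # "0x2f SomeSummary"; keep only the leading numeric token.
    token = value.split(" ")[0]
    if token.startswith("0x"):
        return int(token, 16)
    elif token.startswith("0"):
        return int(token, 8)  # leading zero means octal, as in the patch
    return int(token)
```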

>From ab44a6991c5bc8ac5764c3f71cbe3acc747b3776 Mon Sep 17 00:00:00 2001
From: Santhosh Kumar Ellendula 
Date: Fri, 3 May 2024 02:47:05 -0700
Subject: [PATCH 02/14] [lldb-dap] Added "port" property to vscode "attach"
 command.

Adding a "port" property to the VS Code "attach" command extends the
debugger configuration to allow attaching to a process by PID or port number.
Currently, the "Attach" configuration lets the user specify a pid. We tell the
user to use the attachCommands property to run "gdb-remote ".
The following conditions apply to the "attach" command with "port" and "pid":
we should add a "port" property; if port is specified and pid is not, use that
port to attach; if both port and pid are specified, return an error saying that
the user can't specify both pid and port.

Ex - launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "lldb-dap Debug",
"type": "lldb-dap",
"request": "attach",
"port":1234,
"program": "${workspaceFolder}/a.out",
"args": [],
"stopOnEntry": false,
"cwd": "${workspaceFolder}",
"env": [],

}
]
}
---
 lldb/include/lldb/lldb-defines.h  |   1 +
 .../Python/lldbsuite/test/lldbtest.py |   9 ++
 .../test/tools/lldb-dap/dap_server.py |   6 +
 .../test/tools/lldb-dap/lldbdap_testcase.py   |  20 +++
 .../attach/TestDAP_attachByPortNum.py | 120 ++
 lldb/tools/lldb-dap/lldb-dap.cpp  |  36 +-
 lldb/tools/lldb-dap/package.json  |  11 ++
 7 files changed, 199 insertions(+), 4 deletions(-)
 create mode 100644 
lldb/test/API/tools/lldb-dap/attach/TestDAP_attachByPortNum.py

diff --git a/lldb/include/lldb/lldb-defines.h b/lldb/include/lldb/lldb-defines.h
index c7bd019c5c90e..a1e6ee2ce468c 100644
--- a/lldb/include/lldb/lldb-defines.h
+++ b/lldb/include/lldb/lldb-defines.h
@@ -96,6 +96,7 @@
 #define LLDB_INVALID_QUEUE_ID 0
 #define LLDB_INVALID_CPU_ID UINT32_MAX
 #define LLDB_INVALID_WATCHPOINT_RESOURCE_ID UINT32_MAX
+#define LLDB_INVALID_PORT_NUMBER 0
 
 /// CPU Type definitions
 #define LLDB_ARCH_DEFAULT "systemArch"
diff --git a/lldb/packages/Python/lldbsuite/test/lldbtest.py 
b/lldb/packages/Python/lldbsuite/test/lldbtest.py
index 5fd686c143e9f..fb3cd22959df2 100644
--- a/lldb/packages/Python/lldbsuite/test/lldbtest.py
+++ b/lldb/packages/Python/lldbsuite/test/lldbtest.py
@@ -1572,6 +1572,15 @@ def findBuiltClang(self):
 
 return os.environ["CC"]
 
+def getBuiltServerTool(self, server_tool):
+# Tries to find simulation/lldb-server/gdbserver tool at the same 
folder as the lldb.
+lldb_dir = os.path.dirname(lldbtest_config.lldbExec)
+path = shutil.which(server_tool, path=lldb_dir)
+if path is not None:
+return path
+
+return ""
+
 def yaml2obj(self, yaml_path, obj_path, max_size=None):
 """
 Create an object file at the given path from a yaml file.
diff --git a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py 
b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
index 5838281bcb1a1..96d312565f953 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
+++ b/lldb/packages/Pyth

[marxmail] In case anyone has not categorised Nigel Farage who..

2024-06-07 Thread hari kumar
.. is ex-British National Party, ex-Brexiteer, now in a new party laughingly
called 'Reform'.
A short (about 9 minutes) systematically reprising his history. Produced by the 
wonderfully named 'Led by Donkeys'.
https://x.com/ByDonkeys/ status/1797651718852804923? ref_src=twsrc%5Egoogle% 
7Ctwcamp%5Eserp%7Ctwgr%5Etweet ( 
https://x.com/ByDonkeys/status/1797651718852804923?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet
 )

His current plan openly stated, is 'take over' the Conservative Party. I think 
he quite likely will, and then we have another incipient mass fascist party.

Of course the same dilemma is seen in the UK as in many other countries - a 
ratty "Labour Party" under a sick and slick Starmer, and no left in electoral 
(or other) sight.

Well - give or take Galloway...
More to come from Galloway, I am sure; he has already basically said that the
NHS is faltering... Why? Well, because of immigration. (As an aside, it is an
interesting position for a candidate who has made hay with the Muslim
discontent with Starmer on Palestine.)

Anyway - what would the many socialists on this list who oppose voting in
bourgeois elections advise the working people of the UK to do in this election?
H


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30664): https://groups.io/g/marxmail/message/30664
Mute This Topic: https://groups.io/mt/106551151/21656
-=-=-=-=-=-=-=-=-=-=-=-




[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-07 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+def __init__(self, prefix):
+self.name = os.path.join(prefix, "stub_port_number")
+os.mkfifo(self.name)
+self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+def finish_connection(self, timeout):
+pass
+
+def read(self, size, timeout):
+(readers, _, _) = select.select([self._fd], [], [], timeout)
+if self._fd not in readers:
+raise TimeoutError
+return os.read(self._fd, size)
+
+def close(self):
+os.close(self._fd)

santhoshe447 wrote:

If I make it common it will create more confusion. 

https://github.com/llvm/llvm-project/pull/91570


[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-07 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+def __init__(self, prefix):
+self.name = os.path.join(prefix, "stub_port_number")
+os.mkfifo(self.name)
+self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+def finish_connection(self, timeout):
+pass
+
+def read(self, size, timeout):
+(readers, _, _) = select.select([self._fd], [], [], timeout)
+if self._fd not in readers:
+raise TimeoutError
+return os.read(self._fd, size)
+
+def close(self):
+os.close(self._fd)
+
+
+class TestDAP_attachByPortNum(lldbdap_testcase.DAPTestCaseBase):
+default_timeout = 20
+
+def set_and_hit_breakpoint(self, continueToExit=True):
+source = "main.c"
+main_source_path = os.path.join(os.getcwd(), source)
+breakpoint1_line = line_number(main_source_path, "// breakpoint 1")
+lines = [breakpoint1_line]
+# Set breakpoint in the thread function so we can step the threads
+breakpoint_ids = self.set_source_breakpoints(main_source_path, lines)
+self.assertEqual(
+len(breakpoint_ids), len(lines), "expect correct number of 
breakpoints"
+)
+self.continue_to_breakpoints(breakpoint_ids)
+if continueToExit:
+self.continue_to_exit()
+
+def get_debug_server_command_line_args(self):
+args = []
+if lldbplatformutil.getPlatform() == "linux":
+args = ["gdbserver"]
+elif lldbplatformutil.getPlatform() == "macosx":
+args = ["--listen"]
+if lldb.remote_platform:
+args += ["*:0"]
+else:
+args += ["localhost:0"]
+return args
+
+def get_debug_server_pipe(self):
+pipe = Pipe(self.getBuildDir())
+self.addTearDownHook(lambda: pipe.close())
+pipe.finish_connection(self.default_timeout)
+return pipe
+
+@skipIfWindows
+@skipIfNetBSD
+def test_by_port(self):
+"""
+Tests attaching to a process by port.
+"""
+self.build_and_create_debug_adaptor()
+program = self.getBuildArtifact("a.out")
+
+debug_server_tool = self.getBuiltinDebugServerTool()
+
+pipe = self.get_debug_server_pipe()
+args = self.get_debug_server_command_line_args()
+args += [program]
+args += ["--named-pipe", pipe.name]
+
+self.process = self.spawnSubprocess(
+debug_server_tool, args, install_remote=False
+)
+
+# Read the port number from the debug server pipe.
+port = pipe.read(10, self.default_timeout)
+# Trim null byte, convert to int
+port = int(port[:-1])
+self.assertIsNotNone(
+port, " Failed to read the port number from debug server pipe"
+)
+
+self.attach(program=program, gdbRemotePort=port, sourceInitFile=True)
+self.set_and_hit_breakpoint(continueToExit=True)
+self.process.terminate()
+
+@skipIfWindows
+@skipIfNetBSD
+def test_by_port_and_pid(self):
+"""
+Tests attaching to a process by process ID and port number.
+"""
+self.build_and_create_debug_adaptor()
+program = self.getBuildArtifact("a.out")
+
+debug_server_tool = self.getBuiltinDebugServerTool()
+pipe = self.get_debug_server_pipe()
+args = self.get_debug_server_command_line_args()
+args += [program]
+args += ["--named-pipe", pipe.name]
+
+self.process = self.spawnSubprocess(
+debug_server_tool, args, install_remote=False
+)
+
+# Read the port number from the debug server pipe.
+port = pipe.read(10, self.default_timeout)
+# Trim null byte, convert to int
+port = int(port[:-1])

santhoshe447 wrote:

Yes, you are right, but I wanted to test the complete cycle.
I will change this if it's required.

https://github.com/llvm/llvm-project/pull/91570


[Lldb-commits] [lldb] [lldb-dap] Added "port" property to vscode "attach" command. (PR #91570)

2024-06-07 Thread Santhosh Kumar Ellendula via lldb-commits


@@ -0,0 +1,202 @@
+"""
+Test lldb-dap "port" configuration to "attach" request
+"""
+
+
+import dap_server
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+from lldbsuite.test import lldbplatformutil
+import lldbdap_testcase
+import os
+import shutil
+import subprocess
+import tempfile
+import threading
+import sys
+import socket
+import select
+
+
+# A class representing a pipe for communicating with debug server.
+# This class includes methods to open the pipe and read the port number from it.
+class Pipe(object):
+def __init__(self, prefix):
+self.name = os.path.join(prefix, "stub_port_number")
+os.mkfifo(self.name)
+self._fd = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
+
+def finish_connection(self, timeout):
+pass
+
+def read(self, size, timeout):
+(readers, _, _) = select.select([self._fd], [], [], timeout)
+if self._fd not in readers:
+raise TimeoutError
+return os.read(self._fd, size)
+
+def close(self):
+os.close(self._fd)

santhoshe447 wrote:

I agree, but the specific class is defined based on the host platform. If the
host platform is Windows, it uses the same Pipe class with different APIs, and
it behaves differently on other platforms.
Therefore, I did not place the Pipe class in a common location.


Ref: lldb/test/API/tools/lldb-server/commandline/TestGdbRemoteConnection.py

https://github.com/llvm/llvm-project/pull/91570
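For context, the platform-dependent argument construction quoted in this review (get_debug_server_command_line_args) boils down to the sketch below. The platform strings and flattened control flow are read off the quoted test, so treat this as an approximation rather than the testsuite code:

```python
def debug_server_args(platform, remote_platform):
    # lldb-server on Linux needs the "gdbserver" subcommand, while
    # debugserver on macOS takes --listen.  Port 0 lets the OS pick a
    # free port, which the server then reports via the named pipe.
    args = []
    if platform == "linux":
        args = ["gdbserver"]
    elif platform == "macosx":
        args = ["--listen"]
    args.append("*:0" if remote_platform else "localhost:0")
    return args
```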


[marxmail] On elections in general, and on Modi

2024-06-07 Thread hari kumar
I think that so far there has been no comment here on the relative electoral
losses of Modi in India. It bears at least a few thoughts, in my view.
I put a "gift" free link here from today's NYT (at bottom).

I have a general enquiry as to whether any of the anti-participation socialists 
here in bourgeois elections, would have voted in this election and potentially 
voted for the Congress?
Was there "any difference" between Modi and the Congress 'babus'?
If there was, what was the difference?

I mean I put my view quite plainly that Modi is a fascist, and there was some 
'value' to the Left in even voting for this ratty Congress.

Thanks for considering, H

https://www.nytimes.com/interactive/2024/06/07/world/asia/india-election-map.html?unlocked_article_code=1.x00.qjvK.wJPibNg5sf17=url-share


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30661): https://groups.io/g/marxmail/message/30661
Mute This Topic: https://groups.io/mt/106550546/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-07 Thread Abhinav Kumar




On 6/7/2024 12:16 AM, Dmitry Baryshkov wrote:

On Thu, Jun 06, 2024 at 03:21:11PM -0700, Abhinav Kumar wrote:

On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
   drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
   drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
   drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
   drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
   drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
   7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state 
*cstate)
return false;
   }
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct 
drm_crtc_state *crtc_state)
+{
+   int total_planes = crtc->dev->mode_config.num_total_plane;
+   struct drm_atomic_state *state = crtc_state->state;
+   struct dpu_global_state *global_state;
+   struct drm_plane_state **states;
+   struct drm_plane *plane;
+   int ret;
+
+   global_state = dpu_kms_get_global_state(crtc_state->state);
+   if (IS_ERR(global_state))
+   return PTR_ERR(global_state);
+
+   dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the
_dpu_plane_atomic_disable()?


It allows the driver to optimize the usage of the SSPP rectangles.



No, what I meant was that we should call dpu_rm_release_all_sspp() in 
dpu_plane_atomic_update() as well, because in the atomic_check() path 
where it's called today, it's only called for zpos_changed and 
planes_changed, but during disable we must call this for sure.





+   if (!crtc_state->enable)
+   return 0;
+
+   states = kcalloc(total_planes, sizeof(*states), GFP_KERNEL);
+   if (!states)
+   return -ENOMEM;
+
+   drm_atomic_crtc_state_for_each_plane(plane, crtc_state) {
+   struct drm_plane_state *plane_state =
+   drm_atomic_get_plane_state(state, plane);
+
+   if (IS_ERR(plane_state)) {
+   ret = PTR_ERR(plane_state);
+   goto done;
+   }
+
+   states[plane_state->normalized_zpos] = plane_state;
+   }
+
+   ret = dpu_assign_plane_resources(global_state, state, crtc, states, 
total_planes);
+
+done:
+   kfree(states);
+   return ret;
+
+   return 0;
+}
+
   static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
struct drm_atomic_state *state)
   {
@@ -1183,6 +1226,13 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state);
+   if (dpu_use_virtual_planes &&
+   (crtc_state->planes_changed || crtc_state->zpos_changed)) {


Here, I assume you are relying on DRM to set zpos_changed. But can you
please elaborate why we have to reassign planes when zpos_changes?


Because the SSPP might be split between two planes. If zpos has changed
we might have to break this split and use two different SSPPs for those
planes.



Got it. But that support has not been added yet, so it belongs to a later 
patchset?





+   rc = dpu_crtc_reassign_planes(crtc, crtc_state);
+   if (rc < 0)
+   return rc;
+   }
+
if (!crtc_state->enable || 
!drm_ato


Re: [meta-ti][master/scarthgap][PATCH] conf: machine: add AM68-SK machine configuration

2024-06-07 Thread Udit Kumar via lists.yoctoproject.org


On 6/3/2024 6:26 PM, Aniket Limaye wrote:

As of commit [1] there will be a separate defconfig to build u-boot
for j721s2-evm and am68-sk.

Hence, introduce new yocto machine configs for am68-sk. This is done
through a new am68.inc file as the am68-sk platform does not support GP.
So j721s2-evm.inc is copied to am68.inc and updated accordingly.

[1]: 
https://source.denx.de/u-boot/u-boot/-/commit/a96be9b8c05ea835d27f7c9bae03279b8bd5dcf2

Signed-off-by: Aniket Limaye 
---
[...]
+OPTEEMACHINE = "k3-j784s4"


j784s4 ?



+
+MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "cadence-mhdp-fw cnm-wave-fw"

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17712): 
https://lists.yoctoproject.org/g/meta-ti/message/17712
Mute This Topic: https://lists.yoctoproject.org/mt/106460845/21656
Group Owner: meta-ti+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-ti/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[SR-Users] Setting Key/Value Pairs at Dialogue Level

2024-06-07 Thread Pavan Kumar via sr-users
Hi everyone,

I am creating a dialog when I receive an INVITE. Is there a way to set
key/value-pair attributes at the dialog level, so that when I receive the BYE
message I can retrieve that information and do some processing?

Thank you,
Pavan Kumar
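[Editor's note: Kamailio's dialog module provides per-dialog key/value storage through the $dlg_var(...) pseudo-variable once the dialog is tracked. A minimal, untested sketch (function and variable names from the dialog module documentation):]

```
loadmodule "dialog.so"
loadmodule "xlog.so"

request_route {
    if ($rm == "INVITE") {
        dlg_manage();                   # start tracking this dialog
        $dlg_var(caller_info) = "vip";  # attach a key/value to the dialog
    } else if ($rm == "BYE") {
        # the value stored at INVITE time is available again here
        xlog("L_INFO", "caller_info=$dlg_var(caller_info)\n");
    }
}
```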
__
Kamailio - Users Mailing List - Non Commercial Discussions
To unsubscribe send an email to sr-users-le...@lists.kamailio.org
Important: keep the mailing list in the recipients, do not reply only to the 
sender!
Edit mailing list options or unsubscribe:


Re: [go-cd] [Feature request] API general pipeline status

2024-06-07 Thread 'Ashwanth Kumar' via go-cd
It is not possible because the definition of pipeline level status can vary
from one person to another.

For instance, the 2nd stage could be a validation stage that I don't care much
about as long as the 3rd stage is green (after manually approving it), and things
like that. Ideally you only want to check the status of the stage you're
interested in rather than the entire pipeline. This is how the pipeline
dependencies also work.
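If you do need a single value, collapsing the per-stage fields client-side is a few lines (a sketch; the stage dicts mirror the "result"/"status" fields in the API response quoted below, and the collapsing rules are one possible choice, not GoCD semantics):

```python
def pipeline_status(stages):
    """Collapse per-stage "result"/"status" fields into one pipeline-level
    status -- the parsing the original poster currently does by hand."""
    for stage in stages:
        if stage.get("status") == "Building":
            return "Building"
    for stage in stages:
        if stage.get("result") == "Failed":
            return "Failed"
    if stages and all(s.get("result") == "Passed" for s in stages):
        return "Passed"
    return "Unknown"


stages = [
    {"result": "Passed", "status": "Passed"},
    {"result": "Unknown", "status": "Building"},
]
print(pipeline_status(stages))  # -> Building
```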

Hope this helps.

Thanks,


On Fri, Jun 7, 2024, 16:23 'Hans Dampf' via go-cd 
wrote:

> Hi,
>
> it would be great if the api request on
> /go/api/pipelines// would have a general pipeline
> status.
>
> The stages have one, but it would be a lot easier to work with if the last
> status were accessible at the top level. Currently, you have to parse all
> stages to get the pipeline status.
> Currently :
> {
> "name" : "Pipeline 1",
> "counter" : 2641,
> "label" : "2641",
> "natural_order" : 2641.0,
> "can_run" : true,
> "preparing_to_schedule" : false,
> "comment" : null,
> "scheduled_date" : 1717756828914,
> "build_cause" : {
> "trigger_message" : "User",
> "trigger_forced" : true,
> "approver" : "User",
> "material_revisions" : [ {
> "changed" : false,
> "material" : {
> ...
> },
> "modifications" : [ {
> ...
> } ]
> } ]
> },
> "stages" : [ {
> "result" : "Passed",
> "status" : "Passed",
> ...
> }, {
> "result" : "Unknown",
> "status" : "Building",
> ...
> } ]
> }
>
> Changed:
> {
> "name" : "Pipeline 1",
> >>> "status" : "Building", <
> "counter" : 2641,
> "label" : "2641",
> "natural_order" : 2641.0,
> "can_run" : true,
> "preparing_to_schedule" : false,
> "comment" : null,
> "scheduled_date" : 1717756828914,
> "build_cause" : {
> "trigger_message" : "User",
> "trigger_forced" : true,
> "approver" : "User",
> "material_revisions" : [ {
> "changed" : false,
> "material" : {
> ...
> },
> "modifications" : [ {
> ...
> } ]
> } ]
> },
> "stages" : [ {
> "result" : "Passed",
> "status" : "Passed",
> ...
> }, {
> "result" : "Unknown",
> "status" : "Building",
> ...
> } ]
> }
>
>
>
> Regards
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/031c7115-4290-40ff-8646-e9976b08fd6en%40googlegroups.com
> 
> .
>



Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade

2024-06-07 Thread Dilip Kumar
On Fri, Jun 7, 2024 at 2:40 PM Matthias van de Meent
 wrote:
>
> On Fri, 7 Jun 2024 at 10:28, Dilip Kumar  wrote:
> >
> > On Fri, Jun 7, 2024 at 11:57 AM Matthias van de Meent
> >  wrote:
> >>
> >> On Fri, 7 Jun 2024 at 07:18, Dilip Kumar  wrote:
> >>>
> >>> On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent
> >>>  wrote:
> >>>
> >>> I agree with you that we introduced the WAL_LOG strategy to avoid
> >>> these force checkpoints. However, in binary upgrade cases where no
> >>> operations are happening in the system, the FILE_COPY strategy should
> >>> be faster.
> >>
> >> While you would be correct if there were no operations happening in
> >> the system, during binary upgrade we're still actively modifying
> >> catalogs; and this is done with potentially many concurrent jobs. I
> >> think it's not unlikely that this would impact performance.
> >
> > Maybe, but generally, long checkpoints are problematic because they
> > involve a lot of I/O, which hampers overall system performance.
> > However, in the case of a binary upgrade, the concurrent operations
> > are only performing a schema restore, not a real data restore.
> > Therefore, it shouldn't have a significant impact, and the checkpoints
> > should also not do a lot of I/O during binary upgrade, right?
>
> My primary concern isn't the IO, but the O(shared_buffers) that we
> have to go through during a checkpoint. As I mentioned upthread, it is
> reasonably possible the new cluster is already setup with a good
> fraction of the old system's shared_buffers configured. Every
> checkpoint has to scan all those buffers, which IMV can get (much)
> more expensive than the IO overhead caused by the WAL_LOG strategy. It
> may be a baseless fear as I haven't done the performance benchmarks
> for this, but I wouldn't be surprised if shared_buffers=8GB would
> measurably impact the upgrade performance in the current patch (vs the
> default 128MB).

Okay, that's a valid point.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Compress ReorderBuffer spill files using LZ4

2024-06-07 Thread Dilip Kumar
On Fri, Jun 7, 2024 at 2:39 PM Alvaro Herrera  wrote:
>
> On 2024-Jun-07, Dilip Kumar wrote:
>
> > I think the compression option should be supported at the CREATE
> > SUBSCRIPTION level instead of being controlled by a GUC. This way, we
> > can decide on compression for each subscription individually rather
> > than applying it to all subscribers. It makes more sense for the
> > subscriber to control this, especially when we are planning to
> > compress the data sent downstream.
>
> True.  (I think we have some options that are in GUCs for the general
> behavior and can be overridden by per-subscription options for specific
> tailoring; would that make sense here?  I think it does, considering
> that what we mostly want is to save disk space in the publisher when
> spilling to disk.)

Yeah, that makes sense.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Compress ReorderBuffer spill files using LZ4

2024-06-07 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 7:54 PM Alvaro Herrera  wrote:
>
> On 2024-Jun-06, Amit Kapila wrote:
>
> > On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires  wrote:
> > >
> > > When the content of a large transaction (size exceeding
> > > logical_decoding_work_mem) and its sub-transactions has to be
> > > reordered during logical decoding, then, all the changes are written
> > > on disk in temporary files located in pg_replslot/.
> > > Decoding very large transactions by multiple replication slots can
> > > lead to disk space saturation and high I/O utilization.
>
> I like the general idea of compressing the output of logical decoding.
> It's not so clear to me that we only want to do so for spilling to disk;
> for instance, if the two nodes communicate over a slow network, it may
> even be beneficial to compress when streaming, so to this question:
>
> > Why can't one use 'streaming' option to send changes to the client
> > once it reaches the configured limit of 'logical_decoding_work_mem'?
>
> I would say that streaming doesn't necessarily have to mean we don't
> want compression, because for some users it might be beneficial.

+1

> I think a GUC would be a good idea.  Also, what if for whatever reason
> you want a different compression algorithm or different compression
> parameters?  Looking at the existing compression UI we offer in
> pg_basebackup, perhaps you could add something like this:
>
> compress_logical_decoding = none
> compress_logical_decoding = lz4:42
> compress_logical_decoding = spill-zstd:99
>
> "none" says to never use compression (perhaps should be the default),
> "lz4:42" says to use lz4 with parameters 42 on both spilling and
> streaming, and "spill-zstd:99" says to use Zstd with parameter 99 but
> only for spilling to disk.
>

I think the compression option should be supported at the CREATE
SUBSCRIPTION level instead of being controlled by a GUC. This way, we
can decide on compression for each subscription individually rather
than applying it to all subscribers. It makes more sense for the
subscriber to control this, especially when we are planning to
compress the data sent downstream.
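For what it's worth, the spec strings quoted above ("none", "lz4:42", "spill-zstd:99") parse naturally into a (scope, algorithm, level) triple, whichever level the option ends up living at. A quick illustrative sketch (not PostgreSQL code; names are mine):

```python
def parse_compression_spec(spec):
    """Parse the proposed compress_logical_decoding syntax into
    (scope, algorithm, level). Purely illustrative."""
    if spec == "none":
        return ("none", None, None)
    scope = "both"  # applies to both spilling and streaming
    if spec.startswith("spill-"):
        scope = "spill"  # only when spilling to disk
        spec = spec[len("spill-"):]
    algo, _, level = spec.partition(":")
    return (scope, algo, int(level) if level else None)


print(parse_compression_spec("lz4:42"))         # -> ('both', 'lz4', 42)
print(parse_compression_spec("spill-zstd:99"))  # -> ('spill', 'zstd', 99)
```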

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade

2024-06-07 Thread Dilip Kumar
On Fri, Jun 7, 2024 at 11:57 AM Matthias van de Meent
 wrote:
>
> On Fri, 7 Jun 2024 at 07:18, Dilip Kumar  wrote:
> >
> > On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent
> >  wrote:
> >>
> >> On Wed, 5 Jun 2024 at 18:47, Ranier Vilela  wrote:
> >>>
> >>> Why not use it too, if not binary_upgrade?
> >>
> >> Because in the normal case (not during binary_upgrade) you don't want
> >> to have to generate 2 checkpoints for every created database,
> >> especially not when your shared buffers are large. Checkpoints' costs
> >> scale approximately linearly with the size of shared buffers, so being
> >> able to skip those checkpoints (with strategy=WAL_LOG) will save a lot
> >> of performance in the systems where this performance impact matters
> >> most.
> >
> > I agree with you that we introduced the WAL_LOG strategy to avoid
> > these force checkpoints. However, in binary upgrade cases where no
> > operations are happening in the system, the FILE_COPY strategy should
> > be faster.
>
> While you would be correct if there were no operations happening in
> the system, during binary upgrade we're still actively modifying
> catalogs; and this is done with potentially many concurrent jobs. I
> think it's not unlikely that this would impact performance.

Maybe, but generally, long checkpoints are problematic because they
involve a lot of I/O, which hampers overall system performance.
However, in the case of a binary upgrade, the concurrent operations
are only performing a schema restore, not a real data restore.
Therefore, it shouldn't have a significant impact, and the checkpoints
should also not do a lot of I/O during binary upgrade, right?

> Now that I think about it, arguably, we shouldn't need to run
> checkpoints during binary upgrade for the FILE_COPY strategy after
> we've restored the template1 database and created a checkpoint after
> that: All other databases use template1 as their template database,
> and the checkpoint is there mostly to guarantee the FS knows about all
> changes in the template database before we task it with copying the
> template database over to our new database, so the protections we get
> from more checkpoints are practically useless.
> If such a change were implemented (i.e. no checkpoints for FILE_COPY
> in binary upgrade, with a single manual checkpoint after restoring
> template1 in create_new_objects) I think most of my concerns with this
> patch would be alleviated.

Yeah, I think that's a valid point. The second checkpoint is to ensure
that the XLOG_DBASE_CREATE_FILE_COPY never gets replayed. However, for
binary upgrades, we don't need that guarantee because a checkpoint
will be performed during shutdown at the end of the upgrade anyway.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Vacuum statistics

2024-06-07 Thread Dilip Kumar
On Thu, May 30, 2024 at 11:57 PM Alena Rybakina
 wrote:
>
> On 30.05.2024 10:33, Alena Rybakina wrote:
> >
> > I suggest gathering information about vacuum resource consumption for
> > processing indexes and tables and storing it in the table and index
> > relationships (for example, PgStat_StatTabEntry structure like it has
> > realized for usual statistics). It will allow us to determine how well
> > the vacuum is configured and evaluate the effect of overhead on the
> > system at the strategic level, the vacuum has gathered this
> > information already, but this valuable information doesn't store it.
> >
> My colleagues and I have prepared a patch that can help to solve this
> problem.
>
> We are open to feedback.

I was reading through the patch; here are some initial comments.

--
+typedef struct LVExtStatCounters
+{
+ TimestampTz time;
+ PGRUsage ru;
+ WalUsage walusage;
+ BufferUsage bufusage;
+ int64 VacuumPageMiss;
+ int64 VacuumPageHit;
+ int64 VacuumPageDirty;
+ double VacuumDelayTime;
+ PgStat_Counter blocks_fetched;
+ PgStat_Counter blocks_hit;
+} LVExtStatCounters;


I noticed that you are storing both pgBufferUsage and
VacuumPage(Hit/Miss/Dirty) stats. Aren't these essentially the same?
It seems they both exist in the system because some code, like
heap_vacuum_rel(), uses pgBufferUsage, while do_analyze_rel() still
relies on the old counters. And there is already a patch to remove
those old counters.


--
+static Datum
+pg_stats_vacuum(FunctionCallInfo fcinfo, ExtVacReportType type, int ncolumns)
+{

I don't think you need this last parameter (ncolumns) we can anyway
fetch that from tupledesc, so adding an additional parameter
just for checking doesn't look good to me.

--
+ /* Tricky turn here: enforce pgstat to think that our database us dbid */
+
+ MyDatabaseId = dbid;

typo
/think that our database us dbid/think that our database has dbid

Also, remove the blank line between the comment and the next code
block that is related to that comment.


--
  VacuumPageDirty = 0;
+ VacuumDelayTime = 0.;

There is an extra "." after 0


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: How about using dirty snapshots to locate dependent objects?

2024-06-07 Thread Dilip Kumar
On Fri, Jun 7, 2024 at 11:53 AM Ashutosh Sharma  wrote:
>
> On Fri, Jun 7, 2024 at 10:06 AM Dilip Kumar  wrote:
> >
> > On Thu, Jun 6, 2024 at 7:39 PM Ashutosh Sharma  
> > wrote:
> > >
> > > On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar  wrote:
> > >>
> > >> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma  
> > >> wrote:
> > >> >
> > >> > Hello everyone,
> > >> >
> > >> > At present, we use MVCC snapshots to identify dependent objects. This 
> > >> > implies that if a new dependent object is inserted within a 
> > >> > transaction that is still ongoing, our search for dependent objects 
> > >> > won't include this recently added one. Consequently, if someone 
> > >> > attempts to drop the referenced object, it will be dropped, and when 
> > >> > the ongoing transaction completes, we will end up having an entry for 
> > >> > a referenced object that has already been dropped. This situation can 
> > >> > lead to an inconsistent state. Below is an example illustrating this 
> > >> > scenario:
> > >>
> > >> I don't think it's correct to allow the index to be dropped while a
> > >> transaction is creating it. Instead, the right solution should be for
> > >> the create index operation to protect the object it is using from
> > >> being dropped. Specifically, the create index operation should acquire
> > >> a shared lock on the Access Method (AM) to ensure it doesn't get
> > >> dropped concurrently while the transaction is still in progress.
> > >
> > >
> > > If I'm following you correctly, that's exactly what the patch is trying 
> > > to do; while the index creation is in progress, if someone tries to drop 
> > > the object referenced by the index under creation, the referenced object 
> > > being dropped is able to know about the dependent object (in this case 
> > > the index being created) using dirty snapshot and hence, it is unable to 
> > > acquire the lock on the dependent object, and as a result of that, it is 
> > > unable to drop it.
> >
> > You are aiming for the same outcome, but not in the conventional way.
> > In my opinion, the correct approach is not to find objects being
> > created using a dirty snapshot. Instead, when creating an object, you
> > should acquire a proper lock on any dependent objects to prevent them
> > from being dropped during the creation process. For instance, when
> > creating an index that depends on the btree_gist access method, the
> > create index operation should protect btree_gist from being dropped by
> > acquiring the appropriate lock. It is not the responsibility of the
> > drop extension to identify in-progress index creations.
>
> Thanks for sharing your thoughts, I appreciate your inputs and
> completely understand your perspective, but I wonder if that is
> feasible? For example, if an object (index in this case) has
> dependency on lets say 'n' number of objects, and those 'n' number of
> objects belong to say 'n' different catalog tables, so should we
> acquire locks on each of them until the create index command succeeds,
> or, should we just check for the presence of dependent objects and
> record their dependency inside the pg_depend table. Talking about this
> particular case, we are trying to create gist index that has
> dependency on gist_int4 opclass, it is one of the tuple inside
> pg_opclass catalog table, so should acquire lock in this tuple/table
> until the create index command succeeds and is that the thing to be
> done for all the dependent objects?

I am not sure what is the best way to do it, but if you are creating
an object which is dependent on the other object then you need to
check the existence of those objects, record dependency on those
objects, and also lock them so that those objects don't get dropped
while you are creating your object.  I haven't looked into the patch
but something similar is being achieved in the thread Bertrand has
pointed out by locking the database object while recording the
dependency on those.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [PATCH v3 7/8] board: ti: Pull redundant DDR functions to a common location and Fixup DDR size when ECC is enabled

2024-06-06 Thread Santhosh Kumar K

Hi, Wadim,

Thanks for the review.

On 30/05/24 17:48, Wadim Egorov wrote:

Hi Santhosh,

thanks for this series!

Am 23.05.24 um 07:04 schrieb Santhosh Kumar K:

As there are a few redundant functions in board/ti/*/evm.c files, pull
them to a common location of access to reuse and include the common file
to access the functions.

Call k3-ddrss driver through fixup_ddr_driver_for_ecc() to fixup the
device tree and resize the available amount of DDR, if ECC is enabled.
Otherwise, fixup the device tree using the regular
fdt_fixup_memory_banks().

Also call dram_init_banksize() after every call to
fixup_ddr_driver_for_ecc() is made so that gd->bd is populated
correctly.

Ensure that fixup_ddr_driver_for_ecc() is agnostic to the number of DDR
controllers present.

Signed-off-by: Santhosh Kumar K 
Signed-off-by: Neha Malcom Francis 
---

[...]

+++ b/board/ti/common/k3-ddr-init.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2023, Texas Instruments Incorporated - 
https://www.ti.com/

+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include "k3-ddr-init.h"
+
+int dram_init(void)
+{
+    s32 ret;
+
+    ret = fdtdec_setup_mem_size_base_lowest();
+    if (ret)
+    printf("Error setting up mem size and base. %d\n", ret);
+
+    return ret;
+}
+
+int dram_init_banksize(void)
+{
+    s32 ret;
+
+    ret = fdtdec_setup_memory_banksize();
+    if (ret)
+    printf("Error setting up memory banksize. %d\n", ret);
+
+    return ret;
+}


I'm wondering if we can generalize more.

What do you say if we keep dram_init() & dram_init_banksize() in the 
board code and move fixup_ddr_driver_for_ecc() & fixup_memory_node() to 
mach-k3 and make them available for all K3 based boards?


It looks like I will reuse the code for our boards and I assume other 
vendors too.


Regards,
Wadim


Yeah, makes sense, will work on it and post v4.

Regards,
Santhosh.

+
+#if defined(CONFIG_SPL_BUILD)
+
+void fixup_ddr_driver_for_ecc(struct spl_image_info *spl_image)
+{
+    struct udevice *dev;
+    int ret, ctr = 1;
+
+    dram_init_banksize();
+
+    ret = uclass_get_device(UCLASS_RAM, 0, &dev);
+    if (ret)
+    panic("Cannot get RAM device for ddr size fixup: %d\n", ret);
+
+    ret = k3_ddrss_ddr_fdt_fixup(dev, spl_image->fdt_addr, gd->bd);
+    if (ret)
+    printf("Error fixing up ddr node for ECC use! %d\n", ret);
+
+    dram_init_banksize();
+
+    ret = uclass_next_device_err(&dev);
+
+    while (!ret) {
+    ret = k3_ddrss_ddr_fdt_fixup(dev, spl_image->fdt_addr, gd->bd);
+    if (ret)
+    printf("Error fixing up ddr node %d for ECC use! %d\n", ctr, ret);

+
+    dram_init_banksize();
+    ret = uclass_next_device_err(&dev);
+    ctr++;
+    }
+}
+
+void fixup_memory_node(struct spl_image_info *spl_image)
+{
+    u64 start[CONFIG_NR_DRAM_BANKS];
+    u64 size[CONFIG_NR_DRAM_BANKS];
+    int bank;
+    int ret;
+
+    dram_init();
+    dram_init_banksize();
+
+    for (bank = 0; bank < CONFIG_NR_DRAM_BANKS; bank++) {
+    start[bank] = gd->bd->bi_dram[bank].start;
+    size[bank] = gd->bd->bi_dram[bank].size;
+    }
+
+    ret = fdt_fixup_memory_banks(spl_image->fdt_addr, start, size,
+ CONFIG_NR_DRAM_BANKS);
+
+    if (ret)
+    printf("Error fixing up memory node! %d\n", ret);
+}
+
+#endif
diff --git a/board/ti/common/k3-ddr-init.h b/board/ti/common/k3-ddr-init.h
new file mode 100644
index ..9d1826815dfd
--- /dev/null
+++ b/board/ti/common/k3-ddr-init.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright (C) 2023, Texas Instruments Incorporated - https://www.ti.com/
+ */
+
+#ifndef __K3_DDR_INIT_H
+#define __K3_DDR_INIT_H
+
+int dram_init(void);
+int dram_init_banksize(void);
+
+void fixup_ddr_driver_for_ecc(struct spl_image_info *spl_image);
+void fixup_memory_node(struct spl_image_info *spl_image);
+
+#endif /* __K3_DDR_INIT_H */
diff --git a/board/ti/j721e/evm.c b/board/ti/j721e/evm.c
index 539eaf47186a..e0cd8529bc2b 100644
--- a/board/ti/j721e/evm.c
+++ b/board/ti/j721e/evm.c
@@ -17,6 +17,7 @@
  #include "../common/board_detect.h"
  #include "../common/fdt_ops.h"
+#include "../common/k3-ddr-init.h"
  #define board_is_j721e_som()    (board_ti_k3_is("J721EX-PM1-SOM") || \
   board_ti_k3_is("J721EX-PM2-SOM"))
@@ -37,17 +38,6 @@ int board_init(void)
  return 0;
  }
-int dram_init(void)
-{
-#ifdef CONFIG_PHYS_64BIT
-    gd->ram_size = 0x1;
-#else
-    gd->ram_size = 0x8000;
-#endif
-
-    return 0;
-}
-
  phys_addr_t board_get_usable_ram_top(phys_size_t total_size)
  {
  #ifdef CONFIG_PHYS_64BIT
@@ -59,23 +49,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size)
  return gd->ram_top;
  }
-int dram_init_banksize(void)
-{
-    /* Bank 0 declares the memory available i

Re: use CREATE DATABASE STRATEGY = FILE_COPY in pg_upgrade

2024-06-06 Thread Dilip Kumar
On Wed, Jun 5, 2024 at 10:59 PM Matthias van de Meent
 wrote:
>
> On Wed, 5 Jun 2024 at 18:47, Ranier Vilela  wrote:
> >
> > On Tue, Jun 4, 2024 at 4:39 PM, Nathan Bossart 
> >  wrote:
> >>
> >> I noticed that the "Restoring database schemas in the new cluster" part of
> >> pg_upgrade can take a while if you have many databases, so I experimented
> >> with a couple different settings to see if there are any easy ways to speed
> >> it up.  The FILE_COPY strategy for CREATE DATABASE helped quite
> >> significantly on my laptop.  For ~3k empty databases, this step went from
> >> ~100 seconds to ~30 seconds with the attached patch.  I see commit ad43a41
> >> made a similar change for initdb, so there might even be an argument for
> >> back-patching this to v15 (where STRATEGY was introduced).  One thing I
> >> still need to verify is that this doesn't harm anything when there are lots
> >> of objects in the databases, i.e., more WAL generated during many
> >> concurrent CREATE-DATABASE-induced checkpoints.
> >>
> >> Thoughts?
> >
> > Why not use it too, if not binary_upgrade?
>
> Because in the normal case (not during binary_upgrade) you don't want
> to have to generate 2 checkpoints for every created database,
> especially not when your shared buffers are large. Checkpoints' costs
> scale approximately linearly with the size of shared buffers, so being
> able to skip those checkpoints (with strategy=WAL_LOG) will save a lot
> of performance in the systems where this performance impact matters
> most.

I agree with you that we introduced the WAL_LOG strategy to avoid
these forced checkpoints. However, in binary upgrade cases where no
operations are happening in the system, the FILE_COPY strategy should
be faster.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: How about using dirty snapshots to locate dependent objects?

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 7:39 PM Ashutosh Sharma  wrote:
>
> On Thu, Jun 6, 2024 at 6:20 PM Dilip Kumar  wrote:
>>
>> On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma  wrote:
>> >
>> > Hello everyone,
>> >
>> > At present, we use MVCC snapshots to identify dependent objects. This 
>> > implies that if a new dependent object is inserted within a transaction 
>> > that is still ongoing, our search for dependent objects won't include this 
>> > recently added one. Consequently, if someone attempts to drop the 
>> > referenced object, it will be dropped, and when the ongoing transaction 
>> > completes, we will end up having an entry for a referenced object that has 
>> > already been dropped. This situation can lead to an inconsistent state. 
>> > Below is an example illustrating this scenario:
>>
>> I don't think it's correct to allow the index to be dropped while a
>> transaction is creating it. Instead, the right solution should be for
>> the create index operation to protect the object it is using from
>> being dropped. Specifically, the create index operation should acquire
>> a shared lock on the Access Method (AM) to ensure it doesn't get
>> dropped concurrently while the transaction is still in progress.
>
>
> If I'm following you correctly, that's exactly what the patch is trying to 
> do; while the index creation is in progress, if someone tries to drop the 
> object referenced by the index under creation, the referenced object being 
> dropped is able to know about the dependent object (in this case the index 
> being created) using dirty snapshot and hence, it is unable to acquire the 
> lock on the dependent object, and as a result of that, it is unable to drop 
> it.

You are aiming for the same outcome, but not in the conventional way.
In my opinion, the correct approach is not to find objects being
created using a dirty snapshot. Instead, when creating an object, you
should acquire a proper lock on any dependent objects to prevent them
from being dropped during the creation process. For instance, when
creating an index that depends on the btree_gist access method, the
create index operation should protect btree_gist from being dropped by
acquiring the appropriate lock. It is not the responsibility of the
drop extension to identify in-progress index creations.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [PATCH v4 08/13] drm/msm/dpu: add support for virtual planes

2024-06-06 Thread Abhinav Kumar




On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Only several SSPP blocks support such features as YUV output or scaling,
thus different DRM planes have different features.  Properly utilizing
all planes requires the attention of the compositor, who should
prefer simpler planes to YUV-supporting ones. Otherwise it is very easy
to end up in a situation when all featureful planes are already
allocated for simple windows, leaving no spare plane for YUV playback.

To solve this problem make all planes virtual. Each plane is registered
as if it supports all possible features, but then at the runtime during
the atomic_check phase the driver selects backing SSPP block for each
plane.

Note, this does not provide support for using two different SSPP blocks
for a single plane or using two rectangles of an SSPP to drive two
planes. Each plane still gets its own SSPP and can utilize either a solo
rectangle or both multirect rectangles depending on the resolution.

Note #2: By default support for virtual planes is turned off and the
driver still uses old code path with preallocated SSPP block for each
plane. To enable virtual planes, pass 'msm.dpu_use_virtual_planes=1'
kernel parameter.



I like the overall approach in this patch. Some comments below.


Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c  |  50 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  10 +-
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   4 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 230 +++---
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h |  19 ++
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c|  77 
  drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h|  28 +++
  7 files changed, 390 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 88c2e51ab166..794c5643584f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -1168,6 +1168,49 @@ static bool dpu_crtc_needs_dirtyfb(struct drm_crtc_state *cstate)
return false;
  }
  
+static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state)
+{
+   int total_planes = crtc->dev->mode_config.num_total_plane;
+   struct drm_atomic_state *state = crtc_state->state;
+   struct dpu_global_state *global_state;
+   struct drm_plane_state **states;
+   struct drm_plane *plane;
+   int ret;
+
+   global_state = dpu_kms_get_global_state(crtc_state->state);
+   if (IS_ERR(global_state))
+   return PTR_ERR(global_state);
+
+   dpu_rm_release_all_sspp(global_state, crtc);
+


Do we need to call dpu_rm_release_all_sspp() even in the 
_dpu_plane_atomic_disable()?



+   if (!crtc_state->enable)
+   return 0;
+
+   states = kcalloc(total_planes, sizeof(*states), GFP_KERNEL);
+   if (!states)
+   return -ENOMEM;
+
+   drm_atomic_crtc_state_for_each_plane(plane, crtc_state) {
+   struct drm_plane_state *plane_state =
+   drm_atomic_get_plane_state(state, plane);
+
+   if (IS_ERR(plane_state)) {
+   ret = PTR_ERR(plane_state);
+   goto done;
+   }
+
+   states[plane_state->normalized_zpos] = plane_state;
+   }
+
+   ret = dpu_assign_plane_resources(global_state, state, crtc, states, total_planes);
+
+done:
+   kfree(states);
+   return ret;
+}
+
  static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
struct drm_atomic_state *state)
  {
@@ -1183,6 +1226,13 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
  
  	bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state);
  
+	if (dpu_use_virtual_planes &&
+	    (crtc_state->planes_changed || crtc_state->zpos_changed)) {


Here, I assume you are relying on DRM to set zpos_changed. But can you 
please elaborate why we have to reassign planes when zpos changes?



+   rc = dpu_crtc_reassign_planes(crtc, crtc_state);
+   if (rc < 0)
+   return rc;
+   }
+
	if (!crtc_state->enable || !drm_atomic_crtc_effectively_active(crtc_state)) {
		DRM_DEBUG_ATOMIC("crtc%d -> enable %d, active %d, skip atomic_check\n",
				 crtc->base.id, crtc_state->enable,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 9a1fe6868979..becdd98f3c40 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -51,6 +51,9 @@
  #define DPU_DEBUGFS_DIR "msm_dpu"
  #define DPU_DEBUGFS_HWMASKNAME "hw_log_mask"
  
+bool dpu_use_virtual_planes = false;
+module_param(dpu_use_virtual_planes, bool, 0);
+
  static int dpu_kms_hw_init(struct msm_kms *kms);
  static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms);
  
@@ -770,8 +773,11 @@ static int 

Re: RFR: 8333566: Remove unused methods

2024-06-06 Thread Amit Kumar
On Tue, 4 Jun 2024 20:51:52 GMT, Cesar Soares Lucas  wrote:

> Please, consider this patch to remove unused methods from the code base. To 
> the best of my knowledge, these methods are only defined but never used.
> 
> Here is a list with the names of deleted methods: 
> https://gist.github.com/JohnTortugo/fccc29781a1b584c03162aa4e160e874
> 
> Tested with Linux x86_64 tier1-4, GHA, and only cross building to other 
> platforms.

src/hotspot/cpu/s390/vm_version_s390.hpp line 516:

> 514:   static void set_has_CompareTrap()   { _features[0] |= 
> GnrlInstrExtFacilityMask; }
> 515:   static void set_has_RelativeLoadStore() { _features[0] |= 
> GnrlInstrExtFacilityMask; }
> 516:   static void set_has_ProcessorAssist()   { _features[0] |= 
> ProcessorAssistMask; }

This looks incorrect; there exists a second definition below.

-

PR Review Comment: https://git.openjdk.org/jdk/pull/19550#discussion_r1630102121


Re: RFR: 8333566: Remove unused methods

2024-06-06 Thread Amit Kumar
On Tue, 4 Jun 2024 20:51:52 GMT, Cesar Soares Lucas  wrote:

> Please, consider this patch to remove unused methods from the code base. To 
> the best of my knowledge, these methods are only defined but never used.
> 
> Here is a list with the names of deleted methods: 
> https://gist.github.com/JohnTortugo/fccc29781a1b584c03162aa4e160e874
> 
> Tested with Linux x86_64 tier1-4, GHA, and only cross building to other 
> platforms.

src/hotspot/cpu/s390/vm_version_s390.hpp line 516:

> 514:   static void set_has_CompareTrap()   { _features[0] |= 
> GnrlInstrExtFacilityMask; }
> 515:   static void set_has_RelativeLoadStore() { _features[0] |= 
> GnrlInstrExtFacilityMask; }
> 516:   static void set_has_GnrlInstrExtensions()   { _features[0] |= 
> GnrlInstrExtFacilityMask; }

I know this PR is still in draft state. Just a thought: I would like to keep 
the methods in the `vm_version_s390.hpp` file for now. I'm planning to remove 
the checks applicable to older hardware, so it would be better if I clean 
these methods up as part of that PR :-)

-

PR Review Comment: https://git.openjdk.org/jdk/pull/19550#discussion_r1628627936


Re: [marxmail] TOWARD NAKBA AS A LEGAL CONCEPT

2024-06-06 Thread hari kumar
Thanks Barry for providing the full text of the article taken down; and Alan 
for the NYT article.
Another insight from 'the Intercept'. Of course since it quotes from the 
editorial team that was placed into an impossible situation, it reveals the 
real processes underlying the censorship.
At:
https://theintercept.com/2024/06/03/columbia-law-review-palestine-board-website/?utm_medium=email_source=The%20Intercept%20Newsletter


-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30653): https://groups.io/g/marxmail/message/30653
Mute This Topic: https://groups.io/mt/106506352/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: [efloraofindia:465733] Re: 560 ID wild Orchid

2024-06-06 Thread Pankaj Kumar
No sir, I just wanted to know if your plants are from Silent Valley in
Palakkad? There is supposedly one student working there on vascular
epiphytes.
Thanks so much sir.
Best regards
Pankaj


On Thu, 6 Jun 2024 at 11:12, Sam Kuzhalanattu 
wrote:

> Dear Garg ji, Pankaj ji,
> Thank you very much for the ID confirmation of my Orchid, kind regards,
> Sam.
>
> On Thu, 6 Jun 2024, 9:39 pm Sam Kuzhalanattu, 
> wrote:
>
>> Dear Pankaj ji,
>> No, there are two Silent Valleys in Kerala.  The famous first one is in
>> Palakkad district more than 200km distance from here. Second one, least
>> famous is in Idukki district at Munnar  85km.  Which one is you expect?
>> With regards, Sam.
>>
>>
>> On Thu, 6 Jun 2024, 8:57 pm Pankaj Kumar,  wrote:
>>
>>> Phalaenopsis mysorensis.
>>> May I know if this is Silent Valley area?
>>> Pankaj
>>>
>>>
>>> On Thursday 6 June 2024, J.M. Garg  wrote:
>>>
>>>> Thanks, Saroj ji, for id as Phalaenopsis mysorensis C.J.Saldanha
>>>>
>>>> -- Forwarded message -
>>>> From: Sam Kuzhalanattu 
>>>> Date: Thu, 23 May 2024 at 13:53
>>>> Subject: [efloraofindia:465365] 560 ID wild Orchid
>>>> To: efloraofindia 
>>>>
>>>>
>>>> Please ID wild Orchid, kind regards, Sam.
>>>>
>>>> Location: bloomed near Vannappuram Thodupuzha Idukki Kerala INDIA
>>>>
>>>> Altitude: 1500fsl
>>>>
>>>> Flower date: 22MAY2024, 03.20pm
>>>>
>>>> Habitat: wild moisture evergreen misty sloppy canopied alpine
>>>>
>>>> Plant habit: epiphyte Orchid obliquely unbranched, perennial
>>>>
>>>> Height: 06cm
>>>>
>>>> Leaves: alternate elliptic acute simple smooth flexible fleshy, size
>>>> upto: 07×4cm
>>>>
>>>> Flower: basel spike inflorescence, clustered diameter:17mm, white,
>>>> good non fragrant
>>>>
>>>> Fruit: capsule green into brown ovoid ridges pendulous size:05x01cm
>>>>
>>>> Seed:
>>>> Camera: CANON EOS1500D +FL10x
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "eFloraofIndia" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to indiantreepix+unsubscr...@googlegroups.com.
>>>> To view this discussion on the web, visit
>>>> https://groups.google.com/d/msgid/indiantreepix/CADkXgDSjiBC0oPtagQvB5h2Tsu4aPsg%3Dms_dW534FCVZ%3D36ZOg%40mail.gmail.com
>>>> <https://groups.google.com/d/msgid/indiantreepix/CADkXgDSjiBC0oPtagQvB5h2Tsu4aPsg%3Dms_dW534FCVZ%3D36ZOg%40mail.gmail.com?utm_medium=email_source=footer>
>>>> .
>>>>
>>>>
>>>> --
>>>> With regards,
>>>> J.M.Garg
>>>>
>>>
>>>
>>> --
>>>
>>> *Pankaj Kumar* MSc, PhD, FLS
>>>
>>> IUCN-SSC Red List Authority for Orchids of Asia
>>>
>>> IUCN-SSC: Chinese Species Specialist Group, Orchid Specialist Group of
>>> Asia, Global Trade Subgroup, Western Ghats Plant Specialist Group. Hong
>>> Kong Biodiversity Strategy and Action Plan
>>>
>>> *Department of Plant and Soil Science, **Texas Tech University,
>>> Lubbock, TX 79409 USA*
>>>
>>> *email*: sahanipan...@gmail.com; pankaj.ku...@ttu.edu | *Phone*: +1 806
>>> 317 7623 (USA); +852 9436 6251 (Hong Kong)
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"eFloraofIndia" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to indiantreepix+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/indiantreepix/CABpo8%3D1Q0WKUHr7uMA%2BpAZG9zp3MdotBQCYQHfRFfMQF0Z%2BBUw%40mail.gmail.com.


Integrated: 8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location

2024-06-06 Thread Abhishek Kumar
On Fri, 24 May 2024 11:38:30 GMT, Abhishek Kumar  wrote:

> "java.awt.IllegalComponentStateException: component must be showing on the 
> screen to determine its location" is thrown when getLocationOnScreen method 
> is invoked for JTableHeader while testing JFileChooser demo. It seems that in 
> getLocationOnScreen method we are trying to access the parent location but 
> that is not visible and ICSE is thrown.
> 
> Fix is to handle the exception and can be verified using the steps mentioned 
> in [JDK-8332550](https://bugs.openjdk.org/browse/JDK-8332550).
> CI testing is green and link is mentioned in JBS.

This pull request has now been integrated.

Changeset: 054362ab
Author:Abhishek Kumar 
URL:   
https://git.openjdk.org/jdk/commit/054362abe040938b87eb1a1cab8a0a94540e0667
Stats: 5 lines in 1 file changed: 3 ins; 0 del; 2 mod

8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component 
must be showing on the screen to determine its location

Reviewed-by: asemenov, kizune, achung

-

PR: https://git.openjdk.org/jdk/pull/19391


Re: [efloraofindia:465730] Re: 560 ID wild Orchid

2024-06-06 Thread Pankaj Kumar
Phalaenopsis mysorensis.
May I know if this is Silent Valley area?
Pankaj


On Thursday 6 June 2024, J.M. Garg  wrote:

> Thanks, Saroj ji, for id as Phalaenopsis mysorensis C.J.Saldanha
>
> -- Forwarded message -
> From: Sam Kuzhalanattu 
> Date: Thu, 23 May 2024 at 13:53
> Subject: [efloraofindia:465365] 560 ID wild Orchid
> To: efloraofindia 
>
>
> Please ID wild Orchid, kind regards, Sam.
>
> Location: bloomed near Vannappuram Thodupuzha Idukki Kerala INDIA
>
> Altitude: 1500fsl
>
> Flower date: 22MAY2024, 03.20pm
>
> Habitat: wild moisture evergreen misty sloppy canopied alpine
>
> Plant habit: epiphyte Orchid obliquely unbranched, perennial
>
> Height: 06cm
>
> Leaves: alternate elliptic acute simple smooth flexible fleshy, size
> upto: 07×4cm
>
> Flower: basel spike inflorescence, clustered diameter:17mm, white, good
> non fragrant
>
> Fruit: capsule green into brown ovoid ridges pendulous size:05x01cm
>
> Seed:
> Camera: CANON EOS1500D +FL10x
>
> --
> You received this message because you are subscribed to the Google Groups
> "eFloraofIndia" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to indiantreepix+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit https://groups.google.com/d/
> msgid/indiantreepix/CADkXgDSjiBC0oPtagQvB5h2Tsu4aP
> sg%3Dms_dW534FCVZ%3D36ZOg%40mail.gmail.com
> <https://groups.google.com/d/msgid/indiantreepix/CADkXgDSjiBC0oPtagQvB5h2Tsu4aPsg%3Dms_dW534FCVZ%3D36ZOg%40mail.gmail.com?utm_medium=email_source=footer>
> .
>
>
> --
> With regards,
> J.M.Garg
>


-- 

*Pankaj Kumar* MSc, PhD, FLS

IUCN-SSC Red List Authority for Orchids of Asia

IUCN-SSC: Chinese Species Specialist Group, Orchid Specialist Group of
Asia, Global Trade Subgroup, Western Ghats Plant Specialist Group. Hong
Kong Biodiversity Strategy and Action Plan

*Department of Plant and Soil Science, **Texas Tech University, Lubbock, TX
79409 USA*

*email*: sahanipan...@gmail.com; pankaj.ku...@ttu.edu | *Phone*: +1 806 317
7623 (USA); +852 9436 6251 (Hong Kong)

-- 
You received this message because you are subscribed to the Google Groups 
"eFloraofIndia" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to indiantreepix+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/indiantreepix/CABpo8%3D15226nRaM%3DSvdD%3D%3DbamccwswJjftFS%2B_fFzJJ8VywO0g%40mail.gmail.com.


Compatibility of LGPL-3+ and LGPL-2.1 in same library.

2024-06-06 Thread Arun Kumar Pariyar

Dear Legal Team,

Can LGPL-3+ and LGPL-2.1 licensed code be used together in the same library, or 
is re-licensing required?
Your guidance on their compatibility would be greatly appreciated.

Regards,
~ Arun Kumar Pariyar



OpenPGP_0x4B542AF704F74516.asc
Description: OpenPGP public key


OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: How about using dirty snapshots to locate dependent objects?

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 5:59 PM Ashutosh Sharma  wrote:
>
> Hello everyone,
>
> At present, we use MVCC snapshots to identify dependent objects. This implies 
> that if a new dependent object is inserted within a transaction that is still 
> ongoing, our search for dependent objects won't include this recently added 
> one. Consequently, if someone attempts to drop the referenced object, it will 
> be dropped, and when the ongoing transaction completes, we will end up having 
> an entry for a referenced object that has already been dropped. This 
> situation can lead to an inconsistent state. Below is an example illustrating 
> this scenario:

I don't think it's correct to allow the index to be dropped while a
transaction is creating it. Instead, the right solution should be for
the create index operation to protect the object it is using from
being dropped. Specifically, the create index operation should acquire
a shared lock on the Access Method (AM) to ensure it doesn't get
dropped concurrently while the transaction is still in progress.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Logical Replication of sequences

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 9:34 AM Amit Kapila  wrote:
>
> On Wed, Jun 5, 2024 at 3:17 PM Bharath Rupireddy
>  wrote:
> >
> > On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila  wrote:
> > >
> > > Even if we decode it periodically (say each time we decode the
> > > checkpoint record) then also we need to send the entire set of
> > > sequences at shutdown. This is because the sequences may have changed
> > > from the last time we sent them.
> >
> > Agree. How about decoding and sending only the sequences that are
> > changed from the last time when they were sent? I know it requires a
> > bit of tracking and more work, but all I'm looking for is to reduce
> > the amount of work that walsenders need to do during the shutdown.
> >
>
> I see your point but going towards tracking the changed sequences
> sounds like moving towards what we do for incremental backups unless
> we can invent some other smart way.

Yes, we would need an entirely new infrastructure to track the
sequence change since the last sync. We can only determine this from
WAL, and relying on it would somehow bring us back to the approach we
were trying to achieve with logical decoding of sequences patch.

> > Having said that, I like the idea of letting the user sync the
> > sequences via ALTER SUBSCRIPTION command and not weave the logic into
> > the shutdown checkpoint path. As Peter Eisentraut said here
> > https://www.postgresql.org/message-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc%40eisentraut.org,
> > this can be a good starting point to get going.
> >
>
> Agreed.

+1

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Compress ReorderBuffer spill files using LZ4

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 4:43 PM Amit Kapila  wrote:
>
> On Thu, Jun 6, 2024 at 4:28 PM Julien Tachoires  wrote:
> >
> > When the content of a large transaction (size exceeding
> > logical_decoding_work_mem) and its sub-transactions has to be
> > reordered during logical decoding, then, all the changes are written
> > on disk in temporary files located in pg_replslot/.
> > Decoding very large transactions by multiple replication slots can
> > lead to disk space saturation and high I/O utilization.
> >
>
> Why can't one use 'streaming' option to send changes to the client
> once it reaches the configured limit of 'logical_decoding_work_mem'?
>
> >
> > 2. Do we want a GUC to switch compression on/off?
> >
>
> It depends on the overhead of decoding. Did you try to measure the
> decoding overhead of decompression when reading compressed files?

I think it depends on the trade-off between the I/O savings from
reducing the data size and the performance cost of compressing and
decompressing the data. This balance is highly dependent on the
hardware. For example, if you have a very slow disk and a powerful
processor, compression could be advantageous. Conversely, if the disk
is very fast, the I/O savings might be minimal, and the compression
overhead could outweigh the benefits. Additionally, the effectiveness
of compression also depends on the compression ratio, which varies
with the type of data being compressed.
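
To make that trade-off concrete, here is a small benchmark sketch in Java, using java.util.zip.Deflater at its fastest setting as a stand-in for a fast codec like LZ4 (which is not in the JDK); the data and numbers are purely illustrative, not measurements of the actual patch:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class SpillCompressionSketch {
    // Fastest compression level, standing in for a fast codec such as LZ4.
    static byte[] deflate(byte[] data) {
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(data);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!d.finished()) {
            out.write(buf, 0, d.deflate(buf));
        }
        d.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Spilled changes are often highly repetitive (many similar tuples),
        // so they compress very well; random bytes show the worst case.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) {
            sb.append("INSERT INTO t VALUES (42, 'abc');\n");
        }
        byte[] repetitive =
                sb.toString().getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        byte[] random = new byte[repetitive.length];
        new java.util.Random(0).nextBytes(random);

        long t0 = System.nanoTime();
        byte[] cRep = deflate(repetitive);
        long micros = (System.nanoTime() - t0) / 1_000;
        byte[] cRnd = deflate(random);

        System.out.printf("repetitive: %.0fx  random: %.2fx  (compressed in %d us)%n",
                (double) repetitive.length / cRep.length,
                (double) random.length / cRnd.length, micros);
    }
}
```

The ratio and timing it prints are exactly the two quantities the trade-off depends on, so a similar micro-benchmark on representative change data would answer the GUC question empirically.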

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Remove dependency on VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 3:23 PM Anthonin Bonnefoy
 wrote:
>
> Hi,
>
> I sent a similar patch for this in 
> https://www.postgresql.org/message-id/flat/cao6_xqr__kttclkftqs0qscm-j7_xbrg3ge2rwhucxqjmjh...@mail.gmail.com

Okay, I see. In that case, we can just discard mine; thanks for notifying me.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: Conflict Detection and Resolution

2024-06-06 Thread Dilip Kumar
On Thu, Jun 6, 2024 at 3:43 PM Amit Kapila  wrote:
>
> On Wed, Jun 5, 2024 at 7:29 PM Dilip Kumar  wrote:
> >
> > On Tue, Jun 4, 2024 at 9:37 AM Amit Kapila  wrote:
> > >
> > > Can you share the use case of "earliest_timestamp_wins" resolution
> > > method? It seems after the initial update on the local node, it will
> > > never allow remote update to succeed which sounds a bit odd. Jan has
> > > shared this and similar concerns about this resolution method, so I
> > > have added him to the email as well.
> > >
> > I cannot think of a use case exactly in this context but it's very
> > common to have such a use case while designing a distributed
> > application with multiple clients.  For example, when we are doing git
> > push concurrently from multiple clients it is expected that the
> > earliest commit wins.
> >
>
> Okay, I think it mostly boils down to something like what Shveta
> mentioned where Inserts for a primary key can use
> "earliest_timestamp_wins" resolution method [1]. So, it seems useful
> to support this method as well.

Correct, but we still need to think about how to make it work
correctly in the presence of clock skew, as I mentioned in one of my
previous emails.
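
As a toy illustration of that clock-skew hazard (my own sketch, not the proposed implementation): when one node's clock lags, an update that actually happened later can carry the earlier timestamp and win under earliest-timestamp-wins:

```java
import java.time.Instant;

public class EarliestWinsSketch {
    record Change(String origin, Instant ts) {}

    // Toy resolver: the change with the earlier commit timestamp wins.
    static Change resolveEarliestWins(Change local, Change remote) {
        return local.ts().compareTo(remote.ts()) <= 0 ? local : remote;
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2024-06-06T12:00:00Z");

        // Real order: A writes first, B writes one second later -- but B's
        // clock lags by five seconds, so B's timestamp looks earlier.
        Change a = new Change("A", t0);
        Change b = new Change("B", t0.plusSeconds(1).minusSeconds(5));

        Change winner = resolveEarliestWins(a, b);
        System.out.println(winner.origin()); // prints "B": the later write wins
    }
}
```

Any timestamp-based resolution method has to bound or correct this skew before the chosen winner reliably matches the real causal order.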

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: RFR: 8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location [v4]

2024-06-06 Thread Abhishek Kumar
> "java.awt.IllegalComponentStateException: component must be showing on the 
> screen to determine its location" is thrown when getLocationOnScreen method 
> is invoked for JTableHeader while testing JFileChooser demo. It seems that in 
> the getLocationOnScreen method we are trying to access the parent location but 
> that is not visible and ICSE is thrown.
> 
> Fix is to handle the exception and can be verified using the steps mentioned 
> in [JDK-8332550](https://bugs.openjdk.org/browse/JDK-8332550).
> CI testing is green and link is mentioned in JBS.

Abhishek Kumar has updated the pull request incrementally with one additional 
commit since the last revision:

  Remove unused import and added null check

-

Changes:
  - all: https://git.openjdk.org/jdk/pull/19391/files
  - new: https://git.openjdk.org/jdk/pull/19391/files/edb02c06..e902496d

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=19391&range=03
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=19391&range=02-03

  Stats: 4 lines in 1 file changed: 3 ins; 1 del; 0 mod
  Patch: https://git.openjdk.org/jdk/pull/19391.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/19391/head:pull/19391

PR: https://git.openjdk.org/jdk/pull/19391


Remove dependency on VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel

2024-06-06 Thread Dilip Kumar
As part of commit 5cd72cc0c5017a9d4de8b5d465a75946da5abd1d, the
dependency on global counters such as VacuumPage(Hit/Miss/Dirty) was
removed from the vacuum. However, do_analyze_rel() was still using
these counters, necessitating the tracking of global counters
alongside BufferUsage counters.

The attached patch addresses the issue by eliminating the need to
track VacuumPage(Hit/Miss/Dirty) counters in do_analyze_rel(), making
the global counters obsolete. This simplifies the code and improves
consistency.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


v1-0001-Remove-duplicate-tracking-of-the-page-stats-durin.patch
Description: Binary data


Re: RFR: 8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location [v3]

2024-06-06 Thread Abhishek Kumar
> "java.awt.IllegalComponentStateException: component must be showing on the 
> screen to determine its location" is thrown when getLocationOnScreen method 
> is invoked for JTableHeader while testing JFileChooser demo. It seems that in 
> the getLocationOnScreen method we are trying to access the parent location but 
> that is not visible and ICSE is thrown.
> 
> Fix is to handle the exception and can be verified using the steps mentioned 
> in [JDK-8332550](https://bugs.openjdk.org/browse/JDK-8332550).
> CI testing is green and link is mentioned in JBS.

Abhishek Kumar has updated the pull request incrementally with one additional 
commit since the last revision:

  Condition check updated

-

Changes:
  - all: https://git.openjdk.org/jdk/pull/19391/files
  - new: https://git.openjdk.org/jdk/pull/19391/files/03671372..edb02c06

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=19391&range=02
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=19391&range=01-02

  Stats: 10 lines in 1 file changed: 0 ins; 8 del; 2 mod
  Patch: https://git.openjdk.org/jdk/pull/19391.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/19391/head:pull/19391

PR: https://git.openjdk.org/jdk/pull/19391


Re: RFR: 8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location [v2]

2024-06-06 Thread Abhishek Kumar
On Thu, 6 Jun 2024 07:28:59 GMT, Alexander Zuev  wrote:

>> As per the spec, the getLocationOnScreen() API can throw ICSE, so it should 
>> be right to catch this exception. Moreover, I tried checking with the 
>> JTableHeader's visibility and still got the ICSE during testing. So, I 
>> think catching the exception is the right way to handle it.
>
> That is very strange - the only reason getLocationOnScreen should throw ICSE 
> is when the component is not showing on the screen. So if you ask 
> parent.isShowing() and it returns true but you are still getting the 
> exception, you need to investigate what is going on; this is not normal behavior.

Actually, I was checking visibility with `parent.isVisible()`, which is always 
`true`, but I was still getting the exception from the getLocationOnScreen 
method. When checked with `parent.isShowing()` instead, it behaves correctly: 
whenever it returns `false`, there is no exception because the guarded code is 
not executed, and when it returns `true`, `getLocationOnScreen` works as 
expected.

So, I guess the `parent.isShowing()` check should be sufficient to handle the 
exception.
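
A minimal, hypothetical sketch of that guard (a standalone example, not the actual JTableHeader patch): `isShowing()` is `false` for a component without a visible ancestor realized on screen, so checking it avoids the ICSE, while a catch block still covers any race where the component is hidden between the check and the call:

```java
import java.awt.IllegalComponentStateException;
import java.awt.Point;
import javax.swing.JPanel;

public class LocationGuard {
    // Returns the on-screen location, or null if the component is not showing.
    static Point safeLocationOnScreen(java.awt.Component c) {
        if (!c.isShowing()) {
            return null; // not realized on screen; the call would throw ICSE
        }
        try {
            return c.getLocationOnScreen();
        } catch (IllegalComponentStateException icse) {
            return null; // race: component hidden between the check and the call
        }
    }

    public static void main(String[] args) {
        JPanel panel = new JPanel();                     // created but never shown
        System.out.println(panel.isVisible());           // true by default
        System.out.println(safeLocationOnScreen(panel)); // null, no exception
    }
}
```

The main method also demonstrates the point above: `isVisible()` defaults to `true` even for a component that was never added to a window, which is why it cannot be used as the guard.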

-

PR Review Comment: https://git.openjdk.org/jdk/pull/19391#discussion_r1629036998


[jira] [Commented] (HADOOP-18929) Build failure while trying to create apache 3.3.7 release locally.

2024-06-06 Thread Kanaka Kumar Avvaru (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852667#comment-17852667
 ] 

Kanaka Kumar Avvaru commented on HADOOP-18929:
--

There are quite a few third-party jar fixes merged in branch 3.3 over the last
year, after 3.3.6 in June 2023.

Is there a next 3.3.x release planned soon?

> Build failure while trying to create apache 3.3.7 release locally.
> --
>
> Key: HADOOP-18929
> URL: https://issues.apache.org/jira/browse/HADOOP-18929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: PJ Fanning
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] ---< org.apache.hadoop:hadoop-client-check-test-invariants >---
> [INFO] Building Apache Hadoop Client Packaging Invariants 
> for Test 3.3.9-SNAPSHOT [105/111]
> [INFO] [ pom ]-
> [INFO] 
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
> (enforce-banned-dependencies) @ hadoop-client-check-test-invariants ---
> [INFO] Adding ignorable dependency: 
> org.apache.hadoop:hadoop-annotations:null
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: 
> org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
> Duplicate classes found:
>   Found in:
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.9-SNAPSHOT:compile
>     org.apache.hadoop:hadoop-client-runtime:jar:3.3.9-SNAPSHOT:compile
>   Duplicate classes:
>     META-INF/versions/9/module-info.class
> {noformat}
> CC [~ste...@apache.org]  [~weichu] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Unsubscribe

2024-06-06 Thread Ram Kumar



Re: RFR: 8332550: [macos] Voice Over: java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location [v2]

2024-06-05 Thread Abhishek Kumar
On Wed, 5 Jun 2024 22:27:43 GMT, Alisen Chung  wrote:

>> Abhishek Kumar has updated the pull request incrementally with one 
>> additional commit since the last revision:
>> 
>>   copyright year update
>
> src/java.desktop/share/classes/javax/swing/table/JTableHeader.java line 1368:
> 
>> 1366: try {
>> 1367: parentLocation = parent.getLocationOnScreen();
>> 1368: } catch (IllegalComponentStateException icse) {
> 
> should we be preventing the exception by checking if the JTableHeader is 
> visible or not instead?

As per the spec, the getLocationOnScreen() API can throw ICSE, so it should be 
right to catch this exception rather than preventing it upfront by checking 
JTableHeader's visibility. Moreover, I tried checking the visibility and still 
got the ICSE during testing. So, I think catching the exception is the right 
way to handle it.

-

PR Review Comment: https://git.openjdk.org/jdk/pull/19391#discussion_r1628783298


Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Dear Paul,

Thank you

Regards
Devbrat

On Thu, Jun 6, 2024, 8:53 AM Paul Emsley  wrote:

>
> On 06/06/2024 04:00, Devbrat Kumar wrote:
>
>
> --
> Dear Paul,
>
> Thank you for your response. I wanted to compare a Coulomb potential map
> to an electron density map. Before aligning these maps, I need to bring
> them to similar parameters, which requires rescaling one map to match the
> other. After that, I can proceed with density subtraction.
>
> I hope this clarifies my query. Sorry for any confusion.
> Thank you again.
> Regards
> Devbrat
>
>
> On Wed, Jun 5, 2024, 7:56 PM Paul Emsley 
> wrote:
>
>>
>> On 05/06/2024 07:00, Devbrat Kumar wrote:
>>
>>
>> --
>>
>> Hello Everyone,
>>
>> Hello Devbrat,
>>
>> I have a query regarding the resampling of cryoEM density to match
>> crystal density to obtain a density difference map. Specifically, I am
>> trying to determine if it is feasible to resample a cryoEM map with an XRD
>> density map. However, each time I attempt this, the resampling output
>> provides an arbitrary ASU resample map, resulting in a significant loss of
>> major density.
>>
>> I have been using Coot and Chimera for this process but have not achieved
>> the desired outcome. Please guide me or suggest how to move forward with
>> this. My goal is to create an accurate final density difference map.
>>
>>
>> It is not clear to me exactly what the problem is.
>>
>> In Coot speak, "resampling" is (merely) changing the grid sampling so
>> that the map appears (typically) on a finer grid.
>>
>> I don't think that this is what you want.
>>
>> What you want is, I think, "Transform Map by LSQ Model-fit", and if that is
>> what you used, then I can't help until I am clearer about what you think
>> has gone wrong.
>>
>> Regards,
>>
>> Paul.
>>
>> Ah, OK, so you want to *rescale*, not resample. The tool (in our world) to
> do that is EMDA (available in CCPEM).
>
> Paul.
>
>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1

This message was issued to members of www.jiscmail.ac.uk/CCP4BB, a mailing list 
hosted by www.jiscmail.ac.uk, terms & conditions are available at 
https://www.jiscmail.ac.uk/policyandsecurity/


Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Dear Paul,

Thank you for your response. I wanted to compare a Coulomb potential map to
an electron density map. Before aligning these maps, I need to bring them
to similar parameters, which requires rescaling one map to match the other.
After that, I can proceed with density subtraction.

I hope this clarifies my query. Sorry for any confusion.
Thank you again.
Regards
Devbrat


On Wed, Jun 5, 2024, 7:56 PM Paul Emsley  wrote:

>
> On 05/06/2024 07:00, Devbrat Kumar wrote:
>
>
> --
>
> Hello Everyone,
>
> Hello Devbrat,
>
> I have a query regarding the resampling of cryoEM density to match crystal
> density to obtain a density difference map. Specifically, I am trying to
> determine if it is feasible to resample a cryoEM map with an XRD density
> map. However, each time I attempt this, the resampling output provides an
> arbitrary ASU resample map, resulting in a significant loss of major
> density.
>
> I have been using Coot and Chimera for this process but have not achieved
> the desired outcome. Please guide me or suggest how to move forward with
> this. My goal is to create an accurate final density difference map.
>
>
> It is not clear to me exactly what the problem is.
>
> In Coot speak, "resampling" is (merely) changing the grid sampling so that
> the map appears (typically) on a finer grid.
>
> I don't think that this is what you want.
>
> What you want is, I think, "Transform Map by LSQ Model-fit", and if that is
> what you used, then I can't help until I am clearer about what you think has
> gone wrong.
>
> Regards,
>
> Paul.
>
>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>





Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Dear Jon

I will keep this in mind while working on it.

Thank you.
Regards
Devbrat

On Wed, Jun 5, 2024, 7:11 PM Jon Cooper <
488a26d62010-dmarc-requ...@jiscmail.ac.uk> wrote:

> Another factor might be that ccpem uses a different axis order to gemmi
> and ccp4 ;-0
>
> Best wishes, Jon Cooper.
> jon.b.coo...@protonmail.com
>
> Sent from Proton Mail Android
>
>
>  Original Message 
> On 05/06/2024 12:51, Guillaume Gaullier wrote:
>
> With a cryoEM map, it's easier to do the rigid-body fitting of step 2 in
> real space (this is trivial to do interactively in ChimeraX) rather than by
> MR.
> --
> *From:* CCP4 bulletin board  on behalf of Eleanor
> Dodson <176a9d5ebad7-dmarc-requ...@jiscmail.ac.uk>
> *Sent:* Wednesday, June 5, 2024 1:39:56 PM
> *To:* CCP4BB@JISCMAIL.AC.UK
> *Subject:* Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map
> for Difference Map
>
> Hmm - rather tricky! I would do an MR search with the crystal model v the
> EM density,
> Steps would be:
> 1) convert EM density to "structure factors". - there are tools which do
> this ..
>   1a) You need to go back to ccp4i - program sfall to read map to generate
> SFs from map - then cad or sftools to add a fake SigF column
> 2) Solve MR search with the model v these "structure factors" using them
> as Fobs
> 3) Calculate the structure factors from the MR-positioned model and get
> the difference map..
>
>
> On Wed, 5 Jun 2024 at 11:46, Martin Malý  wrote:
>
>> Dear Devbrat,
>>
>> I am now playing with a similar problem but I don't have a simple
>> solution for you as I'm also quite stuck. You can check these software
>> tools which involve some scripting in Python (NumPy, SciPy) and C++:
>>
>> EMDA (for cryoEM maps, included in CCP-EM)
>> https://gitlab.com/ccpem/emda
>> https://doi.org/10.1016/j.jsb.2021.107826
>>
>> Gemmi (mainly for crystallography, included in CCP4)
>> https://gemmi.readthedocs.io/en/latest/grid.html
>>
>> Maybe there are also some relevant features in CCTBX (included in CCP4
>> and Phenix).
>>
>> Cheers,
>> Martin
>>
>> On 05/06/2024 07:00, Devbrat Kumar wrote:
>>
>> Hello Everyone,
>>
>> Greetings!
>>
>> I have a query regarding the resampling of cryoEM density to match
>> crystal density to obtain a density difference map. Specifically, I am
>> trying to determine if it is feasible to resample a cryoEM map with an XRD
>> density map. However, each time I attempt this, the resampling output
>> provides an arbitrary ASU resample map, resulting in a significant loss of
>> major density.
>>
>> I have been using Coot and Chimera for this process but have not achieved
>> the desired outcome. Please guide me or suggest how to move forward with
>> this. My goal is to create an accurate final density difference map.
>>
>> Thank you in advance for your help.
>> *Warm Regards-*
>> *Devbrat Kumar*
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>
>
> VARNING: Klicka inte på länkar och öppna inte bilagor om du inte känner
> igen avsändaren och vet att innehållet är säkert.
> CAUTION: Do not click on links or open attachments unless you recognise
> the sender and know the content is safe.
>
>
>
>
>
>
>
>
>
>
> När du har kontakt med oss på Uppsala universitet med e-post så innebär
> det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör
> det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/
>
> E-mailing Uppsala University means that we will process your personal
> data. For more information on how this is performed, please read here:
> http://www.uu.se/en/about-uu/data-protection-policy
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>





Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Dear Guillaume

I will keep that in mind while trying.

Thank you
Regards
Devbrat

On Wed, Jun 5, 2024, 5:21 PM Guillaume Gaullier <
guillaume.gaull...@kemi.uu.se> wrote:

> With a cryoEM map, it's easier to do the rigid-body fitting of step 2 in
> real space (this is trivial to do interactively in ChimeraX) rather than by
> MR.
> --
> *From:* CCP4 bulletin board  on behalf of Eleanor
> Dodson <176a9d5ebad7-dmarc-requ...@jiscmail.ac.uk>
> *Sent:* Wednesday, June 5, 2024 1:39:56 PM
> *To:* CCP4BB@JISCMAIL.AC.UK
> *Subject:* Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map
> for Difference Map
>
> Hmm - rather tricky! I would do an MR search with the crystal model v the
> EM density,
> Steps would be:
> 1) convert EM density to "structure factors". - there are tools which do
> this ..
>   1a) You need to go back to ccp4i - program sfall to read map to generate
> SFs from map - then cad or sftools to add a fake SigF column
> 2) Solve MR search with the model v these "structure factors" using them
> as Fobs
> 3) Calculate the structure factors from the MR-positioned model and get
> the difference map..
>
>
> On Wed, 5 Jun 2024 at 11:46, Martin Malý  wrote:
>
>> Dear Devbrat,
>>
>> I am now playing with a similar problem but I don't have a simple
>> solution for you as I'm also quite stuck. You can check these software
>> tools which involve some scripting in Python (NumPy, SciPy) and C++:
>>
>> EMDA (for cryoEM maps, included in CCP-EM)
>> https://gitlab.com/ccpem/emda
>> https://doi.org/10.1016/j.jsb.2021.107826
>>
>> Gemmi (mainly for crystallography, included in CCP4)
>> https://gemmi.readthedocs.io/en/latest/grid.html
>>
>> Maybe there are also some relevant features in CCTBX (included in CCP4
>> and Phenix).
>>
>> Cheers,
>> Martin
>>
>> On 05/06/2024 07:00, Devbrat Kumar wrote:
>>
>> Hello Everyone,
>>
>> Greetings!
>>
>> I have a query regarding the resampling of cryoEM density to match
>> crystal density to obtain a density difference map. Specifically, I am
>> trying to determine if it is feasible to resample a cryoEM map with an XRD
>> density map. However, each time I attempt this, the resampling output
>> provides an arbitrary ASU resample map, resulting in a significant loss of
>> major density.
>>
>> I have been using Coot and Chimera for this process but have not achieved
>> the desired outcome. Please guide me or suggest how to move forward with
>> this. My goal is to create an accurate final density difference map.
>>
>> Thank you in advance for your help.
>> *Warm Regards-*
>> *Devbrat Kumar*
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>
>
> VARNING: Klicka inte på länkar och öppna inte bilagor om du inte känner
> igen avsändaren och vet att innehållet är säkert.
> CAUTION: Do not click on links or open attachments unless you recognise
> the sender and know the content is safe.
>
>
>
>
>
>
>
>
>
>
> När du har kontakt med oss på Uppsala universitet med e-post så innebär
> det att vi behandlar dina personuppgifter. För att läsa mer om hur vi gör
> det kan du läsa här: http://www.uu.se/om-uu/dataskydd-personuppgifter/
>
> E-mailing Uppsala University means that we will process your personal
> data. For more information on how this is performed, please read here:
> http://www.uu.se/en/about-uu/data-protection-policy
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>





Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Hello Eleanor,

Thank you for your consistent response to my inquiry. I will follow your
suggestions and will let you know the updates.


Regards
Devbrat

On Wed, Jun 5, 2024, 5:10 PM Eleanor Dodson <
176a9d5ebad7-dmarc-requ...@jiscmail.ac.uk> wrote:

> Hmm - rather tricky! I would do an MR search with the crystal model v the
> EM density,
> Steps would be:
> 1) convert EM density to "structure factors". - there are tools which do
> this ..
>   1a) You need to go back to ccp4i - program sfall to read map to generate
> SFs from map - then cad or sftools to add a fake SigF column
> 2) Solve MR search with the model v these "structure factors" using them
> as Fobs
> 3) Calculate the structure factors from the MR-positioned model and get
> the difference map..
>
>
> On Wed, 5 Jun 2024 at 11:46, Martin Malý  wrote:
>
>> Dear Devbrat,
>>
>> I am now playing with a similar problem but I don't have a simple
>> solution for you as I'm also quite stuck. You can check these software
>> tools which involve some scripting in Python (NumPy, SciPy) and C++:
>>
>> EMDA (for cryoEM maps, included in CCP-EM)
>> https://gitlab.com/ccpem/emda
>> https://doi.org/10.1016/j.jsb.2021.107826
>>
>> Gemmi (mainly for crystallography, included in CCP4)
>> https://gemmi.readthedocs.io/en/latest/grid.html
>>
>> Maybe there are also some relevant features in CCTBX (included in CCP4
>> and Phenix).
>>
>> Cheers,
>> Martin
>>
>> On 05/06/2024 07:00, Devbrat Kumar wrote:
>>
>> Hello Everyone,
>>
>> Greetings!
>>
>> I have a query regarding the resampling of cryoEM density to match
>> crystal density to obtain a density difference map. Specifically, I am
>> trying to determine if it is feasible to resample a cryoEM map with an XRD
>> density map. However, each time I attempt this, the resampling output
>> provides an arbitrary ASU resample map, resulting in a significant loss of
>> major density.
>>
>> I have been using Coot and Chimera for this process but have not achieved
>> the desired outcome. Please guide me or suggest how to move forward with
>> this. My goal is to create an accurate final density difference map.
>>
>> Thank you in advance for your help.
>> *Warm Regards-*
>> *Devbrat Kumar*
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>>
>>
>> --
>>
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>





Re: [ccp4bb] Resampling CryoEM Density map to XRD Density map for Difference Map

2024-06-05 Thread Devbrat Kumar
Hello Martin,

Thank you for your email and I will look into it.

Regards
Devbrat


On Wed, Jun 5, 2024, 4:16 PM Martin Malý  wrote:

> Dear Devbrat,
>
> I am now playing with a similar problem but I don't have a simple solution
> for you as I'm also quite stuck. You can check these software tools which
> involve some scripting in Python (NumPy, SciPy) and C++:
>
> EMDA (for cryoEM maps, included in CCP-EM)
> https://gitlab.com/ccpem/emda
> https://doi.org/10.1016/j.jsb.2021.107826
>
> Gemmi (mainly for crystallography, included in CCP4)
> https://gemmi.readthedocs.io/en/latest/grid.html
>
> Maybe there are also some relevant features in CCTBX (included in CCP4 and
> Phenix).
>
> Cheers,
> Martin
>
> On 05/06/2024 07:00, Devbrat Kumar wrote:
>
> Hello Everyone,
>
> Greetings!
>
> I have a query regarding the resampling of cryoEM density to match crystal
> density to obtain a density difference map. Specifically, I am trying to
> determine if it is feasible to resample a cryoEM map with an XRD density
> map. However, each time I attempt this, the resampling output provides an
> arbitrary ASU resample map, resulting in a significant loss of major
> density.
>
> I have been using Coot and Chimera for this process but have not achieved
> the desired outcome. Please guide me or suggest how to move forward with
> this. My goal is to create an accurate final density difference map.
>
> Thank you in advance for your help.
> *Warm Regards-*
> *Devbrat Kumar*
>
>
> --
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB&A=1
>
>
>





[jira] [Assigned] (HDDS-10548) [snapshot-LR] OM is getting shutdown due to Snapshot chain corruption in an LR setup

2024-06-05 Thread Hemant Kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemant Kumar reassigned HDDS-10548:
---

Assignee: Hemant Kumar

> [snapshot-LR] OM is getting shutdown due to Snapshot chain corruption in an 
> LR setup
> 
>
> Key: HDDS-10548
> URL: https://issues.apache.org/jira/browse/HDDS-10548
> Project: Apache Ozone
>  Issue Type: Bug
>  Components: Ozone Manager, Snapshot
>Reporter: Jyotirmoy Sinha
>    Assignee: Hemant Kumar
>Priority: Major
>  Labels: ozone-snapshot
>
> OM is getting shutdown due to Snapshot chain corruption in an LR setup
> Scenario :
>  * Generate data over parallel threads over various volume/buckets
>  * Perform parallel snapshot create/delete/list operations over above buckets
>  * Perform parallel snapdiff operations over each bucket
>  * Perform parallel read operations of snapshot contents
>  * Introduce OM and cluster restarts in between along with DN decommissioning 
> and balancer restarts.
> OM Leader error stacktrace -
> {code:java}
> 2024-03-14 04:07:13,525 INFO 
> [JvmPauseMonitor0]-org.apache.ratis.util.JvmPauseMonitor: 
> JvmPauseMonitor-om232: Started
> 2024-03-14 04:07:13,534 INFO [main]-org.apache.hadoop.ozone.om.OzoneManager: 
> Starting secret key client.
> 2024-03-14 04:07:13,615 ERROR [OM StateMachine ApplyTransaction Thread - 
> 0]-org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine: Terminating 
> with exit status 1: OM Ratis Server has received unrecoverable error, to 
> avoid further DB corruption, terminating OM. Error Response received 
> is:cmdType: CreateSnapshot
> traceID: ""
> success: false
> message: "java.io.IOException: Snapshot chain is corrupted.\n\tat 
> org.apache.hadoop.ozone.om.SnapshotChainManager.validateSnapshotChain(SnapshotChainManager.java:550)\n\tat
>  
> org.apache.hadoop.ozone.om.SnapshotChainManager.getLatestPathSnapshotId(SnapshotChainManager.java:378)\n\tat
>  
> org.apache.hadoop.ozone.om.request.snapshot.OMSnapshotCreateRequest.addSnapshotInfoToSnapshotChainAndCache(OMSnapshotCreateRequest.java:232)\n\tat
>  
> org.apache.hadoop.ozone.om.request.snapshot.OMSnapshotCreateRequest.validateAndUpdateCache(OMSnapshotCreateRequest.java:162)\n\tat
>  
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:378)\n\tat
>  
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:568)\n\tat
>  
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$1(OzoneManagerStateMachine.java:363)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\n"
> status: INTERNAL_ERRORINTERNAL_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: java.io.IOException: 
> Snapshot chain is corrupted.
>         at 
> org.apache.hadoop.ozone.om.SnapshotChainManager.validateSnapshotChain(SnapshotChainManager.java:550)
>         at 
> org.apache.hadoop.ozone.om.SnapshotChainManager.getLatestPathSnapshotId(SnapshotChainManager.java:378)
>         at 
> org.apache.hadoop.ozone.om.request.snapshot.OMSnapshotCreateRequest.addSnapshotInfoToSnapshotChainAndCache(OMSnapshotCreateRequest.java:232)
>         at 
> org.apache.hadoop.ozone.om.request.snapshot.OMSnapshotCreateRequest.validateAndUpdateCache(OMSnapshotCreateRequest.java:162)
>         at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:378)
>         at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:568)
>         at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$1(OzoneManagerStateMachine.java:363)
>         at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)        at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.terminate(OzoneManagerStateMachine.java:404)
>         at 
> org.apache.hadoop.oz

Re: [PATCH v4 06/13] drm/msm/dpu: split dpu_plane_atomic_check()

2024-06-05 Thread Abhinav Kumar




On 6/5/2024 4:32 PM, Dmitry Baryshkov wrote:

On Thu, 6 Jun 2024 at 02:19, Abhinav Kumar  wrote:




On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Split dpu_plane_atomic_check() function into two pieces:

dpu_plane_atomic_check_nopipe() performing generic checks on the pstate,
without touching the associated pipe,

and

dpu_plane_atomic_check_pipes(), which takes into account used pipes.

Signed-off-by: Dmitry Baryshkov 
---
   drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 184 ++
   1 file changed, 117 insertions(+), 67 deletions(-)



One thing that seemed odd to me: even dpu_plane_atomic_check_nopipe()
does use pipe_cfg, even though it is named "nopipe".

Were you perhaps targeting a split of SW planes vs SSPP atomic_check?

I tried applying this patch on top of msm-next to review the split more
closely, but it does not apply. So I will review this patch more
thoroughly after it is re-spun, but will proceed with the remaining patches.


diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 6360052523b5..187ac2767a2b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -788,50 +788,22 @@ static int dpu_plane_atomic_check_pipe(struct dpu_plane *pdpu,
 #define MAX_UPSCALE_RATIO	20
 #define MAX_DOWNSCALE_RATIO	4
 
-static int dpu_plane_atomic_check(struct drm_plane *plane,
-				  struct drm_atomic_state *state)
+static int dpu_plane_atomic_check_nopipe(struct drm_plane *plane,
+					 struct drm_plane_state *new_plane_state,
+					 const struct drm_crtc_state *crtc_state)
 {
-	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
-										 plane);
 	int ret = 0, min_scale, max_scale;
 	struct dpu_plane *pdpu = to_dpu_plane(plane);
 	struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base);
 	u64 max_mdp_clk_rate = kms->perf.max_core_clk_rate;
 	struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state);
-	struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane);
-	struct dpu_sw_pipe *pipe = &pstate->pipe;
-	struct dpu_sw_pipe *r_pipe = &pstate->r_pipe;
-	const struct drm_crtc_state *crtc_state = NULL;
-	const struct dpu_format *fmt;
 	struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg;
 	struct dpu_sw_pipe_cfg *r_pipe_cfg = &pstate->r_pipe_cfg;
 	struct drm_rect fb_rect = { 0 };
 	uint32_t max_linewidth;
-	unsigned int rotation;
-	uint32_t supported_rotations;
-	const struct dpu_sspp_cfg *pipe_hw_caps;
-	const struct dpu_sspp_sub_blks *sblk;
 
-	if (new_plane_state->crtc)
-		crtc_state = drm_atomic_get_new_crtc_state(state,
-							   new_plane_state->crtc);
-
-	pipe->sspp = dpu_rm_get_sspp(&dpu_kms->rm, pdpu->pipe);
-	r_pipe->sspp = NULL;
-
-	if (!pipe->sspp)
-		return -EINVAL;
-
-	pipe_hw_caps = pipe->sspp->cap;
-	sblk = pipe->sspp->cap->sblk;
-
-	if (sblk->scaler_blk.len) {
-		min_scale = FRAC_16_16(1, MAX_UPSCALE_RATIO);
-		max_scale = MAX_DOWNSCALE_RATIO << 16;
-	} else {
-		min_scale = 1 << 16;
-		max_scale = 1 << 16;
-	}
+	min_scale = FRAC_16_16(1, MAX_UPSCALE_RATIO);
+	max_scale = MAX_DOWNSCALE_RATIO << 16;
 
 	ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state,
 						  min_scale,
@@ -844,11 +816,6 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
 	if (!new_plane_state->visible)
 		return 0;
 
-	pipe->multirect_index = DPU_SSPP_RECT_SOLO;
-	pipe->multirect_mode = DPU_SSPP_MULTIRECT_NONE;
-	r_pipe->multirect_index = DPU_SSPP_RECT_SOLO;
-	r_pipe->multirect_mode = DPU_SSPP_MULTIRECT_NONE;
-
 	pstate->stage = DPU_STAGE_0 + pstate->base.normalized_zpos;
 	if (pstate->stage >= pdpu->catalog->caps->max_mixer_blendstages) {
 		DPU_ERROR("> %d plane stages assigned\n",
@@ -872,8 +839,6 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
 		return -E2BIG;
 	}
 
-	fmt = to_dpu_format(msm_framebuffer_format(new_plane_state->fb));
-
 	max_linewidth = pdpu->catalog->caps->max_linewidth;
 
 	drm_rect_rotate(&pipe_cfg->src_rect,
@@ -882,6 +847,83 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
 
 	if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) ||
 	    _dpu_plane_calc_clk(&crtc_state->adjusted_mode, pipe_cfg) > max_mdp_clk_rate) {
+		if (drm_rect_width(&pipe_cfg->src_rect) > 2 * max_linewidth) {
+			DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u\n",
+					DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth);
+			return -E2BIG;

Re: [PATCH v4 07/13] drm/msm/dpu: move rot90 checking to dpu_plane_atomic_check_pipe()

2024-06-05 Thread Abhinav Kumar




On 3/13/2024 5:02 PM, Dmitry Baryshkov wrote:

Move a call to dpu_plane_check_inline_rotation() to the
dpu_plane_atomic_check_pipe() function, so that the rot90 constraints
are checked for both pipes. Also move rotation field from struct
dpu_plane_state to struct dpu_sw_pipe_cfg.

Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h |  2 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   | 55 +++--
  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h   |  2 -
  3 files changed, 31 insertions(+), 28 deletions(-)



At first glance, no major comments. I will give my R-b on this 
once it is re-spun on msm-next.


Re: [PATCH] drm/msm/dp: fix runtime_pm handling in dp_wait_hpd_asserted

2024-06-05 Thread Abhinav Kumar


On Tue, 27 Feb 2024 00:34:45 +0200, Dmitry Baryshkov wrote:
> The function dp_wait_hpd_asserted() uses pm_runtime_get_sync() and
> doesn't care about the return value. Potentially this can lead to
> unclocked access if for some reason resuming of the DP controller fails.
> 
> Change the function to use pm_runtime_resume_and_get() and return an
> error if resume fails.
> 
> [...]

Applied, thanks!

[1/1] drm/msm/dp: fix runtime_pm handling in dp_wait_hpd_asserted
  https://gitlab.freedesktop.org/abhinavk/msm-next/-/commit/3e40e281afa0

Best regards,
-- 
Abhinav Kumar 



Re: [PATCH v2] drm/msm/dpu: fix encoder irq wait skip

2024-06-05 Thread Abhinav Kumar


On Thu, 09 May 2024 21:40:41 +0200, Barnabás Czémán wrote:
> The irq_idx is unsigned so it cannot be lower than zero, better
> to change the condition to check if it is equal with zero.
> It could not cause any issue because a valid irq index starts from one.
> 
> 

Applied, thanks!

[1/1] drm/msm/dpu: fix encoder irq wait skip
  https://gitlab.freedesktop.org/abhinavk/msm-next/-/commit/8dfe802d4a7c

Best regards,
-- 
Abhinav Kumar 



Re: [PATCH v2] Revert "drm/msm/dpu: drop dpu_encoder_phys_ops.atomic_mode_set"

2024-06-05 Thread Abhinav Kumar


On Wed, 22 May 2024 13:24:28 +0300, Dmitry Baryshkov wrote:
> In the DPU driver blank IRQ handling is called from a vblank worker and
> can happen outside of the irq_enable / irq_disable pair. Using the
> worker makes that completely asynchronous with the rest of the code.
> Revert commit d13f638c9b88 ("drm/msm/dpu: drop
> dpu_encoder_phys_ops.atomic_mode_set") to fix vblank IRQ assignment for
> CMD DSI panels.
> 
> [...]

Applied, thanks!

[1/1] Revert "drm/msm/dpu: drop dpu_encoder_phys_ops.atomic_mode_set"
  https://gitlab.freedesktop.org/abhinavk/msm-next/-/commit/6e301821c28d

Best regards,
-- 
Abhinav Kumar 



Re: [PATCH] drm/msm/dpu: drop duplicate drm formats from wb2_formats arrays

2024-06-05 Thread Abhinav Kumar


On Fri, 24 May 2024 23:01:12 +0800, Junhao Xie wrote:
> There are duplicate items in wb2_formats_rgb and wb2_formats_rgb_yuv,
> which cause weston assertions failed.
> 
> weston: libweston/drm-formats.c:131: weston_drm_format_array_add_format:
> Assertion `!weston_drm_format_array_find_format(formats, format)' failed.
> 
> 
> [...]

Applied, thanks!

[1/1] drm/msm/dpu: drop duplicate drm formats from wb2_formats arrays
  https://gitlab.freedesktop.org/abhinavk/msm-next/-/commit/3788ddf084b7

Best regards,
-- 
Abhinav Kumar 



[PATCH v4] drm/msm/a6xx: use __unused__ to fix compiler warnings for gen7_* includes

2024-06-05 Thread Abhinav Kumar
The GCC diagnostic pragma method throws the warnings below with some GCC versions

drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:16:9: warning: unknown
option after '#pragma GCC diagnostic' kind [-Wpragmas]
  #pragma GCC diagnostic ignored "-Wunused-const-variable"
  ^
In file included from drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:18:0:
drivers/gpu/drm/msm/adreno/adreno_gen7_0_0_snapshot.h:924:19: warning:
'gen7_0_0_external_core_regs' defined but not used [-Wunused-variable]
  static const u32 *gen7_0_0_external_core_regs[] = {
^
In file included from drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:19:0:
drivers/gpu/drm/msm/adreno/adreno_gen7_2_0_snapshot.h:748:19: warning:
'gen7_2_0_external_core_regs' defined but not used [-Wunused-variable]
  static const u32 *gen7_2_0_external_core_regs[] = {
^
In file included from drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:20:0:
drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h:1188:43: warning:
'gen7_9_0_sptp_clusters' defined but not used [-Wunused-variable]
  static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = {
^
drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h:1438:19: warning:
'gen7_9_0_external_core_regs' defined but not used [-Wunused-variable]
  static const u32 *gen7_9_0_external_core_regs[] = {

Remove GCC version dependency by using __unused__ for the unused gen7_* 
includes.

Changes in v2:
- Fix the warnings in the commit text
- Use __attribute((__unused__)) instead of local assignment

changes in v3:
- drop the Link from the auto add

changes in v4:
- replace __attribute((__unused__)) with __always_unused

Fixes: 64d6255650d4 ("drm/msm: More fully implement devcoredump for a7xx")
Suggested-by: Rob Clark 
Signed-off-by: Abhinav Kumar 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 0a7717a4fc2f..59a4eb942b9b 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -8,19 +8,15 @@
 #include "a6xx_gpu_state.h"
 #include "a6xx_gmu.xml.h"
 
-/* Ignore diagnostics about register tables that we aren't using yet. We don't
- * want to modify these headers too much from their original source.
- */
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wunused-variable"
-#pragma GCC diagnostic ignored "-Wunused-const-variable"
+static const unsigned int *gen7_0_0_external_core_regs[] __always_unused;
+static const unsigned int *gen7_2_0_external_core_regs[] __always_unused;
+static const unsigned int *gen7_9_0_external_core_regs[] __always_unused;
+static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] 
__always_unused;
 
 #include "adreno_gen7_0_0_snapshot.h"
 #include "adreno_gen7_2_0_snapshot.h"
 #include "adreno_gen7_9_0_snapshot.h"
 
-#pragma GCC diagnostic pop
-
 struct a6xx_gpu_state_obj {
const void *handle;
u32 *data;
-- 
2.44.0




Re: [PATCH v3] drm/msm/a6xx: use __unused__ to fix compiler warnings for gen7_* includes

2024-06-05 Thread Abhinav Kumar

Hi Nathan

On 6/5/2024 11:05 AM, Nathan Chancellor wrote:

Hi Abhinav,

Just a drive by style comment.

On Tue, Jun 04, 2024 at 05:38:28PM -0700, Abhinav Kumar wrote:

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 0a7717a4fc2f..a958e2b3c025 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -8,19 +8,15 @@
  #include "a6xx_gpu_state.h"
  #include "a6xx_gmu.xml.h"
  
-/* Ignore diagnostics about register tables that we aren't using yet. We don't
- * want to modify these headers too much from their original source.
- */
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wunused-variable"
-#pragma GCC diagnostic ignored "-Wunused-const-variable"
+static const unsigned int *gen7_0_0_external_core_regs[] __attribute((__unused__));
+static const unsigned int *gen7_2_0_external_core_regs[] __attribute((__unused__));
+static const unsigned int *gen7_9_0_external_core_regs[] __attribute((__unused__));
+static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __attribute((__unused__));


Please do not open code attributes. This is available as either
'__always_unused' or '__maybe_unused', depending on the context.
checkpatch would have warned about this if it was '__attribute__'
instead of '__attribute'.



Thanks for the note. Let me update the patch to use __always_unused.


Cheers,
Nathan

