Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-02-26 Thread Aurabindo Pillai



On 2021-01-22 3:55 p.m., Chen, Xiaogang wrote:

On 1/19/2021 4:29 PM, Grodzovsky, Andrey wrote:


On 1/15/21 2:21 AM, Chen, Xiaogang wrote:

On 1/14/2021 1:24 AM, Grodzovsky, Andrey wrote:


On 1/14/21 12:11 AM, Chen, Xiaogang wrote:

On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:

On 1/4/21 1:01 AM, Xiaogang.Chen wrote:

From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.
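As a rough illustration of the scheme described above (simplified,
hypothetical names; not the actual patch code), each low-context interrupt
gets its own handler data with an embedded work_struct, so a second
interrupt of the same type no longer depends on the previous work_struct
being free:

    /* Hypothetical sketch of per-interrupt work allocation. */
    #include <linux/kernel.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct dm_irq_work_item {
            struct work_struct work;
            struct list_head reuse_node;    /* parked on a reuse list after use */
            void (*handler)(void *arg);
            void *arg;
    };

    static void dm_irq_work_fn(struct work_struct *work)
    {
            struct dm_irq_work_item *item =
                    container_of(work, struct dm_irq_work_item, work);

            item->handler(item->arg);
            /* The patch keeps the item on a linked list for reuse instead
             * of freeing it here.
             */
    }

    static bool dm_irq_schedule(void (*handler)(void *), void *arg)
    {
            struct dm_irq_work_item *item = kzalloc(sizeof(*item), GFP_ATOMIC);

            if (!item)
                    return false;

            item->handler = handler;
            item->arg = arg;
            INIT_WORK(&item->work, dm_irq_work_fn);
            return queue_work(system_highpri_wq, &item->work);
    }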


I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?


handle_hpd_rx_irq is put on a work queue. Its execution is serialized by
the work queue, so there is no reentrancy.


You are using system_highpri_wq, which has the property that it has a
pool of multiple worker threads spread across all the active CPUs; see
all the work queue definitions here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/include/linux/workqueue.h#L358

I believe what you are saying about there being no chance of reentrancy
would be correct if it were the same work item being dequeued for
execution while the previous instance is still running; see the
explanation here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/kernel/workqueue.c#L1435

Non-reentrancy is guaranteed only for the same work item. If you want
non-reentrancy (full serialization) for different work items, you should
create your own single-threaded work queue using create_singlethread_workqueue.
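For reference, a minimal sketch of that approach (the queue name and the
wrapper functions here are hypothetical, not existing driver code): an
ordered single-threaded queue runs its work items strictly one at a time,
so two different hpd_rx work items can never execute concurrently:

    /* Hypothetical sketch: one ordered queue for all low-context IRQ work. */
    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *dm_low_irq_wq;

    static int dm_low_irq_wq_init(void)
    {
            /* Single worker thread: items run one at a time, in FIFO order. */
            dm_low_irq_wq = create_singlethread_workqueue("amdgpu_dm_low_irq");
            return dm_low_irq_wq ? 0 : -ENOMEM;
    }

    static void dm_low_irq_queue(struct work_struct *work)
    {
            /* Different work items queued here are fully serialized. */
            queue_work(dm_low_irq_wq, work);
    }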



Thank you. I think the easiest way is to take aconnector->hpd_lock in
handle_hpd_rx_irq for the dc_link->type == dc_connection_mst_branch
case, right? I will do that in the next version if you think it is OK.
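A rough sketch of what that could look like (a fragment only; the
placement inside handle_hpd_rx_irq and the surrounding logic are
assumptions, not the final patch):

    /* Hypothetical: serialize handle_hpd_rx_irq, including the MST branch. */
    static void handle_hpd_rx_irq(void *param)
    {
            struct amdgpu_dm_connector *aconnector = param;

            mutex_lock(&aconnector->hpd_lock);

            /* ... existing hpd_rx handling, including the
             * dc_link->type == dc_connection_mst_branch path ...
             */

            mutex_unlock(&aconnector->hpd_lock);
    }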



I am not sure what the consequences of using the hpd lock there would be
with regard to other locks acquired in the DRM MST code during MST-related
HPD transactions, since I haven't dealt with this for a very long time.
Maybe Harry or Nick can advise on this?


Andrey


Hi Harry, Nick: could you or someone else review this patch?

Thanks

Xiaogang




amdgpu_dm_irq_schedule_work does the queuing of the work (it puts
handle_hpd_rx_irq onto the work queue). The work queue first calls
dm_irq_work_func, which then calls handle_hpd_rx_irq.

Signed-off-by: Xiaogang Chen 
---
   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
   2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
   };
     /**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-    struct list_head head;
-    /* In case this interrupt needs post-processing, 'work' will
be queued*/
-    struct work_struct work;
-};
-
-/**
    * struct dm_compressor_info - Buffer info used by frame buffer
compression
    * @cpu_addr: MMIO cpu addr
    * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
    * Note that handlers are called in the same order as they were
    * registered (FIFO).
    */
-    struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+    struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
     /**
    * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
   struct amdgpu_display_manager *dm;
   /* DAL 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-22 Thread Chen, Xiaogang
On 1/19/2021 4:29 PM, Grodzovsky, Andrey wrote:

On 1/15/21 2:21 AM, Chen, Xiaogang wrote:
On 1/14/2021 1:24 AM, Grodzovsky, Andrey wrote:

On 1/14/21 12:11 AM, Chen, Xiaogang wrote:
On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:
On 1/4/21 1:01 AM, Xiaogang.Chen wrote:
From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.

I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?

handle_hpd_rx_irq is put on a work queue. Its execution is serialized by
the work queue, so there is no reentrancy.

You are using system_highpri_wq, which has the property that it has a
pool of multiple worker threads spread across all the active CPUs; see
all the work queue definitions here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/include/linux/workqueue.h#L358

I believe what you are saying about there being no chance of reentrancy
would be correct if it were the same work item being dequeued for
execution while the previous instance is still running; see the
explanation here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/kernel/workqueue.c#L1435

Non-reentrancy is guaranteed only for the same work item. If you want
non-reentrancy (full serialization) for different work items, you should
create your own single-threaded work queue using create_singlethread_workqueue.


Thank you. I think the easiest way is to take aconnector->hpd_lock in
handle_hpd_rx_irq for the dc_link->type == dc_connection_mst_branch
case, right? I will do that in the next version if you think it is OK.


I am not sure what the consequences of using the hpd lock there would be
with regard to other locks acquired in the DRM MST code during MST-related
HPD transactions, since I haven't dealt with this for a very long time.
Maybe Harry or Nick can advise on this?

Andrey


Hi Harry, Nick: could you or someone else review this patch?

Thanks

Xiaogang

amdgpu_dm_irq_schedule_work does the queuing of the work (it puts
handle_hpd_rx_irq onto the work queue). The work queue first calls
dm_irq_work_func, which then calls handle_hpd_rx_irq.
Signed-off-by: Xiaogang Chen 

---
   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
   2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
   };
 /**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-struct list_head head;
-/* In case this interrupt needs post-processing, 'work' will
be queued*/
-struct work_struct work;
-};
-
-/**
* struct dm_compressor_info - Buffer info used by frame buffer
compression
* @cpu_addr: MMIO cpu addr
* @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
* Note that handlers are called in the same order as they were
* registered (FIFO).
*/
-struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
 /**
* @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
   struct amdgpu_display_manager *dm;
   /* DAL irq source which 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-19 Thread Andrey Grodzovsky


On 1/15/21 2:21 AM, Chen, Xiaogang wrote:

On 1/14/2021 1:24 AM, Grodzovsky, Andrey wrote:


On 1/14/21 12:11 AM, Chen, Xiaogang wrote:

On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:

On 1/4/21 1:01 AM, Xiaogang.Chen wrote:

From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.


I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?


handle_hpd_rx_irq is put on a work queue. Its execution is serialized by
the work queue, so there is no reentrancy.


You are using system_highpri_wq, which has the property that it has a
pool of multiple worker threads spread across all the active CPUs; see
all the work queue definitions here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/include/linux/workqueue.h#L358

I believe what you are saying about there being no chance of reentrancy
would be correct if it were the same work item being dequeued for
execution while the previous instance is still running; see the
explanation here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/kernel/workqueue.c#L1435

Non-reentrancy is guaranteed only for the same work item. If you want
non-reentrancy (full serialization) for different work items, you should
create your own single-threaded work queue using create_singlethread_workqueue.



Thank you. I think the easiest way is to take aconnector->hpd_lock in
handle_hpd_rx_irq for the dc_link->type == dc_connection_mst_branch
case, right? I will do that in the next version if you think it is OK.



I am not sure what the consequences of using the hpd lock there would be
with regard to other locks acquired in the DRM MST code during MST-related
HPD transactions, since I haven't dealt with this for a very long time.
Maybe Harry or Nick can advise on this?


Andrey




amdgpu_dm_irq_schedule_work does the queuing of the work (it puts
handle_hpd_rx_irq onto the work queue). The work queue first calls
dm_irq_work_func, which then calls handle_hpd_rx_irq.

Signed-off-by: Xiaogang Chen 
---
   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
   2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
   };
     /**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-    struct list_head head;
-    /* In case this interrupt needs post-processing, 'work' will
be queued*/
-    struct work_struct work;
-};
-
-/**
    * struct dm_compressor_info - Buffer info used by frame buffer
compression
    * @cpu_addr: MMIO cpu addr
    * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
    * Note that handlers are called in the same order as they were
    * registered (FIFO).
    */
-    struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+    struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
     /**
    * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
   struct amdgpu_display_manager *dm;
   /* DAL irq source which registered for this interrupt. */
   enum dc_irq_source irq_source;
+    struct work_struct work;
   };
     #define DM_IRQ_TABLE_LOCK(adev, flags) \
@@ -111,20 +112,10 @@ static 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-14 Thread Chen, Xiaogang
On 1/14/2021 1:24 AM, Grodzovsky, Andrey wrote:
>
>
> On 1/14/21 12:11 AM, Chen, Xiaogang wrote:
>> On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:
>>>
>>> On 1/4/21 1:01 AM, Xiaogang.Chen wrote:
 From: Xiaogang Chen 

 amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
 using a work queue with a single work_struct. If a previous interrupt
 has not been handled, new interrupts (of the same type) are discarded and
 the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
 If important hpd or hpd_rx related interrupts are missed by the driver,
 hot (un)plugged devices may cause a system hang or instability, for
 example when the system resumes from S3 sleep with an MST device connected.

 This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
 new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
 has not been handled yet, so the new interrupt work can be queued to the
 same workqueue_struct instead of being discarded.
 All allocated amdgpu_dm_irq_handler_data are put on a single linked
 list and reused afterwards.
>>>
>>>
>>> I believe this creates possible concurrency between an already 
>>> executing work item and a new incoming one for which you allocate a 
>>> new work item on the fly. While handle_hpd_irq is serialized with 
>>> aconnector->hpd_lock, I see that handle_hpd_rx_irq is not locked for 
>>> the MST use case (which is the most frequent user of this interrupt). 
>>> Did you verify that handle_hpd_rx_irq is reentrant?
>>>
>> handle_hpd_rx_irq is put on a work queue. Its execution is serialized 
>> by the work queue, so there is no reentrancy.
>>
> You are using system_highpri_wq, which has the property that it has a 
> pool of multiple worker threads spread across all the active CPUs; see 
> all the work queue definitions here:
> https://elixir.bootlin.com/linux/v5.11-rc3/source/include/linux/workqueue.h#L358
> I believe what you are saying about there being no chance of reentrancy 
> would be correct if it were the same work item being dequeued for 
> execution while the previous instance is still running; see the 
> explanation here:
> https://elixir.bootlin.com/linux/v5.11-rc3/source/kernel/workqueue.c#L1435
> Non-reentrancy is guaranteed only for the same work item. If you want 
> non-reentrancy (full serialization) for different work items, you should 
> create your own single-threaded work queue using create_singlethread_workqueue.
>
>
Thank you. I think the easiest way is to take aconnector->hpd_lock in 
handle_hpd_rx_irq for the dc_link->type == dc_connection_mst_branch 
case, right? I will do that in the next version if you think it is OK.

>> amdgpu_dm_irq_schedule_work does the queuing of the work (it puts 
>> handle_hpd_rx_irq onto the work queue). The work queue first calls 
>> dm_irq_work_func, which then calls handle_hpd_rx_irq.
>>>

 Signed-off-by: Xiaogang Chen 
 ---
   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
   2 files changed, 80 insertions(+), 48 deletions(-)

 diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
 b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
 index c9d82b9..730e540 100644
 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
 +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
 @@ -69,18 +69,6 @@ struct common_irq_params {
   };
     /**
 - * struct irq_list_head - Linked-list for low context IRQ handlers.
 - *
 - * @head: The list_head within  handler_data
 - * @work: A work_struct containing the deferred handler work
 - */
 -struct irq_list_head {
 -    struct list_head head;
 -    /* In case this interrupt needs post-processing, 'work' will 
 be queued*/
 -    struct work_struct work;
 -};
 -
 -/**
    * struct dm_compressor_info - Buffer info used by frame buffer 
 compression
    * @cpu_addr: MMIO cpu addr
    * @bo_ptr: Pointer to the buffer object
 @@ -270,7 +258,7 @@ struct amdgpu_display_manager {
    * Note that handlers are called in the same order as they were
    * registered (FIFO).
    */
 -    struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
 +    struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
     /**
    * @irq_handler_list_high_tab:
 diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c 
 b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
 index 3577785..ada344a 100644
 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
 +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
 @@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
   struct amdgpu_display_manager *dm;
   /* DAL irq source which registered for this interrupt. */
   enum 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-13 Thread Andrey Grodzovsky


On 1/14/21 12:11 AM, Chen, Xiaogang wrote:

On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:


On 1/4/21 1:01 AM, Xiaogang.Chen wrote:

From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.



I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?

handle_hpd_rx_irq is put on a work queue. Its execution is serialized by
the work queue, so there is no reentrancy.


You are using system_highpri_wq, which has the property that it has a
pool of multiple worker threads spread across all the active CPUs; see
all the work queue definitions here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/include/linux/workqueue.h#L358

I believe what you are saying about there being no chance of reentrancy
would be correct if it were the same work item being dequeued for
execution while the previous instance is still running; see the
explanation here:
https://elixir.bootlin.com/linux/v5.11-rc3/source/kernel/workqueue.c#L1435

Non-reentrancy is guaranteed only for the same work item. If you want
non-reentrancy (full serialization) for different work items, you should
create your own single-threaded work queue using create_singlethread_workqueue.


amdgpu_dm_irq_schedule_work does the queuing of the work (it puts
handle_hpd_rx_irq onto the work queue). The work queue first calls
dm_irq_work_func, which then calls handle_hpd_rx_irq.




Signed-off-by: Xiaogang Chen 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---

  2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h

index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
  };
    /**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-    struct list_head head;
-    /* In case this interrupt needs post-processing, 'work' will be queued*/
-    struct work_struct work;
-};
-
-/**
   * struct dm_compressor_info - Buffer info used by frame buffer compression
   * @cpu_addr: MMIO cpu addr
   * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
   * Note that handlers are called in the same order as they were
   * registered (FIFO).
   */
-    struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+    struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
    /**
   * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c

index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
  struct amdgpu_display_manager *dm;
  /* DAL irq source which registered for this interrupt. */
  enum dc_irq_source irq_source;
+    struct work_struct work;
  };
    #define DM_IRQ_TABLE_LOCK(adev, flags) \
@@ -111,20 +112,10 @@ static void init_handler_common_data(struct 
amdgpu_dm_irq_handler_data *hcd,

   */
  static void dm_irq_work_func(struct work_struct *work)
  {
-    struct irq_list_head *irq_list_head =
-    container_of(work, struct irq_list_head, work);
-    struct list_head *handler_list = &irq_list_head->head;
-    struct amdgpu_dm_irq_handler_data *handler_data;
-
-    list_for_each_entry(handler_data, handler_list, list) {
-    DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
-    handler_data->irq_source);
+    struct amdgpu_dm_irq_handler_data 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-13 Thread Chen, Xiaogang
On 1/12/2021 10:54 PM, Grodzovsky, Andrey wrote:

On 1/4/21 1:01 AM, Xiaogang.Chen wrote:
From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.


I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?


handle_hpd_rx_irq is put on a work queue. Its execution is serialized by
the work queue, so there is no reentrancy.

amdgpu_dm_irq_schedule_work does the queuing of the work (it puts
handle_hpd_rx_irq onto the work queue). The work queue first calls
dm_irq_work_func, which then calls handle_hpd_rx_irq.


Signed-off-by: Xiaogang Chen 

---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
  2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
  };
/**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-struct list_head head;
-/* In case this interrupt needs post-processing, 'work' will be queued*/
-struct work_struct work;
-};
-
-/**
   * struct dm_compressor_info - Buffer info used by frame buffer compression
   * @cpu_addr: MMIO cpu addr
   * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
   * Note that handlers are called in the same order as they were
   * registered (FIFO).
   */
-struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
/**
   * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
  struct amdgpu_display_manager *dm;
  /* DAL irq source which registered for this interrupt. */
  enum dc_irq_source irq_source;
+struct work_struct work;
  };
#define DM_IRQ_TABLE_LOCK(adev, flags) \
@@ -111,20 +112,10 @@ static void init_handler_common_data(struct 
amdgpu_dm_irq_handler_data *hcd,
   */
  static void dm_irq_work_func(struct work_struct *work)
  {
-struct irq_list_head *irq_list_head =
-container_of(work, struct irq_list_head, work);
-struct list_head *handler_list = &irq_list_head->head;
-struct amdgpu_dm_irq_handler_data *handler_data;
-
-list_for_each_entry(handler_data, handler_list, list) {
-DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
-handler_data->irq_source);
+struct amdgpu_dm_irq_handler_data *handler_data =
+ container_of(work, struct amdgpu_dm_irq_handler_data, work);
  -DRM_DEBUG_KMS("DM_IRQ: schedule_work: for dal_src=%d\n",
-handler_data->irq_source);
-
-handler_data->handler(handler_data->handler_arg);
-}
+handler_data->handler(handler_data->handler_arg);
/* Call a DAL subcomponent which registered for interrupt notification
   * at INTERRUPT_LOW_IRQ_CONTEXT.
@@ -156,7 +147,7 @@ static struct list_head *remove_irq_handler(struct 
amdgpu_device *adev,
  break;
  case INTERRUPT_LOW_IRQ_CONTEXT:
  default:
-hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
  break;
  }
  @@ 

Re: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-12 Thread Andrey Grodzovsky


On 1/4/21 1:01 AM, Xiaogang.Chen wrote:

From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) by
using a work queue with a single work_struct. If a previous interrupt
has not been handled, new interrupts (of the same type) are discarded and
the driver just emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver,
hot (un)plugged devices may cause a system hang or instability, for
example when the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a
new interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work
has not been handled yet, so the new interrupt work can be queued to the
same workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked
list and reused afterwards.



I believe this creates possible concurrency between an already executing
work item and a new incoming one for which you allocate a new work item
on the fly. While handle_hpd_irq is serialized with aconnector->hpd_lock,
I see that handle_hpd_rx_irq is not locked for the MST use case (which is
the most frequent user of this interrupt). Did you verify that
handle_hpd_rx_irq is reentrant?




Signed-off-by: Xiaogang Chen 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
  2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
  };
  
  /**

- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-   struct list_head head;
-   /* In case this interrupt needs post-processing, 'work' will be queued*/
-   struct work_struct work;
-};
-
-/**
   * struct dm_compressor_info - Buffer info used by frame buffer compression
   * @cpu_addr: MMIO cpu addr
   * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
 * Note that handlers are called in the same order as they were
 * registered (FIFO).
 */
-   struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+   struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
  
  	/**

 * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
struct amdgpu_display_manager *dm;
/* DAL irq source which registered for this interrupt. */
enum dc_irq_source irq_source;
+   struct work_struct work;
  };
  
  #define DM_IRQ_TABLE_LOCK(adev, flags) \

@@ -111,20 +112,10 @@ static void init_handler_common_data(struct 
amdgpu_dm_irq_handler_data *hcd,
   */
  static void dm_irq_work_func(struct work_struct *work)
  {
-   struct irq_list_head *irq_list_head =
-   container_of(work, struct irq_list_head, work);
-   struct list_head *handler_list = &irq_list_head->head;
-   struct amdgpu_dm_irq_handler_data *handler_data;
-
-   list_for_each_entry(handler_data, handler_list, list) {
-   DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
-   handler_data->irq_source);
+   struct amdgpu_dm_irq_handler_data *handler_data =
+container_of(work, struct amdgpu_dm_irq_handler_data, work);
  
-		DRM_DEBUG_KMS("DM_IRQ: schedule_work: for dal_src=%d\n",

-   handler_data->irq_source);
-
-   handler_data->handler(handler_data->handler_arg);
-   }
+   handler_data->handler(handler_data->handler_arg);
  
  	/* Call a DAL subcomponent which registered for interrupt notification

 * at INTERRUPT_LOW_IRQ_CONTEXT.
@@ -156,7 +147,7 @@ static struct list_head *remove_irq_handler(struct 
amdgpu_device *adev,
break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
-   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
break;
}
  
@@ -287,7 +278,8 @@ void *amdgpu_dm_irq_register_interrupt(struct amdgpu_device *adev,

break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
-   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+   

RE: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT interrupt work

2021-01-11 Thread Chen, Xiaogang

Any comments?

-Original Message-
From: Xiaogang.Chen  
Sent: Monday, January 4, 2021 12:02 AM
To: amd-...@lists.freedesktop.org; Wentland, Harry ; 
dri-devel@lists.freedesktop.org; airl...@linux.ie
Cc: Chen, Xiaogang 
Subject: [PATCH 2/2] drm/amdgpu/display: buffer INTERRUPT_LOW_IRQ_CONTEXT 
interrupt work

From: Xiaogang Chen 

amdgpu DM handles INTERRUPT_LOW_IRQ_CONTEXT interrupts (hpd, hpd_rx) using a 
work queue with a single work_struct. If a previous interrupt has not been 
handled, new interrupts (of the same type) are discarded and the driver just 
emits an "amdgpu_dm_irq_schedule_work FAILED" message.
If important hpd or hpd_rx related interrupts are missed by the driver, hot 
(un)plugged devices may cause a system hang or instability, for example when 
the system resumes from S3 sleep with an MST device connected.

This patch dynamically allocates a new amdgpu_dm_irq_handler_data for a new 
interrupt if the previous INTERRUPT_LOW_IRQ_CONTEXT interrupt work has not 
been handled yet, so the new interrupt work can be queued to the same 
workqueue_struct instead of being discarded.
All allocated amdgpu_dm_irq_handler_data are put on a single linked list and 
reused afterwards.

Signed-off-by: Xiaogang Chen 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h  |  14 +--  
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c  | 114 ++---
 2 files changed, 80 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index c9d82b9..730e540 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -69,18 +69,6 @@ struct common_irq_params {
 };
 
 /**
- * struct irq_list_head - Linked-list for low context IRQ handlers.
- *
- * @head: The list_head within  handler_data
- * @work: A work_struct containing the deferred handler work
- */
-struct irq_list_head {
-   struct list_head head;
-   /* In case this interrupt needs post-processing, 'work' will be queued*/
-   struct work_struct work;
-};
-
-/**
  * struct dm_compressor_info - Buffer info used by frame buffer compression
  * @cpu_addr: MMIO cpu addr
  * @bo_ptr: Pointer to the buffer object
@@ -270,7 +258,7 @@ struct amdgpu_display_manager {
 * Note that handlers are called in the same order as they were
 * registered (FIFO).
 */
-   struct irq_list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
+   struct list_head irq_handler_list_low_tab[DAL_IRQ_SOURCES_NUMBER];
 
/**
 * @irq_handler_list_high_tab:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
index 3577785..ada344a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -82,6 +82,7 @@ struct amdgpu_dm_irq_handler_data {
struct amdgpu_display_manager *dm;
/* DAL irq source which registered for this interrupt. */
enum dc_irq_source irq_source;
+   struct work_struct work;
 };
 
 #define DM_IRQ_TABLE_LOCK(adev, flags) \
@@ -111,20 +112,10 @@ static void init_handler_common_data(struct amdgpu_dm_irq_handler_data *hcd,
  */
 static void dm_irq_work_func(struct work_struct *work)
 {
-   struct irq_list_head *irq_list_head =
-   container_of(work, struct irq_list_head, work);
-   struct list_head *handler_list = &irq_list_head->head;
-   struct amdgpu_dm_irq_handler_data *handler_data;
-
-   list_for_each_entry(handler_data, handler_list, list) {
-   DRM_DEBUG_KMS("DM_IRQ: work_func: for dal_src=%d\n",
-   handler_data->irq_source);
+   struct amdgpu_dm_irq_handler_data *handler_data =
+container_of(work, struct amdgpu_dm_irq_handler_data, work);
 
-   DRM_DEBUG_KMS("DM_IRQ: schedule_work: for dal_src=%d\n",
-   handler_data->irq_source);
-
-   handler_data->handler(handler_data->handler_arg);
-   }
+   handler_data->handler(handler_data->handler_arg);
 
/* Call a DAL subcomponent which registered for interrupt notification
 * at INTERRUPT_LOW_IRQ_CONTEXT.
@@ -156,7 +147,7 @@ static struct list_head *remove_irq_handler(struct 
amdgpu_device *adev,
break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
-   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
break;
}
 
@@ -287,7 +278,8 @@ void *amdgpu_dm_irq_register_interrupt(struct amdgpu_device 
*adev,
break;
case INTERRUPT_LOW_IRQ_CONTEXT:
default:
-   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source].head;
+   hnd_list = &adev->dm.irq_handler_list_low_tab[irq_source];
+