Re: [BUG] gpu: drm: radeon: two possible deadlocks involving locking and waiting

2022-02-05 Thread Jia-Ju Bai

Hi Christian,

Thanks for the reply :)

On 2022/2/1 15:56, Christian König wrote:

Hi Jia-Ju,

interesting that you have found those issues with an automated tool.

And yes, that is a well-known design flaw within the radeon driver which can 
happen on hardware faults, e.g. when radeon_ring_backup() needs to be 
called.


In fact, my tool finds dozens of similar possible deadlocks caused by 
wait_event_timeout() in radeon_fence_wait_seq_timeout().

There are three other examples in Linux 5.16:

#BUG 1
radeon_dpm_change_power_state_locked()
  mutex_lock(&rdev->ring_lock); --> Line 1133 (Lock A)
  radeon_fence_wait_empty()
    radeon_fence_wait_seq_timeout()
      wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_fence_driver_fini()
  mutex_lock(&rdev->ring_lock); --> Line 917 (Lock A)
  wake_up_all(&rdev->fence_queue); --> Line 927 (Wake X)

#BUG 2
radeon_set_pm_profile()
  mutex_lock(&rdev->pm.mutex); --> Line 382 (Lock A)
  radeon_pm_set_clocks()
    radeon_fence_wait_empty()
      radeon_fence_wait_seq_timeout()
        wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_dynpm_idle_work_handler()
  mutex_lock(&rdev->pm.mutex); --> Line 1861 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

#BUG 3
radeon_pm_fini_old()
  mutex_lock(&rdev->pm.mutex); --> Line 1642 (Lock A)
  radeon_pm_set_clocks()
    radeon_fence_wait_empty()
      radeon_fence_wait_seq_timeout()
        wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_dynpm_idle_work_handler()
  mutex_lock(&rdev->pm.mutex); --> Line 1861 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

Thus, to fix these possible deadlocks, we could modify the code related 
to radeon_fence_wait_seq_timeout().
But I am not quite familiar with the radeon driver, so I am not sure how 
to modify the code properly.
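
One direction I could imagine, but only as a rough and untested sketch on my
side (whether the callers can tolerate dropping the lock across the wait is an
assumption), is to release the mutex before waiting on the fence queue and to
re-take it afterwards:

        mutex_lock(&rdev->ring_lock);
        /* ... work that really needs the lock ... */
        mutex_unlock(&rdev->ring_lock);

        /* wait without holding the lock, so that wakers which also take
         * ring_lock (e.g. radeon_ring_backup()) are not blocked behind us */
        r = radeon_fence_wait_empty(rdev, ring);

        mutex_lock(&rdev->ring_lock);
        /* ... remaining work ... */
        mutex_unlock(&rdev->ring_lock);

The pm.mutex cases would need the same treatment around radeon_fence_wait_empty()
in radeon_pm_set_clocks(), but I do not know whether the power-management state
is allowed to change while the lock is dropped.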




But that happens so rarely, and the driver is no longer being actively 
developed, so we decided not to address this any more.


Ah, okay.



Regards,
Christian.

On 01.02.22 at 08:40, Jia-Ju Bai wrote:

Hello,

My static analysis tool reports a possible deadlock in the radeon 
driver in Linux 5.16:


#BUG 1
radeon_dpm_change_power_state_locked()
  mutex_lock(&rdev->ring_lock); --> Line 1133 (Lock A)
  radeon_fence_wait_empty()
    radeon_fence_wait_seq_timeout()
      wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_ring_backup()
  mutex_lock(&rdev->ring_lock); --> Line 289 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

When radeon_dpm_change_power_state_locked() is executed, "Wait X" is 
performed while holding "Lock A". If radeon_ring_backup() is executed at 
this time, "Wake X" cannot be performed to wake up "Wait X" in 
radeon_dpm_change_power_state_locked(), because "Lock A" is already held 
by radeon_dpm_change_power_state_locked(), causing a possible deadlock.
I find that "Wait X" is performed with the timeout MAX_SCHEDULE_TIMEOUT, 
which relieves the possible deadlock; but I think this timeout can cause 
inefficient execution.


#BUG 2
radeon_ring_lock()
  mutex_lock(&rdev->ring_lock); --> Line 147 (Lock A)
  radeon_ring_alloc()
    radeon_fence_wait_next()
      radeon_fence_wait_seq_timeout()
        wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_ring_backup()
  mutex_lock(&rdev->ring_lock); --> Line 289 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

When radeon_ring_lock() is executed, "Wait X" is performed while holding 
"Lock A". If radeon_ring_backup() is executed at this time, "Wake X" 
cannot be performed to wake up "Wait X" in radeon_ring_lock(), 
because "Lock A" is already held by radeon_ring_lock(), causing 
a possible deadlock.
I find that "Wait X" is performed with the timeout MAX_SCHEDULE_TIMEOUT, 
which relieves the possible deadlock; but I think this timeout can cause 
inefficient execution.


I am not quite sure whether these possible problems are real and how 
to fix them if they are real.

Any feedback would be appreciated, thanks :)


Best wishes,
Jia-Ju Bai







[BUG] gpu: drm: radeon: two possible deadlocks involving locking and waiting

2022-02-01 Thread Jia-Ju Bai

Hello,

My static analysis tool reports a possible deadlock in the radeon driver 
in Linux 5.16:


#BUG 1
radeon_dpm_change_power_state_locked()
  mutex_lock(&rdev->ring_lock); --> Line 1133 (Lock A)
  radeon_fence_wait_empty()
    radeon_fence_wait_seq_timeout()
      wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_ring_backup()
  mutex_lock(&rdev->ring_lock); --> Line 289 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

When radeon_dpm_change_power_state_locked() is executed, "Wait X" is 
performed while holding "Lock A". If radeon_ring_backup() is executed at 
this time, "Wake X" cannot be performed to wake up "Wait X" in 
radeon_dpm_change_power_state_locked(), because "Lock A" is already held 
by radeon_dpm_change_power_state_locked(), causing a possible deadlock.
I find that "Wait X" is performed with the timeout MAX_SCHEDULE_TIMEOUT, 
which relieves the possible deadlock; but I think this timeout can cause 
inefficient execution.


#BUG 2
radeon_ring_lock()
  mutex_lock(&rdev->ring_lock); --> Line 147 (Lock A)
  radeon_ring_alloc()
    radeon_fence_wait_next()
      radeon_fence_wait_seq_timeout()
        wait_event_timeout(rdev->fence_queue, ...) --> Line 504 (Wait X)

radeon_ring_backup()
  mutex_lock(&rdev->ring_lock); --> Line 289 (Lock A)
  radeon_fence_count_emitted()
    radeon_fence_process()
      wake_up_all(&rdev->fence_queue); --> Line 323 (Wake X)

When radeon_ring_lock() is executed, "Wait X" is performed while holding 
"Lock A". If radeon_ring_backup() is executed at this time, "Wake X" 
cannot be performed to wake up "Wait X" in radeon_ring_lock(), because 
"Lock A" is already held by radeon_ring_lock(), causing a possible 
deadlock.
I find that "Wait X" is performed with the timeout MAX_SCHEDULE_TIMEOUT, 
which relieves the possible deadlock; but I think this timeout can cause 
inefficient execution.
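
Both reports share the same shape, so here is a small user-space analogue that
I wrote only to illustrate the interleaving; all names are mine and none of
this is radeon code:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;   /* "Lock A" */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t fence_queue = PTHREAD_COND_INITIALIZER;   /* "Wait/Wake X" */
static int fence_signaled;

/* plays the role of radeon_dpm_change_power_state_locked() / radeon_ring_lock() */
static void *waiter(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&ring_lock);               /* take Lock A */
        pthread_mutex_lock(&queue_lock);
        while (!fence_signaled)                       /* Wait X, still holding Lock A */
                pthread_cond_wait(&fence_queue, &queue_lock);
        pthread_mutex_unlock(&queue_lock);
        pthread_mutex_unlock(&ring_lock);
        return NULL;
}

/* plays the role of radeon_ring_backup() */
static void *waker(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&ring_lock);               /* blocks: Lock A already held */
        pthread_mutex_lock(&queue_lock);
        fence_signaled = 1;
        pthread_cond_broadcast(&fence_queue);         /* Wake X is never reached */
        pthread_mutex_unlock(&queue_lock);
        pthread_mutex_unlock(&ring_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        pthread_create(&a, NULL, waiter, NULL);
        pthread_create(&b, NULL, waker, NULL);
        pthread_join(a, NULL);                        /* hangs: the wait/wake deadlock */
        pthread_join(b, NULL);
        printf("done\n");
        return 0;
}

(In the driver the wait has the timeout mentioned above; this analogue has no
timeout, so it simply hangs.)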


I am not quite sure whether these possible problems are real and how to 
fix them if they are real.

Any feedback would be appreciated, thanks :)


Best wishes,
Jia-Ju Bai



Re: [BUG] gpu: drm: amd: amdgpu: possible ABBA deadlock in amdgpu_set_power_dpm_force_performance_level() and amdgpu_debugfs_process_reg_op()

2021-12-09 Thread Jia-Ju Bai

Hello,

Could you please provide the feedback to my previous report?
Thanks a lot :)


Best wishes,
Jia-Ju Bai

On 2021/9/15 17:39, Jia-Ju Bai wrote:

Hello,

My static analysis tool reports a possible ABBA deadlock in the amdgpu 
driver in Linux 5.10:


amdgpu_debugfs_process_reg_op()
  mutex_lock(&adev->grbm_idx_mutex); --> Line 250 (Lock A)
  mutex_lock(&adev->pm.mutex); --> Line 259 (Lock B)

amdgpu_set_power_dpm_force_performance_level()
  mutex_lock(&adev->pm.mutex); --> Line 381 (Lock B)
  pp_dpm_force_performance_level() --> function pointer via "amdgpu_dpm_force_performance_level()"
    pp_dpm_en_umd_pstate()
      amdgpu_device_ip_set_clockgating_state()
        gfx_v7_0_set_clockgating_state() --> function pointer via "funcs->set_clockgating_state()"
          gfx_v7_0_enable_mgcg()
            mutex_lock(&adev->grbm_idx_mutex); --> Line 3646 (Lock A)
            mutex_lock(&adev->grbm_idx_mutex); --> Line 3697 (Lock A)

When amdgpu_debugfs_process_reg_op() and 
amdgpu_set_power_dpm_force_performance_level() are concurrently 
executed, the deadlock can occur.


I am not quite sure whether this possible deadlock is real and how to 
fix it if it is real.

Any feedback would be appreciated, thanks :)

Reported-by: TOTE Robot 


Best wishes,
Jia-Ju Bai




[BUG] gpu: drm: amd: amdgpu: possible ABBA deadlock in amdgpu_set_power_dpm_force_performance_level() and amdgpu_debugfs_process_reg_op()

2021-09-15 Thread Jia-Ju Bai

Hello,

My static analysis tool reports a possible ABBA deadlock in the amdgpu 
driver in Linux 5.10:


amdgpu_debugfs_process_reg_op()
  mutex_lock(&adev->grbm_idx_mutex); --> Line 250 (Lock A)
  mutex_lock(&adev->pm.mutex); --> Line 259 (Lock B)

amdgpu_set_power_dpm_force_performance_level()
  mutex_lock(&adev->pm.mutex); --> Line 381 (Lock B)
  pp_dpm_force_performance_level() --> function pointer via "amdgpu_dpm_force_performance_level()"
    pp_dpm_en_umd_pstate()
      amdgpu_device_ip_set_clockgating_state()
        gfx_v7_0_set_clockgating_state() --> function pointer via "funcs->set_clockgating_state()"
          gfx_v7_0_enable_mgcg()
            mutex_lock(&adev->grbm_idx_mutex); --> Line 3646 (Lock A)
            mutex_lock(&adev->grbm_idx_mutex); --> Line 3697 (Lock A)

When amdgpu_debugfs_process_reg_op() and 
amdgpu_set_power_dpm_force_performance_level() are concurrently 
executed, the deadlock can occur.


I am not quite sure whether this possible deadlock is real and how to 
fix it if it is real.
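
If it is real, one conservative direction I could imagine (only a sketch on my
side; whether this lock order is acceptable elsewhere in the driver is an
assumption) would be to make amdgpu_debugfs_process_reg_op() take the two locks
in the same order as the DPM path, i.e. pm.mutex before grbm_idx_mutex:

        mutex_lock(&adev->pm.mutex);        /* Lock B first, matching the DPM path */
        mutex_lock(&adev->grbm_idx_mutex);  /* then Lock A */
        /* ... the register access currently done under both locks ... */
        mutex_unlock(&adev->grbm_idx_mutex);
        mutex_unlock(&adev->pm.mutex);

That way both paths would nest the locks consistently and the ABBA cycle could
not form.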

Any feedback would be appreciated, thanks :)

Reported-by: TOTE Robot 


Best wishes,
Jia-Ju Bai


[PATCH] gpu: drm: amd: amdgpu: fix error return code of amdgpu_acpi_init()

2021-03-08 Thread Jia-Ju Bai
Add an error return code in the error handling code of amdgpu_acpi_init().

Reported-by: TOTE Robot 
Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
index 8155c54392c8..156f30d5a2c0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -788,12 +788,15 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
 
/* Probe for ATIF, and initialize it if found */
atif_handle = amdgpu_atif_probe_handle(handle);
-   if (!atif_handle)
+   if (!atif_handle) {
+   ret = -EINVAL;
goto out;
+   }
 
atif = kzalloc(sizeof(*atif), GFP_KERNEL);
if (!atif) {
DRM_WARN("Not enough memory to initialize ATIF\n");
+   ret = -ENOMEM;
goto out;
}
atif->handle = atif_handle;
@@ -803,6 +806,7 @@ int amdgpu_acpi_init(struct amdgpu_device *adev)
if (ret) {
DRM_DEBUG_DRIVER("Call to ATIF verify_interface failed: %d\n", 
ret);
kfree(atif);
+   ret = -EINVAL;
goto out;
}
adev->atif = atif;
-- 
2.17.1



[PATCH] gpu: drm: swsmu: fix error return code of smu_v11_0_set_allowed_mask()

2021-03-04 Thread Jia-Ju Bai
When the bitmap_empty() or feature->feature_num check fails, 
no error return code is assigned in smu_v11_0_set_allowed_mask().
To fix this bug, assign -EINVAL to ret as the error return code.

Reported-by: TOTE Robot 
Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c 
b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
index 90585461a56e..82731a932308 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
@@ -747,8 +747,10 @@ int smu_v11_0_set_allowed_mask(struct smu_context *smu)
int ret = 0;
uint32_t feature_mask[2];
 
-   if (bitmap_empty(feature->allowed, SMU_FEATURE_MAX) || 
feature->feature_num < 64)
+   if (bitmap_empty(feature->allowed, SMU_FEATURE_MAX) || 
feature->feature_num < 64) {
+   ret = -EINVAL;
goto failed;
+   }
 
bitmap_copy((unsigned long *)feature_mask, feature->allowed, 64);
 
-- 
2.17.1



[PATCH] gpu: drm: radeon: Fix a possible null-pointer dereference in radeon_connector_set_property()

2019-07-29 Thread Jia-Ju Bai
In radeon_connector_set_property(), there is an if statement on line 743
to check whether connector->encoder is NULL:
if (connector->encoder)

When connector->encoder is NULL, it is used on line 755:
if (connector->encoder->crtc)

Thus, a possible null-pointer dereference may occur.

To fix this bug, connector->encoder is checked before being used.

This bug is found by a static analysis tool STCheck written by us.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/radeon_connectors.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
b/drivers/gpu/drm/radeon/radeon_connectors.c
index c60d1a44d22a..b684cd719612 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -752,7 +752,7 @@ static int radeon_connector_set_property(struct 
drm_connector *connector, struct
 
radeon_encoder->output_csc = val;
 
-   if (connector->encoder->crtc) {
+   if (connector->encoder && connector->encoder->crtc) {
struct drm_crtc *crtc  = connector->encoder->crtc;
struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
 
-- 
2.17.0



Re: [BUG] gpu: drm: amdgpu: Possible sleep-in-atomic-context bugs in amdgpu_uvd_ring_begin_use()

2018-09-17 Thread Jia-Ju Bai

Sorry, I am still not clear why the call chain I proposed is incorrect...

I find a conditional in amdgpu_mm_wreg():

if (!(acc_flags & AMDGPU_REGS_NO_KIQ) && amdgpu_sriov_runtime(adev))
return amdgpu_virt_kiq_wreg(adev, reg, v);

Is amdgpu_virt_kiq_wreg() never called from WREG32() or RREG32()?


Best wishes,
Jia-Ju Bai


On 2018/9/15 17:10, Koenig, Christian wrote:
amdgpu_ring_alloc() does call amdgpu_uvd_begin_use(), but never in the 
call chain you proposed.


Thinking about it, I actually don't see a way a static analysis 
could ever figure that out.


Christian.

On 15.09.2018 11:05, Jia-Ju Bai wrote:

Thanks for your reply.

On 2018/9/15 17:01, Koenig, Christian wrote:

Sorry to say that but your analysis tool is buggy.

The proposed call paths will never trigger.


Could you please explain which piece of the call path is incorrect?
I am not very sure of my function pointer analysis.
Does amdgpu_ring_alloc() never call amdgpu_uvd_ring_begin_use()?

Thanks in advance.


Best wishes,
Jia-Ju Bai


Regards,
Christian.

On 15.09.2018 10:59, Jia-Ju Bai <mailto:baijiaju1...@gmail.com> wrote:
The driver may sleep while holding a spinlock.
The function call paths (from bottom to top) in Linux-4.17 are:

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/vi.c, 207:
 amdgpu_mm_wreg in vi_gc_cac_rreg
drivers/gpu/drm/amd/amdgpu/vi.c, 206:
 _raw_spin_lock_irqsave in vi_gc_cac_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 106:
 amdgpu_mm_wreg in soc15_pcie_rreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 105:
 _raw_spin_lock_irqsave in soc15_pcie_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 139:
 amdgpu_mm_wreg in cik_uvd_ctx_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 138:
 _raw_spin_lock_irqsave in cik_uvd_ctx_wreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 126:
 amdgpu_mm_wreg in dce_v6_0_audio_endpt_rreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 125:
 _raw_spin_lock_irqsave in dce_v6_0_audio_endpt_rreg

Note that [FUNC_PTR] means a function pointer call is used.

These bugs are found by my static analysis tool DSAC.


    Best wishes,
Jia-Ju Bai







Re: [BUG] gpu: drm: amdgpu: Possible sleep-in-atomic-context bugs in amdgpu_uvd_ring_begin_use()

2018-09-17 Thread Jia-Ju Bai



On 2018/9/15 17:23, Koenig, Christian wrote:

No, the problem is the function pointer analysis.

In other words the KIQ ring is sometimes used in atomic and even 
interrupt context.


But the UVD ring is never used in atomic context.

But I don't see a way a static analysis could ever figure that out.



Okay, thanks for your explanation :)
Besides, I find that amdgpu_virt_kiq_rreg() calls msleep(), so mdelay() 
should be used instead.



Best wishes,
Jia-Ju Bai



On 15.09.2018 11:18, Jia-Ju Bai wrote:
Sorry, I am still not clear why the call chain I proposed is incorrect...

I find a conditional in amdgpu_mm_wreg():

if (!(acc_flags & AMDGPU_REGS_NO_KIQ) && amdgpu_sriov_runtime(adev))
return amdgpu_virt_kiq_wreg(adev, reg, v);

Is amdgpu_virt_kiq_wreg() never called from WREG32() or RREG32()?


Best wishes,
Jia-Ju Bai


On 2018/9/15 17:10, Koenig, Christian wrote:
amdgpu_ring_alloc() does call amdgpu_uvd_begin_use(), but never in 
the call chain you proposed.


Thinking about it, I actually don't see a way a static analysis 
could ever figure that out.


Christian.

On 15.09.2018 11:05, Jia-Ju Bai wrote:

Thanks for your reply.

On 2018/9/15 17:01, Koenig, Christian wrote:

Sorry to say that but your analysis tool is buggy.

The proposed call paths will never trigger.


Could you please explain which piece of the call path is incorrect?
I am not very sure of my function pointer analysis.
Does amdgpu_ring_alloc() never call amdgpu_uvd_ring_begin_use()?

Thanks in advance.


    Best wishes,
Jia-Ju Bai


Regards,
Christian.

On 15.09.2018 10:59, Jia-Ju Bai <mailto:baijiaju1...@gmail.com> wrote:
The driver may sleep while holding a spinlock.
The function call paths (from bottom to top) in Linux-4.17 are:

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/vi.c, 207:
 amdgpu_mm_wreg in vi_gc_cac_rreg
drivers/gpu/drm/amd/amdgpu/vi.c, 206:
 _raw_spin_lock_irqsave in vi_gc_cac_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 106:
 amdgpu_mm_wreg in soc15_pcie_rreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 105:
 _raw_spin_lock_irqsave in soc15_pcie_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 139:
 amdgpu_mm_wreg in cik_uvd_ctx_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 138:
 _raw_spin_lock_irqsave in cik_uvd_ctx_wreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 126:
 amdgpu_mm_wreg in dce_v6_0_audio_endpt_rreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 125:
 _raw_spin_lock_irqsave in dce_v6_0_audio_endpt_rreg

Re: [BUG] gpu: drm: amdgpu: Possible sleep-in-atomic-context bugs in amdgpu_uvd_ring_begin_use()

2018-09-17 Thread Jia-Ju Bai

Thanks for your reply.

On 2018/9/15 17:01, Koenig, Christian wrote:

Sorry to say that but your analysis tool is buggy.

The proposed call paths will never trigger.


Could you please explain which piece of the call path is incorrect?
I am not very sure of my function pointer analysis.
Does amdgpu_ring_alloc() never call amdgpu_uvd_ring_begin_use()?

Thanks in advance.


Best wishes,
Jia-Ju Bai



Regards,
Christian.

On 15.09.2018 10:59, Jia-Ju Bai wrote:
The driver may sleep while holding a spinlock.
The function call paths (from bottom to top) in Linux-4.17 are:

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/vi.c, 207:
 amdgpu_mm_wreg in vi_gc_cac_rreg
drivers/gpu/drm/amd/amdgpu/vi.c, 206:
 _raw_spin_lock_irqsave in vi_gc_cac_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 106:
 amdgpu_mm_wreg in soc15_pcie_rreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 105:
 _raw_spin_lock_irqsave in soc15_pcie_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 139:
 amdgpu_mm_wreg in cik_uvd_ctx_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 138:
 _raw_spin_lock_irqsave in cik_uvd_ctx_wreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
 mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
 amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
 [FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
 amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
 amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 126:
 amdgpu_mm_wreg in dce_v6_0_audio_endpt_rreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 125:
 _raw_spin_lock_irqsave in dce_v6_0_audio_endpt_rreg

Note that [FUNC_PTR] means a function pointer call is used.

These bugs are found by my static analysis tool DSAC.


Best wishes,
Jia-Ju Bai




[BUG] gpu: drm: amdgpu: Possible sleep-in-atomic-context bugs in amdgpu_uvd_ring_begin_use()

2018-09-17 Thread Jia-Ju Bai

The driver may sleep while holding a spinlock.
The function call paths (from bottom to top) in Linux-4.17 are:

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
[FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/vi.c, 207:
amdgpu_mm_wreg in vi_gc_cac_rreg
drivers/gpu/drm/amd/amdgpu/vi.c, 206:
_raw_spin_lock_irqsave in vi_gc_cac_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
[FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 106:
amdgpu_mm_wreg in soc15_pcie_rreg
drivers/gpu/drm/amd/amdgpu/soc15.c, 105:
_raw_spin_lock_irqsave in soc15_pcie_rreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
[FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 139:
amdgpu_mm_wreg in cik_uvd_ctx_wreg
drivers/gpu/drm/amd/amdgpu/cik.c, 138:
_raw_spin_lock_irqsave in cik_uvd_ctx_wreg

[FUNC] mutex_lock_nested
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c, 1477:
mutex_lock_nested in amdgpu_dpm_enable_uvd
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c, 1154:
amdgpu_dpm_enable_uvd in amdgpu_uvd_ring_begin_use
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c, 80:
[FUNC_PTR]amdgpu_uvd_ring_begin_use in amdgpu_ring_alloc
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c, 199:
amdgpu_ring_alloc in amdgpu_virt_kiq_wreg
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c, 207:
amdgpu_virt_kiq_wreg in amdgpu_mm_wreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 126:
amdgpu_mm_wreg in dce_v6_0_audio_endpt_rreg
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c, 125:
_raw_spin_lock_irqsave in dce_v6_0_audio_endpt_rreg

Note that [FUNC_PTR] means a function pointer call is used.

These bugs are found by my static analysis tool DSAC.


Best wishes,
Jia-Ju Bai


[PATCH] gpu: drm: radeon: r600: Replace mdelay() and udelay() with msleep() and usleep_range()

2018-08-06 Thread Jia-Ju Bai
r600_gpu_soft_reset() and r600_gpu_pci_config_reset() are never
called in atomic context.
They call mdelay() and udelay() to busy-wait, which is not necessary.
mdelay() and udelay() can be replaced with msleep() and usleep_range().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/r600.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index e06e2d8feab3..de5f6d9f251e 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -1705,7 +1705,7 @@ static void r600_gpu_soft_reset(struct radeon_device 
*rdev, u32 reset_mask)
WREG32(DMA_RB_CNTL, tmp);
}
 
-   mdelay(50);
+   msleep(50);
 
rv515_mc_stop(rdev, &save);
if (r600_mc_wait_for_idle(rdev)) {
@@ -1782,7 +1782,7 @@ static void r600_gpu_soft_reset(struct radeon_device 
*rdev, u32 reset_mask)
WREG32(R_008020_GRBM_SOFT_RESET, tmp);
tmp = RREG32(R_008020_GRBM_SOFT_RESET);
 
-   udelay(50);
+   usleep_range(50, 100);
 
tmp &= ~grbm_soft_reset;
WREG32(R_008020_GRBM_SOFT_RESET, tmp);
@@ -1796,7 +1796,7 @@ static void r600_gpu_soft_reset(struct radeon_device 
*rdev, u32 reset_mask)
WREG32(SRBM_SOFT_RESET, tmp);
tmp = RREG32(SRBM_SOFT_RESET);
 
-   udelay(50);
+   usleep_range(50, 100);
 
tmp &= ~srbm_soft_reset;
WREG32(SRBM_SOFT_RESET, tmp);
@@ -1804,10 +1804,10 @@ static void r600_gpu_soft_reset(struct radeon_device 
*rdev, u32 reset_mask)
}
 
/* Wait a little for things to settle down */
-   mdelay(1);
+   usleep_range(1000, 2000);
 
rv515_mc_resume(rdev, &save);
-   udelay(50);
+   usleep_range(50, 100);
 
r600_print_gpu_status_regs(rdev);
 }
@@ -1835,7 +1835,7 @@ static void r600_gpu_pci_config_reset(struct 
radeon_device *rdev)
tmp &= ~DMA_RB_ENABLE;
WREG32(DMA_RB_CNTL, tmp);
 
-   mdelay(50);
+   msleep(50);
 
/* set mclk/sclk to bypass */
if (rdev->family >= CHIP_RV770)
@@ -1857,12 +1857,12 @@ static void r600_gpu_pci_config_reset(struct 
radeon_device *rdev)
 
/* reset */
radeon_pci_config_reset(rdev);
-   mdelay(1);
+   usleep_range(1000, 2000);
 
/* BIF reset workaround.  Not sure if this is needed on 6xx */
tmp = SOFT_RESET_BIF;
WREG32(SRBM_SOFT_RESET, tmp);
-   mdelay(1);
+   usleep_range(1000, 2000);
WREG32(SRBM_SOFT_RESET, 0);
 
/* wait for asic to come out of reset */
-- 
2.17.0



[PATCH] gpu: drm: radeon: r300: Replace mdelay() with msleep() and usleep_range() in r300_asic_reset()

2018-08-06 Thread Jia-Ju Bai
r300_asic_reset() is never called in atomic context.
It calls mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep() and usleep_range().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/r300.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 21161aa8acbf..55cf02400d5a 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -434,9 +434,9 @@ int r300_asic_reset(struct radeon_device *rdev, bool hard)
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_VAP(1) |
S_F0_SOFT_RESET_GA(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* resetting the CP seems to be problematic sometimes it end up
@@ -446,9 +446,9 @@ int r300_asic_reset(struct radeon_device *rdev, bool hard)
 */
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_CP(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* restore PCI & busmastering */
-- 
2.17.0



[PATCH] gpu: drm: radeon: si: Replace mdelay() with msleep() in si_pcie_gen3_enable()

2018-08-06 Thread Jia-Ju Bai
si_pcie_gen3_enable() is never called in atomic context.
It calls mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/si.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index 1907c950d76f..c28743443970 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -7181,7 +7181,7 @@ static void si_pcie_gen3_enable(struct radeon_device 
*rdev)
tmp |= LC_REDO_EQ;
WREG32_PCIE_PORT(PCIE_LC_CNTL4, tmp);
 
-   mdelay(100);
+   msleep(100);
 
/* linkctl */
pci_read_config_word(root, bridge_pos + 
PCI_EXP_LNKCTL, &tmp16);
-- 
2.17.0



[PATCH] gpu: drm: radeon: r100: Replace mdelay() with msleep() and usleep_range() in r100_asic_reset()

2018-08-06 Thread Jia-Ju Bai
r100_asic_reset() is never called in atomic context.
It calls mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep() and usleep_range().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/r100.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index 7d39ed63e5be..09c418113d9a 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -2576,17 +2576,17 @@ int r100_asic_reset(struct radeon_device *rdev, bool 
hard)
S_F0_SOFT_RESET_PP(1) |
S_F0_SOFT_RESET_RB(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* reset CP */
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_CP(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* restore PCI & busmastering */
-- 
2.17.0



[PATCH] gpu: drm: radeon: rs600: Replace mdelay() with msleep() and usleep_range() in rs600_asic_reset()

2018-08-06 Thread Jia-Ju Bai
rs600_asic_reset() is never called in atomic context.
It calls mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep() and usleep_range().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/rs600.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
index f16af119c688..1a97f5fd719b 100644
--- a/drivers/gpu/drm/radeon/rs600.c
+++ b/drivers/gpu/drm/radeon/rs600.c
@@ -472,30 +472,30 @@ int rs600_asic_reset(struct radeon_device *rdev, bool 
hard)
pci_save_state(rdev->pdev);
/* disable bus mastering */
pci_clear_master(rdev->pdev);
-   mdelay(1);
+   usleep_range(1000, 2000);
/* reset GA+VAP */
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_VAP(1) |
S_F0_SOFT_RESET_GA(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* reset CP */
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_CP(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* reset MC */
WREG32(R_F0_RBBM_SOFT_RESET, S_F0_SOFT_RESET_MC(1));
RREG32(R_F0_RBBM_SOFT_RESET);
-   mdelay(500);
+   msleep(500);
WREG32(R_F0_RBBM_SOFT_RESET, 0);
-   mdelay(1);
+   usleep_range(1000, 2000);
status = RREG32(R_000E40_RBBM_STATUS);
dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, 
status);
/* restore PCI & busmastering */
-- 
2.17.0



[PATCH] gpu: drm: radeon: radeon_test: Replace mdelay() with msleep()

2018-08-06 Thread Jia-Ju Bai
radeon_test_ring_sync() and radeon_test_ring_sync2() are never 
called in atomic context.
They call mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/radeon_test.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_test.c 
b/drivers/gpu/drm/radeon/radeon_test.c
index f5e9abfadb56..411efa303f54 100644
--- a/drivers/gpu/drm/radeon/radeon_test.c
+++ b/drivers/gpu/drm/radeon/radeon_test.c
@@ -347,7 +347,7 @@ void radeon_test_ring_sync(struct radeon_device *rdev,
if (r)
goto out_cleanup;
 
-   mdelay(1000);
+   msleep(1000);
 
if (radeon_fence_signaled(fence1)) {
DRM_ERROR("Fence 1 signaled without waiting for semaphore.\n");
@@ -368,7 +368,7 @@ void radeon_test_ring_sync(struct radeon_device *rdev,
goto out_cleanup;
}
 
-   mdelay(1000);
+   msleep(1000);
 
if (radeon_fence_signaled(fence2)) {
DRM_ERROR("Fence 2 signaled without waiting for semaphore.\n");
@@ -441,7 +441,7 @@ static void radeon_test_ring_sync2(struct radeon_device 
*rdev,
if (r)
goto out_cleanup;
 
-   mdelay(1000);
+   msleep(1000);
 
if (radeon_fence_signaled(fenceA)) {
DRM_ERROR("Fence A signaled without waiting for semaphore.\n");
@@ -461,7 +461,7 @@ static void radeon_test_ring_sync2(struct radeon_device 
*rdev,
radeon_ring_unlock_commit(rdev, ringC, false);
 
for (i = 0; i < 30; ++i) {
-   mdelay(100);
+   msleep(100);
sigA = radeon_fence_signaled(fenceA);
sigB = radeon_fence_signaled(fenceB);
if (sigA || sigB)
@@ -486,7 +486,7 @@ static void radeon_test_ring_sync2(struct radeon_device 
*rdev,
radeon_semaphore_emit_signal(rdev, ringC->idx, semaphore);
radeon_ring_unlock_commit(rdev, ringC, false);
 
-   mdelay(1000);
+   msleep(1000);
 
r = radeon_fence_wait(fenceA, false);
if (r) {
-- 
2.17.0



[PATCH] gpu: drm: radeon: cik: Replace mdelay() with msleep() in cik_pcie_gen3_enable()

2018-08-06 Thread Jia-Ju Bai
cik_pcie_gen3_enable() is never called in atomic context.
It calls mdelay() to busy-wait, which is not necessary.
mdelay() can be replaced with msleep().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/radeon/cik.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 7c73bc7e2f85..c1b4d4fbcf3a 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -9598,7 +9598,7 @@ static void cik_pcie_gen3_enable(struct radeon_device 
*rdev)
tmp |= LC_REDO_EQ;
WREG32_PCIE_PORT(PCIE_LC_CNTL4, tmp);
 
-   mdelay(100);
+   msleep(100);
 
/* linkctl */
pci_read_config_word(root, bridge_pos + 
PCI_EXP_LNKCTL, &tmp16);
-- 
2.17.0



[PATCH] gpu: drm: amdgpu: Replace mdelay with msleep in cik_pcie_gen3_enable()

2018-07-23 Thread Jia-Ju Bai
cik_pcie_gen3_enable() is only called by cik_common_hw_init(), which is
never called in atomic context.
cik_pcie_gen3_enable() calls mdelay() to busy-wait, which is not
necessary.
mdelay() can be replaced with msleep().

This is found by a static analysis tool named DCNS written by myself.

Signed-off-by: Jia-Ju Bai 
---
 drivers/gpu/drm/amd/amdgpu/cik.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
index 0df22030e713..5b7fab2c2008 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik.c
@@ -1476,7 +1476,7 @@ static void cik_pcie_gen3_enable(struct amdgpu_device 
*adev)
tmp |= PCIE_LC_CNTL4__LC_REDO_EQ_MASK;
WREG32_PCIE(ixPCIE_LC_CNTL4, tmp);
 
-   mdelay(100);
+   msleep(100);
 
/* linkctl */
pci_read_config_word(root, bridge_pos + 
PCI_EXP_LNKCTL, &tmp16);
-- 
2.17.0
