RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock

2017-05-08 Thread Liu, Monk
>Can you explain your reasoning behind your current position that the KIQ 
>shouldn't be used by baremetal amdgpu?

[ML] I didn't mean the KIQ shouldn't be leveraged by bare-metal; how it is 
used on bare-metal is not my concern ... 
I mean it had better not be used in the SR-IOV case by any client other than 
the original two (RLCV, and register access in the KMD).
I assume you are trying to use the KIQ for KFD under SR-IOV, right? Can you 
elaborate on what kind of jobs you are going to submit to the KIQ for KFD? I 
already see one: you want to write registers in interrupt (INTR) context.

THANKS & BR

-Original Message-
From: Andres Rodriguez [mailto:andr...@valvesoftware.com] 
Sent: Monday, May 08, 2017 11:42 PM
To: Liu, Monk ; Andres Rodriguez ; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock



On 2017-05-08 02:08 AM, Liu, Monk wrote:
> Andres
> 
> Some previous patches, like moving the KIQ mutex-lock from amdgpu_virt to a 
> common place, slipped past my NAK, but from a technical perspective those 
> don't matter anyway. This patch and the following ones, however, are headed 
> for a dead end:
> 
> 1, Don't use KIQ to access register inside INTR context ...

This is explained below.

> 2, Don't submit CP/hw blocking jobs on KIQ since they will prevent 
> world switch,

Only WRITE_DATA packets are being submitted. There is no blocking work that 
would prevent a world switch.
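As a rough illustration of what is being emitted, here is a simplified userspace model of writing one WRITE_DATA-style packet into a ring buffer. The PACKET3 encoding and the 0x37 opcode follow my reading of the PM4 format, and the field layout is deliberately simplified; treat all of it as an assumption, not as the driver's actual emission code.

```c
#include <stdint.h>

/* Illustrative PM4-style header encoding (assumed, simplified). */
#define PACKET3(op, n) ((3u << 30) | (((n) & 0x3FFFu) << 16) | (((op) & 0xFFu) << 8))
#define PACKET3_WRITE_DATA 0x37

struct ring { uint32_t buf[32]; unsigned wptr; };

void ring_write(struct ring *r, uint32_t v) { r->buf[r->wptr++] = v; }

/* A register write via the ring: fire-and-forget. Nothing in this
 * sequence waits on the CP, which is the point being made about
 * world switches above. */
unsigned emit_wreg(struct ring *r, uint32_t reg, uint32_t val)
{
    ring_write(r, PACKET3(PACKET3_WRITE_DATA, 3));
    ring_write(r, 0);    /* control word (engine/dst select), simplified */
    ring_write(r, reg);  /* destination register offset */
    ring_write(r, 0);    /* high address bits, unused for a register write */
    ring_write(r, val);  /* the value to write */
    return r->wptr;      /* dwords emitted so far */
}
```

The asynchronous KIQ write macro referenced later in the series would emit essentially this and return without a fence wait.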

> 
> Besides, if you insist on doing the above things, why not use the MEC2 PIPE1 
> queue1 to serve your purpose instead of the KIQ?
> Is there any reason you must use the KIQ?

Having a separate KIQ for bare-metal amdgpu and virtualized amdgpu sounds good 
at face-value. However, I'm not sure if it would introduce subtle 
synchronization bugs if both pipes attempt to touch the same HW resources 
simultaneously.

Can you explain your reasoning behind your current position that the KIQ 
shouldn't be used by baremetal amdgpu?

> 
> 
> BR
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf 
> Of Liu, Monk
> Sent: Monday, May 08, 2017 1:59 PM
> To: Andres Rodriguez ; 
> amd-gfx@lists.freedesktop.org
> Subject: RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a 
> spinlock
> 
> Clearly NAK
> 
>   mutex_lock(&kiq->ring_mutex);
>   amdgpu_ring_alloc(ring, 32);
>   amdgpu_ring_emit_rreg(ring, reg);
>   amdgpu_fence_emit(ring, &f);
>   amdgpu_ring_commit(ring);
>   mutex_unlock(&kiq->ring_mutex);
> 
> 
> be aware that amdgpu_fence_emit() introduces a fence_wait() operation,

Please see the following patch:
[PATCH 6/8] drm/amdgpu: add macro for asynchronous KIQ register writes

This is why this patch is described as "step one".

> Using spin_lock is forbidden here. Why do you want to use a spinlock instead 
> of a mutex? Please give an explanation.

I don't understand what you mean by spin_lock being forbidden. The spin_lock is 
a standard kernel construct specifically designed to address mutual exclusion 
in contexts that disallow sleeping. That is exactly what we are trying to do 
here, as described in the commit message.
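For illustration, here is a minimal userspace sketch of that distinction using C11 atomics (`toy_spinlock` and the helper names are hypothetical, not kernel API): acquiring a spinlock busy-waits rather than putting the caller to sleep, which is exactly what makes it legal in contexts that disallow sleeping, such as an interrupt handler.

```c
#include <stdatomic.h>

/* Hypothetical userspace analogue of the kernel spinlock. */
typedef struct { atomic_flag locked; } toy_spinlock;

void toy_spin_lock(toy_spinlock *l)
{
    /* Busy-wait until the flag is clear: no sleeping is ever involved. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;
}

void toy_spin_unlock(toy_spinlock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* Mirrors the shape of the amdgpu_kiq_rreg critical section:
 * lock, do the ring work, unlock. */
int guarded_increment(toy_spinlock *l, int *counter)
{
    toy_spin_lock(l);
    (*counter)++;        /* stands in for alloc/emit/commit on the ring */
    toy_spin_unlock(l);
    return *counter;
}
```

A mutex, by contrast, may sleep while waiting for the owner, which is why it cannot be taken from interrupt context.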

Regards,
Andres

> 
> BR Monk
> 
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf 
> Of Andres Rodriguez
> Sent: Saturday, May 06, 2017 1:10 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: andre...@gmail.com
> Subject: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock
> 
> First step towards enabling kiq register operations from an interrupt 
> handler
> 
> Signed-off-by: Andres Rodriguez 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 10 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 496ad66..1d987e3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -941,7 +941,7 @@ struct amdgpu_kiq {
>   struct amdgpu_ring  ring;
>   struct amdgpu_irq_src   irq;
>   uint32_treg_val_offs;
> - struct mutexring_mutex;
> + spinlock_t  ring_lock;
>  };
>  
>  /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index ef3a46d..ce22f04 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -169,12 +169,12 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device 
> *adev, uint32_t reg)
>  
>   BUG_ON(!ring->funcs->emit_rreg);
>  
> - mutex_lock(&kiq->ring_mutex);
> + spin_lock(&kiq->ring_lock);
>   amdgpu_ring_alloc(ring, 32);
>   amdgpu_ring_emit_rreg(ring, reg);
>   amdgpu_fence_emit(ring, &f);
>   amdgpu_ring_commit(ring);
> - mutex_unlock(&kiq->ring_mutex);
> + spin_unlock(&kiq->ring_lock);
>  
>   r = dma_fence_wait(f, false);
>   if (r)
> @@ -195,12 +195,12 @@ 

Re: [RFC] Problems with SRBM select on KIQ

2017-05-08 Thread zhoucm1



On 2017-05-06 06:57, Felix Kuehling wrote:

We ran into a similar problem when we played with priorities on KFD
queues. You can't change an MQD of a currently mapped queue. To change a
queue priority we need to unmap it, update the MQD, and then map it again.
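That constraint can be sketched as a toy model (all names here are hypothetical; the real flow goes through UNMAP_QUEUES/MAP_QUEUES packets rather than a boolean flag):

```c
#include <stdbool.h>

/* Toy model: the MQD may only be touched while the queue is unmapped,
 * because the HW owns it while the queue is mapped. */
struct toy_queue { bool mapped; int priority; };

bool set_priority(struct toy_queue *q, int prio)
{
    if (q->mapped)
        return false;        /* mapped: HW owns the MQD, reject the update */
    q->priority = prio;
    return true;
}

/* The safe sequence Felix describes: unmap, update, map again. */
bool set_priority_safe(struct toy_queue *q, int prio)
{
    bool was_mapped = q->mapped;
    q->mapped = false;           /* stands in for an UNMAP_QUEUES packet */
    bool ok = set_priority(q, prio);
    q->mapped = was_mapped;      /* stands in for MAP_QUEUES */
    return ok;
}
```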

I wonder if there is similar requirement when engine soft reset happens.
We encounter the issue that KIQ doesn't work any more after gfx/compute 
engine soft reset.


Regards,
David Zhou


I wonder if a queue can change its own priority by using register write
commands in the queue itself.

Regards,
   Felix


On 17-05-05 06:40 PM, Alex Deucher wrote:

On Fri, May 5, 2017 at 1:10 PM, Andres Rodriguez  wrote:

Hey Everyone,

On one of the previous comments for this series I received feedback that
the register programming should be done through the KIQ. This series has
the relevant changes.

However, register writes that require an srbm_select are not working
correctly. The effect is as if the write never happened.

Maybe I'm just doing something silly or dumb in here, but I can't really
see it. Possibly the way I try to perform the srbm_select for KIQ?

Any feedback on this issue is appreciated.

I think to make this work, what you'd probably need to do is unmap the
queue, update the priority in the MQD, and then map it again.  It's
probably more efficient to use runlists, but that is a lot of work.

Alex
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx





Re: Soliciting DRM feedback on latest DC rework

2017-05-08 Thread Harry Wentland



On 2017-05-08 03:07 PM, Dave Airlie wrote:

On 9 May 2017 at 04:54, Harry Wentland  wrote:

Hi Daniel,

Thanks for taking the time to look at DC.

I had a couple more questions/comments in regard to the patch you posted on
IRC: http://paste.debian.net/plain/930704

My impression is that this item is the most important next step for us:


From a quick glance I think what we want instead is:
- amdgpu_crtc_state and amdgpu_plane_state as following:

struct amdgpu_crtc_state {
struct drm_crtc_state base;
struct dc_stream;
};



Unfortunately as sketched here it doesn't quite mesh with what we currently
have, which is:

struct stream {
struct core_stream protected;
...
}

struct core_stream {
struct dc_stream public;
...
}

struct dc_stream {
...
}
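For reference, the embedding Daniel sketches relies on the usual `container_of()` idiom to get from the DRM base state back to the driver wrapper. A minimal userspace rendition (struct contents are stubs, not the real fields):

```c
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of(). */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct drm_crtc_state { int active; };   /* stub of the DRM base state */
struct dc_stream { int stream_id; };     /* stub of the collapsed DC object */

struct amdgpu_crtc_state {
    struct drm_crtc_state base;          /* embedded base, per the sketch */
    struct dc_stream stream;
};

/* Recover the driver wrapper from the base pointer DRM hands back;
 * container_of() subtracts the member offset, so this works regardless
 * of where 'base' sits in the struct. */
struct amdgpu_crtc_state *to_amdgpu_crtc_state(struct drm_crtc_state *s)
{
    return container_of(s, struct amdgpu_crtc_state, base);
}
```

Collapsing stream/core_stream/dc_stream into one struct would let this single-level embedding work without the extra public/protected indirection.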



Is there any major reason to keep all those abstractions?

Could you collapse everything into struct dc_stream?

I haven't looked recently but I didn't get the impression there was a
lot of design around what was public/protected, more whatever needed
to be used by someone else was in public.



Yeah, I've been thinking the same recently.

The thought behind public/protected was that DM can modify public but 
would never touch protected. We don't really want to give DM an 
opportunity to touch core_stream directly. If it does we might lose a 
lot of our confidence in DC correctness which comes from running the 
same code on different OSes.


That said, this could be caught during code review and collapsing these 
structs would simplify things a bit. For one, it would allow me to take 
Daniel's proposal as-is.


I'll think about it some more and maybe mock up a couple patches and see 
how they're received internally.


Harry


Dave.




Re: [PATCH] drm/amdgpu: remove unsed amdgpu_gem_handle_lockup

2017-05-08 Thread Alex Deucher
On Mon, May 8, 2017 at 9:25 AM, Christian König  wrote:
> From: Christian König 
>
> This kind of reset handling was removed a long time ago.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 45 
> -
>  1 file changed, 11 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 67be795..28c8348 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -219,16 +219,6 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
> ttm_eu_backoff_reservation(&ticket, &list);
>  }
>
> -static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)
> -{
> -   if (r == -EDEADLK) {
> -   r = amdgpu_gpu_reset(adev);
> -   if (!r)
> -   r = -EAGAIN;
> -   }
> -   return r;
> -}
> -
>  /*
>   * GEM ioctls.
>   */
> @@ -249,20 +239,17 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, 
> void *data,
>   AMDGPU_GEM_CREATE_CPU_GTT_USWC |
>   AMDGPU_GEM_CREATE_VRAM_CLEARED|
>   AMDGPU_GEM_CREATE_SHADOW |
> - AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {
> -   r = -EINVAL;
> -   goto error_unlock;
> -   }
> + AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS))
> +   return -EINVAL;
> +
> /* reject invalid gem domains */
> if (args->in.domains & ~(AMDGPU_GEM_DOMAIN_CPU |
>  AMDGPU_GEM_DOMAIN_GTT |
>  AMDGPU_GEM_DOMAIN_VRAM |
>  AMDGPU_GEM_DOMAIN_GDS |
>  AMDGPU_GEM_DOMAIN_GWS |
> -AMDGPU_GEM_DOMAIN_OA)) {
> -   r = -EINVAL;
> -   goto error_unlock;
> -   }
> +AMDGPU_GEM_DOMAIN_OA))
> +   return -EINVAL;
>
> /* create a gem object to contain this object in */
> if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
> @@ -274,10 +261,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
> size = size << AMDGPU_GWS_SHIFT;
> else if (args->in.domains == AMDGPU_GEM_DOMAIN_OA)
> size = size << AMDGPU_OA_SHIFT;
> -   else {
> -   r = -EINVAL;
> -   goto error_unlock;
> -   }
> +   else
> +   return -EINVAL;
> }
> size = roundup(size, PAGE_SIZE);
>
> @@ -286,21 +271,17 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, 
> void *data,
>  args->in.domain_flags,
>  kernel, &gobj);
> if (r)
> -   goto error_unlock;
> +   return r;
>
> r = drm_gem_handle_create(filp, gobj, &handle);
> /* drop reference from allocate - handle holds it now */
> drm_gem_object_unreference_unlocked(gobj);
> if (r)
> -   goto error_unlock;
> +   return r;
>
> memset(args, 0, sizeof(*args));
> args->out.handle = handle;
> return 0;
> -
> -error_unlock:
> -   r = amdgpu_gem_handle_lockup(adev, r);
> -   return r;
>  }
>
>  int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
> @@ -334,7 +315,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>  AMDGPU_GEM_DOMAIN_CPU, 0,
> 0, &gobj);
> if (r)
> -   goto handle_lockup;
> +   return r;
>
> bo = gem_to_amdgpu_bo(gobj);
> bo->prefered_domains = AMDGPU_GEM_DOMAIN_GTT;
> @@ -374,7 +355,7 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
> /* drop reference from allocate - handle holds it now */
> drm_gem_object_unreference_unlocked(gobj);
> if (r)
> -   goto handle_lockup;
> +   return r;
>
> args->handle = handle;
> return 0;
> @@ -388,9 +369,6 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void 
> *data,
>  release_object:
> drm_gem_object_unreference_unlocked(gobj);
>
> -handle_lockup:
> -   r = amdgpu_gem_handle_lockup(adev, r);
> -
> return r;
>  }
>
> @@ -486,7 +464,6 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, 
> void *data,
> r = ret;
>
> drm_gem_object_unreference_unlocked(gobj);
> -   r = amdgpu_gem_handle_lockup(adev, r);
> return r;
>  }
>
> --
> 2.7.4
>
> 

Re: [PATCH 1/2] drm/amdgpu/atomfirmware: add function to update engine hang status

2017-05-08 Thread Alex Deucher
On Fri, May 5, 2017 at 10:27 AM, Alex Deucher  wrote:
> Update the scratch reg for when the engine is hung.
>
> Signed-off-by: Alex Deucher 

ping on this series.

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c | 13 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h |  2 ++
>  2 files changed, 15 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> index d735cd1..4bdda56 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> @@ -88,6 +88,19 @@ void amdgpu_atomfirmware_scratch_regs_restore(struct 
> amdgpu_device *adev)
> WREG32(adev->bios_scratch_reg_offset + i, 
> adev->bios_scratch[i]);
>  }
>
> +void amdgpu_atomfirmware_scratch_regs_engine_hung(struct amdgpu_device *adev,
> + bool hung)
> +{
> +   u32 tmp = RREG32(adev->bios_scratch_reg_offset + 3);
> +
> +   if (hung)
> +   tmp |= ATOM_S3_ASIC_GUI_ENGINE_HUNG;
> +   else
> +   tmp &= ~ATOM_S3_ASIC_GUI_ENGINE_HUNG;
> +
> +   WREG32(adev->bios_scratch_reg_offset + 3, tmp);
> +}
> +
>  int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev)
>  {
> struct atom_context *ctx = adev->mode_info.atom_context;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
> index d0c4dcd..a2c3ebe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
> @@ -28,6 +28,8 @@ bool amdgpu_atomfirmware_gpu_supports_virtualization(struct 
> amdgpu_device *adev)
>  void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev);
>  void amdgpu_atomfirmware_scratch_regs_save(struct amdgpu_device *adev);
>  void amdgpu_atomfirmware_scratch_regs_restore(struct amdgpu_device *adev);
> +void amdgpu_atomfirmware_scratch_regs_engine_hung(struct amdgpu_device *adev,
> + bool hung);
>  int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev);
>
>  #endif
> --
> 2.5.5
>
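The scratch-register update in `amdgpu_atomfirmware_scratch_regs_engine_hung` above is the standard read-modify-write set/clear idiom. A plain-C rendition, with the register replaced by a value passed in (the bit position is illustrative, not the real `ATOM_S3_ASIC_GUI_ENGINE_HUNG` encoding):

```c
#include <stdint.h>

/* Illustrative bit position; the real flag value lives in the atombios
 * headers and is an assumption here. */
#define ENGINE_HUNG_BIT (1u << 20)

/* Set or clear the hung flag without disturbing any other bit in the
 * scratch word, exactly like the RREG32/WREG32 pair in the patch. */
uint32_t update_hung_flag(uint32_t scratch, int hung)
{
    if (hung)
        scratch |= ENGINE_HUNG_BIT;    /* set only this bit */
    else
        scratch &= ~ENGINE_HUNG_BIT;   /* clear only this bit */
    return scratch;
}
```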


Re: [PATCH] iommu/amd: flush IOTLB for specific domains only

2017-05-08 Thread Daniel Drake
On Wed, Apr 5, 2017 at 9:01 AM, Nath, Arindam  wrote:
>
> >-Original Message-
> >From: Daniel Drake [mailto:dr...@endlessm.com]
> >Sent: Thursday, March 30, 2017 7:15 PM
> >To: Nath, Arindam
> >Cc: j...@8bytes.org; Deucher, Alexander; Bridgman, John; amd-
> >g...@lists.freedesktop.org; io...@lists.linux-foundation.org; Suthikulpanit,
> >Suravee; Linux Upstreaming Team
> >Subject: Re: [PATCH] iommu/amd: flush IOTLB for specific domains only
> >
> >On Thu, Mar 30, 2017 at 12:23 AM, Nath, Arindam 
> >wrote:
> >> Daniel, did you get chance to test this patch?
> >
> >Not yet. Should we test it alone or alongside "PCI: Blacklist AMD
> >Stoney GPU devices for ATS"?
>
> Daniel, any luck with this patch?

Sorry for the delay. The patch appears to be working fine.

Daniel


[PATCH] gpu: drm: amd: amdgpu: remove dead code

2017-05-08 Thread Gustavo A. R. Silva
Local variable use_doorbell is assigned to a constant value and it is never
updated again. Remove this variable and the dead code it guards.

Addresses-Coverity-ID: 1401828
Signed-off-by: Gustavo A. R. Silva 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 53 +--
 1 file changed, 20 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 67afc90..e824c2b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -4991,7 +4991,6 @@ static int gfx_v8_0_cp_compute_resume(struct 
amdgpu_device *adev)
 {
int r, i, j;
u32 tmp;
-   bool use_doorbell = true;
u64 hqd_gpu_addr;
u64 mqd_gpu_addr;
u64 eop_gpu_addr;
@@ -5079,11 +5078,7 @@ static int gfx_v8_0_cp_compute_resume(struct 
amdgpu_device *adev)
 
/* enable doorbell? */
tmp = RREG32(mmCP_HQD_PQ_DOORBELL_CONTROL);
-   if (use_doorbell) {
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_EN, 1);
-   } else {
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_EN, 0);
-   }
+   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_EN, 1);
WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, tmp);
mqd->cp_hqd_pq_doorbell_control = tmp;
 
@@ -5157,29 +5152,23 @@ static int gfx_v8_0_cp_compute_resume(struct 
amdgpu_device *adev)
   mqd->cp_hqd_pq_wptr_poll_addr_hi);
 
/* enable the doorbell if requested */
-   if (use_doorbell) {
-   if ((adev->asic_type == CHIP_CARRIZO) ||
-   (adev->asic_type == CHIP_FIJI) ||
-   (adev->asic_type == CHIP_STONEY) ||
-   (adev->asic_type == CHIP_POLARIS11) ||
-   (adev->asic_type == CHIP_POLARIS10) ||
-   (adev->asic_type == CHIP_POLARIS12)) {
-   WREG32(mmCP_MEC_DOORBELL_RANGE_LOWER,
-  AMDGPU_DOORBELL_KIQ << 2);
-   WREG32(mmCP_MEC_DOORBELL_RANGE_UPPER,
-  AMDGPU_DOORBELL_MEC_RING7 << 2);
-   }
-   tmp = RREG32(mmCP_HQD_PQ_DOORBELL_CONTROL);
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-   DOORBELL_OFFSET, 
ring->doorbell_index);
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_EN, 1);
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_SOURCE, 0);
-   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_HIT, 0);
-   mqd->cp_hqd_pq_doorbell_control = tmp;
-
-   } else {
-   mqd->cp_hqd_pq_doorbell_control = 0;
+   if ((adev->asic_type == CHIP_CARRIZO) ||
+   (adev->asic_type == CHIP_FIJI) ||
+   (adev->asic_type == CHIP_STONEY) ||
+   (adev->asic_type == CHIP_POLARIS11) ||
+   (adev->asic_type == CHIP_POLARIS10) ||
+   (adev->asic_type == CHIP_POLARIS12)) {
+   WREG32(mmCP_MEC_DOORBELL_RANGE_LOWER, 
AMDGPU_DOORBELL_KIQ << 2);
+   WREG32(mmCP_MEC_DOORBELL_RANGE_UPPER, 
AMDGPU_DOORBELL_MEC_RING7 << 2);
}
+   tmp = RREG32(mmCP_HQD_PQ_DOORBELL_CONTROL);
+   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+  DOORBELL_OFFSET, ring->doorbell_index);
+   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_EN, 1);
+   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_SOURCE, 0);
+   tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, 
DOORBELL_HIT, 0);
+   mqd->cp_hqd_pq_doorbell_control = tmp;
+
WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL,
   mqd->cp_hqd_pq_doorbell_control);
 
@@ -5217,11 +5206,9 @@ static int gfx_v8_0_cp_compute_resume(struct 
amdgpu_device *adev)
amdgpu_bo_unreserve(ring->mqd_obj);
}
 
-   if (use_doorbell) {
-   tmp = RREG32(mmCP_PQ_STATUS);
-   tmp = REG_SET_FIELD(tmp, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
-   WREG32(mmCP_PQ_STATUS, tmp);
-   }
+   tmp = RREG32(mmCP_PQ_STATUS);
+   tmp = REG_SET_FIELD(tmp, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
+   WREG32(mmCP_PQ_STATUS, tmp);
 
gfx_v8_0_cp_compute_enable(adev, true);
 
-- 
2.5.0


RE: [PATCH] drm/amdgpu: fix errors in comments.

2017-05-08 Thread Deucher, Alexander


> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Alex Xie
> Sent: Monday, May 08, 2017 11:32 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Xie, AlexBin
> Subject: [PATCH] drm/amdgpu: fix errors in comments.
> 
> Signed-off-by: Alex Xie 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 66bb60e..aab3206 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -479,9 +479,8 @@ void amdgpu_doorbell_get_kfd_info(struct
> amdgpu_device *adev,
> 
>  /*
>   * amdgpu_wb_*()
> - * Writeback is the the method by which the the GPU updates special pages
> - * in memory with the status of certain GPU events (fences, ring pointers,
> - * etc.).
> + * Writeback is the method by which GPU updates special pages in memory
> + * with the status of certain GPU events (fences, ring pointers,etc.).
>   */
> 
>  /**
> @@ -507,7 +506,7 @@ static void amdgpu_wb_fini(struct amdgpu_device
> *adev)
>   *
>   * @adev: amdgpu_device pointer
>   *
> - * Disables Writeback and frees the Writeback memory (all asics).
> + * Initialize writeback and allocates writeback memory (all asics).
>   * Used at driver startup.
>   * Returns 0 on success or an -error on failure.
>   */
> --
> 2.7.4
> 


Re: [PATCH] drm/amdgpu/gfx6: flush caches after IB with the correct vmid

2017-05-08 Thread Nicolai Hähnle
Unfortunately, further testing shows that this doesn't actually fix the 
problem. FWIW, that test runs very reliably on SI with the radeon drm, 
but with the amdgpu drm it fails. VI is fine on amdgpu, which is why I 
was sent down this road.


Anyway, back to trying to figure this out :/

Cheers,
Nicolai

On 08.05.2017 11:30, Nicolai Hähnle wrote:

From: Nicolai Hähnle 

Bring the code in line with what the radeon module does.

Without this change, the fence following the IB may be signalled
to the CPU even though some data written by shaders may not have
been written back yet.

This change fixes the OpenGL CTS test
GL45-CTS.gtf32.GL3Tests.packed_pixels.packed_pixels_pbo on Verde.

Signed-off-by: Nicolai Hähnle 
--
So, I'm not too familiar with the details of these syncs, but the radeon
module effectively does:

- IB
- SURFACE_SYNC (vm_id of IB)
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (kernel fence)

While amdgpu now does (with this change):

- IB
- SURFACE_SYNC (vm_id of IB) <-- this was missing
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (kernel fence)
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (user fence)

It seems like at least the second SURFACE_SYNC (vm_id == 0) should be
redundant, so the question is whether the SURFACE_SYNC (vm_id == 0)
should be rearranged somehow, perhaps also added to the IB emission.
But for better bisectability, that should probably be a separate
change.
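To make the two orderings easier to compare, here is a toy trace of the amdgpu sequence as described above. The entries are labels only, not real packets, and the helper name is hypothetical:

```c
/* Labels for the post-IB flush/fence sequence being discussed. */
enum pkt { IB, SYNC_VMID, SYNC_VM0, EOP_KERNEL, EOP_USER };

/* Emit the amdgpu-side ordering (with this patch applied). */
unsigned emit_frame(enum pkt *out, int with_user_fence)
{
    unsigned n = 0;
    out[n++] = IB;
    out[n++] = SYNC_VMID;    /* the per-VMID flush this patch adds */
    out[n++] = SYNC_VM0;
    out[n++] = EOP_KERNEL;   /* kernel fence signals only after the flush */
    if (with_user_fence) {
        out[n++] = SYNC_VM0; /* the likely-redundant second vm_id==0 sync */
        out[n++] = EOP_USER;
    }
    return n;
}
```

The radeon module's sequence is the first four entries; whether the second `SYNC_VM0` can be dropped or rearranged is exactly the open question.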

Cheers,
Nicolai
---
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 19 +--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
index 03d2a0a..c4f444d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
@@ -1922,20 +1922,35 @@ static void gfx_v6_0_ring_emit_ib(struct amdgpu_ring 
*ring,
control |= ib->length_dw | (vm_id << 24);

amdgpu_ring_write(ring, header);
amdgpu_ring_write(ring,
 #ifdef __BIG_ENDIAN
  (2 << 0) |
 #endif
  (ib->gpu_addr & 0xFFFC));
amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr) & 0x);
amdgpu_ring_write(ring, control);
+
+   if (!(ib->flags & AMDGPU_IB_FLAG_CE)) {
+   /* flush read cache over gart for this vmid */
+   amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
+   amdgpu_ring_write(ring, (mmCP_COHER_CNTL2 - 
PACKET3_SET_CONFIG_REG_START));
+   amdgpu_ring_write(ring, vm_id);
+   amdgpu_ring_write(ring, PACKET3(PACKET3_SURFACE_SYNC, 3));
+   amdgpu_ring_write(ring, PACKET3_TCL1_ACTION_ENA |
+ PACKET3_TC_ACTION_ENA |
+ PACKET3_SH_KCACHE_ACTION_ENA |
+ PACKET3_SH_ICACHE_ACTION_ENA);
+   amdgpu_ring_write(ring, 0x);
+   amdgpu_ring_write(ring, 0);
+   amdgpu_ring_write(ring, 10); /* poll interval */
+   }
 }

 /**
  * gfx_v6_0_ring_test_ib - basic ring IB test
  *
  * @ring: amdgpu_ring structure holding ring information
  *
  * Allocate an IB and execute it on the gfx ring (SI).
  * Provides a basic gfx ring test to verify that IBs are working.
  * Returns 0 on success, error on failure.
@@ -3631,21 +3646,21 @@ static const struct amdgpu_ring_funcs 
gfx_v6_0_ring_funcs_gfx = {
.get_rptr = gfx_v6_0_ring_get_rptr,
.get_wptr = gfx_v6_0_ring_get_wptr,
.set_wptr = gfx_v6_0_ring_set_wptr_gfx,
.emit_frame_size =
5 + /* gfx_v6_0_ring_emit_hdp_flush */
5 + /* gfx_v6_0_ring_emit_hdp_invalidate */
14 + 14 + 14 + /* gfx_v6_0_ring_emit_fence x3 for user fence, 
vm fence */
7 + 4 + /* gfx_v6_0_ring_emit_pipeline_sync */
17 + 6 + /* gfx_v6_0_ring_emit_vm_flush */
3 + 2, /* gfx_v6_ring_emit_cntxcntl including vgt flush */
-   .emit_ib_size = 6, /* gfx_v6_0_ring_emit_ib */
+   .emit_ib_size = 6 + 8, /* gfx_v6_0_ring_emit_ib */
.emit_ib = gfx_v6_0_ring_emit_ib,
.emit_fence = gfx_v6_0_ring_emit_fence,
.emit_pipeline_sync = gfx_v6_0_ring_emit_pipeline_sync,
.emit_vm_flush = gfx_v6_0_ring_emit_vm_flush,
.emit_hdp_flush = gfx_v6_0_ring_emit_hdp_flush,
.emit_hdp_invalidate = gfx_v6_0_ring_emit_hdp_invalidate,
.test_ring = gfx_v6_0_ring_test_ring,
.test_ib = gfx_v6_0_ring_test_ib,
.insert_nop = amdgpu_ring_insert_nop,
.emit_cntxcntl = gfx_v6_ring_emit_cntxcntl,
@@ -3657,21 +3672,21 @@ static const struct amdgpu_ring_funcs 
gfx_v6_0_ring_funcs_compute = {
.nop = 0x8000,
.get_rptr = gfx_v6_0_ring_get_rptr,
.get_wptr = gfx_v6_0_ring_get_wptr,
.set_wptr = gfx_v6_0_ring_set_wptr_compute,
.emit_frame_size =
5 + /* 

RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock

2017-05-08 Thread Andres Rodriguez


On 2017-05-08 02:08 AM, Liu, Monk wrote:
> Andres
> 
> Some previous patches, like moving the KIQ mutex-lock from amdgpu_virt to a 
> common place, slipped past my NAK, but from a technical perspective those 
> don't matter anyway. This patch and the following ones, however, are headed 
> for a dead end:
> 
> 1, Don't use KIQ to access register inside INTR context ...

This is explained below.

> 2, Don't submit CP/hw blocking jobs on KIQ since they will prevent world 
> switch,

Only WRITE_DATA packets are being submitted. There is no blocking work that 
would prevent a world switch.

> 
> Besides, if you insist on doing the above things, why not use the MEC2 PIPE1 
> queue1 to serve your purpose instead of the KIQ?
> Is there any reason you must use the KIQ?

Having a separate KIQ for bare-metal amdgpu and virtualized amdgpu sounds good 
at face-value. However, I'm not sure if it would introduce subtle 
synchronization bugs if both pipes attempt to touch the same HW resources 
simultaneously.

Can you explain your reasoning behind your current position that the KIQ 
shouldn't be used by baremetal amdgpu?

> 
> 
> BR
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
> Liu, Monk
> Sent: Monday, May 08, 2017 1:59 PM
> To: Andres Rodriguez ; amd-gfx@lists.freedesktop.org
> Subject: RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock
> 
> Clearly NAK
> 
>   mutex_lock(&kiq->ring_mutex);
>   amdgpu_ring_alloc(ring, 32);
>   amdgpu_ring_emit_rreg(ring, reg);
>   amdgpu_fence_emit(ring, &f);
>   amdgpu_ring_commit(ring);
>   mutex_unlock(&kiq->ring_mutex);
> 
> 
> be aware that amdgpu_fence_emit() introduces a fence_wait() operation,

Please see the following patch:
[PATCH 6/8] drm/amdgpu: add macro for asynchronous KIQ register writes

This is why this patch is described as "step one".

> Using spin_lock is forbidden here. Why do you want to use a spinlock instead 
> of a mutex? Please give an explanation.

I don't understand what you mean by spin_lock being forbidden. The spin_lock is 
a standard kernel construct specifically designed to address mutual exclusion 
in contexts that disallow sleeping. That is exactly what we are trying to do 
here, as described in the commit message.

Regards,
Andres

> 
> BR Monk
> 
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
> Andres Rodriguez
> Sent: Saturday, May 06, 2017 1:10 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: andre...@gmail.com
> Subject: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock
> 
> First step towards enabling kiq register operations from an interrupt handler
> 
> Signed-off-by: Andres Rodriguez 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 10 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 496ad66..1d987e3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -941,7 +941,7 @@ struct amdgpu_kiq {
>   struct amdgpu_ring  ring;
>   struct amdgpu_irq_src   irq;
>   uint32_treg_val_offs;
> - struct mutexring_mutex;
> + spinlock_t  ring_lock;
>  };
>  
>  /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index ef3a46d..ce22f04 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -169,12 +169,12 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, 
> uint32_t reg)
>  
>   BUG_ON(!ring->funcs->emit_rreg);
>  
> - mutex_lock(&kiq->ring_mutex);
> + spin_lock(&kiq->ring_lock);
>   amdgpu_ring_alloc(ring, 32);
>   amdgpu_ring_emit_rreg(ring, reg);
>   amdgpu_fence_emit(ring, &f);
>   amdgpu_ring_commit(ring);
> - mutex_unlock(&kiq->ring_mutex);
> + spin_unlock(&kiq->ring_lock);
>  
>   r = dma_fence_wait(f, false);
>   if (r)
> @@ -195,12 +195,12 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, 
> uint32_t reg, uint32_t v)
>  
>   BUG_ON(!ring->funcs->emit_wreg);
>  
> - mutex_lock(&kiq->ring_mutex);
> + spin_lock(&kiq->ring_lock);
>   amdgpu_ring_alloc(ring, 32);
>   amdgpu_ring_emit_wreg(ring, reg, v);
>   amdgpu_fence_emit(ring, &f);
>   amdgpu_ring_commit(ring);
> - mutex_unlock(&kiq->ring_mutex);
> + spin_unlock(&kiq->ring_lock);
>  
>   r = dma_fence_wait(f, false);
>   if (r)
> @@ -1903,7 +1903,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   mutex_init(&adev->firmware.mutex);
>   mutex_init(&adev->pm.mutex);
>   mutex_init(&adev->gfx.gpu_clock_mutex);
> - mutex_init(&adev->gfx.kiq.ring_mutex);
>   mutex_init(&adev->srbm_mutex);
>   mutex_init(&adev->grbm_idx_mutex);
>   mutex_init(&adev->mn_lock);
> @@ -1921,6 +1920,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   

[PATCH] drm/amdgpu: fix errors in comments.

2017-05-08 Thread Alex Xie
Signed-off-by: Alex Xie 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 66bb60e..aab3206 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -479,9 +479,8 @@ void amdgpu_doorbell_get_kfd_info(struct amdgpu_device 
*adev,
 
 /*
  * amdgpu_wb_*()
- * Writeback is the the method by which the the GPU updates special pages
- * in memory with the status of certain GPU events (fences, ring pointers,
- * etc.).
+ * Writeback is the method by which GPU updates special pages in memory
+ * with the status of certain GPU events (fences, ring pointers,etc.).
  */
 
 /**
@@ -507,7 +506,7 @@ static void amdgpu_wb_fini(struct amdgpu_device *adev)
  *
  * @adev: amdgpu_device pointer
  *
- * Disables Writeback and frees the Writeback memory (all asics).
+ * Initialize writeback and allocates writeback memory (all asics).
  * Used at driver startup.
  * Returns 0 on success or an -error on failure.
  */
-- 
2.7.4



Re: [PATCH 4/4] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

2017-05-08 Thread Christian König

Am 08.05.2017 um 09:01 schrieb Liu, Monk:

@Christian

This one is changed to guilty job scheme accordingly with your response

BR Monk

-Original Message-
From: Monk Liu [mailto:monk@amd.com]
Sent: Monday, May 08, 2017 3:00 PM
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Monk 
Subject: [PATCH] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

1,TDR will kick out the guilty job if it hangs beyond the threshold given by the 
kernel parameter "job_hang_limit"; that way a bad command stream will not 
infinitely cause GPU hangs.

by default this threshold is 1, so a job will be kicked out after it hangs once.

2,if a job times out, the TDR routine will not reset all scheds/rings; instead it 
will only reset the given one indicated by @job of amdgpu_sriov_gpu_reset. 
That way we don't need to reset and recover each sched/ring if we already 
know which job caused the GPU hang.

3,unblock sriov_gpu_reset for AI family.
4,don't init entity for KIQ

TODO:
when a job is considered guilty, we should set a flag in its fence status and 
make the UMD side aware that the fence signaled not because the job 
completed but because it hung.

Change-Id: I7b89c19a3de93249db570d0a80522176b1525a09
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  4 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 36 ---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  6 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  1 +
  drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11 +++-  
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7 ++
  8 files changed, 60 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 90a69bf..93bcea2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -111,6 +111,7 @@ extern int amdgpu_prim_buf_per_se;
 extern int amdgpu_pos_buf_per_se;
 extern int amdgpu_cntl_sb_buf_per_se;
 extern int amdgpu_param_buf_per_se;
+extern int amdgpu_job_hang_limit;
  
  #define AMDGPU_DEFAULT_GTT_SIZE_MB		3072ULL /* 3GB by default */

  #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS3000
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index b43..23afc58 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -52,6 +52,10 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, 
struct amdgpu_ctx *ctx)
struct amd_sched_rq *rq;
  
		rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];

+
+   if (ring == &adev->gfx.kiq.ring)
+   continue;
+


That looks like a bug fix and should probably go into a separate patch.


		r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
  rq, amdgpu_sched_jobs);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 0e5f314..f3990fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2537,7 +2537,7 @@ static int amdgpu_recover_vram_from_shadow(struct 
amdgpu_device *adev,
   */
  int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job 
*job)  {
-   int i, r = 0;
+   int i, j, r = 0;
int resched;
struct amdgpu_bo *bo, *tmp;
struct amdgpu_ring *ring;
@@ -2550,19 +2550,30 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, 
struct amdgpu_job *job)
/* block TTM */
	resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
  
-	/* block scheduler */

-   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-   ring = adev->rings[i];
+   /* we start from the ring trigger GPU hang */
+   j = job ? job->ring->idx : 0;
+
+   if (job)
+   if (amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
+   amd_sched_job_kickout(&job->base);


Well that looks like the wrong order to me. We should probably stop the 
scheduler before trying to mess with the job.


  
+	/* block scheduler */

+   for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
+   ring = adev->rings[i % AMDGPU_MAX_RINGS];
if (!ring || !ring->sched.thread)
continue;
  
  		kthread_park(ring->sched.thread);

+
+   if (job && j != i)
+   continue;
+
+   /* only do job_reset on the hang ring if @job not NULL */
		amd_sched_hw_job_reset(&ring->sched);
-   }
  
-	/* after all hw jobs are reset, hw fence is meaningless, so force_completion */

-   amdgpu_fence_driver_force_completion(adev);
+   /* after all hw jobs are reset, hw fence is meaningless, so force_completion
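For reference, the wrap-around ring walk and the job_hang_limit kickout check described in the commit message can be sketched in plain C. The names, the MAX_RINGS value, and the counter model below are illustrative stand-ins, not the driver's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for AMDGPU_MAX_RINGS; the real value depends on the ASIC. */
#define MAX_RINGS 4

/* Hypothetical model of amd_sched_invalidate_job(): bump the job's hang
 * counter and report whether it reached the kickout limit. */
static bool job_hang_exceeds_limit(int *hang_count, int limit)
{
	return ++(*hang_count) >= limit;
}

/* Visit every ring exactly once, starting from the ring that triggered
 * the hang and wrapping around, mirroring the patch's
 * "for (i = j; i < j + AMDGPU_MAX_RINGS; ++i)" with "i % AMDGPU_MAX_RINGS". */
static int visit_rings_from(int guilty_idx, int *order)
{
	int i, count = 0;

	for (i = guilty_idx; i < guilty_idx + MAX_RINGS; ++i)
		order[count++] = i % MAX_RINGS;
	return count;
}
```

With the default job_hang_limit of 1, the very first timeout already satisfies the check, so the guilty job is kicked out immediately.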

Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Christian König

Because we can always rely on TDR and HYPERVISOR to detect GPU hang and 
resubmit malicious jobs or even kick them out later,
and the gpu reset will eventually be invoked, so there is no reason to manually 
and voluntarily call gpu reset in the SR-IOV case.
Well there is a rather good reason, we detect that something is wrong 
much faster than waiting for the timeout.


But I agree that it was broken before as well and we can fix that later. 
Please add a code comment that this needs more work.


With that fixed feel free to add my rb on it.

Christian.

Am 08.05.2017 um 11:42 schrieb Liu, Monk:

The VM fault interrupt or illegal instruction will be delivered to the GPU no 
matter whether it's the SR-IOV or bare-metal case,
And I removed them from invoking GPU reset for the same reason:
Don't trigger gpu reset in the sriov case if possible; always beware that 
triggering GPU reset under SR-IOV has a heavy cost (it needs to take full 
access mode on the GPU, so all other VFs will be paused for a while)

Because we can always rely on TDR and HYPERVISOR to detect GPU hang and 
resubmit malicious jobs or even kick them out later,
and the gpu reset will eventually be invoked, so there is no reason to manually 
and voluntarily call gpu reset in the SR-IOV case.

BR Monk


-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:34 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

Sounds good, but what do we do with the amdgpu_irq_reset_work_func?

Please note that I find that calling amdgpu_gpu_reset() here is a bad idea in 
the first place.

Instead we should consider the scheduler as faulting and let the scheduler 
handle that in the same way as a job timeout.

But I'm not sure if those interrupts are actually sent under SRIOV or if the 
hypervisor handles them somehow.

Christian.

Am 08.05.2017 um 11:24 schrieb Liu, Monk:

I agree with disabling debugfs for amdgpu_reset when SRIOV detected.

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:20 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in
gpu-reset


You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...

Mhm, good argument. But in this case we need to give at least some warning 
message instead of doing nothing.

Or even better, disable creating the amdgpu_reset debugfs file altogether. This 
way nobody will wonder why using it doesn't trigger anything.

Christian.

Am 08.05.2017 um 11:10 schrieb Liu, Monk:

For SR-IOV use case, we call gpu reset under the case we have no choice ...

So many places like debugfs shouldn't be a good reason to trigger gpu
reset

You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...

BR Monk

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:08 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in
gpu-reset

Am 08.05.2017 um 08:51 schrieb Monk Liu:

because we don't want to do sriov-gpu-reset under certain cases, so
just split those two functions and don't invoke the sr-iov one from
the bare-metal one.

Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
 5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 45a60a6..4985a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
int resched;
bool need_full_reset;
 
-	if (amdgpu_sriov_vf(adev))

-   return amdgpu_sriov_gpu_reset(adev, true);
-
if (!amdgpu_check_soft_reset(adev)) {
DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5772ef2..d7523d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
 
 	seq_printf(m, "gpu reset\n");

-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);

Well that is clearly not a good idea. Why 

RE: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Liu, Monk
The VM fault interrupt or illegal instruction will be delivered to the GPU no 
matter whether it's the SR-IOV or bare-metal case,
And I removed them from invoking GPU reset for the same reason:
Don't trigger gpu reset in the sriov case if possible; always beware that 
triggering GPU reset under SR-IOV has a heavy cost (it needs to take full 
access mode on the GPU, so all other VFs will be paused for a while)

Because we can always rely on TDR and HYPERVISOR to detect GPU hang and 
resubmit malicious jobs or even kick them out later, 
and the gpu reset will eventually be invoked, so there is no reason to manually 
and voluntarily call gpu reset in the SR-IOV case.

BR Monk


-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de] 
Sent: Monday, May 08, 2017 5:34 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

Sounds good, but what do we do with the amdgpu_irq_reset_work_func?

Please note that I find that calling amdgpu_gpu_reset() here is a bad idea in 
the first place.

Instead we should consider the scheduler as faulting and let the scheduler 
handle that in the same way as a job timeout.

But I'm not sure if those interrupts are actually sent under SRIOV or if the 
hypervisor handles them somehow.

Christian.

Am 08.05.2017 um 11:24 schrieb Liu, Monk:
> I agree with disabling debugfs for amdgpu_reset when SRIOV detected.
>
> -Original Message-
> From: Christian König [mailto:deathsim...@vodafone.de]
> Sent: Monday, May 08, 2017 5:20 PM
> To: Liu, Monk ; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in 
> gpu-reset
>
>> You know that gpu reset under SR-IOV will have very big impact on all other 
>> VFs ...
> Mhm, good argument. But in this case we need to give at least some warning 
> message instead of doing nothing.
>
> Or even better, disable creating the amdgpu_reset debugfs file altogether. 
> This way nobody will wonder why using it doesn't trigger anything.
>
> Christian.
>
> Am 08.05.2017 um 11:10 schrieb Liu, Monk:
>> For SR-IOV use case, we call gpu reset under the case we have no choice ...
>>
>> So many places like debugfs shouldn't be a good reason to trigger gpu 
>> reset
>>
>> You know that gpu reset under SR-IOV will have very big impact on all other 
>> VFs ...
>>
>> BR Monk
>>
>> -Original Message-
>> From: Christian König [mailto:deathsim...@vodafone.de]
>> Sent: Monday, May 08, 2017 5:08 PM
>> To: Liu, Monk ; amd-gfx@lists.freedesktop.org
>> Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in 
>> gpu-reset
>>
>> Am 08.05.2017 um 08:51 schrieb Monk Liu:
>>> because we don't want to do sriov-gpu-reset under certain cases, so 
>>> just split those two functions and don't invoke the sr-iov one from 
>>> the bare-metal one.
>>>
>>> Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
>>> Signed-off-by: Monk Liu 
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
>>> 5 files changed, 10 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index 45a60a6..4985a7e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
>>> int resched;
>>> bool need_full_reset;
>>> 
>>> -   if (amdgpu_sriov_vf(adev))
>>> -   return amdgpu_sriov_gpu_reset(adev, true);
>>> -
>>> if (!amdgpu_check_soft_reset(adev)) {
>>> DRM_INFO("No hardware hang detected. Did some blocks 
>>> stall?\n");
>>> return 0;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> index 5772ef2..d7523d1 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> @@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
>>> void *data)
>>> struct amdgpu_device *adev = dev->dev_private;
>>> 
>>> seq_printf(m, "gpu reset\n");
>>> -   amdgpu_gpu_reset(adev);
>>> +   if (!amdgpu_sriov_vf(adev))
>>> +   amdgpu_gpu_reset(adev);
>> Well that is clearly not a good idea. Why do you want to disable the reset 
>> here?
>>
>>> 
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> index 67be795..5bcbea0 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> @@ 

Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Christian König

Sounds good, but what do we do with the amdgpu_irq_reset_work_func?

Please note that I find that calling amdgpu_gpu_reset() here is a bad 
idea in the first place.


Instead we should consider the scheduler as faulting and let the 
scheduler handle that in the same way as a job timeout.


But I'm not sure if those interrupts are actually sent under SRIOV or if 
the hypervisor handles them somehow.


Christian.

Am 08.05.2017 um 11:24 schrieb Liu, Monk:

I agree with disabling debugfs for amdgpu_reset when SRIOV detected.

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:20 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset


You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...

Mhm, good argument. But in this case we need to give at least some warning 
message instead of doing nothing.

Or even better, disable creating the amdgpu_reset debugfs file altogether. This 
way nobody will wonder why using it doesn't trigger anything.

Christian.

Am 08.05.2017 um 11:10 schrieb Liu, Monk:

For SR-IOV use case, we call gpu reset under the case we have no choice ...

So many places like debugfs shouldn't be a good reason to trigger gpu
reset

You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...

BR Monk

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:08 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in
gpu-reset

Am 08.05.2017 um 08:51 schrieb Monk Liu:

because we don't want to do sriov-gpu-reset under certain cases, so
just split those two functions and don't invoke the sr-iov one from
the bare-metal one.

Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
Signed-off-by: Monk Liu 
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 45a60a6..4985a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
int resched;
bool need_full_reset;

-	if (amdgpu_sriov_vf(adev))

-   return amdgpu_sriov_gpu_reset(adev, true);
-
if (!amdgpu_check_soft_reset(adev)) {
DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5772ef2..d7523d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;

	seq_printf(m, "gpu reset\n");

-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);

Well that is clearly not a good idea. Why do you want to disable the reset here?


	return 0;

}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 67be795..5bcbea0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct
drm_gem_object *obj,

static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)

{
-   if (r == -EDEADLK) {
+   if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {

Not a problem of your patch, but that stuff is outdated and should have been 
removed completely years ago. Going to take care of that.


r = amdgpu_gpu_reset(adev);
if (!r)
r = -EAGAIN;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index f8a6c95..49c6e6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);

-	amdgpu_gpu_reset(adev);

+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);

Mhm, that disables the reset on an invalid register access or invalid command 
stream. Is that really what we want?

Christian.


}

/* Disable *all* interrupts */

diff --git 

Re: [PATCH] drm/amdgpu:no debugfs_gpu_reset for SRIOV

2017-05-08 Thread Christian König

Am 08.05.2017 um 11:28 schrieb Monk Liu:

Change-Id: Ie9730852da54ceb8b4c2c44acac2df3556a32d17
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index fe173e0..417f190 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -661,8 +661,7 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
  
  	seq_printf(m, "gpu reset\n");

-   if (!amdgpu_sriov_vf(adev))
-   amdgpu_gpu_reset(adev);
+   amdgpu_gpu_reset(adev);


Please squash that into the original commit while pushing into our 
repository.


With that done this is Reviewed-by: Christian König 
.


  
  	return 0;

  }
@@ -671,11 +670,17 @@ static const struct drm_info_list 
amdgpu_debugfs_fence_list[] = {
	{"amdgpu_fence_info", &amdgpu_debugfs_fence_info, 0, NULL},
	{"amdgpu_gpu_reset", &amdgpu_debugfs_gpu_reset, 0, NULL}
  };
+
+static const struct drm_info_list amdgpu_debugfs_fence_list_sriov[] = {
+   {"amdgpu_fence_info", &amdgpu_debugfs_fence_info, 0, NULL},
+};
  #endif
  
  int amdgpu_debugfs_fence_init(struct amdgpu_device *adev)

  {
  #if defined(CONFIG_DEBUG_FS)
+   if (amdgpu_sriov_vf(adev))
+   return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_fence_list_sriov, 1);
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_fence_list, 2);
  #else
return 0;





[PATCH] drm/amdgpu/gfx6: flush caches after IB with the correct vmid

2017-05-08 Thread Nicolai Hähnle
From: Nicolai Hähnle 

Bring the code in line with what the radeon module does.

Without this change, the fence following the IB may be signalled
to the CPU even though some data written by shaders may not have
been written back yet.

This change fixes the OpenGL CTS test
GL45-CTS.gtf32.GL3Tests.packed_pixels.packed_pixels_pbo on Verde.

Signed-off-by: Nicolai Hähnle 
--
So, I'm not too familiar with the details of these syncs, but the radeon
module effectively does:

- IB
- SURFACE_SYNC (vm_id of IB)
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (kernel fence)

While amdgpu now does (with this change):

- IB
- SURFACE_SYNC (vm_id of IB) <-- this was missing
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (kernel fence)
- SURFACE_SYNC (vm_id == 0)
- EVENT_WRITE_EOP (user fence)

It seems like at least the second SURFACE_SYNC (vm_id == 0) should be
redundant, so the question is whether the SURFACE_SYNC (vm_id == 0)
should be rearranged somehow, perhaps also added to the IB emission.
But for better bisectability, that should probably be a separate
change.

Cheers,
Nicolai
---
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 19 +--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
index 03d2a0a..c4f444d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
@@ -1922,20 +1922,35 @@ static void gfx_v6_0_ring_emit_ib(struct amdgpu_ring *ring,
control |= ib->length_dw | (vm_id << 24);
 
amdgpu_ring_write(ring, header);
amdgpu_ring_write(ring,
 #ifdef __BIG_ENDIAN
  (2 << 0) |
 #endif
  (ib->gpu_addr & 0xFFFC));
amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr) & 0x);
amdgpu_ring_write(ring, control);
+
+   if (!(ib->flags & AMDGPU_IB_FLAG_CE)) {
+   /* flush read cache over gart for this vmid */
+   amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
+   amdgpu_ring_write(ring, (mmCP_COHER_CNTL2 - PACKET3_SET_CONFIG_REG_START));
+   amdgpu_ring_write(ring, vm_id);
+   amdgpu_ring_write(ring, PACKET3(PACKET3_SURFACE_SYNC, 3));
+   amdgpu_ring_write(ring, PACKET3_TCL1_ACTION_ENA |
+ PACKET3_TC_ACTION_ENA |
+ PACKET3_SH_KCACHE_ACTION_ENA |
+ PACKET3_SH_ICACHE_ACTION_ENA);
+   amdgpu_ring_write(ring, 0x);
+   amdgpu_ring_write(ring, 0);
+   amdgpu_ring_write(ring, 10); /* poll interval */
+   }
 }
 
 /**
  * gfx_v6_0_ring_test_ib - basic ring IB test
  *
  * @ring: amdgpu_ring structure holding ring information
  *
  * Allocate an IB and execute it on the gfx ring (SI).
  * Provides a basic gfx ring test to verify that IBs are working.
  * Returns 0 on success, error on failure.
@@ -3631,21 +3646,21 @@ static const struct amdgpu_ring_funcs gfx_v6_0_ring_funcs_gfx = {
.get_rptr = gfx_v6_0_ring_get_rptr,
.get_wptr = gfx_v6_0_ring_get_wptr,
.set_wptr = gfx_v6_0_ring_set_wptr_gfx,
.emit_frame_size =
5 + /* gfx_v6_0_ring_emit_hdp_flush */
5 + /* gfx_v6_0_ring_emit_hdp_invalidate */
		14 + 14 + 14 + /* gfx_v6_0_ring_emit_fence x3 for user fence, vm fence */
7 + 4 + /* gfx_v6_0_ring_emit_pipeline_sync */
17 + 6 + /* gfx_v6_0_ring_emit_vm_flush */
3 + 2, /* gfx_v6_ring_emit_cntxcntl including vgt flush */
-   .emit_ib_size = 6, /* gfx_v6_0_ring_emit_ib */
+   .emit_ib_size = 6 + 8, /* gfx_v6_0_ring_emit_ib */
.emit_ib = gfx_v6_0_ring_emit_ib,
.emit_fence = gfx_v6_0_ring_emit_fence,
.emit_pipeline_sync = gfx_v6_0_ring_emit_pipeline_sync,
.emit_vm_flush = gfx_v6_0_ring_emit_vm_flush,
.emit_hdp_flush = gfx_v6_0_ring_emit_hdp_flush,
.emit_hdp_invalidate = gfx_v6_0_ring_emit_hdp_invalidate,
.test_ring = gfx_v6_0_ring_test_ring,
.test_ib = gfx_v6_0_ring_test_ib,
.insert_nop = amdgpu_ring_insert_nop,
.emit_cntxcntl = gfx_v6_ring_emit_cntxcntl,
@@ -3657,21 +3672,21 @@ static const struct amdgpu_ring_funcs gfx_v6_0_ring_funcs_compute = {
.nop = 0x8000,
.get_rptr = gfx_v6_0_ring_get_rptr,
.get_wptr = gfx_v6_0_ring_get_wptr,
.set_wptr = gfx_v6_0_ring_set_wptr_compute,
.emit_frame_size =
5 + /* gfx_v6_0_ring_emit_hdp_flush */
5 + /* gfx_v6_0_ring_emit_hdp_invalidate */
7 + /* gfx_v6_0_ring_emit_pipeline_sync */
17 + /* gfx_v6_0_ring_emit_vm_flush */
		14 + 14 + 14, /* gfx_v6_0_ring_emit_fence x3 for user fence, vm fence */
-   .emit_ib_size = 6, /* gfx_v6_0_ring_emit_ib */
+   
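As a sanity check on the `.emit_ib_size = 6 + 8` bump above: the conditional flush block issues exactly eight `amdgpu_ring_write()` calls. A small sketch of the dword accounting, assuming the usual PM4 PACKET3 count semantics (header plus count + 1 operand dwords):

```c
#include <assert.h>

/* A PACKET3(op, n) header is followed by n + 1 operand dwords, so the
 * whole packet occupies n + 2 dwords on the ring. */
static int packet3_dwords(int count_field)
{
	return count_field + 2;
}

/* Dwords added to gfx_v6_0_ring_emit_ib() by the conditional flush:
 * PACKET3_SET_CONFIG_REG with count 1 (header, register offset, vmid)
 * plus PACKET3_SURFACE_SYNC with count 3 (header plus four operands). */
static int flush_block_dwords(void)
{
	return packet3_dwords(1) + packet3_dwords(3);
}
```

The total of 8 dwords matches the change from `.emit_ib_size = 6` to `6 + 8` in the patch.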

[PATCH] drm/amdgpu:no debugfs_gpu_reset for SRIOV

2017-05-08 Thread Monk Liu
Change-Id: Ie9730852da54ceb8b4c2c44acac2df3556a32d17
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index fe173e0..417f190 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -661,8 +661,7 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
 
seq_printf(m, "gpu reset\n");
-   if (!amdgpu_sriov_vf(adev))
-   amdgpu_gpu_reset(adev);
+   amdgpu_gpu_reset(adev);
 
return 0;
 }
@@ -671,11 +670,17 @@ static const struct drm_info_list 
amdgpu_debugfs_fence_list[] = {
	{"amdgpu_fence_info", &amdgpu_debugfs_fence_info, 0, NULL},
	{"amdgpu_gpu_reset", &amdgpu_debugfs_gpu_reset, 0, NULL}
 };
+
+static const struct drm_info_list amdgpu_debugfs_fence_list_sriov[] = {
+   {"amdgpu_fence_info", &amdgpu_debugfs_fence_info, 0, NULL},
+};
 #endif
 
 int amdgpu_debugfs_fence_init(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
+   if (amdgpu_sriov_vf(adev))
+   return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_fence_list_sriov, 1);
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_fence_list, 2);
 #else
return 0;
-- 
2.7.4
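The net effect of this patch is that amdgpu_debugfs_fence_init() registers a shorter debugfs file list when running as an SR-IOV VF, so the reset hook simply never exists there. A minimal sketch of that selection logic (struct and names simplified for illustration, not the real drm_info_list):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for struct drm_info_list; only the name matters here. */
struct info_entry { const char *name; };

/* Bare-metal gets both files; the reset hook is too disruptive on a VF. */
static const struct info_entry fence_list[] = {
	{ "amdgpu_fence_info" },
	{ "amdgpu_gpu_reset" },
};

/* SR-IOV VFs get the info file only, as in the patch. */
static const struct info_entry fence_list_sriov[] = {
	{ "amdgpu_fence_info" },
};

/* Mirror of amdgpu_debugfs_fence_init()'s selection: report how many
 * debugfs files would be registered for the given mode. */
static size_t fence_debugfs_count(bool is_sriov_vf)
{
	if (is_sriov_vf)
		return sizeof(fence_list_sriov) / sizeof(fence_list_sriov[0]);
	return sizeof(fence_list) / sizeof(fence_list[0]);
}
```

Omitting the file entirely avoids the earlier objection that a silently ignored debugfs write would confuse users.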



RE: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Liu, Monk
I agree with disabling debugfs for amdgpu_reset when SRIOV detected. 

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de] 
Sent: Monday, May 08, 2017 5:20 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

> You know that gpu reset under SR-IOV will have very big impact on all other 
> VFs ...
Mhm, good argument. But in this case we need to give at least some warning 
message instead of doing nothing.

Or even better, disable creating the amdgpu_reset debugfs file altogether. This 
way nobody will wonder why using it doesn't trigger anything.

Christian.

Am 08.05.2017 um 11:10 schrieb Liu, Monk:
> For SR-IOV use case, we call gpu reset under the case we have no choice ...
>
> So many places like debugfs shouldn't be a good reason to trigger gpu 
> reset
>
> You know that gpu reset under SR-IOV will have very big impact on all other 
> VFs ...
>
> BR Monk
>
> -Original Message-
> From: Christian König [mailto:deathsim...@vodafone.de]
> Sent: Monday, May 08, 2017 5:08 PM
> To: Liu, Monk ; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in 
> gpu-reset
>
> Am 08.05.2017 um 08:51 schrieb Monk Liu:
>> because we don't want to do sriov-gpu-reset under certain cases, so 
>> just split those two functions and don't invoke the sr-iov one from 
>> the bare-metal one.
>>
>> Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
>> Signed-off-by: Monk Liu 
>> ---
>>drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
>>drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
>>drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
>>drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
>>drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
>>5 files changed, 10 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index 45a60a6..4985a7e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
>>  int resched;
>>  bool need_full_reset;
>>
>> -if (amdgpu_sriov_vf(adev))
>> -return amdgpu_sriov_gpu_reset(adev, true);
>> -
>>  if (!amdgpu_check_soft_reset(adev)) {
>>  DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
>>  return 0;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> index 5772ef2..d7523d1 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> @@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
>> void *data)
>>  struct amdgpu_device *adev = dev->dev_private;
>>
>>  seq_printf(m, "gpu reset\n");
>> -amdgpu_gpu_reset(adev);
>> +if (!amdgpu_sriov_vf(adev))
>> +amdgpu_gpu_reset(adev);
> Well that is clearly not a good idea. Why do you want to disable the reset 
> here?
>
>>
>>  return 0;
>>}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>> index 67be795..5bcbea0 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>> @@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct 
>> drm_gem_object *obj,
>>
>>static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)
>>{
>> -if (r == -EDEADLK) {
>> +if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {
> Not a problem of your patch, but that stuff is outdated and should have been 
> removed completely years ago. Going to take care of that.
>
>>  r = amdgpu_gpu_reset(adev);
>>  if (!r)
>>  r = -EAGAIN;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
>> index f8a6c95..49c6e6e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
>> @@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
>> *work)
>>  struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
>>reset_work);
>>
>> -amdgpu_gpu_reset(adev);
>> +if (!amdgpu_sriov_vf(adev))
>> +amdgpu_gpu_reset(adev);
> Mhm, that disables the reset on an invalid register access or invalid command 
> stream. Is that really what we want?
>
> Christian.
>
>>}
>>
>>/* Disable *all* interrupts */
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index 690ef3d..c7718af 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -36,7 +36,11 @@ static void 

Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Christian König

You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...
Mhm, good argument. But in this case we need to give at least some 
warning message instead of doing nothing.


Or even better, disable creating the amdgpu_reset debugfs file 
altogether. This way nobody will wonder why using it doesn't trigger 
anything.


Christian.

Am 08.05.2017 um 11:10 schrieb Liu, Monk:

For SR-IOV use case, we call gpu reset under the case we have no choice ...

So many places like debugfs shouldn't be a good reason to trigger gpu reset

You know that gpu reset under SR-IOV will have very big impact on all other VFs 
...

BR Monk

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Monday, May 08, 2017 5:08 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

Am 08.05.2017 um 08:51 schrieb Monk Liu:

because we don't want to do sriov-gpu-reset under certain cases, so
just split those two functions and don't invoke the sr-iov one from the
bare-metal one.

Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
Signed-off-by: Monk Liu 
---
   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
   5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 45a60a6..4985a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
int resched;
bool need_full_reset;
   
-	if (amdgpu_sriov_vf(adev))

-   return amdgpu_sriov_gpu_reset(adev, true);
-
if (!amdgpu_check_soft_reset(adev)) {
DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5772ef2..d7523d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
   
   	seq_printf(m, "gpu reset\n");

-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);

Well that is clearly not a good idea. Why do you want to disable the reset here?

   
   	return 0;

   }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 67be795..5bcbea0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct drm_gem_object
*obj,
   
   static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)

   {
-   if (r == -EDEADLK) {
+   if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {

Not a problem of your patch, but that stuff is outdated and should have been 
removed completely years ago. Going to take care of that.


r = amdgpu_gpu_reset(adev);
if (!r)
r = -EAGAIN;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index f8a6c95..49c6e6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
   
-	amdgpu_gpu_reset(adev);

+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);

Mhm, that disables the reset on an invalid register access or invalid command 
stream. Is that really what we want?

Christian.


   }
   
   /* Disable *all* interrupts */

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 690ef3d..c7718af 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -36,7 +36,11 @@ static void amdgpu_job_timedout(struct amd_sched_job *s_job)
  job->base.sched->name,
  atomic_read(&job->ring->fence_drv.last_seq),
  job->ring->fence_drv.sync_seq);
-   amdgpu_gpu_reset(job->adev);
+
+   if (amdgpu_sriov_vf(job->adev))
+   amdgpu_sriov_gpu_reset(job->adev, true);
+   else
+   amdgpu_gpu_reset(job->adev);
   }
   
   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org

RE: [PATCH 3/4] drm/amdgpu:only call flr_work under infinite timeout

2017-05-08 Thread Liu, Monk
yeah my mistake, thanks for the catch 

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de] 
Sent: Monday, May 08, 2017 5:11 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 3/4] drm/amdgpu:only call flr_work under infinite timeout

Am 08.05.2017 um 08:51 schrieb Monk Liu:
> Change-Id: I541aa5109f4fcab06ece4761a09dc7e053ec6837
> Signed-off-by: Monk Liu 
> ---
>   drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 15 +--
>   1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c 
> b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> index 1cdf5cc..0109b5c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
> @@ -537,12 +537,15 @@ static int xgpu_vi_mailbox_rcv_irq(struct amdgpu_device 
> *adev,
>   {
>   int r;
>   
> - /* see what event we get */
> - r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
> -
> - /* only handle FLR_NOTIFY now */
> - if (!r)
> - schedule_work(&adev->virt.flr_work);
> + /* trigger gpu-reset by hypervisor only if TDR disabled */
> + if (msecs_to_jiffies(amdgpu_lockup_timeout) == MAX_SCHEDULE_TIMEOUT) {

That won't work like this, we use zero for infinite timeout here. See 
amdgpu_fence_driver_init_ring() as well.

Just changing that test to "amdgpu_lockup_timeout == 0" should work.

Christian.

> + /* see what event we get */
> + r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
> +
> + /* only handle FLR_NOTIFY now */
> + if (!r)
> + schedule_work(>virt.flr_work);
> + }
>   
>   return 0;
>   }
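
The mismatch Christian points out can be seen in isolation. Below is a minimal
sketch with simplified stand-ins for the kernel's msecs_to_jiffies and
MAX_SCHEDULE_TIMEOUT; the tdr_disabled_* helper names are illustrative, not
driver code. Since amdgpu_lockup_timeout == 0 means "infinite timeout / TDR
disabled" (see amdgpu_fence_driver_init_ring), converting 0 ms to jiffies
yields 0, never MAX_SCHEDULE_TIMEOUT, so the patch's check can never match:

```c
#include <stdbool.h>
#include <limits.h>

#define MAX_SCHEDULE_TIMEOUT LONG_MAX
#define HZ 250

/* simplified userspace stand-in for the kernel helper */
static long msecs_to_jiffies(unsigned int ms)
{
        return (long)ms * HZ / 1000;
}

/* the check from the patch: never true when lockup_timeout is 0 */
static bool tdr_disabled_wrong(unsigned int lockup_timeout_ms)
{
        return msecs_to_jiffies(lockup_timeout_ms) == MAX_SCHEDULE_TIMEOUT;
}

/* the check Christian suggests: 0 is the "infinite / TDR off" sentinel */
static bool tdr_disabled_right(unsigned int lockup_timeout_ms)
{
        return lockup_timeout_ms == 0;
}
```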


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 3/4] drm/amdgpu:only call flr_work under infinite timeout

2017-05-08 Thread Christian König

Am 08.05.2017 um 08:51 schrieb Monk Liu:

Change-Id: I541aa5109f4fcab06ece4761a09dc7e053ec6837
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 15 +--
  1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c 
b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
index 1cdf5cc..0109b5c 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
@@ -537,12 +537,15 @@ static int xgpu_vi_mailbox_rcv_irq(struct amdgpu_device 
*adev,
  {
int r;
  
-	/* see what event we get */

-   r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
-
-   /* only handle FLR_NOTIFY now */
-   if (!r)
-   schedule_work(&adev->virt.flr_work);
+   /* trigger gpu-reset by hypervisor only if TDR disabled */
+   if (msecs_to_jiffies(amdgpu_lockup_timeout) == MAX_SCHEDULE_TIMEOUT) {


That won't work like this, we use zero for infinite timeout here. See 
amdgpu_fence_driver_init_ring() as well.


Just changing that test to "amdgpu_lockup_timeout == 0" should work.

Christian.


+   /* see what event we get */
+   r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
+
+   /* only handle FLR_NOTIFY now */
+   if (!r)
+   schedule_work(&adev->virt.flr_work);
+   }
  
  	return 0;

  }



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Liu, Monk
For SR-IOV use case, we call gpu reset under the case we have no choice ...

So many places like debugfs shouldn't be a good reason to trigger a gpu reset

You know that gpu reset under SR-IOV will have a very big impact on all other VFs 
...

BR Monk

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de] 
Sent: Monday, May 08, 2017 5:08 PM
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

Am 08.05.2017 um 08:51 schrieb Monk Liu:
> because we don't want to do sriov-gpu-reset under certain cases, so 
> just split those two functions and don't invoke the sr-iov one from the 
> bare-metal one.
>
> Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
> Signed-off-by: Monk Liu 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
>   5 files changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 45a60a6..4985a7e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
>   int resched;
>   bool need_full_reset;
>   
> - if (amdgpu_sriov_vf(adev))
> - return amdgpu_sriov_gpu_reset(adev, true);
> -
>   if (!amdgpu_check_soft_reset(adev)) {
>   DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
>   return 0;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 5772ef2..d7523d1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
> void *data)
>   struct amdgpu_device *adev = dev->dev_private;
>   
>   seq_printf(m, "gpu reset\n");
> - amdgpu_gpu_reset(adev);
> + if (!amdgpu_sriov_vf(adev))
> + amdgpu_gpu_reset(adev);

Well that is clearly not a good idea. Why do you want to disable the reset here?

>   
>   return 0;
>   }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 67be795..5bcbea0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct drm_gem_object 
> *obj,
>   
>   static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)
>   {
> - if (r == -EDEADLK) {
> + if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {

Not a problem of your patch, but that stuff is outdated and should have been 
removed completely years ago. Going to take care of that.

>   r = amdgpu_gpu_reset(adev);
>   if (!r)
>   r = -EAGAIN;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> index f8a6c95..49c6e6e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> @@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
> *work)
>   struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
> reset_work);
>   
> - amdgpu_gpu_reset(adev);
> + if (!amdgpu_sriov_vf(adev))
> + amdgpu_gpu_reset(adev);

Mhm, that disables the reset on an invalid register access or invalid command 
stream. Is that really what we want?

Christian.

>   }
>   
>   /* Disable *all* interrupts */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 690ef3d..c7718af 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -36,7 +36,11 @@ static void amdgpu_job_timedout(struct amd_sched_job 
> *s_job)
> job->base.sched->name,
> atomic_read(&job->ring->fence_drv.last_seq),
> job->ring->fence_drv.sync_seq);
> - amdgpu_gpu_reset(job->adev);
> +
> + if (amdgpu_sriov_vf(job->adev))
> + amdgpu_sriov_gpu_reset(job->adev, true);
> + else
> + amdgpu_gpu_reset(job->adev);
>   }
>   
>   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/4] drm/amdgpu:use job* to replace voluntary

2017-05-08 Thread Christian König

Am 08.05.2017 um 08:51 schrieb Monk Liu:

that way we can know which job caused the hang and
can do per-sched reset/recovery instead of resetting all
scheds.

Change-Id: Ifc98cd74b2d93823c489de6a89087ba188957eff
Signed-off-by: Monk Liu 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h   | 2 +-
  drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c  | 2 +-
  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c  | 2 +-
  5 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4985a7e..0e5f314 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2529,14 +2529,13 @@ static int amdgpu_recover_vram_from_shadow(struct 
amdgpu_device *adev,
   * amdgpu_sriov_gpu_reset - reset the asic
   *
   * @adev: amdgpu device pointer
- * @voluntary: if this reset is requested by guest.
- * (true means by guest and false means by HYPERVISOR )
+ * @job: which job triggered the hang
   *
   * Attempt the reset the GPU if it has hung (all asics).
   * for SRIOV case.
   * Returns 0 for success or an error on failure.
   */
-int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary)
+int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
  {
int i, r = 0;
int resched;
@@ -2566,7 +2565,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, 
bool voluntary)
amdgpu_fence_driver_force_completion(adev);
  
  	/* request to take full control of GPU before re-initialization  */

-   if (voluntary)
+   if (job)
amdgpu_virt_reset_gpu(adev);
else
amdgpu_virt_request_full_gpu(adev, true);
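
The @job convention in the hunk above — a non-NULL job means the guest itself
detected the hang (the old "voluntary" case), NULL means the hypervisor
requested the reset via FLR — can be sketched standalone. The enum and helper
below are illustrative stand-ins, not driver code:

```c
#include <stddef.h>

/* minimal stand-in for the driver struct; contents don't matter here */
struct amdgpu_job {
        int dummy;
};

enum reset_origin { RESET_BY_GUEST, RESET_BY_HYPERVISOR };

/* replaces the old "bool voluntary" parameter: the pointer itself
 * now carries both the origin and the guilty job */
static enum reset_origin reset_origin(const struct amdgpu_job *job)
{
        return job ? RESET_BY_GUEST : RESET_BY_HYPERVISOR;
}
```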
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index c7718af..3c6fb6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -38,7 +38,7 @@ static void amdgpu_job_timedout(struct amd_sched_job *s_job)
  job->ring->fence_drv.sync_seq);
  
  	if (amdgpu_sriov_vf(job->adev))

-   amdgpu_sriov_gpu_reset(job->adev, true);
+   amdgpu_sriov_gpu_reset(job->adev, job);
else
amdgpu_gpu_reset(job->adev);
  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
index 6f2b7df..9e1062e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
@@ -96,7 +96,7 @@ void amdgpu_virt_kiq_wreg(struct amdgpu_device *adev, 
uint32_t reg, uint32_t v);
  int amdgpu_virt_request_full_gpu(struct amdgpu_device *adev, bool init);
  int amdgpu_virt_release_full_gpu(struct amdgpu_device *adev, bool init);
  int amdgpu_virt_reset_gpu(struct amdgpu_device *adev);
-int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary);
+int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job);
  int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev);
  void amdgpu_virt_free_mm_table(struct amdgpu_device *adev);
  
diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c

index 96139ec..69da52d 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
@@ -243,7 +243,7 @@ static void xgpu_ai_mailbox_flr_work(struct work_struct 
*work)
}
  
  	/* Trigger recovery due to world switch failure */

-   amdgpu_sriov_gpu_reset(adev, false);
+   amdgpu_sriov_gpu_reset(adev, NULL);
  }
  
  static int xgpu_ai_set_mailbox_rcv_irq(struct amdgpu_device *adev,

diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c 
b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
index f0d64f1..1cdf5cc 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
@@ -514,7 +514,7 @@ static void xgpu_vi_mailbox_flr_work(struct work_struct 
*work)
}
  
  	/* Trigger recovery due to world switch failure */

-   amdgpu_sriov_gpu_reset(adev, false);
+   amdgpu_sriov_gpu_reset(adev, NULL);
  }
  
  static int xgpu_vi_set_mailbox_rcv_irq(struct amdgpu_device *adev,



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Christian König

Am 08.05.2017 um 08:51 schrieb Monk Liu:

because we don't want to do sriov-gpu-reset under certain
cases, so just split those two functions and don't invoke
the sr-iov one from the bare-metal one.

Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
Signed-off-by: Monk Liu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
  5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 45a60a6..4985a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
int resched;
bool need_full_reset;
  
-	if (amdgpu_sriov_vf(adev))

-   return amdgpu_sriov_gpu_reset(adev, true);
-
if (!amdgpu_check_soft_reset(adev)) {
DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5772ef2..d7523d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
  
  	seq_printf(m, "gpu reset\n");

-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);


Well that is clearly not a good idea. Why do you want to disable the 
reset here?


  
  	return 0;

  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 67be795..5bcbea0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
  
  static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)

  {
-   if (r == -EDEADLK) {
+   if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {


Not a problem of your patch, but that stuff is outdated and should have 
been removed completely years ago. Going to take care of that.



r = amdgpu_gpu_reset(adev);
if (!r)
r = -EAGAIN;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index f8a6c95..49c6e6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
  
-	amdgpu_gpu_reset(adev);

+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);


Mhm, that disables the reset on an invalid register access or invalid 
command stream. Is that really what we want?


Christian.


  }
  
  /* Disable *all* interrupts */

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 690ef3d..c7718af 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -36,7 +36,11 @@ static void amdgpu_job_timedout(struct amd_sched_job *s_job)
  job->base.sched->name,
  atomic_read(&job->ring->fence_drv.last_seq),
  job->ring->fence_drv.sync_seq);
-   amdgpu_gpu_reset(job->adev);
+
+   if (amdgpu_sriov_vf(job->adev))
+   amdgpu_sriov_gpu_reset(job->adev, true);
+   else
+   amdgpu_gpu_reset(job->adev);
  }
  
  int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 4/4] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

2017-05-08 Thread Liu, Monk
@Christian

This one is changed to the guilty-job scheme in accordance with your response 

BR Monk

-Original Message-
From: Monk Liu [mailto:monk@amd.com] 
Sent: Monday, May 08, 2017 3:00 PM
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Monk 
Subject: [PATCH] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

1, TDR will kick out the guilty job if its hang count exceeds the threshold given 
by the kernel parameter "job_hang_limit"; that way a bad command stream will not 
cause GPU hangs indefinitely.

by default this threshold is 1 so a job will be kicked out after it hangs.

2, if a job times out, the TDR routine will not reset all scheds/rings; instead it will 
only reset the given one, which is indicated by @job of 
amdgpu_sriov_gpu_reset. That way we don't need to reset and recover each 
sched/ring if we already know which job caused the GPU hang.

3,unblock sriov_gpu_reset for AI family.
4,don't init entity for KIQ

TODO:
when a job is considered guilty, we should set a flag in its fence 
status and make the UMD side aware that the fence signaled because the job 
hung, not because it completed.

Change-Id: I7b89c19a3de93249db570d0a80522176b1525a09
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  6 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  1 +
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11 +++-  
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7 ++
 8 files changed, 60 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 90a69bf..93bcea2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -111,6 +111,7 @@ extern int amdgpu_prim_buf_per_se;  extern int 
amdgpu_pos_buf_per_se;  extern int amdgpu_cntl_sb_buf_per_se;  extern int 
amdgpu_param_buf_per_se;
+extern int amdgpu_job_hang_limit;
 
 #define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index b43..23afc58 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -52,6 +52,10 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, 
struct amdgpu_ctx *ctx)
struct amd_sched_rq *rq;
 
	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
+
+   if (ring == &adev->gfx.kiq.ring)
+   continue;
+
	r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
  rq, amdgpu_sched_jobs);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 0e5f314..f3990fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2537,7 +2537,7 @@ static int amdgpu_recover_vram_from_shadow(struct 
amdgpu_device *adev,
  */
 int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job) 
 {
-   int i, r = 0;
+   int i, j, r = 0;
int resched;
struct amdgpu_bo *bo, *tmp;
struct amdgpu_ring *ring;
@@ -2550,19 +2550,30 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, 
struct amdgpu_job *job)
/* block TTM */
	resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
 
-   /* block scheduler */
-   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-   ring = adev->rings[i];
+   /* we start from the ring that triggered the GPU hang */
+   j = job ? job->ring->idx : 0;
+
+   if (job)
+   if (amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
+   amd_sched_job_kickout(&job->base);
 
+   /* block scheduler */
+   for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
+   ring = adev->rings[i % AMDGPU_MAX_RINGS];
if (!ring || !ring->sched.thread)
continue;
 
kthread_park(ring->sched.thread);
+
+   if (job && j != i)
+   continue;
+
+   /* only do job_reset on the hang ring if @job not NULL */
	amd_sched_hw_job_reset(&ring->sched);
-   }
 
-   /* after all hw jobs are reset, hw fence is meaningless, so 
force_completion */
-   amdgpu_fence_driver_force_completion(adev);
+   /* after all hw jobs are reset, hw fence is meaningless, so 
force_completion */
+   amdgpu_fence_driver_force_completion_ring(ring);
+   }
 
/* request to take full control of GPU before re-initialization  */
if (job)
@@ -2615,11 +2626,16 @@ int 

[PATCH] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

2017-05-08 Thread Monk Liu
1, TDR will kick out the guilty job if its hang count exceeds
the threshold given by the kernel parameter "job_hang_limit";
that way a bad command stream will not cause GPU hangs indefinitely.

by default this threshold is 1 so a job will be kicked out
after it hangs.

2, if a job times out, the TDR routine will not reset all
scheds/rings; instead it will only reset the given one, which is
indicated by @job of amdgpu_sriov_gpu_reset; that way we don't
need to reset and recover each sched/ring if we already know
which job caused the GPU hang.

3,unblock sriov_gpu_reset for AI family.
4,don't init entity for KIQ

TODO:
when a job is considered guilty, we should set a flag
in its fence status and make the UMD side aware that the
fence signaled because the job hung, not because it completed.

Change-Id: I7b89c19a3de93249db570d0a80522176b1525a09
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  6 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  1 +
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11 +++-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7 ++
 8 files changed, 60 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 90a69bf..93bcea2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -111,6 +111,7 @@ extern int amdgpu_prim_buf_per_se;
 extern int amdgpu_pos_buf_per_se;
 extern int amdgpu_cntl_sb_buf_per_se;
 extern int amdgpu_param_buf_per_se;
+extern int amdgpu_job_hang_limit;
 
 #define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index b43..23afc58 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -52,6 +52,10 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, 
struct amdgpu_ctx *ctx)
struct amd_sched_rq *rq;
 
	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
+
+   if (ring == &adev->gfx.kiq.ring)
+   continue;
+
	r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
  rq, amdgpu_sched_jobs);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 0e5f314..f3990fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2537,7 +2537,7 @@ static int amdgpu_recover_vram_from_shadow(struct 
amdgpu_device *adev,
  */
 int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
 {
-   int i, r = 0;
+   int i, j, r = 0;
int resched;
struct amdgpu_bo *bo, *tmp;
struct amdgpu_ring *ring;
@@ -2550,19 +2550,30 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, 
struct amdgpu_job *job)
/* block TTM */
	resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
 
-   /* block scheduler */
-   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-   ring = adev->rings[i];
+   /* we start from the ring that triggered the GPU hang */
+   j = job ? job->ring->idx : 0;
+
+   if (job)
+   if (amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
+   amd_sched_job_kickout(&job->base);
 
+   /* block scheduler */
+   for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
+   ring = adev->rings[i % AMDGPU_MAX_RINGS];
if (!ring || !ring->sched.thread)
continue;
 
kthread_park(ring->sched.thread);
+
+   if (job && j != i)
+   continue;
+
+   /* only do job_reset on the hang ring if @job not NULL */
	amd_sched_hw_job_reset(&ring->sched);
-   }
 
-   /* after all hw jobs are reset, hw fence is meaningless, so 
force_completion */
-   amdgpu_fence_driver_force_completion(adev);
+   /* after all hw jobs are reset, hw fence is meaningless, so 
force_completion */
+   amdgpu_fence_driver_force_completion_ring(ring);
+   }
 
/* request to take full control of GPU before re-initialization  */
if (job)
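
The wrap-around walk in the hunk above — start at the guilty job's ring index
and visit every ring slot exactly once via the modulo — can be illustrated
standalone; the AMDGPU_MAX_RINGS value and the recording array here are
illustrative only:

```c
#include <stddef.h>

#define AMDGPU_MAX_RINGS 4

/* walk every ring slot exactly once, beginning at start_idx and
 * wrapping back to 0, recording the visit order in visits[] */
static size_t walk_rings_from(int start_idx, int visits[AMDGPU_MAX_RINGS])
{
        size_t n = 0;
        int i;

        for (i = start_idx; i < start_idx + AMDGPU_MAX_RINGS; ++i)
                visits[n++] = i % AMDGPU_MAX_RINGS;

        return n;
}
```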
@@ -2615,11 +2626,16 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, 
struct amdgpu_job *job)
}
fence_put(fence);
 
-   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-   struct amdgpu_ring *ring = adev->rings[i];
+   for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
+   ring = adev->rings[i % AMDGPU_MAX_RINGS];
if (!ring || 

[PATCH 1/4] drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset

2017-05-08 Thread Monk Liu
because we don't want to do sriov-gpu-reset under certain
cases, so just split those two functions and don't invoke
the sr-iov one from the bare-metal one.

Change-Id: I641126c241e2ee2dfd54e6d16c389b159f99cfe0
Signed-off-by: Monk Liu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 6 +-
 5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 45a60a6..4985a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2652,9 +2652,6 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
int resched;
bool need_full_reset;
 
-   if (amdgpu_sriov_vf(adev))
-   return amdgpu_sriov_gpu_reset(adev, true);
-
if (!amdgpu_check_soft_reset(adev)) {
DRM_INFO("No hardware hang detected. Did some blocks stall?\n");
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5772ef2..d7523d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -651,7 +651,8 @@ static int amdgpu_debugfs_gpu_reset(struct seq_file *m, 
void *data)
struct amdgpu_device *adev = dev->dev_private;
 
seq_printf(m, "gpu reset\n");
-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);
 
return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 67be795..5bcbea0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -221,7 +221,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
 
 static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r)
 {
-   if (r == -EDEADLK) {
+   if (r == -EDEADLK && !amdgpu_sriov_vf(adev)) {
r = amdgpu_gpu_reset(adev);
if (!r)
r = -EAGAIN;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index f8a6c95..49c6e6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -89,7 +89,8 @@ static void amdgpu_irq_reset_work_func(struct work_struct 
*work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
  reset_work);
 
-   amdgpu_gpu_reset(adev);
+   if (!amdgpu_sriov_vf(adev))
+   amdgpu_gpu_reset(adev);
 }
 
 /* Disable *all* interrupts */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 690ef3d..c7718af 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -36,7 +36,11 @@ static void amdgpu_job_timedout(struct amd_sched_job *s_job)
  job->base.sched->name,
  atomic_read(&job->ring->fence_drv.last_seq),
  job->ring->fence_drv.sync_seq);
-   amdgpu_gpu_reset(job->adev);
+
+   if (amdgpu_sriov_vf(job->adev))
+   amdgpu_sriov_gpu_reset(job->adev, true);
+   else
+   amdgpu_gpu_reset(job->adev);
 }
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 0/4] TDR guilty job feature

2017-05-08 Thread Monk Liu
for SRIOV gpu reset:
this feature allows the driver to judge how long a job may hang
and will kick this job out of the ring_mirror list during recovery if 
the threshold is exceeded.


Monk Liu (4):
  drm/amdgpu:don't invoke srio-gpu-reset in gpu-reset
  drm/amdgpu:use job* to replace voluntary
  drm/amdgpu:only call flr_work under infinite timeout
  drm/amdgpu/SRIOV:implement guilty job TDR for

 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 46 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 11 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c   |  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  6 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h  |  2 +-
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 15 +
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c|  1 +
 drivers/gpu/drm/amd/amdgpu/soc15.c|  4 +--
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11 ++-
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7 
 18 files changed, 92 insertions(+), 31 deletions(-)

-- 
2.7.4



RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock

2017-05-08 Thread Liu, Monk
Andres,

Some previous patches, like moving the KIQ mutex-lock from amdgpu_virt to a
common place, went in over my NAK, but from a technical perspective that
doesn't matter much. This patch and the ones that follow, however, are
headed for a dead end:

1. Don't use the KIQ to access registers inside INTR context.
2. Don't submit CP/HW blocking jobs on the KIQ, since they will prevent world switch.

Besides, if you insist on doing the above things, why not use the MEC2 PIPE1
queue1 to serve your purpose? Is there any reason you must use the KIQ?


BR
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Liu, 
Monk
Sent: Monday, May 08, 2017 1:59 PM
To: Andres Rodriguez ; amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock

Clearly NAK

mutex_lock(&kiq->ring_mutex);
amdgpu_ring_alloc(ring, 32);
amdgpu_ring_emit_rreg(ring, reg);
amdgpu_fence_emit(ring, &f);
amdgpu_ring_commit(ring);
mutex_unlock(&kiq->ring_mutex);


Be aware that amdgpu_fence_emit() introduces a fence_wait() operation, and
sleeping while holding a spin_lock is forbidden. Why do you want to use a
spinlock instead of a mutex? Please explain.

BR Monk

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
Andres Rodriguez
Sent: Saturday, May 06, 2017 1:10 AM
To: amd-gfx@lists.freedesktop.org
Cc: andre...@gmail.com
Subject: [PATCH 4/8] drm/amdgpu: convert kiq ring_mutex to a spinlock

First step towards enabling kiq register operations from an interrupt handler

Signed-off-by: Andres Rodriguez 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 496ad66..1d987e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -941,7 +941,7 @@ struct amdgpu_kiq {
struct amdgpu_ring  ring;
struct amdgpu_irq_src   irq;
uint32_treg_val_offs;
-   struct mutexring_mutex;
+   spinlock_t  ring_lock;
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index ef3a46d..ce22f04 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -169,12 +169,12 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg)
 
BUG_ON(!ring->funcs->emit_rreg);
 
-   mutex_lock(&kiq->ring_mutex);
+   spin_lock(&kiq->ring_lock);
    amdgpu_ring_alloc(ring, 32);
    amdgpu_ring_emit_rreg(ring, reg);
    amdgpu_fence_emit(ring, &f);
    amdgpu_ring_commit(ring);
-   mutex_unlock(&kiq->ring_mutex);
+   spin_unlock(&kiq->ring_lock);
 
r = dma_fence_wait(f, false);
if (r)
@@ -195,12 +195,12 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v)
 
BUG_ON(!ring->funcs->emit_wreg);
 
-   mutex_lock(&kiq->ring_mutex);
+   spin_lock(&kiq->ring_lock);
    amdgpu_ring_alloc(ring, 32);
    amdgpu_ring_emit_wreg(ring, reg, v);
    amdgpu_fence_emit(ring, &f);
    amdgpu_ring_commit(ring);
-   mutex_unlock(&kiq->ring_mutex);
+   spin_unlock(&kiq->ring_lock);
 
r = dma_fence_wait(f, false);
if (r)
@@ -1903,7 +1903,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
    mutex_init(&adev->firmware.mutex);
    mutex_init(&adev->pm.mutex);
    mutex_init(&adev->gfx.gpu_clock_mutex);
-   mutex_init(&adev->gfx.kiq.ring_mutex);
    mutex_init(&adev->srbm_mutex);
    mutex_init(&adev->grbm_idx_mutex);
    mutex_init(&adev->mn_lock);
@@ -1921,6 +1920,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
    spin_lock_init(&adev->gc_cac_idx_lock);
    spin_lock_init(&adev->audio_endpt_idx_lock);
    spin_lock_init(&adev->mm_stats.lock);
+   spin_lock_init(&adev->gfx.kiq.ring_lock);

    INIT_LIST_HEAD(&adev->shadow_list);
    mutex_init(&adev->shadow_list_lock);
--
2.9.3
