Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread sylvain . bertrand
On Mon, Oct 08, 2018 at 08:17:06PM +, sylvain.bertr...@gmail.com wrote:
> Sorry, I was wrong: I thought your patch set enabled DC by default for
> DCE6 (it actually requires the DC kernel parameter).
> I forced it on, and it fails to init DCE6.

I hacked your patch set a bit on amd-staging-drm-next to make it go through
the ASIC init, and I managed to get an X11 display with somewhat garbled
lines, but you can still easily read what's on the screen.

I checked the kernel log and, as you said, I got errors in DM_PPLIB due to an
invalid power level, and errors in the atombios/vbios table parsing regarding
connectors.

It _seems_ there is not much additional work needed to make it work
properly.

That said, since I am not an Android Linux dev, I cannot really understand
why the current DCE6 implementation in amdgpu is a problem.

AFAIK, the real additional thing you get with DC is freesync. But freesync is
only going to be interesting if displays can lower the bottom of their sync
range to 0 and achieve significant power savings that way. For the use case of
very low display refresh rates, I don't even think DisplayPort or HDMI can do
that in a power-friendly way (you would probably have to retrain the link each
time you send a framebuffer to the display).

I don't know which features you get with atomic mode setting that are missing
from legacy mode setting; you are just setting the same video mode in two
different ways.

-- 
Sylvain
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread sylvain . bertrand
On Mon, Oct 08, 2018 at 07:02:54PM +0200, Mauro Rossi wrote:
> Thanks for info, do you have some github or patch to share for
> comparison/mutual knowledge?

Sorry, I was wrong: I thought your patch set enabled DC by default for
DCE6 (it actually requires the DC kernel parameter).
I forced it on, and it fails to init DCE6.

:(

-- 
Sylvain


RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Zhu, Rex
> This means we should either separate filling the BO from allocating it (e.g.
> split amdgpu_ucode_init_bo into two functions) and then only call the filling
> function during GPU reset and resume.

Got it. Thanks.
I will refine this function.

Regards
Rex
> -Original Message-
> From: Christian König 
> Sent: Tuesday, October 9, 2018 2:22 AM
> To: Zhu, Rex ; Koenig, Christian
> ; Deucher, Alexander
> ; Alex Deucher 
> Cc: amd-gfx list 
> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> Am 08.10.2018 um 20:15 schrieb Zhu, Rex:
> >
> >> -Original Message-
> >> From: Koenig, Christian
> >> Sent: Tuesday, October 9, 2018 2:03 AM
> >> To: Zhu, Rex ; Deucher, Alexander
> >> ; Alex Deucher 
> >> Cc: amd-gfx list 
> >> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> >> suspend
> >>
> >> Am 08.10.2018 um 19:58 schrieb Zhu, Rex:
>  -Original Message-
>  From: Christian König 
>  Sent: Tuesday, October 9, 2018 1:32 AM
>  To: Zhu, Rex ; Deucher, Alexander
>  ; Alex Deucher
> 
>  Cc: amd-gfx list 
>  Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
>  suspend
> 
>  Am 08.10.2018 um 18:30 schrieb Zhu, Rex:
> >> -Original Message-
> >> From: Deucher, Alexander
> >> Sent: Tuesday, October 9, 2018 12:21 AM
> >> To: Zhu, Rex ; Alex Deucher
>  
> >> Cc: amd-gfx list 
> >> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
> >> when suspend
> >>
> >>> -Original Message-
> >>> From: amd-gfx  On Behalf
> >> Of
> >>> Zhu, Rex
> >>> Sent: Monday, October 8, 2018 11:57 AM
> >>> To: Alex Deucher 
> >>> Cc: amd-gfx list 
> >>> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
> >>> when suspend
> >>>
> >>>
> >>>
>  -Original Message-
>  From: Alex Deucher 
>  Sent: Thursday, October 4, 2018 11:35 AM
>  To: Zhu, Rex 
>  Cc: amd-gfx list 
>  Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
>  when suspend
> 
>  On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu 
> wrote:
> > The driver doesn't release the ucode memory on suspend, so there is no
> > need to allocate the BO again on resume.
> >
> > Signed-off-by: Rex Zhu 
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > index 9878212..adfeb93 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct
> >> amdgpu_device
>  *adev)
> >return 0;
> >}
> >
> > -   if (!adev->in_gpu_reset) {
> > +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> >err = amdgpu_bo_create_kernel(adev,
> > adev->firmware.fw_size,
>  PAGE_SIZE,
> >amdgpu_sriov_vf(adev) ?
>  AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> > >firmware.fw_buf,
>  Not sure if we support S3 in SR-IOV, but I think this will
>  break it because we'll lose vram contents and not re-init it.
> >>> Confirmed with the SR-IOV team: S3 is not supported in SR-IOV.
> >>>
> >>> But I am still confused why this patch would break suspend in the
> >>> SR-IOV case?
> >> Pinned buffers don't get evicted so if we lose VRAM due to a gpu
> >> reset or S3, the data is lost.  GTT is retained since the OS manages
> that.
> > The GART table was unpinned on suspend, so we don't need to create
> > the BO again; we still copy the ucode into the BO.
> > And on bare metal, this function can return directly for S3.
>  That's irrelevant.
> 
>  The whole code is buggy since amdgpu_ucode_fini_bo() will drop the
>  BO independent of whether we are in reset or in suspend.
> >>> We don't call amdgpu_ucode_fini_bo when suspend/sriov_reset.
> >> Yeah and exactly that's the bug which should be fixed instead.
> > Sorry, I don't understand.
> > Why do we need to release the BO on suspend?
> > In amdgpu we create a bunch of BOs (e.g. vram_scratch.robj) and only free
> > them on driver unload.
> 
> You should not release the BO while suspending, but the
> amdgpu_ucode_fini_bo() and amdgpu_ucode_init_bo() should be called
> under the same conditions.
> 
> This means we should either separate filling the BO from allocating it (e.g.
> split amdgpu_ucode_init_bo into two functions) and then only call the filling
> function during GPU reset and resume.
> 
> Or we add the same "if (!adev->in_gpu_reset && !adev->in_suspend)" check
> to the amdgpu_ucode_fini_bo() function as well.
> 
> I consider the first one cleaner.
> 
> Christian.

RE: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Deucher, Alexander
> -Original Message-
> From: Guenter Roeck  On Behalf Of Guenter Roeck
> Sent: Monday, October 8, 2018 1:41 PM
> To: Deucher, Alexander 
> Cc: Koenig, Christian ; Peng Hao
> ; airl...@linux.ie; linux-ker...@vger.kernel.org;
> dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] amdgpu/gmc : fix compile warning
> 
> On Mon, Oct 08, 2018 at 03:57:07PM +, Deucher, Alexander wrote:
> > > -Original Message-
> > > From: Guenter Roeck  On Behalf Of Guenter
> Roeck
> > > Sent: Monday, October 8, 2018 10:11 AM
> > > To: Koenig, Christian ; Peng Hao
> > > 
> > > Cc: airl...@linux.ie; linux-ker...@vger.kernel.org; dri-
> > > de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; Deucher,
> > > Alexander 
> > > Subject: Re: [PATCH] amdgpu/gmc : fix compile warning
> > >
> > > On 10/08/2018 06:47 AM, Koenig, Christian wrote:
> > > > Am 08.10.2018 um 15:33 schrieb Guenter Roeck:
> > > >> On 10/08/2018 01:00 AM, Christian König wrote:
> > > >>> Am 05.10.2018 um 10:38 schrieb Guenter Roeck:
> > >  On 10/05/2018 01:14 AM, Koenig, Christian wrote:
> > > > Am 04.10.2018 um 20:52 schrieb Guenter Roeck:
> > > >> Hi,
> > > >>
> > > >> On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:
> > > >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
> > > >>>    In function ‘gmc_v8_0_process_interrupt’:
> > > >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
> > > >>>    warning: missing braces around initializer
> > > >>> [-Wmissing-braces]
> > > >>>
> > > >>> Signed-off-by: Peng Hao 
> > > >> Was there any feedback on this patch ? The problem does
> > > >> affect us, and we'll need a fix.
> > > >
> > > > Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".
> > > >
> > > 
> > >  Ah, sorry, I must have missed the discussion.
> > > 
> > >  It is for sure not the best solution, but at least it compiles,
> > >  and it seems to be proliferating.
> > > >>>
> > > >>> Yeah, and exactly that's the problem. As the discussion showed
> > > >>> "{ {
> > > >>> 0 } }" is buggy because it tells the compiler to only initialize
> > > >>> the first member of the structure, but not all of it.
> > > >>>
> > > >>> That is incorrect and rather dangerous because it can lead to
> > > >>> unforeseen results and should probably trigger a warning.
> > > >>>
> > > 
> > >  $ git grep "{ *{ *0 *} *}" | wc
> > >       144    1180   11802
> > >  $ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
> > >    50 459    5239
> > > 
> > > > We should either use only "{ }" or, even better, do the job
> > > > properly and use memset().
> > > 
> > >  I'd rather leave it up to the compiler to decide what is most
> > >  efficient.
> > > >>>
> > > >>> And I would rather prefer to have a working driver :)
> > > >>>
> > > >>
> > > >> So { } isn't correct either ?
> > > >
> > > > Yes, initializing structures with { } is known to be problematic as 
> > > > well.
> > > >
> > > > It doesn't necessarily initialize all bytes when you have padding,
> > > > causing random failures when structures are compared with memcmp().
> > > >
> > > >>
> > > >> One thing I found missing in the discussion was the reference to
> > > >> the C standard.
> > > >> The C99 standard states in section 6.7.8 (Initialization) clause 19:
> > > >> "... all
> > > >> subobjects that are not initialized explicitly shall be
> > > >> initialized implicitly the same as objects that have static storage
> duration".
> > > >> Clause 21 makes further reference to partial initialization,
> > > >> suggesting the same. Various online resources, including the gcc
> > > >> documentation, all state the same. I don't find any reference to
> > > >> a partial initialization which would leave members of a structure
> > > >> undefined. It would be interesting for me to understand how and
> > > >> why this does not apply here.
> > > >>
> > > >> In this context, it is interesting that the other 48 instances of
> > > >> the { { 0 } } initialization in the same driver don't raise
> > > >> similar concerns, nor seemed to have caused any operational
> problems.
> > > >
> > > > Feel free to provide patches to replace those with memset().
> > > >
> > >
> > > Not me. As I see it, the problem, if it exists, would be a violation
> > > of the C standard. I don't believe in hacking around bad C compilers;
> > > I would rather blacklist such compilers.
> > >
> > > >>
> > > >> Anyway, I fixed up the code in our tree (with { }), so I'll leave
> > > >> it up to you folks to decide what if anything to do about it.
> > > >
> > > > Well, considering the known problems with {} initialization I'm
> > > > certainly rejecting all patches which turn memset() into {}.
> > > >
> > >
> > > Please point me to specific instances of this problem.
> >
> > I think there are a number of places in DC (drivers/gpu/drm/amd/display)
> > where we applied the original proposed solution before realizing that it
> > would only initialize the first element.  It would be nice to get them
> > fixed up.

Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Guenter Roeck
On Mon, Oct 08, 2018 at 03:57:07PM +, Deucher, Alexander wrote:
> > -Original Message-
> > From: Guenter Roeck  On Behalf Of Guenter Roeck
> > Sent: Monday, October 8, 2018 10:11 AM
> > To: Koenig, Christian ; Peng Hao
> > 
> > Cc: airl...@linux.ie; linux-ker...@vger.kernel.org; dri-
> > de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; Deucher,
> > Alexander 
> > Subject: Re: [PATCH] amdgpu/gmc : fix compile warning
> > 
> > On 10/08/2018 06:47 AM, Koenig, Christian wrote:
> > > Am 08.10.2018 um 15:33 schrieb Guenter Roeck:
> > >> On 10/08/2018 01:00 AM, Christian König wrote:
> > >>> Am 05.10.2018 um 10:38 schrieb Guenter Roeck:
> >  On 10/05/2018 01:14 AM, Koenig, Christian wrote:
> > > Am 04.10.2018 um 20:52 schrieb Guenter Roeck:
> > >> Hi,
> > >>
> > >> On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:
> > >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
> > >>>    In function ‘gmc_v8_0_process_interrupt’:
> > >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
> > >>>    warning: missing braces around initializer
> > >>> [-Wmissing-braces]
> > >>>
> > >>> Signed-off-by: Peng Hao 
> > >> Was there any feedback on this patch ? The problem does affect
> > >> us, and we'll need a fix.
> > >
> > > Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".
> > >
> > 
> >  Ah, sorry, I must have missed the discussion.
> > 
> >  It is for sure not the best solution, but at least it compiles, and
> >  it seems to be proliferating.
> > >>>
> > >>> Yeah, and exactly that's the problem. As the discussion showed "{ {
> > >>> 0 } }" is buggy because it tells the compiler to only initialize the
> > >>> first member of the structure, but not all of it.
> > >>>
> > >>> That is incorrect and rather dangerous because it can lead to
> > >>> unforeseen results and should probably trigger a warning.
> > >>>
> > 
> >  $ git grep "{ *{ *0 *} *}" | wc
> >       144    1180   11802
> >  $ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
> >    50 459    5239
> > 
> > > We should either use only "{ }" or, even better, do the job
> > > properly and use memset().
> > 
> >  I'd rather leave it up to the compiler to decide what is most
> >  efficient.
> > >>>
> > >>> And I would rather prefer to have a working driver :)
> > >>>
> > >>
> > >> So { } isn't correct either ?
> > >
> > > Yes, initializing structures with { } is known to be problematic as well.
> > >
> > > It doesn't necessarily initialize all bytes when you have padding,
> > > causing random failures when structures are compared with memcmp().
> > >
> > >>
> > >> One thing I found missing in the discussion was the reference to the
> > >> C standard.
> > >> The C99 standard states in section 6.7.8 (Initialization) clause 19:
> > >> "... all
> > >> subobjects that are not initialized explicitly shall be initialized
> > >> implicitly the same as objects that have static storage duration".
> > >> Clause 21 makes further reference to partial initialization,
> > >> suggesting the same. Various online resources, including the gcc
> > >> documentation, all state the same. I don't find any reference to a
> > >> partial initialization which would leave members of a structure
> > >> undefined. It would be interesting for me to understand how and why
> > >> this does not apply here.
> > >>
> > >> In this context, it is interesting that the other 48 instances of the
> > >> { { 0 } } initialization in the same driver don't raise similar
> > >> concerns, nor seemed to have caused any operational problems.
> > >
> > > Feel free to provide patches to replace those with memset().
> > >
> > 
> > Not me. As I see it, the problem, if it exists, would be a violation of
> > the C standard. I don't believe in hacking around bad C compilers; I
> > would rather blacklist such compilers.
> > 
> > >>
> > >> Anyway, I fixed up the code in our tree (with { }), so I'll leave it
> > >> up to you folks to decide what if anything to do about it.
> > >
> > > Well, considering the known problems with {} initialization I'm
> > > certainly rejecting all patches which turn memset() into {}.
> > >
> > 
> > Please point me to specific instances of this problem.
> 
> I think there are a number of places in DC (drivers/gpu/drm/amd/display) 
> where we applied the original proposed solution before realizing that it 
> would only initialize the first element.  It would be nice to get them fixed 
> up.
> 

I think this is factually incorrect. What you might be trying to say is
that padding may not be initialized by anything but memset(). But that is
a different problem.

Guenter
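
The member-initialization vs. padding distinction in this exchange can be
sketched in a few lines of C. This is a hypothetical minimal example (struct
layout and names invented for illustration), not driver code:

```c
#include <assert.h>
#include <string.h>

/* Illustrative struct with internal padding: a char followed by an int
 * typically leaves 3 unspecified padding bytes after 'tag'. */
struct sample {
    char tag;
    int value;
};

/* C99 6.7.8p19/p21: "{ 0 }" (or "{ }") zero-initializes every *member*. */
static struct sample zero_init(void)
{
    struct sample s = { 0 };   /* members zeroed; padding bytes unspecified */
    return s;
}

/* memset() is the only way to guarantee all *bytes* (padding included)
 * are zero, which is what makes a later memcmp() well-defined. */
static int memset_structs_compare_equal(void)
{
    struct sample a, b;
    memset(&a, 0, sizeof(a));
    memset(&b, 0, sizeof(b));
    return memcmp(&a, &b, sizeof(a)) == 0;
}
```

Both points in the thread can hold at once: "{ 0 }" cannot leave a member
undefined, but only memset() makes byte-wise comparison or hashing of a
struct with padding safe.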


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Guenter Roeck
On Mon, Oct 08, 2018 at 05:22:24PM +, Koenig, Christian wrote:
> Am 08.10.2018 um 17:57 schrieb Deucher, Alexander:
>  One thing I found missing in the discussion was the reference to the
>  C standard.
>  The C99 standard states in section 6.7.8 (Initialization) clause 19:
>  "... all
>  subobjects that are not initialized explicitly shall be initialized
>  implicitly the same as objects that have static storage duration".
>  Clause 21 makes further reference to partial initialization,
>  suggesting the same. Various online resources, including the gcc
>  documentation, all state the same. I don't find any reference to a
>  partial initialization which would leave members of a structure
>  undefined. It would be interesting for me to understand how and why
>  this does not apply here.
> 
>  In this context, it is interesting that the other 48 instances of the
>  { { 0 } } initialization in the same driver don't raise similar
>  concerns, nor seemed to have caused any operational problems.
> >>> Feel free to provide patches to replace those with memset().
> >>>
> >> Not me. As I see it, the problem, if it exists, would be a violation of
> >> the C standard. I don't believe in hacking around bad C compilers; I
> >> would rather blacklist such compilers.
> 
> Well then you would need to blacklist basically all gcc variants of the
> last decade or so.
> 
> Initializing only the known members of structures is a perfectly valid
> optimization, and a well-known issue when you then compare structures
> with memcmp() or use their bytes for hashing or something similar.
> 

Isn't that about padding ? That is a completely different issue.

Guenter


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Guenter Roeck

On 10/08/2018 01:00 AM, Christian König wrote:

Am 05.10.2018 um 10:38 schrieb Guenter Roeck:

On 10/05/2018 01:14 AM, Koenig, Christian wrote:

Am 04.10.2018 um 20:52 schrieb Guenter Roeck:

Hi,

On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:

drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
  In function ‘gmc_v8_0_process_interrupt’:
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
  warning: missing braces around initializer [-Wmissing-braces]

Signed-off-by: Peng Hao 

Was there any feedback on this patch ? The problem does affect us,
and we'll need a fix.


Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".



Ah, sorry, I must have missed the discussion.

It is for sure not the best solution, but at least it compiles, and it seems
to be proliferating.


Yeah, and exactly that's the problem. As the discussion showed "{ { 0 } }" is 
buggy because it tells the compiler to only initialize the first member of the structure, 
but not all of it.

That is incorrect and rather dangerous because it can lead to unforeseen
results and should probably trigger a warning.



$ git grep "{ *{ *0 *} *}" | wc
    144    1180   11802
$ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
 50 459    5239


We should either use only "{ }" or, even better, do the job properly and
use memset().


I'd rather leave it up to the compiler to decide what is most efficient.


And I would rather prefer to have a working driver :)



So { } isn't correct either ?

One thing I found missing in the discussion was the reference to the C standard.
The C99 standard states in section 6.7.8 (Initialization) clause 19: "... all
subobjects that are not initialized explicitly shall be initialized implicitly
the same as objects that have static storage duration". Clause 21 makes further
reference to partial initialization, suggesting the same. Various online
resources, including the gcc documentation, all state the same. I don't find
any reference to a partial initialization which would leave members of a 
structure
undefined. It would be interesting for me to understand how and why this does
not apply here.

In this context, it is interesting that the other 48 instances of the
{ { 0 } } initialization in the same driver don't raise similar concerns,
nor seemed to have caused any operational problems.

Anyway, I fixed up the code in our tree (with { }), so I'll leave it
up to you folks to decide what if anything to do about it.

Thanks,
Guenter


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Guenter Roeck

On 10/08/2018 06:47 AM, Koenig, Christian wrote:

Am 08.10.2018 um 15:33 schrieb Guenter Roeck:

On 10/08/2018 01:00 AM, Christian König wrote:

Am 05.10.2018 um 10:38 schrieb Guenter Roeck:

On 10/05/2018 01:14 AM, Koenig, Christian wrote:

Am 04.10.2018 um 20:52 schrieb Guenter Roeck:

Hi,

On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:

drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
   In function ‘gmc_v8_0_process_interrupt’:
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
   warning: missing braces around initializer [-Wmissing-braces]

Signed-off-by: Peng Hao 

Was there any feedback on this patch ? The problem does affect us,
and we'll need a fix.


Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".



Ah, sorry, I must have missed the discussion.

It is for sure not the best solution, but at least it compiles, and
it seems
to be proliferating.


Yeah, and exactly that's the problem. As the discussion showed "{ { 0
} }" is buggy because it tells the compiler to only initialize the
first member of the structure, but not all of it.

That is incorrect and rather dangerous because it can lead to
unforeseen results and should probably trigger a warning.



$ git grep "{ *{ *0 *} *}" | wc
     144    1180   11802
$ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
  50 459    5239


We should either use only "{ }" or, even better, do the job properly and
use memset().


I'd rather leave it up to the compiler to decide what is most
efficient.


And I would rather prefer to have a working driver :)



So { } isn't correct either ?


Yes, initializing structures with { } is known to be problematic as well.

It doesn't necessarily initialize all bytes when you have padding, causing
random failures when structures are compared with memcmp().



One thing I found missing in the discussion was the reference to the C
standard.
The C99 standard states in section 6.7.8 (Initialization) clause 19:
"... all
subobjects that are not initialized explicitly shall be initialized
implicitly
the same as objects that have static storage duration". Clause 21
makes further
reference to partial initialization, suggesting the same. Various online
resources, including the gcc documentation, all state the same. I
don't find
any reference to a partial initialization which would leave members of
a structure
undefined. It would be interesting for me to understand how and why
this does
not apply here.

In this context, it is interesting that the other 48 instances of the
{ { 0 } } initialization in the same driver don't raise similar concerns,
nor seemed to have caused any operational problems.


Feel free to provide patches to replace those with memset().



Not me. As I see it, the problem, if it exists, would be a violation of the
C standard. I don't believe in hacking around bad C compilers; I would
rather blacklist such compilers.



Anyway, I fixed up the code in our tree (with { }), so I'll leave it
up to you folks to decide what if anything to do about it.


Well, considering the known problems with {} initialization I'm certainly
rejecting all patches which turn memset() into {}.



Please point me to specific instances of this problem.

Thanks,
Guenter


[PATCH][drm-next] drm/amdgpu/powerplay: fix missing break in switch statements

2018-10-08 Thread Colin King
From: Colin Ian King 

There are several switch statements that are missing break statements.
Add the missing breaks to handle fall-through corner cases.

Detected by CoverityScan, CID#1457175 ("Missing break in switch")

Fixes: 18aafc59b106 ("drm/amd/powerplay: implement fw related smu interface for 
iceland.")
Signed-off-by: Colin Ian King 
---
 drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c  | 2 ++
 drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c| 2 ++
 drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c | 2 ++
 drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c   | 2 ++
 drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c   | 2 ++
 5 files changed, 10 insertions(+)
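
The failure mode the patch addresses can be reproduced with a small
stand-alone reduction of the nested-switch pattern. This is a hypothetical
sketch (enum, member numbers, and offsets invented for illustration), not
the actual smumgr code:

```c
#include <assert.h>

enum table { SOFT_REGS, DPM_TABLE };

/* Without a break after the inner switch, an unmatched member lookup
 * under SOFT_REGS falls through into the DPM_TABLE case and can return
 * an offset from the wrong table. */
static int offset_buggy(enum table t, int member)
{
    switch (t) {
    case SOFT_REGS:
        switch (member) {
        case 1: return 100;
        }
        /* missing break: falls through into the next case */
    case DPM_TABLE:
        switch (member) {
        case 2: return 200;
        }
        break;
    }
    return 0;   /* "can't get the offset" error path */
}

/* With the break added (as in the patch), the unmatched lookup reaches
 * the error path instead of the wrong table. */
static int offset_fixed(enum table t, int member)
{
    switch (t) {
    case SOFT_REGS:
        switch (member) {
        case 1: return 100;
        }
        break;
    case DPM_TABLE:
        switch (member) {
        case 2: return 200;
        }
        break;
    }
    return 0;
}
```

The bug is silent at runtime because the inner switch exits normally; only
the outer case boundary is crossed, which is exactly what the added breaks
prevent.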

diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
index 18643e06bc6f..669bd0c2a16c 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
@@ -2269,11 +2269,13 @@ static uint32_t ci_get_offsetof(uint32_t type, uint32_t 
member)
case DRAM_LOG_BUFF_SIZE:
return offsetof(SMU7_SoftRegisters, DRAM_LOG_BUFF_SIZE);
}
+   break;
case SMU_Discrete_DpmTable:
switch (member) {
case LowSclkInterruptThreshold:
return offsetof(SMU7_Discrete_DpmTable, 
LowSclkInterruptT);
}
+   break;
}
pr_debug("can't get the offset of type %x member %x\n", type, member);
return 0;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
index ec14798e87b6..bddd6d09f887 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
@@ -2331,6 +2331,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t 
member)
case DRAM_LOG_BUFF_SIZE:
return offsetof(SMU73_SoftRegisters, 
DRAM_LOG_BUFF_SIZE);
}
+   break;
case SMU_Discrete_DpmTable:
switch (member) {
case UvdBootLevel:
@@ -2340,6 +2341,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t 
member)
case LowSclkInterruptThreshold:
return offsetof(SMU73_Discrete_DpmTable, 
LowSclkInterruptThreshold);
}
+   break;
}
pr_warn("can't get the offset of type %x member %x\n", type, member);
return 0;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
index 73aa368a454e..2d4c7f167b88 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
@@ -2237,11 +2237,13 @@ static uint32_t iceland_get_offsetof(uint32_t type, 
uint32_t member)
case DRAM_LOG_BUFF_SIZE:
return offsetof(SMU71_SoftRegisters, 
DRAM_LOG_BUFF_SIZE);
}
+   break;
case SMU_Discrete_DpmTable:
switch (member) {
case LowSclkInterruptThreshold:
return offsetof(SMU71_Discrete_DpmTable, 
LowSclkInterruptThreshold);
}
+   break;
}
pr_warn("can't get the offset of type %x member %x\n", type, member);
return 0;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
index ae8378ed32ee..a2ba5b012866 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
@@ -2619,6 +2619,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, 
uint32_t member)
case DRAM_LOG_BUFF_SIZE:
return offsetof(SMU72_SoftRegisters, 
DRAM_LOG_BUFF_SIZE);
}
+   break;
case SMU_Discrete_DpmTable:
switch (member) {
case UvdBootLevel:
@@ -2628,6 +2629,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, 
uint32_t member)
case LowSclkInterruptThreshold:
return offsetof(SMU72_Discrete_DpmTable, 
LowSclkInterruptThreshold);
}
+   break;
}
pr_warn("can't get the offset of type %x member %x\n", type, member);
return 0;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c 
b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
index 3d415fabbd93..9f71512b2510 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
@@ -2185,6 +2185,7 @@ static uint32_t vegam_get_offsetof(uint32_t type, 
uint32_t member)
case DRAM_LOG_BUFF_SIZE:
return offsetof(SMU75_SoftRegisters, 
DRAM_LOG_BUFF_SIZE);
}
+   break;

Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Christian König

Am 08.10.2018 um 20:15 schrieb Zhu, Rex:



-Original Message-
From: Koenig, Christian
Sent: Tuesday, October 9, 2018 2:03 AM
To: Zhu, Rex ; Deucher, Alexander
; Alex Deucher 
Cc: amd-gfx list 
Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend

Am 08.10.2018 um 19:58 schrieb Zhu, Rex:

-Original Message-
From: Christian König 
Sent: Tuesday, October 9, 2018 1:32 AM
To: Zhu, Rex ; Deucher, Alexander
; Alex Deucher 
Cc: amd-gfx list 
Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend

Am 08.10.2018 um 18:30 schrieb Zhu, Rex:

-Original Message-
From: Deucher, Alexander
Sent: Tuesday, October 9, 2018 12:21 AM
To: Zhu, Rex ; Alex Deucher



Cc: amd-gfx list 
Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend


-Original Message-
From: amd-gfx  On Behalf

Of

Zhu, Rex
Sent: Monday, October 8, 2018 11:57 AM
To: Alex Deucher 
Cc: amd-gfx list 
Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
when suspend




-Original Message-
From: Alex Deucher 
Sent: Thursday, October 4, 2018 11:35 AM
To: Zhu, Rex 
Cc: amd-gfx list 
Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
when suspend

On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:

The driver doesn't release the ucode memory on suspend, so there is no
need to allocate the BO again on resume.

Signed-off-by: Rex Zhu 
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 9878212..adfeb93 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct

amdgpu_device

*adev)

   return 0;
   }

-   if (!adev->in_gpu_reset) {
+   if (!adev->in_gpu_reset && !adev->in_suspend) {
   err = amdgpu_bo_create_kernel(adev,
adev->firmware.fw_size,

PAGE_SIZE,

   amdgpu_sriov_vf(adev) ?

AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,

>firmware.fw_buf,

Not sure if we support S3 in SR-IOV, but I think this will break
it because we'll lose vram contents and not re-init it.

Confirmed with the SR-IOV team: S3 is not supported in SR-IOV.

But I am still confused why this patch would break suspend in the
SR-IOV case?

Pinned buffers don't get evicted so if we lose VRAM due to a gpu
reset or S3, the data is lost.  GTT is retained since the OS manages that.

The GART table was unpinned on suspend, so we don't need to create the
BO again; we still copy the ucode into the BO.

And on bare metal, this function can return directly for S3.

That's irrelevant.

The whole code is buggy since amdgpu_ucode_fini_bo() will drop the BO
independent of whether we are in reset or in suspend.

We don't call amdgpu_ucode_fini_bo when suspend/sriov_reset.

Yeah and exactly that's the bug which should be fixed instead.

Sorry, I don't understand.
Why do we need to release the BO on suspend?
In amdgpu we create a bunch of BOs (e.g. vram_scratch.robj) and only free
them on driver unload.


You should not release the BO while suspending, but
amdgpu_ucode_fini_bo() and amdgpu_ucode_init_bo() should be called under
the same conditions.


This means we should either separate filling the BO from allocating it 
(e.g. split amdgpu_ucode_init_bo into two functions) and then only call 
the filling function during GPU reset and resume.


Or we add the same "if (!adev->in_gpu_reset && !adev->in_suspend)" into 
the amdgpu_ucode_fini_bo() function as well.


I consider the first one more cleaner.

Christian.



Rex


Christian.


Rex


The correct handling here is to remove the if altogether and make sure
amdgpu_bo_create_kernel() is ALWAYS called.

Because then it is always re-created if it isn't there already.

Alternatively we could fix up the callers of amdgpu_ucode_init_bo() and
amdgpu_ucode_fini_bo() to be correctly balanced.

Christian.


Rex



Alex


Rex


Alex


--
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx



RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Zhu, Rex


> -Original Message-
> From: Koenig, Christian
> Sent: Tuesday, October 9, 2018 2:03 AM
> To: Zhu, Rex ; Deucher, Alexander
> ; Alex Deucher 
> Cc: amd-gfx list 
> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> Am 08.10.2018 um 19:58 schrieb Zhu, Rex:
> >
> >> -Original Message-
> >> From: Christian König 
> >> Sent: Tuesday, October 9, 2018 1:32 AM
> >> To: Zhu, Rex ; Deucher, Alexander
> >> ; Alex Deucher 
> >> Cc: amd-gfx list 
> >> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> >> suspend
> >>
> >> Am 08.10.2018 um 18:30 schrieb Zhu, Rex:
>  -Original Message-
>  From: Deucher, Alexander
>  Sent: Tuesday, October 9, 2018 12:21 AM
>  To: Zhu, Rex ; Alex Deucher
> >> 
>  Cc: amd-gfx list 
>  Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
>  suspend
> 
> > -Original Message-
> > From: amd-gfx  On Behalf
> Of
> > Zhu, Rex
> > Sent: Monday, October 8, 2018 11:57 AM
> > To: Alex Deucher 
> > Cc: amd-gfx list 
> > Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
> > when suspend
> >
> >
> >
> >> -Original Message-
> >> From: Alex Deucher 
> >> Sent: Thursday, October 4, 2018 11:35 AM
> >> To: Zhu, Rex 
> >> Cc: amd-gfx list 
> >> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo
> >> when suspend
> >>
> >> On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> >>> driver don't release the ucode memory when suspend. so don't
> >>> need to allocate bo when resume back.
> >>>
> >>> Signed-off-by: Rex Zhu 
> >>> ---
> >>>drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> >>>1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> >>> index 9878212..adfeb93 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> >>> @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct
>  amdgpu_device
> >> *adev)
> >>>   return 0;
> >>>   }
> >>>
> >>> -   if (!adev->in_gpu_reset) {
> >>> +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> >>>   err = amdgpu_bo_create_kernel(adev,
> >>> adev->firmware.fw_size,
> >> PAGE_SIZE,
> >>>   amdgpu_sriov_vf(adev) ?
> >> AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> >>> &adev->firmware.fw_buf,
> >> Not sure if we support S3 in SR-IOV, but I think this will break
> >> it because we'll lose vram contents and not re-init it.
> > Confirm with SR-IOV team, S3 was not supported in SR-IOV.
> >
> >But I still confused why this patch will break the suspend if
> > in SRIOV
> >> case?
>  Pinned buffers don't get evicted so if we lose VRAM due to a gpu
>  reset or S3, the data is lost.  GTT is retained since the OS manages 
>  that.
> >>> The gart table was unpinned when suspend.so don't need to create the
> >>> bo
> >> again. we still copy the ucode to the bo.
> >>> And in baremetal,  this function can return directly for S3.
> >> That's irrelevant.
> >>
> >> The whole code is buggy since amdgpu_ucode_fini_bo() will drop the BO
> >> independent if we are in reset or in suspend.
> > We don't call amdgpu_ucode_fini_bo when suspend/sriov_reset.
> 
> Yeah and exactly that's the bug which should be fixed instead.

Sorry, I don't understand.
Why do we need to release the BO on suspend?
In amdgpu we create a bunch of BOs (vram_scratch.robj, e.g.) and only free them
when the driver unloads.

Rex

> Christian.
> 
> >
> > Rex
> >
> >> The correct handling here is to remove the if all together and make
> >> sure
> >> amdgpu_bo_create_kernel() is ALWAYS called.
> >>
> >> Cause then it is always re-created if it isn't there already.
> >>
> >> Alternatively we could fix up the callers of amdgpu_ucode_init_bo()
> >> and
> >> amdgpu_ucode_fini_bo() to be correctly balanced.
> >>
> >> Christian.
> >>
> >>> Rex
> >>>
> >>>
>  Alex
> 
> > Rex
> >
> >> Alex
> >>
> >>> --
> >>> 1.9.1
> >>>
> >>> ___
> >>> amd-gfx mailing list
> >>> amd-gfx@lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>> ___
> >>> amd-gfx mailing list
> >>> amd-gfx@lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list

Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Koenig, Christian
Am 08.10.2018 um 19:58 schrieb Zhu, Rex:
>
>> -Original Message-
>> From: Christian König 
>> Sent: Tuesday, October 9, 2018 1:32 AM
>> To: Zhu, Rex ; Deucher, Alexander
>> ; Alex Deucher 
>> Cc: amd-gfx list 
>> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
>> suspend
>>
>> Am 08.10.2018 um 18:30 schrieb Zhu, Rex:
 -Original Message-
 From: Deucher, Alexander
 Sent: Tuesday, October 9, 2018 12:21 AM
 To: Zhu, Rex ; Alex Deucher
>> 
 Cc: amd-gfx list 
 Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
 suspend

> -Original Message-
> From: amd-gfx  On Behalf Of
> Zhu, Rex
> Sent: Monday, October 8, 2018 11:57 AM
> To: Alex Deucher 
> Cc: amd-gfx list 
> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
>
>
>
>> -Original Message-
>> From: Alex Deucher 
>> Sent: Thursday, October 4, 2018 11:35 AM
>> To: Zhu, Rex 
>> Cc: amd-gfx list 
>> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
>> suspend
>>
>> On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
>>> driver don't release the ucode memory when suspend. so don't need
>>> to allocate bo when resume back.
>>>
>>> Signed-off-by: Rex Zhu 
>>> ---
>>>drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
>>>1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
>>> index 9878212..adfeb93 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
>>> @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct
 amdgpu_device
>> *adev)
>>>   return 0;
>>>   }
>>>
>>> -   if (!adev->in_gpu_reset) {
>>> +   if (!adev->in_gpu_reset && !adev->in_suspend) {
>>>   err = amdgpu_bo_create_kernel(adev,
>>> adev->firmware.fw_size,
>> PAGE_SIZE,
>>>   amdgpu_sriov_vf(adev) ?
>> AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
>>>   &adev->firmware.fw_buf,
>> Not sure if we support S3 in SR-IOV, but I think this will break it
>> because we'll lose vram contents and not re-init it.
> Confirm with SR-IOV team, S3 was not supported in SR-IOV.
>
>But I still confused why this patch will break the suspend if in SRIOV
>> case?
 Pinned buffers don't get evicted so if we lose VRAM due to a gpu
 reset or S3, the data is lost.  GTT is retained since the OS manages that.
>>> The gart table was unpinned when suspend.so don't need to create the bo
>> again. we still copy the ucode to the bo.
>>> And in baremetal,  this function can return directly for S3.
>> That's irrelevant.
>>
>> The whole code is buggy since amdgpu_ucode_fini_bo() will drop the BO
>> independent if we are in reset or in suspend.
> We don't call amdgpu_ucode_fini_bo when suspend/sriov_reset.

Yeah and exactly that's the bug which should be fixed instead.

Christian.

>
> Rex
>
>> The correct handling here is to remove the if all together and make sure
>> amdgpu_bo_create_kernel() is ALWAYS called.
>>
>> Cause then it is always re-created if it isn't there already.
>>
>> Alternatively we could fix up the callers of amdgpu_ucode_init_bo() and
>> amdgpu_ucode_fini_bo() to be correctly balanced.
>>
>> Christian.
>>
>>> Rex
>>>
>>>
 Alex

> Rex
>
>> Alex
>>
>>> --
>>> 1.9.1
>>>
>>> ___
>>> amd-gfx mailing list
>>> amd-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>> ___
>>> amd-gfx mailing list
>>> amd-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Zhu, Rex


> -Original Message-
> From: Christian König 
> Sent: Tuesday, October 9, 2018 1:32 AM
> To: Zhu, Rex ; Deucher, Alexander
> ; Alex Deucher 
> Cc: amd-gfx list 
> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> Am 08.10.2018 um 18:30 schrieb Zhu, Rex:
> >
> >> -Original Message-
> >> From: Deucher, Alexander
> >> Sent: Tuesday, October 9, 2018 12:21 AM
> >> To: Zhu, Rex ; Alex Deucher
> 
> >> Cc: amd-gfx list 
> >> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> >> suspend
> >>
> >>> -Original Message-
> >>> From: amd-gfx  On Behalf Of
> >>> Zhu, Rex
> >>> Sent: Monday, October 8, 2018 11:57 AM
> >>> To: Alex Deucher 
> >>> Cc: amd-gfx list 
> >>> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> >>> suspend
> >>>
> >>>
> >>>
>  -Original Message-
>  From: Alex Deucher 
>  Sent: Thursday, October 4, 2018 11:35 AM
>  To: Zhu, Rex 
>  Cc: amd-gfx list 
>  Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
>  suspend
> 
>  On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> > driver don't release the ucode memory when suspend. so don't need
> > to allocate bo when resume back.
> >
> > Signed-off-by: Rex Zhu 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > index 9878212..adfeb93 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct
> >> amdgpu_device
>  *adev)
> >  return 0;
> >  }
> >
> > -   if (!adev->in_gpu_reset) {
> > +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> >  err = amdgpu_bo_create_kernel(adev,
> > adev->firmware.fw_size,
>  PAGE_SIZE,
> >  amdgpu_sriov_vf(adev) ?
>  AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> >  &adev->firmware.fw_buf,
>  Not sure if we support S3 in SR-IOV, but I think this will break it
>  because we'll lose vram contents and not re-init it.
> >>> Confirm with SR-IOV team, S3 was not supported in SR-IOV.
> >>>
> >>>   But I still confused why this patch will break the suspend if in SRIOV
> case?
> >> Pinned buffers don't get evicted so if we lose VRAM due to a gpu
> >> reset or S3, the data is lost.  GTT is retained since the OS manages that.
> > The gart table was unpinned when suspend.so don't need to create the bo
> again. we still copy the ucode to the bo.
> > And in baremetal,  this function can return directly for S3.
> 
> That's irrelevant.
> 
> The whole code is buggy since amdgpu_ucode_fini_bo() will drop the BO
> independent if we are in reset or in suspend.

We don't call amdgpu_ucode_fini_bo when suspend/sriov_reset.

Rex

> The correct handling here is to remove the if all together and make sure
> amdgpu_bo_create_kernel() is ALWAYS called.
> 
> Cause then it is always re-created if it isn't there already.
> 
> Alternatively we could fix up the callers of amdgpu_ucode_init_bo() and
> amdgpu_ucode_fini_bo() to be correctly balanced.
> 
> Christian.
> 
> >
> > Rex
> >
> >
> >> Alex
> >>
> >>> Rex
> >>>
>  Alex
> 
> > --
> > 1.9.1
> >
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>> ___
> >>> amd-gfx mailing list
> >>> amd-gfx@lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 3/6] drm: add support of syncobj timeline point wait v2

2018-10-08 Thread Jason Ekstrand
On Mon, Oct 8, 2018 at 12:53 AM Zhou, David(ChunMing) 
wrote:

> >> Another general comment (no good place to put it) is that I think we
> want two kinds of waits:  Wait for time point to be completed and wait for
> time point to become available.  The first is the usual CPU wait for
> completion while the second is for use by userspace drivers to wait until
> the first moment where they can submit work which depends on a given time
> point.
>
>
>
> Hi Jason,
>
>
>
> How about adding two new wait flags?
>
> DRM_SYNCOBJ_WAIT_FLAGS_WAIT_COMPLETED
>
> DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE
>

Those seem like fine names to me.  We should require that one of the two
flags be present when the sync object is a timeline.

--Jason


> Thanks,
>
> David
>
>
>
> *From:* Christian König 
> *Sent:* Tuesday, September 25, 2018 5:50 PM
> *To:* Jason Ekstrand ; Zhou, David(ChunMing) <
> david1.z...@amd.com>
> *Cc:* amd-gfx mailing list ; Maling list -
> DRI developers 
> *Subject:* Re: [PATCH 3/6] drm: add support of syncobj timeline point
> wait v2
>
>
>
> Am 25.09.2018 um 11:22 schrieb Jason Ekstrand:
>
> On Thu, Sep 20, 2018 at 6:04 AM Chunming Zhou  wrote:
>
> The points array is a one-to-one match with the syncobjs array.
> v2:
> add a separate ioctl for timeline point wait, otherwise we break uapi.
>
>
>
> I think ioctl structs can be extended as long as fields aren't
> re-ordered.  I'm not sure on the details of this though as I'm not a
> particularly experienced kernel developer.
>
>
> Yeah, that is correct. The problem in this particular case is that we
> don't change the direct IOCTL parameter, but rather the array it points to.
>
> We could do something like keep the existing handles array and add a
> separate optional one for the timeline points. That would also drop the
> need for the padding of the structure.
>
>
> Another general comment (no good place to put it) is that I think we want
> two kinds of waits:  Wait for time point to be completed and wait for time
> point to become available.  The first is the usual CPU wait for completion
> while the second is for use by userspace drivers to wait until the first
> moment where they can submit work which depends on a given time point.
>
>
> Oh, yeah that is a really good point as well.
>
> Christian.
>
>
>
>
> Signed-off-by: Chunming Zhou 
> ---
>  drivers/gpu/drm/drm_internal.h |  2 +
>  drivers/gpu/drm/drm_ioctl.c|  2 +
>  drivers/gpu/drm/drm_syncobj.c  | 99 +-
>  include/uapi/drm/drm.h | 14 +
>  4 files changed, 103 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_internal.h
> b/drivers/gpu/drm/drm_internal.h
> index 0c4eb4a9ab31..566d44e3c782 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -183,6 +183,8 @@ int drm_syncobj_fd_to_handle_ioctl(struct drm_device
> *dev, void *data,
>struct drm_file *file_private);
>  int drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>struct drm_file *file_private);
> +int drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
> +   struct drm_file *file_private);
>  int drm_syncobj_reset_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file_private);
>  int drm_syncobj_signal_ioctl(struct drm_device *dev, void *data,
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 6b4a633b4240..c0891614f516 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -669,6 +669,8 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
>   DRM_UNLOCKED|DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF(DRM_IOCTL_SYNCOBJ_WAIT, drm_syncobj_wait_ioctl,
>   DRM_UNLOCKED|DRM_RENDER_ALLOW),
> +   DRM_IOCTL_DEF(DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT,
> drm_syncobj_timeline_wait_ioctl,
> + DRM_UNLOCKED|DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF(DRM_IOCTL_SYNCOBJ_RESET, drm_syncobj_reset_ioctl,
>   DRM_UNLOCKED|DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF(DRM_IOCTL_SYNCOBJ_SIGNAL, drm_syncobj_signal_ioctl,
> diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> index 67472bd77c83..a43de0e4616c 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -126,13 +126,14 @@ static void drm_syncobj_add_callback_locked(struct
> drm_syncobj *syncobj,
>  }
>
>  static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj
> *syncobj,
> +u64 point,
>  struct dma_fence **fence,
>  struct drm_syncobj_cb *cb,
>  drm_syncobj_func_t func)
>  {
> int ret;
>
> -   ret = drm_syncobj_search_fence(syncobj, 0, 0, fence);
> +   ret = 

Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Christian König

Am 08.10.2018 um 18:30 schrieb Zhu, Rex:



-Original Message-
From: Deucher, Alexander
Sent: Tuesday, October 9, 2018 12:21 AM
To: Zhu, Rex ; Alex Deucher 
Cc: amd-gfx list 
Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend


-Original Message-
From: amd-gfx  On Behalf Of
Zhu, Rex
Sent: Monday, October 8, 2018 11:57 AM
To: Alex Deucher 
Cc: amd-gfx list 
Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend




-Original Message-
From: Alex Deucher 
Sent: Thursday, October 4, 2018 11:35 AM
To: Zhu, Rex 
Cc: amd-gfx list 
Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
suspend

On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:

The driver doesn't release the ucode memory on suspend, so there's no need
to allocate the BO again on resume.

Signed-off-by: Rex Zhu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 9878212..adfeb93 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 		return 0;
 	}
 
-	if (!adev->in_gpu_reset) {
+	if (!adev->in_gpu_reset && !adev->in_suspend) {
 		err = amdgpu_bo_create_kernel(adev, adev->firmware.fw_size, PAGE_SIZE,
 					      amdgpu_sriov_vf(adev) ?
 					      AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
 					      &adev->firmware.fw_buf,

Not sure if we support S3 in SR-IOV, but I think this will break it
because we'll lose vram contents and not re-init it.

Confirmed with the SR-IOV team: S3 is not supported in SR-IOV.

But I'm still confused why this patch would break suspend in the SR-IOV case?

Pinned buffers don't get evicted, so if we lose VRAM due to a GPU reset or S3,
the data is lost.  GTT is retained since the OS manages that.

The GART table was unpinned on suspend, so we don't need to create the BO again;
we still copy the ucode to the BO.
And on bare metal, this function can return directly for S3.


That's irrelevant.

The whole code is buggy since amdgpu_ucode_fini_bo() will drop the BO
independent of whether we are in reset or in suspend.


The correct handling here is to remove the if altogether and make sure
amdgpu_bo_create_kernel() is ALWAYS called.

Because then it is always re-created if it isn't there already.

Alternatively we could fix up the callers of amdgpu_ucode_init_bo() and
amdgpu_ucode_fini_bo() to be correctly balanced.


Christian.



Rex



Alex


Rex


Alex


--
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Koenig, Christian
Am 08.10.2018 um 17:57 schrieb Deucher, Alexander:
 One thing I found missing in the discussion was the reference to the
 C standard.
 The C99 standard states in section 6.7.8 (Initialization) clause 19:
 "... all
 subobjects that are not initialized explicitly shall be initialized
 implicitly the same as objects that have static storage duration".
 Clause 21 makes further reference to partial initialization,
 suggesting the same. Various online resources, including the gcc
 documentation, all state the same. I don't find any reference to a
 partial initialization which would leave members of a structure
 undefined. It would be interesting for me to understand how and why
 this does not apply here.

 In this context, it is interesting that the other 48 instances of the
 { { 0 } } initialization in the same driver don't raise similar
 concerns, nor seemed to have caused any operational problems.
>>> Feel free to provide patches to replace those with memset().
>>>
>> Not me. As I see it, the problem, if it exists, would be a violation of the C
>> standard. I don't believe hacking around bad C compilers. I would rather
>> blacklist such compilers.

Well then you would need to blacklist basically all gcc variants of the 
last decade or so.

Initializing only known members of structures is a perfectly valid
optimization, and a well-known issue when you then compare the structure
with memcmp() or use the bytes for hashing or something similar.

 Anyway, I fixed up the code in our tree (with { }), so I'll leave it
 up to you folks to decide what if anything to do about it.
>>> Well considering the known problems with {} initialization I'm
>>> certainly rejecting all patches which turns memset() into {}.
>>>
>> Please point me to specific instances of this problem.

See here for a good example of how people try to avoid issues: 
https://cgit.freedesktop.org/mesa/mesa/commit/?id=9422999e4041d4e984acbd2f44813d5928d20f18

> I think there are a number of places in DC (drivers/gpu/drm/amd/display) 
> where we applied the original proposed solution before realizing that it 
> would only initialize the first element.  It would be nice to get them fixed 
> up.

Yes, agree.

Christian.

>
> Alex
>

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Mauro Rossi
Hi Sylvain,

On Mon, Oct 8, 2018 at 2:04 PM  wrote:
>
> I am currently testing your patch set, on amd-staging-drm-next
> (380d480842d584278dba9aa74341017d8c7d8c23) with an AMD tahiti xt part and a
> displayport monitor.
> patch02 drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c did not apply 
> but
> seems kind of benign.
>
> It's working out of the box on my AMD tahiti xt part. I did not manage to 
> break
> it with aggressive mode programming. Let's see how it goes with my everday
> usage.

Thanks for info, do you have some github or patch to share for
comparison/mutual knowledge?

>
> > The series adds preliminar SI support as a Proof Of Concept,
> > based on the idea that DCE6 is similar to DCE8, to be reviewed and refined
>
> Did want to do it, but did drop it due to DC code getting fixed with too much
> changes.
> Brutally mapping DCE6 to DCE8 is an act of faith... and it's working on my
> part.

Do you mean just pretending SI/DCE6 parts belonged to the CIK/DCE8 family?
That was a doubt for me, because GCN 2nd Generation additions were:

FreeSync support (which should be missing in SI/GCN 1st generation),
AMD TrueAudio (not sure about impacts in DC),
A revised version of AMD PowerTune technology (less states in GCN 1st gen),
GCN 2nd generation introduced an entity called "Shader Engine"

While implementing the DCE6 code paths with a 'systematic conservative approach'
I have started to check how/whether to extend the approach to dce60/resources
and dce60/irq.
I'd like to know whether to push further in that direction or to keep the
DCE8 headers/masks "hack".

One problem I see is mosaic colored artifacts on the screen surface in
the 3D renders of 3DMark Sling Shot Extreme;
another visual problem is some imperfection of video sync/buffer swaps
with the drm_hwcomposer stack, same as for other GCN families.

It will be interesting to launch some conformance tools like Piglit on
Linux and the Android CTS dEQP VK tests,
but only after having triaged/removed most of the current drm gralloc regressions.

Mauro

>
>
> > Android-x86 need/motivation lies in the following chain of dependencies:
> > Vulkan radv requires gbm gralloc prime_fd support,
> > gbm gralloc requires drm hwcomposer,
> > drm hwcomposer requires Atomic Display Framework,
> > Atomic Display Framework requires AMD DC, currently not supporting SI.
> >
> > So the goals are:
> > 1) to get Vulkan radv working on SI parts for android-x86.
>
> AFAIK, Vulkan support is not dependent on the display block. I am running 
> heavy
> vulkan games on a custom gnu/linux x86_64 AMD hardware based system, then the
> hwcomposer is android only?

Yes, at the moment drm_hwcomposer is used only in Android builds

>
> > 2) to remove the gap in SI (GCN 1st gen) not having atomic support.
>
> I was nearly sure that atomic support was implicitly added for parts
> supporting only legacy DRM mode programming interfaces?

The drm_hwcomposer API does not see atomic properties with amdgpu on SI parts.
If the Atomic Display Framework were available in amdgpu, then hwc, gbm gralloc
and radv would already be working on Android.

Also, radeondrmfb does not support the Atomic Display Framework, and that is a
problem for having one stack for all drivers, because with AMD DC, radeon
r5xx...r7xx parts will not work even if OpenGL ES is supported.

Mauro


>
>
> --
> Sylvain
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Zhu, Rex


> -Original Message-
> From: Deucher, Alexander
> Sent: Tuesday, October 9, 2018 12:21 AM
> To: Zhu, Rex ; Alex Deucher 
> Cc: amd-gfx list 
> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> > -Original Message-
> > From: amd-gfx  On Behalf Of
> > Zhu, Rex
> > Sent: Monday, October 8, 2018 11:57 AM
> > To: Alex Deucher 
> > Cc: amd-gfx list 
> > Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> > suspend
> >
> >
> >
> > > -Original Message-
> > > From: Alex Deucher 
> > > Sent: Thursday, October 4, 2018 11:35 AM
> > > To: Zhu, Rex 
> > > Cc: amd-gfx list 
> > > Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> > > suspend
> > >
> > > On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> > > >
> > > > driver don't release the ucode memory when suspend. so don't need
> > > > to allocate bo when resume back.
> > > >
> > > > Signed-off-by: Rex Zhu 
> > > > ---
> > > >  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > > index 9878212..adfeb93 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > > @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct
> amdgpu_device
> > > *adev)
> > > > return 0;
> > > > }
> > > >
> > > > -   if (!adev->in_gpu_reset) {
> > > > +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> > > > err = amdgpu_bo_create_kernel(adev,
> > > > adev->firmware.fw_size,
> > > PAGE_SIZE,
> > > > amdgpu_sriov_vf(adev) ?
> > > AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> > > > &adev->firmware.fw_buf,
> > >
> > > Not sure if we support S3 in SR-IOV, but I think this will break it
> > > because we'll lose vram contents and not re-init it.
> >
> > Confirm with SR-IOV team, S3 was not supported in SR-IOV.
> >
> >  But I still confused why this patch will break the suspend if in SRIOV 
> > case?
> 
> Pinned buffers don't get evicted so if we lose VRAM due to a gpu reset or S3,
> the data is lost.  GTT is retained since the OS manages that.

The GART table was unpinned on suspend, so we don't need to create the BO again;
we still copy the ucode to the BO.
And on bare metal, this function can return directly for S3.

Rex


> Alex
> 
> >
> > Rex
> >
> > > Alex
> > >
> > > > --
> > > > 1.9.1
> > > >
> > > > ___
> > > > amd-gfx mailing list
> > > > amd-gfx@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx  On Behalf Of Zhu,
> Rex
> Sent: Monday, October 8, 2018 11:57 AM
> To: Alex Deucher 
> Cc: amd-gfx list 
> Subject: RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> 
> 
> > -Original Message-
> > From: Alex Deucher 
> > Sent: Thursday, October 4, 2018 11:35 AM
> > To: Zhu, Rex 
> > Cc: amd-gfx list 
> > Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> > suspend
> >
> > On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> > >
> > > driver don't release the ucode memory when suspend. so don't need to
> > > allocate bo when resume back.
> > >
> > > Signed-off-by: Rex Zhu 
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > index 9878212..adfeb93 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > > @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct amdgpu_device
> > *adev)
> > > return 0;
> > > }
> > >
> > > -   if (!adev->in_gpu_reset) {
> > > +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> > > err = amdgpu_bo_create_kernel(adev,
> > > adev->firmware.fw_size,
> > PAGE_SIZE,
> > > amdgpu_sriov_vf(adev) ?
> > AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> > > &adev->firmware.fw_buf,
> >
> > Not sure if we support S3 in SR-IOV, but I think this will break it
> > because we'll lose vram contents and not re-init it.
> 
> Confirm with SR-IOV team, S3 was not supported in SR-IOV.
> 
>  But I still confused why this patch will break the suspend if in SRIOV case?

Pinned buffers don't get evicted so if we lose VRAM due to a gpu reset or S3, 
the data is lost.  GTT is retained since the OS manages that.

Alex

> 
> Rex
> 
> > Alex
> >
> > > --
> > > 1.9.1
> > >
> > > ___
> > > amd-gfx mailing list
> > > amd-gfx@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
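[Editor's note] The suspend/resume concern in this thread can be sketched as a toy model. This is NOT driver code: only the `in_gpu_reset`/`in_suspend` field names come from the quoted patch; everything else (the `dev_state` struct, `s3_cycle`, the return values) is invented for illustration of why skipping the re-allocation is only safe when the buffer object's contents survive.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the discussion above -- NOT driver code. */
struct dev_state {
	bool in_gpu_reset;
	bool in_suspend;
	bool bo_in_vram;   /* SR-IOV places the ucode bo in VRAM, else GTT */
	bool bo_valid;     /* whether the bo currently holds valid firmware */
};

/* VRAM contents are lost across S3; GTT is preserved by the OS. */
static void s3_cycle(struct dev_state *d)
{
	d->in_suspend = true;
	if (d->bo_in_vram)
		d->bo_valid = false;
}

/* Mirrors the patched check: allocation (and firmware copy) is skipped
 * on resume, which is only safe if the old contents survived. */
static int ucode_init_bo(struct dev_state *d)
{
	if (!d->in_gpu_reset && !d->in_suspend)
		d->bo_valid = true;    /* fresh allocation + firmware upload */
	return d->bo_valid ? 0 : -1;
}
```

A bare-metal (GTT-backed) device resumes fine under this model, while a VRAM-backed one would fail, which is Alex's concern; it is moot only because SR-IOV does not support S3.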


RE: [PATCH 4/5] drm/amdgpu: Add fw load in gfx_v8 and sdma_v3

2018-10-08 Thread Zhu, Rex
Thanks.
I will fix this error in a following patch.

Best Regards
Rex

> -----Original Message-----
> From: Alex Deucher 
> Sent: Thursday, October 4, 2018 11:33 AM
> To: Zhu, Rex 
> Cc: amd-gfx list 
> Subject: Re: [PATCH 4/5] drm/amdgpu: Add fw load in gfx_v8 and sdma_v3
> 
> On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> >
> > gfx and sdma can be initialized before smu.
> >
> > Signed-off-by: Rex Zhu 
> 
> I think sdma_v2_4.c needs a similar update for iceland.  With that fixed:
> Reviewed-by: Alex Deucher 
> 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c  | 11 +++
> > drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c |  8 
> >  2 files changed, 19 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> > index 6b1954e..77e05c1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> > @@ -4180,9 +4180,20 @@ static void gfx_v8_0_rlc_start(struct
> > amdgpu_device *adev)
> >
> >  static int gfx_v8_0_rlc_resume(struct amdgpu_device *adev)  {
> > +   int r;
> > +
> > gfx_v8_0_rlc_stop(adev);
> > gfx_v8_0_rlc_reset(adev);
> > gfx_v8_0_init_pg(adev);
> > +
> > +   if (adev->powerplay.pp_funcs->load_firmware) {
> > > +   r = adev->powerplay.pp_funcs->load_firmware(adev->powerplay.pp_handle);
> > +   if (r) {
> > +   pr_err("firmware loading failed\n");
> > +   return r;
> > +   }
> > +   }
> > +
> > gfx_v8_0_rlc_start(adev);
> >
> > return 0;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> > b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> > index 6fb3eda..0bdde7f 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> > @@ -788,6 +788,14 @@ static int sdma_v3_0_start(struct amdgpu_device
> > *adev)  {
> > int r;
> >
> > +   if (adev->powerplay.pp_funcs->load_firmware) {
> > +   r = adev->powerplay.pp_funcs->load_firmware(adev->powerplay.pp_handle);
> > +   if (r) {
> > +   pr_err("firmware loading failed\n");
> > +   return r;
> > +   }
> > +   }
> > +
> > /* disable sdma engine before programing it */
> > sdma_v3_0_ctx_switch_enable(adev, false);
> > sdma_v3_0_enable(adev, false);
> > --
> > 1.9.1
> >
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
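[Editor's note] The ordering this patch establishes (load microcode through the optional powerplay callback, start the engine only on success) can be sketched in isolation. This is a hedged illustration, not driver code: the real code dereferences `adev->powerplay.pp_funcs->load_firmware`; the `pp_funcs`/`engine_resume` names below are simplified stand-ins.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the powerplay callback table. */
struct pp_funcs {
	int (*load_firmware)(void *handle);
};

static int engine_started;

static int engine_resume(const struct pp_funcs *funcs, void *handle)
{
	if (funcs && funcs->load_firmware) {   /* callback is optional */
		int r = funcs->load_firmware(handle);
		if (r)
			return r;              /* fail before starting the engine */
	}
	engine_started = 1;
	return 0;
}
```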


RE: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Deucher, Alexander
> -----Original Message-----
> From: Guenter Roeck  On Behalf Of Guenter Roeck
> Sent: Monday, October 8, 2018 10:11 AM
> To: Koenig, Christian ; Peng Hao
> 
> Cc: airl...@linux.ie; linux-ker...@vger.kernel.org; dri-
> de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; Deucher,
> Alexander 
> Subject: Re: [PATCH] amdgpu/gmc : fix compile warning
> 
> On 10/08/2018 06:47 AM, Koenig, Christian wrote:
> > Am 08.10.2018 um 15:33 schrieb Guenter Roeck:
> >> On 10/08/2018 01:00 AM, Christian König wrote:
> >>> Am 05.10.2018 um 10:38 schrieb Guenter Roeck:
>  On 10/05/2018 01:14 AM, Koenig, Christian wrote:
> > Am 04.10.2018 um 20:52 schrieb Guenter Roeck:
> >> Hi,
> >>
> >> On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:
> >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
> >>>    In function ‘gmc_v8_0_process_interrupt’:
> >>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
> >>>    warning: missing braces around initializer
> >>> [-Wmissing-braces]
> >>>
> >>> Signed-off-by: Peng Hao 
> >> Was there any feedback on this patch ? The problem does affect
> >> us, and we'll need a fix.
> >
> > Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".
> >
> 
>  Ah, sorry, I must have missed the discussion.
> 
>  It is for sure not the best solution, but at least it compiles, and
>  it seems to be proliferating.
> >>>
> >>> Yeah, and exactly that's the problem. As the discussion showed "{ {
> >>> 0 } }" is buggy because it tells the compiler to only initialize the
> >>> first member of the structure, but not all of it.
> >>>
> >>> That is incorrect and rather dangerous cause it can lead to
> >>> unforeseen results and should probably trigger a warning.
> >>>
> 
>  $ git grep "{ *{ *0 *} *}" | wc
>       144    1180   11802
>  $ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
>    50 459    5239
> 
> > We should either use only "{ }" or even better make nails with
> > heads and use memset().
> 
>  I'd rather leave it up to the compiler to decide what is most
>  efficient.
> >>>
> >>> And I would rather prefer to have a working driver :)
> >>>
> >>
> >> So { } isn't correct either ?
> >
> > Yes, initializing structures with { } is known to be problematic as well.
> >
> > It doesn't necessarily initialize all bytes when you have padding,
> > causing random failures when structures are compared with memcmp().
> >
> >>
> >> One thing I found missing in the discussion was the reference to the
> >> C standard.
> >> The C99 standard states in section 6.7.8 (Initialization) clause 19:
> >> "... all
> >> subobjects that are not initialized explicitly shall be initialized
> >> implicitly the same as objects that have static storage duration".
> >> Clause 21 makes further reference to partial initialization,
> >> suggesting the same. Various online resources, including the gcc
> >> documentation, all state the same. I don't find any reference to a
> >> partial initialization which would leave members of a structure
> >> undefined. It would be interesting for me to understand how and why
> >> this does not apply here.
> >>
> >> In this context, it is interesting that the other 48 instances of the
> >> { { 0 } } initialization in the same driver don't raise similar
> >> concerns, nor seemed to have caused any operational problems.
> >
> > Feel free to provide patches to replace those with memset().
> >
> 
Not me. As I see it, the problem, if it exists, would be a violation of the C
> standard. I don't believe in hacking around bad C compilers. I would rather
> blacklist such compilers.
> 
> >>
> >> Anyway, I fixed up the code in our tree (with { }), so I'll leave it
> >> up to you folks to decide what if anything to do about it.
> >
> > Well considering the known problems with {} initialization I'm
> > certainly rejecting all patches which turns memset() into {}.
> >
> 
> Please point me to specific instances of this problem.

I think there are a number of places in DC (drivers/gpu/drm/amd/display) where 
we applied the original proposed solution before realizing that it would only 
initialize the first element.  It would be nice to get them fixed up.

Alex
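[Editor's note] For readers following the standards argument above, here is a minimal sketch (the `demo` struct is invented, not from the driver) of what C99 6.7.8 actually guarantees. Per clauses 19/21, `{ { 0 } }` zero-initializes every named member, not just the first one; what neither `{ }` nor `{ { 0 } }` guarantees is the value of padding bytes, which is the memcmp() hazard raised in the thread, and which memset() does cover.

```c
#include <assert.h>
#include <string.h>

/* Illustrative struct with a nested first member and, on typical
 * ABIs, padding after 'tag'. Not taken from the driver. */
struct demo {
	struct { int x; } hdr;
	int  value;
	char tag;      /* usually followed by padding bytes */
};

/* C99 6.7.8p19/p21: members without an explicit initializer are
 * initialized like objects with static storage duration, i.e. zeroed.
 * So despite the -Wmissing-braces style debate, every named member
 * of d is 0 here. */
static int all_members_zero(void)
{
	struct demo d = { { 0 } };
	return d.hdr.x == 0 && d.value == 0 && d.tag == 0;
}

/* memset() additionally gives the padding bytes a known value, making
 * whole-struct memcmp() well defined. */
static void zero_all_bytes(struct demo *d)
{
	memset(d, 0, sizeof(*d));
}
```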



RE: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when suspend

2018-10-08 Thread Zhu, Rex


> -----Original Message-----
> From: Alex Deucher 
> Sent: Thursday, October 4, 2018 11:35 AM
> To: Zhu, Rex 
> Cc: amd-gfx list 
> Subject: Re: [PATCH 1/5] drm/amdgpu: Don't reallocate ucode bo when
> suspend
> 
> On Wed, Oct 3, 2018 at 7:11 AM Rex Zhu  wrote:
> >
> > The driver doesn't release the ucode memory on suspend, so there is no
> > need to allocate the bo again on resume.
> >
> > Signed-off-by: Rex Zhu 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > index 9878212..adfeb93 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> > @@ -434,7 +434,7 @@ int amdgpu_ucode_init_bo(struct amdgpu_device
> *adev)
> > return 0;
> > }
> >
> > -   if (!adev->in_gpu_reset) {
> > +   if (!adev->in_gpu_reset && !adev->in_suspend) {
> > err = amdgpu_bo_create_kernel(adev, adev->firmware.fw_size, PAGE_SIZE,
> > amdgpu_sriov_vf(adev) ? AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
> > &adev->firmware.fw_buf,
> 
> Not sure if we support S3 in SR-IOV, but I think this will break it because
> we'll lose vram contents and not re-init it.

Confirmed with the SR-IOV team: S3 is not supported in SR-IOV.

But I am still confused about why this patch would break suspend in the SR-IOV case?

Rex

> Alex
> 
> > --
> > 1.9.1
> >
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Deucher, Alexander
It's trivial to add the metadata/version headers to the existing firmware, so 
that is not an issue.  The issue is with the firmware itself.  If we used the 
existing firmware, we'd have to add a bunch of really ugly hacks in the driver 
to work around the current limitations.


Alex



From: amd-gfx  on behalf of Mike Lothian 

Sent: Monday, October 8, 2018 8:16:01 AM
To: Koenig, Christian
Cc: Mauro Rossi; Wentland, Harry; amd-gfx@lists.freedesktop.org
Subject: Re: [RFC] drm/amd/display: add SI support to AMD DC

I thought it just required the existing firmware, with padding / extra info put 
around it?

On Mon, 8 Oct 2018 at 12:29 Christian König <ckoenig.leichtzumer...@gmail.com>
wrote:
UVD/VCE on SI with amdgpu would need new firmware.

And so far we never had time to actually look into releasing that firmware.

Regards,
Christian.


Am 08.10.2018 um 13:22 schrieb Mauro Rossi:
Hi Mike,
On Mon, Oct 8, 2018 at 1:00 PM Mike Lothian <m...@fireburn.co.uk> wrote:
Hi Mauro

Do you know if there are any plans to add in UVD support on SI too?

Thanks

Mike

At the moment my focus is on getting a conformant, working and stable
implementation of the Atomic Display Framework, with the objective of having it
upstreamed to the amd-gfx branch, then staging (drm-next), and maybe merged into
the Linux kernel.

To be honest, my attempt is based on code path inspection and mimicking,
so at this moment I do not even know the state of UVD and what changes are
needed. But, based on what I saw for the DCE6 support addition on top of DCE8,
covering all compatible HW modules makes a lot of sense and is an opportunity
worth exploiting, if feasible.

For this to happen in the most complete and reliable way, the feedback of the
staff who worked on DAL/DC will be essential, because what I did for now was
adapt the DCE8 code to work for DCE6. It was an "optimistic monkey with a
keyboard" approach (with all due respect for monkeys with keyboards :-) ),
so I may have missed dozens of changes.

Mauro


On Mon, 8 Oct 2018 at 03:24 Mauro Rossi <issor.or...@gmail.com> wrote:
[PATCH 01/10] drm/amd/display: add asics info for SI parts
[PATCH 02/10] drm/amd/display: dc/dce: add DCE6 support
[PATCH 03/10] drm/amd/display: dc/core: add DCE6 support
[PATCH 04/10] drm/amd/display: dc/bios: add support for DCE6
[PATCH 05/10] drm/amd/display: dc/gpio: add support for DCE6
[PATCH 06/10] drm/amd/display: dc/i2caux: add support for DCE6
[PATCH 07/10] drm/amd/display: dc/irq: add support for DCE6
[PATCH 08/10] drm/amd/display: amdgpu_dm: add SI support
[PATCH 09/10] drm/amdgpu: enable DC support for SI parts
[PATCH 10/10] drm/amd/display: enable SI support in the Kconfig

The series adds preliminary SI support as a Proof Of Concept,
based on the idea that DCE6 is similar to DCE8, to be reviewed and refined.

Android-x86 need/motivation lies in the following chain of dependencies:
Vulkan radv requires gbm gralloc prime_fd support,
gbm gralloc requires drm hwcomposer,
drm hwcomposer requires Atomic Display Framework,
Atomic Display Framework requires AMD DC, currently not supporting SI.

So the goals are:
1) to get Vulkan radv working on SI parts for android-x86.
2) to remove the gap in SI (GCN 1st gen) not having atomic support.

DCE6 specific code was implemented as a replica of existing DCE8 support
and based on how DCE8 specific code was added on top of DCE10,11 support
by adding dce60* sources, functions, macros for each existing in dce80*

CONFIG_DRM_AMD_DC_SI parameter has been added to control SI support in DC

During this first iteration of review, there are aspects to verify:
- dce60 code has been added mechanically, so there may be redundancies
and room for refactoring part of the code
- dce60_resources had too many build errors due to missing DCE6 macros;
to temporarily overcome the problem, the dce_8_0_{d,sh_mask}.h headers
were used for the PoC
- dc/irq suffered the same problem, so the dce_8_0_{d,sh_mask}.h headers
were used for the PoC
- gfx6 may require some ad hoc initialization, skipped for the moment
- Hainan specific code requires review, as some documentation and code paths
seem to indicate that family may not have DCE6; please confirm
- video decoding blocks code have not been touched
- dc/dce/dce_clock_source.{c,h} may be missing some SI/DCE6 specifics
- dc/dce/dce_dmcu.{c,h} may be missing some SI/DCE6 specifics
- dc/dce/dce_hwseq.h may be missing some SI/DCE6 specifics
- dc/dce/dce_link_encoder.h may be missing some SI/DCE6 specifics
- dc/dce/dce_stream_encoder.h may be missing some SI/DCE6 specifics
- dc/amdgpu_dm/* changes may be incomplete
- Any other omissions to be reviewed
- Feedback on best testing strategy required

Review from an expert of the DC impacted modules is recommended

SW Layer
/===\
| DC        Display  Timing   Mode     Asic       |
| Interface Service  Service  Manager  Capability*

Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Koenig, Christian
Am 08.10.2018 um 15:33 schrieb Guenter Roeck:
> On 10/08/2018 01:00 AM, Christian König wrote:
>> Am 05.10.2018 um 10:38 schrieb Guenter Roeck:
>>> On 10/05/2018 01:14 AM, Koenig, Christian wrote:
 Am 04.10.2018 um 20:52 schrieb Guenter Roeck:
> Hi,
>
> On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:
>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
>>   In function ‘gmc_v8_0_process_interrupt’:
>> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
>>   warning: missing braces around initializer [-Wmissing-braces]
>>
>> Signed-off-by: Peng Hao 
> Was there any feedback on this patch ? The problem does affect us,
> and we'll need a fix.

 Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".

>>>
>>> Ah, sorry, I must have missed the discussion.
>>>
>>> It is for sure not the best solution, but at least it compiles, and 
>>> it seems
>>> to be proliferating.
>>
>> Yeah, and exactly that's the problem. As the discussion showed "{ { 0 
>> } }" is buggy because it tells the compiler to only initialize the 
>> first member of the structure, but not all of it.
>>
>> That is incorrect and rather dangerous cause it can lead to 
>> unforeseen results and should probably trigger a warning.
>>
>>>
>>> $ git grep "{ *{ *0 *} *}" | wc
>>>     144    1180   11802
>>> $ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
>>>  50 459    5239
>>>
 We should either use only "{ }" or even better make nails with 
 heads and
 use memset().
>>>
>>> I'd rather leave it up to the compiler to decide what is most 
>>> efficient.
>>
>> And I would rather prefer to have a working driver :)
>>
>
> So { } isn't correct either ?

Yes, initializing structures with { } is known to be problematic as well.

It doesn't necessarily initialize all bytes when you have padding, causing
random failures when structures are compared with memcmp().

>
> One thing I found missing in the discussion was the reference to the C 
> standard.
> The C99 standard states in section 6.7.8 (Initialization) clause 19: 
> "... all
> subobjects that are not initialized explicitly shall be initialized 
> implicitly
> the same as objects that have static storage duration". Clause 21 
> makes further
> reference to partial initialization, suggesting the same. Various online
> resources, including the gcc documentation, all state the same. I 
> don't find
> any reference to a partial initialization which would leave members of 
> a structure
> undefined. It would be interesting for me to understand how and why 
> this does
> not apply here.
>
> In this context, it is interesting that the other 48 instances of the
> { { 0 } } initialization in the same driver don't raise similar concerns,
> nor seemed to have caused any operational problems.

Feel free to provide patches to replace those with memset().

>
> Anyway, I fixed up the code in our tree (with { }), so I'll leave it
> up to you folks to decide what if anything to do about it.

Well considering the known problems with {} initialization I'm certainly 
rejecting all patches which turns memset() into {}.

Regards,
Christian.

>
> Thanks,
> Guenter

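[Editor's note] A minimal sketch of the memset() pattern Christian advocates above. The `sample` struct and `make_sample` helper are invented for illustration: clearing every byte first, padding included, makes whole-struct memcmp() reliable, whereas the padding bytes of an automatic object initialized with `= { }` are unspecified.

```c
#include <assert.h>
#include <string.h>

/* Illustrative struct with padding between 'kind' and 'payload' on
 * typical 64-bit ABIs; not taken from the driver. */
struct sample {
	char kind;
	long payload;
};

/* memcmp()-safe construction: clear every byte, padding included,
 * then fill in the members. */
static struct sample make_sample(char kind, long payload)
{
	struct sample s;
	memset(&s, 0, sizeof(s));
	s.kind = kind;
	s.payload = payload;
	return s;
}
```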


[PATCH 8/8] drm/amdgpu: use paging queue for VM page table updates

2018-10-08 Thread Christian König
Only for testing, not sure if we should keep it like this.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index a362904d73f7..5fa80b231da3 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2052,7 +2052,7 @@ static void sdma_v4_0_set_vm_pte_funcs(struct 
amdgpu_device *adev)
 
adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
for (i = 0; i < adev->sdma.num_instances; i++) {
-   sched = &adev->sdma.instance[i].ring.sched;
+   sched = &adev->sdma.instance[i].page.sched;
adev->vm_manager.vm_pte_rqs[i] =
&sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
}
-- 
2.14.1



[PATCH 1/8] drm/amdgpu: fix incorrect use of amdgpu_irq_add_id in si_dma.c

2018-10-08 Thread Christian König
Adding a second irq source because of a different src_id is actually a
bug.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h |  4 
 drivers/gpu/drm/amd/amdgpu/si_dma.c  | 27 ---
 2 files changed, 8 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
index d17503f0df8e..500113ec65ca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
@@ -46,10 +46,6 @@ struct amdgpu_sdma_instance {
 
 struct amdgpu_sdma {
struct amdgpu_sdma_instance instance[AMDGPU_MAX_SDMA_INSTANCES];
-#ifdef CONFIG_DRM_AMDGPU_SI
-   //SI DMA has a difference trap irq number for the second engine
-   struct amdgpu_irq_src   trap_irq_1;
-#endif
struct amdgpu_irq_src   trap_irq;
struct amdgpu_irq_src   illegal_inst_irq;
int num_instances;
diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c 
b/drivers/gpu/drm/amd/amdgpu/si_dma.c
index d4ceaf440f26..adbaea6da0d7 100644
--- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
+++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
@@ -502,12 +502,14 @@ static int si_dma_sw_init(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
/* DMA0 trap event */
-   r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 224, &adev->sdma.trap_irq);
+   r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 224,
+ &adev->sdma.trap_irq);
if (r)
return r;
 
/* DMA1 trap event */
-   r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 244, &adev->sdma.trap_irq_1);
+   r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 244,
+ &adev->sdma.trap_irq);
if (r)
return r;
 
@@ -649,17 +651,10 @@ static int si_dma_process_trap_irq(struct amdgpu_device 
*adev,
  struct amdgpu_irq_src *source,
  struct amdgpu_iv_entry *entry)
 {
-   amdgpu_fence_process(&adev->sdma.instance[0].ring);
-
-   return 0;
-}
-
-static int si_dma_process_trap_irq_1(struct amdgpu_device *adev,
- struct amdgpu_irq_src *source,
- struct amdgpu_iv_entry *entry)
-{
-   amdgpu_fence_process(&adev->sdma.instance[1].ring);
-
+   if (entry->src_id == 224)
+   amdgpu_fence_process(&adev->sdma.instance[0].ring);
+   else
+   amdgpu_fence_process(&adev->sdma.instance[1].ring);
return 0;
 }
 
@@ -786,11 +781,6 @@ static const struct amdgpu_irq_src_funcs 
si_dma_trap_irq_funcs = {
.process = si_dma_process_trap_irq,
 };
 
-static const struct amdgpu_irq_src_funcs si_dma_trap_irq_funcs_1 = {
-   .set = si_dma_set_trap_irq_state,
-   .process = si_dma_process_trap_irq_1,
-};
-
 static const struct amdgpu_irq_src_funcs si_dma_illegal_inst_irq_funcs = {
.process = si_dma_process_illegal_inst_irq,
 };
@@ -799,7 +789,6 @@ static void si_dma_set_irq_funcs(struct amdgpu_device *adev)
 {
adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
adev->sdma.trap_irq.funcs = &si_dma_trap_irq_funcs;
-   adev->sdma.trap_irq_1.funcs = &si_dma_trap_irq_funcs_1;
adev->sdma.illegal_inst_irq.funcs = &si_dma_illegal_inst_irq_funcs;
 }
 
-- 
2.14.1
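[Editor's note] The dispatch pattern this patch adopts, one irq source whose handler selects the engine from the interrupt's src_id rather than one source per engine, can be sketched in isolation. The src_id values 224/244 come from the patch; the `fences_processed` array and function name below are simplified stand-ins, not driver code.

```c
#include <assert.h>

/* Trap interrupt src_ids, as used in the quoted patch. */
enum { DMA0_TRAP_SRC_ID = 224, DMA1_TRAP_SRC_ID = 244 };

static int fences_processed[2];

/* One handler for both engines, keyed on src_id. */
static int dma_process_trap_irq(unsigned int src_id)
{
	if (src_id == DMA0_TRAP_SRC_ID)
		fences_processed[0]++;   /* instance 0 ring */
	else
		fences_processed[1]++;   /* instance 1 ring */
	return 0;
}
```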



[PATCH 4/8] drm/amdgpu: remove non gfx specific handling from sdma_v4_0_gfx_resume

2018-10-08 Thread Christian König
Needed to start using the paging queue.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 36 +++---
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 5ecf6c9252c4..1124b45d166d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -686,13 +686,10 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device 
*adev, unsigned int i)
u32 wb_offset;
u32 doorbell;
u32 doorbell_offset;
-   u32 temp;
u64 wptr_gpu_addr;
 
wb_offset = (ring->rptr_offs * 4);
 
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
-
/* Set ring buffer size in dwords */
rb_bufsz = order_base_2(ring->ring_size / 4);
rb_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_CNTL));
@@ -752,18 +749,6 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device 
*adev, unsigned int i)
/* set minor_ptr_update to 0 after wptr programed */
WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 
0);
 
-   /* set utc l1 enable flag always to 1 */
-   temp = RREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL));
-   temp = REG_SET_FIELD(temp, SDMA0_CNTL, UTC_L1_ENABLE, 1);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL), temp);
-
-   if (!amdgpu_sriov_vf(adev)) {
-   /* unhalt engine */
-   temp = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_F32_CNTL));
-   temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_F32_CNTL), 
temp);
-   }
-
/* setup the wptr shadow polling */
wptr_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_WPTR_POLL_ADDR_LO),
@@ -942,9 +927,28 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
}
 
/* start the gfx rings and rlc compute queues */
-   for (i = 0; i < adev->sdma.num_instances; i++)
+   for (i = 0; i < adev->sdma.num_instances; i++) {
+   uint32_t temp;
+
+   WREG32(sdma_v4_0_get_reg_offset(adev, i,
+   mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
sdma_v4_0_gfx_resume(adev, i);
 
+   /* set utc l1 enable flag always to 1 */
+   temp = RREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL));
+   temp = REG_SET_FIELD(temp, SDMA0_CNTL, UTC_L1_ENABLE, 1);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL), temp);
+
+   if (!amdgpu_sriov_vf(adev)) {
+   /* unhalt engine */
+   temp = RREG32(sdma_v4_0_get_reg_offset(adev, i,
+   mmSDMA0_F32_CNTL));
+   temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i,
+   mmSDMA0_F32_CNTL), temp);
+   }
+   }
+
if (amdgpu_sriov_vf(adev)) {
sdma_v4_0_ctx_switch_enable(adev, true);
sdma_v4_0_enable(adev, true);
-- 
2.14.1



[PATCH 5/8] drm/amdgpu: remove SRIOV specific handling from sdma_v4_0_gfx_resume

2018-10-08 Thread Christian König
Just use the same code path for both SRIOV and bare metal.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 1124b45d166d..61da9b862ede 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -723,11 +723,6 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device 
*adev, unsigned int i)
/* before programing wptr to a less value, need set minor_ptr_update 
first */
WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 
1);
 
-   if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for 
wptr */
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), 
lower_32_bits(ring->wptr) << 2);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr) << 2);
-   }
-
doorbell = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_DOORBELL));
doorbell_offset = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_DOORBELL_OFFSET));
 
@@ -743,8 +738,7 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device 
*adev, unsigned int i)
adev->nbio_funcs->sdma_doorbell_range(adev, i, ring->use_doorbell,
  ring->doorbell_index);
 
-   if (amdgpu_sriov_vf(adev))
-   sdma_v4_0_ring_set_wptr(ring);
+   sdma_v4_0_ring_set_wptr(ring);
 
/* set minor_ptr_update to 0 after wptr programed */
WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 
0);
-- 
2.14.1



[PATCH 3/8] drm/amdgpu: add basics for SDMA page queue support

2018-10-08 Thread Christian König
Just the common helper and a new ring in the SDMA instance.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 10 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
index bc9244b429ef..0fb9907494bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
@@ -34,11 +34,9 @@ struct amdgpu_sdma_instance * 
amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
int i;
 
for (i = 0; i < adev->sdma.num_instances; i++)
-   if (&adev->sdma.instance[i].ring == ring)
-   break;
+   if (ring == &adev->sdma.instance[i].ring ||
+   ring == &adev->sdma.instance[i].page)
+   return &adev->sdma.instance[i];
 
-   if (i < AMDGPU_MAX_SDMA_INSTANCES)
-   return &adev->sdma.instance[i];
-   else
-   return NULL;
+   return NULL;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
index 500113ec65ca..556db42edaed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
@@ -41,6 +41,7 @@ struct amdgpu_sdma_instance {
uint32_tfeature_version;
 
struct amdgpu_ring  ring;
+   struct amdgpu_ring  page;
boolburst_nop;
 };
 
-- 
2.14.1
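[Editor's note] The reworked lookup in this patch can be sketched with simplified stand-ins for the amdgpu types (the struct layouts below are invented): an instance now owns two rings, `ring` and the new `page` ring, and the loop returns as soon as either matches, replacing the old post-loop index check.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the amdgpu structures. */
struct ring { int dummy; };
struct sdma_instance { struct ring ring, page; };

#define NUM_INSTANCES 2
static struct sdma_instance inst[NUM_INSTANCES];

static struct sdma_instance *get_sdma_instance(struct ring *r)
{
	int i;

	for (i = 0; i < NUM_INSTANCES; i++)
		if (r == &inst[i].ring || r == &inst[i].page)
			return &inst[i];
	return NULL;   /* early return replaces the old post-loop index test */
}
```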



[PATCH 2/8] drm/amdgpu: fix sdma v4 startup under SRIOV

2018-10-08 Thread Christian König
Under SRIOV we were enabling the ring buffer before it was initialized.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 234 -
 1 file changed, 116 insertions(+), 118 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index c20d413f277c..5ecf6c9252c4 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -673,13 +673,14 @@ static void sdma_v4_0_enable(struct amdgpu_device *adev, 
bool enable)
  * sdma_v4_0_gfx_resume - setup and start the async dma engines
  *
  * @adev: amdgpu_device pointer
+ * @i: instance to resume
  *
  * Set up the gfx DMA ring buffers and enable them (VEGA10).
  * Returns 0 for success, error for failure.
  */
-static int sdma_v4_0_gfx_resume(struct amdgpu_device *adev)
+static void sdma_v4_0_gfx_resume(struct amdgpu_device *adev, unsigned int i)
 {
-   struct amdgpu_ring *ring;
+   struct amdgpu_ring *ring = &adev->sdma.instance[i].ring;
u32 rb_cntl, ib_cntl, wptr_poll_cntl;
u32 rb_bufsz;
u32 wb_offset;
@@ -687,129 +688,108 @@ static int sdma_v4_0_gfx_resume(struct amdgpu_device 
*adev)
u32 doorbell_offset;
u32 temp;
u64 wptr_gpu_addr;
-   int i, r;
 
-   for (i = 0; i < adev->sdma.num_instances; i++) {
-   ring = &adev->sdma.instance[i].ring;
-   wb_offset = (ring->rptr_offs * 4);
+   wb_offset = (ring->rptr_offs * 4);
 
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
 
-   /* Set ring buffer size in dwords */
-   rb_bufsz = order_base_2(ring->ring_size / 4);
-   rb_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_CNTL));
-   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SIZE, 
rb_bufsz);
+   /* Set ring buffer size in dwords */
+   rb_bufsz = order_base_2(ring->ring_size / 4);
+   rb_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_CNTL));
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SIZE, rb_bufsz);
 #ifdef __BIG_ENDIAN
-   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, 
RB_SWAP_ENABLE, 1);
-   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL,
-   RPTR_WRITEBACK_SWAP_ENABLE, 1);
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SWAP_ENABLE, 1);
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL,
+   RPTR_WRITEBACK_SWAP_ENABLE, 1);
 #endif
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), 
rb_cntl);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
 
-   /* Initialize the ring buffer's read and write pointers */
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR), 
0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_RPTR_HI), 0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), 
0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_WPTR_HI), 0);
+   /* Initialize the ring buffer's read and write pointers */
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR), 0);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_HI), 0);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), 0);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), 0);
 
-   /* set the wb address whether it's enabled or not */
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_RPTR_ADDR_HI),
-  upper_32_bits(adev->wb.gpu_addr + wb_offset) & 
0x);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_RPTR_ADDR_LO),
-  lower_32_bits(adev->wb.gpu_addr + wb_offset) & 
0xFFFC);
+   /* set the wb address whether it's enabled or not */
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_ADDR_HI),
+  upper_32_bits(adev->wb.gpu_addr + wb_offset) & 0x);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_RPTR_ADDR_LO),
+  lower_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFC);
 
-   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, 
RPTR_WRITEBACK_ENABLE, 1);
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, 
RPTR_WRITEBACK_ENABLE, 1);
 
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_BASE), 
ring->gpu_addr >> 8);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, 
mmSDMA0_GFX_RB_BASE_HI), ring->gpu_addr >> 40);
+   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_BASE), 
ring->gpu_addr >> 8);
+   

[PATCH 7/8] drm/amdgpu: activate paging queue on SDMA v4

2018-10-08 Thread Christian König
Implement all the necessary stuff to get those extra rings working.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 324 -
 1 file changed, 274 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 55384bad7a70..a362904d73f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -427,6 +427,57 @@ static void sdma_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
}
 }
 
+/**
+ * sdma_v4_0_page_ring_get_wptr - get the current write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Get the current wptr from the hardware (VEGA10+).
+ */
+static uint64_t sdma_v4_0_page_ring_get_wptr(struct amdgpu_ring *ring)
+{
+   struct amdgpu_device *adev = ring->adev;
+   u64 wptr;
+
+   if (ring->use_doorbell) {
+   /* XXX check if swapping is necessary on BE */
+   wptr = READ_ONCE(*((u64 *)&adev->wb.wb[ring->wptr_offs]));
+   } else {
+   wptr = RREG32_SDMA(ring->me, mmSDMA0_PAGE_RB_WPTR_HI);
+   wptr = wptr << 32;
+   wptr |= RREG32_SDMA(ring->me, mmSDMA0_PAGE_RB_WPTR);
+   }
+
+   return wptr >> 2;
+}
+
+/**
+ * sdma_v4_0_page_ring_set_wptr - commit the write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Write the wptr back to the hardware (VEGA10+).
+ */
+static void sdma_v4_0_page_ring_set_wptr(struct amdgpu_ring *ring)
+{
+   struct amdgpu_device *adev = ring->adev;
+
+   if (ring->use_doorbell) {
+   u64 *wb = (u64 *)&adev->wb.wb[ring->wptr_offs];
+
+   /* XXX check if swapping is necessary on BE */
+   WRITE_ONCE(*wb, (ring->wptr << 2));
+   WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
+   } else {
+   uint64_t wptr = ring->wptr << 2;
+
+   WREG32_SDMA(ring->me, mmSDMA0_PAGE_RB_WPTR,
+   lower_32_bits(wptr));
+   WREG32_SDMA(ring->me, mmSDMA0_PAGE_RB_WPTR_HI,
+   upper_32_bits(wptr));
+   }
+}
+
 static void sdma_v4_0_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count)
 {
struct amdgpu_sdma_instance *sdma = amdgpu_get_sdma_instance(ring);
@@ -597,6 +648,35 @@ static void sdma_v4_0_rlc_stop(struct amdgpu_device *adev)
/* XXX todo */
 }
 
+/**
+ * sdma_v4_0_page_stop - stop the page async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the page async dma ring buffers (VEGA10).
+ */
+static void sdma_v4_0_page_stop(struct amdgpu_device *adev)
+{
+   struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].page;
+   struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].page;
+   u32 rb_cntl, ib_cntl;
+   int i;
+
+   for (i = 0; i < adev->sdma.num_instances; i++) {
+   rb_cntl = RREG32_SDMA(i, mmSDMA0_PAGE_RB_CNTL);
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_PAGE_RB_CNTL,
+   RB_ENABLE, 0);
+   WREG32_SDMA(i, mmSDMA0_PAGE_RB_CNTL, rb_cntl);
+   ib_cntl = RREG32_SDMA(i, mmSDMA0_PAGE_IB_CNTL);
+   ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_PAGE_IB_CNTL,
+   IB_ENABLE, 0);
+   WREG32_SDMA(i, mmSDMA0_PAGE_IB_CNTL, ib_cntl);
+   }
+
+   sdma0->ready = false;
+   sdma1->ready = false;
+}
+
 /**
 * sdma_v4_0_ctx_switch_enable - stop the async dma engines context switch
  *
@@ -664,6 +744,7 @@ static void sdma_v4_0_enable(struct amdgpu_device *adev, bool enable)
if (enable == false) {
sdma_v4_0_gfx_stop(adev);
sdma_v4_0_rlc_stop(adev);
+   sdma_v4_0_page_stop(adev);
}
 
for (i = 0; i < adev->sdma.num_instances; i++) {
@@ -673,6 +754,23 @@ static void sdma_v4_0_enable(struct amdgpu_device *adev, bool enable)
}
 }
 
+/**
+ * sdma_v4_0_rb_cntl - get parameters for rb_cntl
+ */
+static uint32_t sdma_v4_0_rb_cntl(struct amdgpu_ring *ring, uint32_t rb_cntl)
+{
+   /* Set ring buffer size in dwords */
+   uint32_t rb_bufsz = order_base_2(ring->ring_size / 4);
+
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SIZE, rb_bufsz);
+#ifdef __BIG_ENDIAN
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SWAP_ENABLE, 1);
+   rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL,
+   RPTR_WRITEBACK_SWAP_ENABLE, 1);
+#endif
+   return rb_cntl;
+}
+
 /**
  * sdma_v4_0_gfx_resume - setup and start the async dma engines
  *
@@ -686,7 +784,6 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device *adev, unsigned int i)
 {
struct amdgpu_ring *ring = >sdma.instance[i].ring;
u32 rb_cntl, ib_cntl, wptr_poll_cntl;
-   u32 rb_bufsz;
u32 wb_offset;
u32 doorbell;
u32 doorbell_offset;
@@ -694,15 +791,8 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device *adev, 

[PATCH 6/8] drm/amdgpu: add some [WR]REG32_SDMA macros to sdma_v4_0.c

2018-10-08 Thread Christian König
Significantly shortens the code.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 126 -
 1 file changed, 63 insertions(+), 63 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 61da9b862ede..55384bad7a70 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -54,6 +54,11 @@ MODULE_FIRMWARE("amdgpu/raven2_sdma.bin");
 #define SDMA0_POWER_CNTL__ON_OFF_CONDITION_HOLD_TIME_MASK  0x00F8L
 #define SDMA0_POWER_CNTL__ON_OFF_STATUS_DURATION_TIME_MASK 0xFC00L
 
+#define WREG32_SDMA(instance, offset, value) \
+   WREG32(sdma_v4_0_get_reg_offset(adev, (instance), (offset)), value)
+#define RREG32_SDMA(instance, offset) \
+   RREG32(sdma_v4_0_get_reg_offset(adev, (instance), (offset)))
+
 static void sdma_v4_0_set_ring_funcs(struct amdgpu_device *adev);
 static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev);
 static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev);
@@ -367,8 +372,8 @@ static uint64_t sdma_v4_0_ring_get_wptr(struct amdgpu_ring *ring)
} else {
u32 lowbit, highbit;
 
-   lowbit = RREG32(sdma_v4_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR)) >> 2;
-   highbit = RREG32(sdma_v4_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI)) >> 2;
+   lowbit = RREG32_SDMA(ring->me, mmSDMA0_GFX_RB_WPTR) >> 2;
+   highbit = RREG32_SDMA(ring->me, mmSDMA0_GFX_RB_WPTR_HI) >> 2;
 
DRM_DEBUG("wptr [%i]high== 0x%08x low==0x%08x\n",
ring->me, highbit, lowbit);
@@ -415,8 +420,10 @@ static void sdma_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
lower_32_bits(ring->wptr << 2),
ring->me,
upper_32_bits(ring->wptr << 2));
-   WREG32(sdma_v4_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr << 2));
-   WREG32(sdma_v4_0_get_reg_offset(adev, ring->me, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr << 2));
+   WREG32_SDMA(ring->me, mmSDMA0_GFX_RB_WPTR,
+   lower_32_bits(ring->wptr << 2));
+   WREG32_SDMA(ring->me, mmSDMA0_GFX_RB_WPTR_HI,
+   upper_32_bits(ring->wptr << 2));
}
 }
 
@@ -566,12 +573,12 @@ static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev)
amdgpu_ttm_set_buffer_funcs_status(adev, false);
 
for (i = 0; i < adev->sdma.num_instances; i++) {
-   rb_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL));
+   rb_cntl = RREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL);
rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
-   ib_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL));
+   WREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL, rb_cntl);
+   ib_cntl = RREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL);
ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+   WREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL, ib_cntl);
}
 
sdma0->ready = false;
@@ -628,18 +635,15 @@ static void sdma_v4_0_ctx_switch_enable(struct amdgpu_device *adev, bool enable)
}
 
for (i = 0; i < adev->sdma.num_instances; i++) {
-   f32_cntl = RREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL));
+   f32_cntl = RREG32_SDMA(i, mmSDMA0_CNTL);
f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
AUTO_CTXSW_ENABLE, enable ? 1 : 0);
if (enable && amdgpu_sdma_phase_quantum) {
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_PHASE0_QUANTUM),
-  phase_quantum);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_PHASE1_QUANTUM),
-  phase_quantum);
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_PHASE2_QUANTUM),
-  phase_quantum);
+   WREG32_SDMA(i, mmSDMA0_PHASE0_QUANTUM, phase_quantum);
+   WREG32_SDMA(i, mmSDMA0_PHASE1_QUANTUM, phase_quantum);
+   WREG32_SDMA(i, mmSDMA0_PHASE2_QUANTUM, phase_quantum);
}
-   WREG32(sdma_v4_0_get_reg_offset(adev, i, mmSDMA0_CNTL), f32_cntl);
+   WREG32_SDMA(i, mmSDMA0_CNTL, f32_cntl);
}
 
 }
@@ -663,9 +667,9 @@ static void sdma_v4_0_enable(struct amdgpu_device *adev, bool enable)
}
 
for (i = 0; i < adev->sdma.num_instances; i++) {
-

Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread sylvain . bertrand
On Mon, Oct 08, 2018 at 12:04:23PM +, sylvain.bertr...@gmail.com wrote:
> I am currently testing your patch set, on amd-staging-drm-next
> (380d480842d584278dba9aa74341017d8c7d8c23) with an AMD tahiti xt part and a
> displayport monitor.

Forgot a very important thing: I run it with the linux tsc fix, or all linux
timings will be wrong and no displayport programing will happen. This fix is
not in current vanilla amd-staging-drm-next.

-- 
Sylvain
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Mike Lothian
I thought it just required the existing firmware, with padding / extra info
put around it?

On Mon, 8 Oct 2018 at 12:29 Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:

> UVD/VCE on SI with amdgpu would need new firmware.
>
> And so far we never had time to actually look into releasing that firmware.
>
> Regards,
> Christian.
>
>
> Am 08.10.2018 um 13:22 schrieb Mauro Rossi:
>
> Hi Mike,
> On Mon, Oct 8, 2018 at 1:00 PM Mike Lothian  wrote:
>
>> Hi Mauro
>>
>> Do you know if there are any plans to add in UVD support on SI too?
>>
>> Thanks
>>
>> Mike
>>
>
> At the moment my focus is on getting a conformant, working and stable
> implementation of Atomic Display Framework, with the objective to have it
> upstreamed to amd-gfx branch, then staging (drm-next) and maybe merged in
> linux kernel.
>
> To be honest my attempt is based on code paths inspection and mimicking,
> so in this moment I do not even know the state of UVD and what changes are
> needed,
> but, based on what I saw for DCE6 support addition on top of DCE8,
> covering all compatible HW modules makes a lot of sense and it is an
> opportunity to exploit,
> if feasible.
>
> For this to happen in most complete and reliable way the feedback of staff
> who worked on DAL/DC
> will be essential, because what I did now was to adapt code for DCE8 to
> work for DCE6,
> but it was like an "optimistic monkey with a keyboard" approach, with all
> due respect for monkeys with keyboards,
> :-) I may have missed dozens of changes.
>
> Mauro
>
>
>>
>> On Mon, 8 Oct 2018 at 03:24 Mauro Rossi  wrote:
>>
>>> [PATCH 01/10] drm/amd/display: add asics info for SI parts
>>> [PATCH 02/10] drm/amd/display: dc/dce: add DCE6 support
>>> [PATCH 03/10] drm/amd/display: dc/core: add DCE6 support
>>> [PATCH 04/10] drm/amd/display: dc/bios: add support for DCE6
>>> [PATCH 05/10] drm/amd/display: dc/gpio: add support for DCE6
>>> [PATCH 06/10] drm/amd/display: dc/i2caux: add support for DCE6
>>> [PATCH 07/10] drm/amd/display: dc/irq: add support for DCE6
>>> [PATCH 08/10] drm/amd/display: amdgpu_dm: add SI support
>>> [PATCH 09/10] drm/amdgpu: enable DC support for SI parts
>>> [PATCH 10/10] drm/amd/display: enable SI support in the Kconfig
>>>
>>> The series adds preliminary SI support as a Proof Of Concept,
>>> based on the idea that DCE6 is similar to DCE8, to be reviewed and
>>> refined
>>>
>>> Android-x86 need/motivation lies in the following chain of dependencies:
>>> Vulkan radv requires gbm gralloc prime_fd support,
>>> gbm gralloc requires drm hwcomposer,
>>> drm hwcomposer requires Atomic Display Framework,
>>> Atomic Display Framework requires AMD DC, currently not supporting SI.
>>>
>>> So the goals are:
>>> 1) to get Vulkan radv working on SI parts for android-x86.
>>> 2) to remove the gap in SI (GCN 1st gen) not having atomic support.
>>>
>>> DCE6 specific code was implemented as a replica of existing DCE8 support
>>> and based on how DCE8 specific code was added on top of DCE10,11 support
>>> by adding dce60* sources, functions, macros for each existing in dce80*
>>>
>>> CONFIG_DRM_AMD_DC_SI parameter has been added to control SI support in DC
>>>
>>> During this first iteration of review, there are aspects to verify:
>>> - dce60 code has been added mechanically, so there may be redundancies
>>> and space for refactoring part of the code
>>> - dce60_resources was having too many building errors due to missing
>>> DCE6 macros
>>> in order to temporarily overcome the problem dce_8_0_{d,sh_mask}.h
>>> headers
>>> were used for the PoC
>>> - dc/irq suffered the same problem dce_8_0_{d,sh_mask}.h headers
>>> were used for the PoC
>>> - gfx6 may require some ad hoc initialization, skipped for the moment
>>> - Hainan specific code requires review, as some documentation and code
>>> paths
>>> seem to point that family may not have DCE6, please confirm
>>> - video decoding blocks code have not been touched
>>> - dc/dce/dce_clock_source.{c,h} may be missing some SI/DCE6 specifics
>>> - dc/dce/dce_dmcu.{c,h} may be missing some SI/DCE6 specifics
>>> - dc/dce/dce_hwseq.h may be missing some SI/DCE6 specifics
>>> - dc/dce/dce_link_encoder.h may be missing some SI/DCE6 specifics
>>> - dc/dce/dce_stream_encoder.h may be missing some SI/DCE6 specifics
>>> - dc/amdgpu_dm/* changes may be incomplete
>>> - Any other omissions to be reviewed
>>> - Feedback on best testing strategy required
>>>
>>> Review from an expert of the DC impacted modules is recommended
>>>
>>> SW Layer
>>> /===============================================================\
>>> | DC        Display     Timing          Mode        Asic        |
>>> | Interface Service     Service         Manager     Capability* |
>>> |                                                               |
>>> | Display   Topology    Display         Link        Adapter     |
>>> | Path      Manager     Capability      Service     Service     |
>>> |           Service                                             |
>>> 

Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread sylvain . bertrand
I am currently testing your patch set, on amd-staging-drm-next
(380d480842d584278dba9aa74341017d8c7d8c23) with an AMD tahiti xt part and a
displayport monitor.
patch02 drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c did not apply but
seems kind of benign.

It's working out of the box on my AMD tahiti xt part. I did not manage to break
it with aggressive mode programming. Let's see how it goes with my everyday
usage.

> The series adds preliminary SI support as a Proof Of Concept, 
> based on the idea that DCE6 is similar to DCE8, to be reviewed and refined

I wanted to do it, but dropped it because the DC code was changing too much.
Brutally mapping DCE6 to DCE8 is an act of faith... and it's working on my
part.

> Android-x86 need/motivation lies in the following chain of dependencies: 
> Vulkan radv requires gbm gralloc prime_fd support,
> gbm gralloc requires drm hwcomposer,
> drm hwcomposer requires Atomic Display Framework, 
> Atomic Display Framework requires AMD DC, currently not supporting SI.
> 
> So the goals are:
> 1) to get Vulkan radv working on SI parts for android-x86.

AFAIK, Vulkan support is not dependent on the display block. I am running heavy
Vulkan games on a custom GNU/Linux x86_64 AMD-based system; so is the drm
hwcomposer Android-only?

> 2) to remove the gap in SI (GCN 1st gen) not having atomic support. 

I was nearly sure that atomic support was implicitly added for parts
supporting only legacy DRM mode programming interfaces?

-- 
Sylvain


Re: [PATCH] drm/amdgpu: fix AGP location with VRAM at 0x0

2018-10-08 Thread Christian König

Ping, can anybody review this?

Am 04.10.2018 um 11:02 schrieb Christian König:

That also simplifies handling quite a bit.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 7 ++-
  1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 9a5b252784a1..999e15945355 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -200,16 +200,13 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
}
  
  	if (size_bf > size_af) {

-   mc->agp_start = mc->fb_start > mc->gart_start ?
-   mc->gart_end + 1 : 0;
+   mc->agp_start = (mc->fb_start - size_bf) & sixteen_gb_mask;
mc->agp_size = size_bf;
} else {
-   mc->agp_start = (mc->fb_start > mc->gart_start ?
-   mc->fb_end : mc->gart_end) + 1,
+   mc->agp_start = ALIGN(mc->fb_end + 1, sixteen_gb);
mc->agp_size = size_af;
}
  
-	mc->agp_start = ALIGN(mc->agp_start, sixteen_gb);

mc->agp_end = mc->agp_start + mc->agp_size - 1;
dev_info(adev->dev, "AGP: %lluM 0x%016llX - 0x%016llX\n",
mc->agp_size >> 20, mc->agp_start, mc->agp_end);




Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Christian König

UVD/VCE on SI with amdgpu would need new firmware.

And so far we never had time to actually look into releasing that firmware.

Regards,
Christian.

Am 08.10.2018 um 13:22 schrieb Mauro Rossi:

Hi Mike,
On Mon, Oct 8, 2018 at 1:00 PM Mike Lothian wrote:


Hi Mauro

Do you know if there are any plans to add in UVD support on SI too?

Thanks

Mike


At the moment my focus is on getting a conformant, working and stable
implementation of Atomic Display Framework, with the objective to have it
upstreamed to amd-gfx branch, then staging (drm-next) and maybe merged in
linux kernel.

To be honest my attempt is based on code paths inspection and mimicking,
so in this moment I do not even know the state of UVD and what changes are
needed, but, based on what I saw for DCE6 support addition on top of DCE8,
covering all compatible HW modules makes a lot of sense and it is an
opportunity to exploit, if feasible.

For this to happen in most complete and reliable way the feedback of staff
who worked on DAL/DC will be essential, because what I did now was to adapt
code for DCE8 to work for DCE6, but it was like an "optimistic monkey with
a keyboard" approach, with all due respect for monkeys with keyboards,
:-) I may have missed dozens of changes.

Mauro


On Mon, 8 Oct 2018 at 03:24 Mauro Rossi <issor.or...@gmail.com> wrote:

[PATCH 01/10] drm/amd/display: add asics info for SI parts
[PATCH 02/10] drm/amd/display: dc/dce: add DCE6 support
[PATCH 03/10] drm/amd/display: dc/core: add DCE6 support
[PATCH 04/10] drm/amd/display: dc/bios: add support for DCE6
[PATCH 05/10] drm/amd/display: dc/gpio: add support for DCE6
[PATCH 06/10] drm/amd/display: dc/i2caux: add support for DCE6
[PATCH 07/10] drm/amd/display: dc/irq: add support for DCE6
[PATCH 08/10] drm/amd/display: amdgpu_dm: add SI support
[PATCH 09/10] drm/amdgpu: enable DC support for SI parts
[PATCH 10/10] drm/amd/display: enable SI support in the Kconfig

The series adds preliminary SI support as a Proof Of Concept,
based on the idea that DCE6 is similar to DCE8, to be reviewed
and refined

Android-x86 need/motivation lies in the following chain of
dependencies:
Vulkan radv requires gbm gralloc prime_fd support,
gbm gralloc requires drm hwcomposer,
drm hwcomposer requires Atomic Display Framework,
Atomic Display Framework requires AMD DC, currently not
supporting SI.

So the goals are:
1) to get Vulkan radv working on SI parts for android-x86.
2) to remove the gap in SI (GCN 1st gen) not having atomic
support.

DCE6 specific code was implemented as a replica of existing
DCE8 support
and based on how DCE8 specific code was added on top of
DCE10,11 support
by adding dce60* sources, functions, macros for each existing
in dce80*

CONFIG_DRM_AMD_DC_SI parameter has been added to control SI
support in DC

During this first iteration of review, there are aspects to
verify:
- dce60 code has been added mechanically, so there may be
redundancies
and space for refactoring part of the code
- dce60_resources was having too many building errors due to
missing DCE6 macros
in order to temporarily overcome the problem
dce_8_0_{d,sh_mask}.h headers
were used for the PoC
- dc/irq suffered the same problem dce_8_0_{d,sh_mask}.h headers
were used for the PoC
- gfx6 may require some ad hoc initialization, skipped for the
moment
- Hainan specific code requires review, as some documentation
and code paths
seem to point that family may not have DCE6, please confirm
- video decoding blocks code have not been touched
- dc/dce/dce_clock_source.{c,h} may be missing some SI/DCE6
specifics
- dc/dce/dce_dmcu.{c,h} may be missing some SI/DCE6 specifics
- dc/dce/dce_hwseq.h may be missing some SI/DCE6 specifics
- dc/dce/dce_link_encoder.h may be missing some SI/DCE6 specifics
- dc/dce/dce_stream_encoder.h may be missing some SI/DCE6
specifics
- dc/amdgpu_dm/* changes may be incomplete
- Any other omissions to be reviewed
- Feedback on best testing strategy required

Review from an expert of the DC impacted modules is recommended

    SW Layer
/===============================================================\
| DC        Display     Timing          Mode        Asic        |
| Interface Service     Service         Manager     Capability* |
|                                                               |
| Display   Topology    Display         Link        Adapter     |
| Path      Manager     Capability      Service     Service     |
|           Service                                             |

Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Mauro Rossi
Hi Mike,
On Mon, Oct 8, 2018 at 1:00 PM Mike Lothian  wrote:

> Hi Mauro
>
> Do you know if there are any plans to add in UVD support on SI too?
>
> Thanks
>
> Mike
>

At the moment my focus is on getting a conformant, working and stable
implementation of Atomic Display Framework, with the objective to have it
upstreamed to amd-gfx branch, then staging (drm-next) and maybe merged in
linux kernel.

To be honest my attempt is based on code paths inspection and mimicking,
so in this moment I do not even know the state of UVD and what changes are
needed,
but, based on what I saw for DCE6 support addition on top of DCE8,
covering all compatible HW modules makes a lot of sense and it is an
opportunity to exploit,
if feasible.

For this to happen in most complete and reliable way the feedback of staff
who worked on DAL/DC
will be essential, because what I did now was to adapt code for DCE8 to
work for DCE6,
but it was like an "optimistic monkey with a keyboard" approach, with all
due respect for monkeys with keyboards,
:-) I may have missed dozens of changes.

Mauro


>
> On Mon, 8 Oct 2018 at 03:24 Mauro Rossi  wrote:
>
>> [PATCH 01/10] drm/amd/display: add asics info for SI parts
>> [PATCH 02/10] drm/amd/display: dc/dce: add DCE6 support
>> [PATCH 03/10] drm/amd/display: dc/core: add DCE6 support
>> [PATCH 04/10] drm/amd/display: dc/bios: add support for DCE6
>> [PATCH 05/10] drm/amd/display: dc/gpio: add support for DCE6
>> [PATCH 06/10] drm/amd/display: dc/i2caux: add support for DCE6
>> [PATCH 07/10] drm/amd/display: dc/irq: add support for DCE6
>> [PATCH 08/10] drm/amd/display: amdgpu_dm: add SI support
>> [PATCH 09/10] drm/amdgpu: enable DC support for SI parts
>> [PATCH 10/10] drm/amd/display: enable SI support in the Kconfig
>>
>> The series adds preliminary SI support as a Proof Of Concept,
>> based on the idea that DCE6 is similar to DCE8, to be reviewed and refined
>>
>> Android-x86 need/motivation lies in the following chain of dependencies:
>> Vulkan radv requires gbm gralloc prime_fd support,
>> gbm gralloc requires drm hwcomposer,
>> drm hwcomposer requires Atomic Display Framework,
>> Atomic Display Framework requires AMD DC, currently not supporting SI.
>>
>> So the goals are:
>> 1) to get Vulkan radv working on SI parts for android-x86.
>> 2) to remove the gap in SI (GCN 1st gen) not having atomic support.
>>
>> DCE6 specific code was implemented as a replica of existing DCE8 support
>> and based on how DCE8 specific code was added on top of DCE10,11 support
>> by adding dce60* sources, functions, macros for each existing in dce80*
>>
>> CONFIG_DRM_AMD_DC_SI parameter has been added to control SI support in DC
>>
>> During this first iteration of review, there are aspects to verify:
>> - dce60 code has been added mechanically, so there may be redundancies
>> and space for refactoring part of the code
>> - dce60_resources was having too many building errors due to missing DCE6
>> macros
>> in order to temporarily overcome the problem dce_8_0_{d,sh_mask}.h headers
>> were used for the PoC
>> - dc/irq suffered the same problem dce_8_0_{d,sh_mask}.h headers
>> were used for the PoC
>> - gfx6 may require some ad hoc initialization, skipped for the moment
>> - Hainan specific code requires review, as some documentation and code
>> paths
>> seem to point that family may not have DCE6, please confirm
>> - video decoding blocks code have not been touched
>> - dc/dce/dce_clock_source.{c,h} may be missing some SI/DCE6 specifics
>> - dc/dce/dce_dmcu.{c,h} may be missing some SI/DCE6 specifics
>> - dc/dce/dce_hwseq.h may be missing some SI/DCE6 specifics
>> - dc/dce/dce_link_encoder.h may be missing some SI/DCE6 specifics
>> - dc/dce/dce_stream_encoder.h may be missing some SI/DCE6 specifics
>> - dc/amdgpu_dm/* changes may be incomplete
>> - Any other omissions to be reviewed
>> - Feedback on best testing strategy required
>>
>> Review from an expert of the DC impacted modules is recommended
>>
>> SW Layer
>> /===============================================================\
>> | DC        Display     Timing          Mode        Asic        |
>> | Interface Service     Service         Manager     Capability* |
>> |                                                               |
>> | Display   Topology    Display         Link        Adapter     |
>> | Path      Manager     Capability      Service     Service     |
>> |           Service                                             |
>> |---------------------------------------------------------------|
>> | GPIO*     IRQ         I2cAux          HW          BIOS        |
>> |           Service**   Manager*        Sequencer*  Parser*     |
>> |                                                               |
>> | Connector Encoder     Audio           GPU Controller          |
>> |                                                               |
>> \===============================================================/
>> HW Layer
>>
>> Legend:
>> *dce60 support was 

Re: [RFC] drm/amd/display: add SI support to AMD DC

2018-10-08 Thread Mike Lothian
Hi Mauro

Do you know if there are any plans to add in UVD support on SI too?

Thanks

Mike

On Mon, 8 Oct 2018 at 03:24 Mauro Rossi  wrote:

> [PATCH 01/10] drm/amd/display: add asics info for SI parts
> [PATCH 02/10] drm/amd/display: dc/dce: add DCE6 support
> [PATCH 03/10] drm/amd/display: dc/core: add DCE6 support
> [PATCH 04/10] drm/amd/display: dc/bios: add support for DCE6
> [PATCH 05/10] drm/amd/display: dc/gpio: add support for DCE6
> [PATCH 06/10] drm/amd/display: dc/i2caux: add support for DCE6
> [PATCH 07/10] drm/amd/display: dc/irq: add support for DCE6
> [PATCH 08/10] drm/amd/display: amdgpu_dm: add SI support
> [PATCH 09/10] drm/amdgpu: enable DC support for SI parts
> [PATCH 10/10] drm/amd/display: enable SI support in the Kconfig
>
> The series adds preliminary SI support as a Proof Of Concept,
> based on the idea that DCE6 is similar to DCE8, to be reviewed and refined
>
> Android-x86 need/motivation lies in the following chain of dependencies:
> Vulkan radv requires gbm gralloc prime_fd support,
> gbm gralloc requires drm hwcomposer,
> drm hwcomposer requires Atomic Display Framework,
> Atomic Display Framework requires AMD DC, currently not supporting SI.
>
> So the goals are:
> 1) to get Vulkan radv working on SI parts for android-x86.
> 2) to remove the gap in SI (GCN 1st gen) not having atomic support.
>
> DCE6 specific code was implemented as a replica of existing DCE8 support
> and based on how DCE8 specific code was added on top of DCE10,11 support
> by adding dce60* sources, functions, macros for each existing in dce80*
>
> CONFIG_DRM_AMD_DC_SI parameter has been added to control SI support in DC
>
> During this first iteration of review, there are aspects to verify:
> - dce60 code has been added mechanically, so there may be redundancies
> and space for refactoring part of the code
> - dce60_resources was having too many building errors due to missing DCE6
> macros
> in order to temporarily overcome the problem dce_8_0_{d,sh_mask}.h headers
> were used for the PoC
> - dc/irq suffered the same problem dce_8_0_{d,sh_mask}.h headers
> were used for the PoC
> - gfx6 may require some ad hoc initialization, skipped for the moment
> - Hainan specific code requires review, as some documentation and code
> paths
> seem to point that family may not have DCE6, please confirm
> - video decoding blocks code have not been touched
> - dc/dce/dce_clock_source.{c,h} may be missing some SI/DCE6 specifics
> - dc/dce/dce_dmcu.{c,h} may be missing some SI/DCE6 specifics
> - dc/dce/dce_hwseq.h may be missing some SI/DCE6 specifics
> - dc/dce/dce_link_encoder.h may be missing some SI/DCE6 specifics
> - dc/dce/dce_stream_encoder.h may be missing some SI/DCE6 specifics
> - dc/amdgpu_dm/* changes may be incomplete
> - Any other omissions to be reviewed
> - Feedback on best testing strategy required
>
> Review from an expert of the DC impacted modules is recommended
>
> SW Layer
> /===============================================================\
> | DC        Display     Timing          Mode        Asic        |
> | Interface Service     Service         Manager     Capability* |
> |                                                               |
> | Display   Topology    Display         Link        Adapter     |
> | Path      Manager     Capability      Service     Service     |
> |           Service                                             |
> |---------------------------------------------------------------|
> | GPIO*     IRQ         I2cAux          HW          BIOS        |
> |           Service**   Manager*        Sequencer*  Parser*     |
> |                                                               |
> | Connector Encoder     Audio           GPU Controller          |
> |                                                               |
> \===============================================================/
> HW Layer
>
> Legend:
> *dce60 support was added cleanly with dce_6_0_{d,sh_mask}.h headers
> **dce60 support was added using dce_8_0_{d,sh_mask}.h headers
>
> Android-x86 preliminary tests results:
>
> [Boots with drm gralloc]
> 3DMark Slingshot
> GFXbench OpenGLES benchmarks OK
> V1 GPU benchmark (OpenGLES) OK
> Regression in Google Chrome, Youtube (app does not show up)
> Regression in Olympus Rising,  Chicken Invaders (app does not show up)
>
> [Boots with drm hwcomposer + gbm gralloc]
> Google Chrome, Youtube are OK
> Vulkan radv HAL API becomes available with hwc+gbm gralloc
> V1 GPU benchmark (Vulkan API) OK
> Sacha Willems examples OK
> Some glitch/freeze in 3DMark Slingshot Extreeme and API overhead
>
> Kind regards
>
> Mauro Rossi
> android-x86 team
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>


Re: [PATCH v2 2/3] drm: Add variable refresh property to DRM CRTC

2018-10-08 Thread Michel Dänzer
On 2018-10-05 7:48 p.m., Kazlauskas, Nicholas wrote:
> On 10/05/2018 12:56 PM, Michel Dänzer wrote:
>> On 2018-10-05 6:21 p.m., Kazlauskas, Nicholas wrote:
>>> On 10/05/2018 04:10 AM, Pekka Paalanen wrote:

 I also believe that it would be useful to expose vmin/vmax to userspace
 as properties, so that display servers and apps can plan ahead on when
 they will render. I suppose that can be left for later when someone
 starts working on userspace taking advantage of it, so that the proper
 userspace requirement for new UABI is fulfilled.
>>>
>>> Knowing the vmin/vmax could potentially be useful for testing but most
>>> applications shouldn't be trying to predict or adjust rendering based on
>>> these.
>>
>> FWIW, recent research indicates that this is necessary for perfectly
>> smooth animation without micro-stuttering, even with VRR.
> 
> This was brought up in previous RFCs about this but I'm not really
> convinced by the research methodology and the results from those threads.
> 
> Targeting the max vrefresh of the display (which is known without
> additional properties) should be best for prediction since anything
> lower would be inherently less smooth by definition.

Smooth animation isn't only about frame-rate, it's also about the timing
of each frame's presentation and its contents being consistent with each
other[0]. To ensure this, the application has to know the constraints
for when the next frame can be presented, and it has to be able to
control when the frame is presented.


[0] One example of this is https://www.youtube.com/watch?v=V4fgYS38aQg .
When this video is played back at 60 fps, the upper sphere moves
smoothly, but the lower sphere sometimes jumps back and forth slightly
(it's pretty subtle, might take a while to notice), because its position
isn't always consistent with the frame's presentation timing.

-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Christian König

Am 05.10.2018 um 10:38 schrieb Guenter Roeck:

On 10/05/2018 01:14 AM, Koenig, Christian wrote:

Am 04.10.2018 um 20:52 schrieb Guenter Roeck:

Hi,

On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:

drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
  In function ‘gmc_v8_0_process_interrupt’:
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
  warning: missing braces around initializer [-Wmissing-braces]

Signed-off-by: Peng Hao 

Was there any feedback on this patch ? The problem does affect us,
and we'll need a fix.


Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".



Ah, sorry, I must have missed the discussion.

It is for sure not the best solution, but at least it compiles, and it seems
to be proliferating.


Yeah, and exactly that's the problem. As the discussion showed, "{ { 0 } }"
is buggy because it tells the compiler to only initialize the first
member of the structure, but not all of it.


That is incorrect and rather dangerous because it can lead to unforeseen
results and should probably trigger a warning.




$ git grep "{ *{ *0 *} *}" | wc
    144    1180   11802
$ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
     50     459    5239


We should either use only "{ }" or even better make nails with heads and
use memset().


I'd rather leave it up to the compiler to decide what is most efficient.


And I would rather prefer to have a working driver :)

Christian.



Guenter



Christian.




---
   drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 2 +-
   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
   2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 9333109..55f4755 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -1444,7 +1444,7 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
 	gmc_v8_0_set_fault_enable_default(adev, false);
 
 	if (printk_ratelimit()) {
-		struct amdgpu_task_info task_info = { 0 };
+		struct amdgpu_task_info task_info = { {0} };

I wondered if
    struct amdgpu_task_info task_info = { };
would be better.

Thanks,
Guenter


 		amdgpu_vm_get_task_info(adev, entry->pasid, &task_info);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 72f8018..e8b78c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -259,7 +259,7 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
 	}
 
 	if (printk_ratelimit()) {
-		struct amdgpu_task_info task_info = { 0 };
+		struct amdgpu_task_info task_info = { {0} };
 		amdgpu_vm_get_task_info(adev, entry->pasid, &task_info);




___
dri-devel mailing list
dri-de...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel




Re: [PATCH v2 05/17] compat_ioctl: move more drivers to generic_compat_ioctl_ptrarg

2018-10-08 Thread Bjorn Andersson
On Wed 12 Sep 08:08 PDT 2018, Arnd Bergmann wrote:

[..]
> diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
> index a76b963a7e50..02aefb2b2d47 100644
> --- a/drivers/rpmsg/rpmsg_char.c
> +++ b/drivers/rpmsg/rpmsg_char.c
> @@ -285,7 +285,7 @@ static const struct file_operations rpmsg_eptdev_fops = {
>   .write = rpmsg_eptdev_write,
>   .poll = rpmsg_eptdev_poll,
>   .unlocked_ioctl = rpmsg_eptdev_ioctl,
> - .compat_ioctl = rpmsg_eptdev_ioctl,
> + .compat_ioctl = generic_compat_ioctl_ptrarg,
>  };
>  
>  static ssize_t name_show(struct device *dev, struct device_attribute *attr,
> @@ -446,7 +446,7 @@ static const struct file_operations rpmsg_ctrldev_fops = {
>   .open = rpmsg_ctrldev_open,
>   .release = rpmsg_ctrldev_release,
>   .unlocked_ioctl = rpmsg_ctrldev_ioctl,
> - .compat_ioctl = rpmsg_ctrldev_ioctl,
> + .compat_ioctl = generic_compat_ioctl_ptrarg,
>  };
>  

For rpmsg part

Acked-by: Bjorn Andersson 

Regards,
Bjorn


Fwd: amd-gpu ac adaptor bug

2018-10-08 Thread UTKU HELVACI
Kernels newer than 4.16 have this "amd-gpu ac adapter" bug, which makes
the computer useless if the AC adapter is plugged in. 4.19 and newer
don't boot, or, as I tested, 4.19-rc4 boots but makes the computer unusable.


I have filed a bug report in Bugzilla here: 
https://bugzilla.kernel.org/show_bug.cgi?id=201077



Re: [PATCH] drm/radeon: ratelimit bo warnings

2018-10-08 Thread Nick Alcock
On 5 Oct 2018, Michel Dänzer told this:

> On 2018-10-04 9:58 p.m., Nick Alcock wrote:
>> So a few days ago I started getting sprays of these warnings --
>> sorry, but because it was a few days ago I'm not sure what I was
>> running at the time (but it was probably either Stellaris or Chromium).
>> 
>> Sep 25 22:06:34 mutilate err: : [  544.718905] [drm:radeon_cs_parser_relocs] 
>> *ERROR* gem object lookup failed 0xc
>> Sep 25 22:06:34 mutilate err: : [  544.718909] [drm:radeon_cs_ioctl] *ERROR* 
>> Failed to parse relocation -2!
>> Sep 25 22:06:34 mutilate err: : [  544.719710] [drm:radeon_cs_parser_relocs] 
>> *ERROR* gem object lookup failed 0xc
>> Sep 25 22:06:34 mutilate err: : [  544.719714] [drm:radeon_cs_ioctl] *ERROR* 
>> Failed to parse relocation -2!
>> Sep 25 22:06:34 mutilate err: : [  544.719862] [drm:radeon_cs_parser_relocs] 
>> *ERROR* gem object lookup failed 0xc
>> Sep 25 22:06:34 mutilate err: : [  544.719865] [drm:radeon_cs_ioctl] *ERROR* 
>> Failed to parse relocation -2!
>> Sep 25 22:06:34 mutilate err: : [  544.720772] [drm:radeon_cs_parser_relocs] 
>> *ERROR* gem object lookup failed 0xc
>> Sep 25 22:06:34 mutilate err: : [  544.720778] [drm:radeon_cs_ioctl] *ERROR* 
>> Failed to parse relocation -2!
>
> These are likely due to https://bugs.freedesktop.org/105381 , fixed in
> xf86-video-ati 18.1.0.

This is with X server 1.20.0 and xf86-video-ati 18.0.1, so I concur.

I'll upgrade once I'm back near the machine in question (I mean, I could
upgrade now but I'm 200 miles away so running an X server on it at
present is a bit pointless).

>> Sep 25 22:06:34 mutilate warning: : [  544.721415] radeon :01:00.0: vbo 
>> resource seems too big for the bo
>
> Not sure this can also be caused by the above, but I guess it's possible.

It can clearly be caused by *something*, and without anything obvious
going wrong in the user interface you can clearly get crazy log flooding
without the user being any the wiser. It doesn't really matter what
causes it, just that it is causable. :)

>> This patch is against 4.18.11: I saw the warnings on 4.17.6 with Mesa
>> 18.1.2, but nothing much seems to have changed in this area so I bet
>> this could recur.
>
> Not sure it makes sense to have the last paragraph in the Git commit
> log, but either way:

Yeah, I stuck it in the wrong place in the mail. (I meant to move it and
then completely forgot. Mea culpa.)

> Reviewed-by: Michel Dänzer 

Thanks!

-- 
NULL && (void)


Re: [PATCH] amdgpu/gmc : fix compile warning

2018-10-08 Thread Guenter Roeck

On 10/05/2018 01:14 AM, Koenig, Christian wrote:

Am 04.10.2018 um 20:52 schrieb Guenter Roeck:

Hi,

On Fri, Sep 14, 2018 at 06:05:52PM +0800, Peng Hao wrote:

drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:
  In function ‘gmc_v8_0_process_interrupt’:
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:1447:10:
  warning: missing braces around initializer [-Wmissing-braces]

Signed-off-by: Peng Hao 

Was there any feedback on this patch ? The problem does affect us,
and we'll need a fix.


Well as discussed using "{ { 0 } }" is as wrong as using "{ 0 }".



Ah, sorry, I must have missed the discussion.

It is for sure not the best solution, but at least it compiles, and it seems
to be proliferating.

$ git grep "{ *{ *0 *} *}" | wc
    144    1180   11802
$ git grep "{ *{ *0 *} *}" drivers/gpu/drm/amd/ | wc
     50     459    5239


We should either use only "{ }" or even better make nails with heads and
use memset().


I'd rather leave it up to the compiler to decide what is most efficient.

Guenter



Christian.




---
   drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 2 +-
   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 2 +-
   2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 9333109..55f4755 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -1444,7 +1444,7 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
 	gmc_v8_0_set_fault_enable_default(adev, false);
 
 	if (printk_ratelimit()) {
-		struct amdgpu_task_info task_info = { 0 };
+		struct amdgpu_task_info task_info = { {0} };

I wondered if
struct amdgpu_task_info task_info = { };
would be better.

Thanks,
Guenter


 		amdgpu_vm_get_task_info(adev, entry->pasid, &task_info);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 72f8018..e8b78c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -259,7 +259,7 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
 	}
 
 	if (printk_ratelimit()) {
-		struct amdgpu_task_info task_info = { 0 };
+		struct amdgpu_task_info task_info = { {0} };
 		amdgpu_vm_get_task_info(adev, entry->pasid, &task_info);
   



