Re: [PATCH 9/9] drm/amdgpu: WIP add IOCTL interface for per VM BOs

2017-08-25 Thread Felix Kuehling
That's clever. I was scratching my head wondering where the BOs were getting
validated just by sharing the VM reservation object, until I carefully
read your previous commit.

In general, I find the VM code extremely frustrating and confusing to
review. There are too many lists of different things, and it's really hard
to keep track of which list tracks what type of object in which situation.

For example, take struct amdgpu_vm:

/* BOs who needs a validation */
struct list_head evicted;

/* BOs moved, but not yet updated in the PT */
struct list_head moved;

/* BO mappings freed, but not yet updated in the PT */
struct list_head freed;

Three lists of BOs (according to the comments). But evicted and moved
are lists of amdgpu_vm_bo_base, freed is a list of amdgpu_bo_va_mapping.
moved and freed are used for tracking BOs mapped in the VM. I think
moved may also track page table BOs, but I'm not sure. evicted is used
only for tracking page table BOs.

In patch #7 you add relocated to the mix. Now it gets really funny.
What's the difference between relocated, evicted, and moved? It seems
PT BOs can be on any of these lists. I think evicted means the BO needs
to be validated. moved or relocated means it's been validated but its
mappings must be updated. For PT BOs and mapped BOs that means different
things, so it makes sense to have different lists for them. But I think
PT BOs can also end up on the moved list when amdgpu_vm_bo_invalidate is
called for a page table BO (through amdgpu_bo_move_notify). So I'm still
confused.

I think this could be clarified with more descriptive names for the
lists. If PT BOs and mapped BOs must be tracked separately, that should
be clear from the names.
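To make the suggestion concrete, a renaming along these lines would at least make the split visible. All names here are purely hypothetical illustrations of the idea, not a proposed patch, and a minimal stand-in for list_head is included so the sketch compiles outside the kernel:

```c
/* Minimal stand-in so this sketch compiles outside the kernel. */
struct list_head { struct list_head *next, *prev; };

/*
 * Hypothetical renaming, purely illustrative: make it obvious from the
 * name whether a list tracks page table BOs or mapped BOs, and what
 * state its members are in.
 */
struct amdgpu_vm_lists {
	/* page table BOs that need validation */
	struct list_head pt_evicted;
	/* page table BOs validated, but whose entries are stale */
	struct list_head pt_relocated;
	/* mapped BOs moved, but not yet updated in the page tables */
	struct list_head va_moved;
	/* mappings freed, but not yet removed from the page tables */
	struct list_head va_freed;
};
```

With names like these, a reader would not have to remember which of evicted/moved/relocated holds page table BOs and which holds mapped BOs.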



Regards,
  Felix


On 2017-08-25 05:38 AM, Christian König wrote:
> From: Christian König 
>
> Add the IOCTL interface so that applications can allocate per VM BOs.
>
> Still WIP since not all corner cases are tested yet, but this reduces average
> CS overhead for 10K BOs from 21ms down to 48us.
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  7 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   | 59 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  3 +-
>  include/uapi/drm/amdgpu_drm.h |  2 ++
>  5 files changed, 51 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index b1e817c..21cab36 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -457,9 +457,10 @@ struct amdgpu_sa_bo {
>   */
>  void amdgpu_gem_force_release(struct amdgpu_device *adev);
>  int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
> - int alignment, u32 initial_domain,
> - u64 flags, bool kernel,
> - struct drm_gem_object **obj);
> +  int alignment, u32 initial_domain,
> +  u64 flags, bool kernel,
> +  struct reservation_object *resv,
> +  struct drm_gem_object **obj);
>  
>  int amdgpu_mode_dumb_create(struct drm_file *file_priv,
>   struct drm_device *dev,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 0e907ea..7256f83 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -144,7 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
> amdgpu_fbdev *rfbdev,
>  AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
>  AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
>  AMDGPU_GEM_CREATE_VRAM_CLEARED,
> -true, );
> +true, NULL, );
>   if (ret) {
>   pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
>   return -ENOMEM;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index d028806..b8e8d67 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -44,11 +44,12 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
>  }
>  
>  int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
> - int alignment, u32 initial_domain,
> - u64 flags, bool kernel,
> - struct drm_gem_object **obj)
> +  int alignment, u32 initial_domain,
> +  u64 flags, bool kernel,
> +  struct reservation_object *resv,
> +   

RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Deucher, Alexander
> Sent: Friday, August 25, 2017 4:07 PM
> To: Kuehling, Felix; amd-gfx@lists.freedesktop.org
> Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> 
> > -Original Message-
> > From: Kuehling, Felix
> > Sent: Friday, August 25, 2017 3:59 PM
> > To: Deucher, Alexander; amd-gfx@lists.freedesktop.org
> > Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> >
> > On 2017-08-25 03:40 PM, Deucher, Alexander wrote:
> > >> -Original Message-
> > >> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
> > Behalf
> > >> Of Felix Kuehling
> > >> Sent: Friday, August 25, 2017 3:34 PM
> > >> To: amd-gfx@lists.freedesktop.org
> > >> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> > >>
> > >> I think the power measurement is a bit of a hack right now. It requires
> > >> pinging the SMU twice to start and stop the measurement, and waiting
> > for
> > >> some time (but not too long) in between. I think if we want to expose
> > >> this via hwmon, we'll need to have some kind of timer that polls it in
> > >> regular intervals in the background and updates a value that can be
> > >> queried through HWMon without delay.
> > > I don't know that hwmon has any requirements with respect to delay.  I
> > think a slight delay in reading it back through hwmon is preferable to a
> > background thread polling it. Regular polling may also have negative
> impacts
> > on performance or stability, depending on how it was validated.
> >
> > If an application is reading hwmon data from 4 GPUs and each query for
> > power usage from each GPU takes one second, that would seriously limit
> > the update frequency of the application.
> >
> > Maybe the first read could block long enough to get useful data. But
> > subsequent reads could probably avoid blocking if we keep track of the
> > time stamp of the last query. If it was very recently, we can return the
> > last value. If it was long enough ago, we can get an updated value from
> > the SMU. If the last query was too long ago, we start over and block for
> > a second.
> 
> Could we just set the sampling period down?  If the application is polling for
> instantaneous power we'd probably want a short smu sampling period
> anyway.  Otherwise, the application shouldn’t be polling so frequently in the
> first place.

Another alternative would be to expose the SMU interface directly.  So getting 
the power would be something like:

echo 1 > power
#wait
echo 0 > power
cat power

or:
echo 2 > power # seconds to sample
cat power #returns -EBUSY until the sampling period is done
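A rough model of that second variant, written as plain C rather than an actual sysfs store/show pair. All names, the timing plumbing, and the -EBUSY/-EINVAL choices are my illustration of the idea, not existing driver code:

```c
#include <errno.h>

/* Hypothetical model of "echo 2 > power; cat power": writing starts a
 * timed sample, reading returns -EBUSY until the period has elapsed. */
struct power_file {
	long long start_ms;     /* when sampling began, 0 = never started */
	long long period_ms;    /* requested sampling window */
};

/* Stand-in for the SMU result once the window is over (fake value). */
static unsigned int smu_read_result(void) { return 42000000; }

/* "echo N > power": record the start time and the window length. */
int power_store(struct power_file *pf, long long seconds, long long now_ms)
{
	pf->start_ms = now_ms;
	pf->period_ms = seconds * 1000;
	return 0;
}

/* "cat power": -EBUSY while sampling, the measured value afterwards. */
int power_show(struct power_file *pf, long long now_ms, unsigned int *out)
{
	if (!pf->start_ms)
		return -EINVAL;                 /* nothing sampled yet */
	if (now_ms - pf->start_ms < pf->period_ms)
		return -EBUSY;                  /* still sampling */
	*out = smu_read_result();
	return 0;
}
```

The appeal of this shape is that the kernel never has to poll on its own; the application pays exactly for the sampling window it asked for.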

Alex

> 
> Alex
> 
> >
> > Regards,
> >   Felix
> >
> > > Alex
> > >
> > >> Regards,
> > >>   Felix
> > >>
> > >>
> > >> On 2017-08-25 11:56 AM, Deucher, Alexander wrote:
> >  -Original Message-
> >  From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
> > >> Behalf
> >  Of Russell, Kent
> >  Sent: Friday, August 25, 2017 9:00 AM
> >  To: StDenis, Tom; Christian König; amd-gfx@lists.freedesktop.org
> >  Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> > 
> >  There is GPU Power usage reported through amdgpu_pm_info,
> which
> > >> also
> >  has some other information as well. I'd like that in sysfs, but I am
> > unsure if
> >  we are  allowed to due to the other information reported there.
> > >>> For power and voltage, I believe there are standard hwmon interfaces.
> > It
> > >> would probably be best to expose it via those.  For clocks/pcie, I think
> you
> > >> can already determine them via the existing pp sysfs interfaces.
> > >>> Alex
> > >>>
> >   Kent
> > 
> >  -Original Message-
> >  From: StDenis, Tom
> >  Sent: Friday, August 25, 2017 8:58 AM
> >  To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
> >  Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> > 
> >  On 25/08/17 08:56 AM, Christian König wrote:
> > > Hi Kent,
> > >
> > > agree on the VBIOS dump file, that clearly belongs to debugfs.
> > >
> > > The power usage stuff I can't say much about cause I'm not deeply
> > into
> > > this, but keep in mind the restriction for sysfs:
> > > 1. It's a stable interface. So it must be very well designed.
> > > 2. Only one value per file. I think the power stuff doesn't fulfill
> > > that requirement at the moment.
> >  What "power" stuff are we talking about?  The sensors interface or
> the
> >  pm_info or something else?
> > 
> >  Keep in mind umr uses the sensors debugfs file in --top mode.
> > 
> >  Tom
> >  ___
> >  amd-gfx mailing list
> >  amd-gfx@lists.freedesktop.org
> >  https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > >>> ___
> > 

RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Deucher, Alexander
> -Original Message-
> From: Kuehling, Felix
> Sent: Friday, August 25, 2017 3:59 PM
> To: Deucher, Alexander; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> 
> On 2017-08-25 03:40 PM, Deucher, Alexander wrote:
> >> -Original Message-
> >> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
> Behalf
> >> Of Felix Kuehling
> >> Sent: Friday, August 25, 2017 3:34 PM
> >> To: amd-gfx@lists.freedesktop.org
> >> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> >>
> >> I think the power measurement is a bit of a hack right now. It requires
> >> pinging the SMU twice to start and stop the measurement, and waiting
> for
> >> some time (but not too long) in between. I think if we want to expose
> >> this via hwmon, we'll need to have some kind of timer that polls it in
> >> regular intervals in the background and updates a value that can be
> >> queried through HWMon without delay.
> > I don't know that hwmon has any requirements with respect to delay.  I
> think a slight delay in reading it back through hwmon is preferable to a
> background thread polling it. Regular polling may also have negative impacts
> on performance or stability, depending on how it was validated.
> 
> If an application is reading hwmon data from 4 GPUs and each query for
> power usage from each GPU takes one second, that would seriously limit
> the update frequency of the application.
> 
> Maybe the first read could block long enough to get useful data. But
> subsequent reads could probably avoid blocking if we keep track of the
> time stamp of the last query. If it was very recently, we can return the
> last value. If it was long enough ago, we can get an updated value from
> the SMU. If the last query was too long ago, we start over and block for
> a second.

Could we just set the sampling period down?  If the application is polling for 
instantaneous power we'd probably want a short smu sampling period anyway.  
Otherwise, the application shouldn’t be polling so frequently in the first 
place.

Alex

> 
> Regards,
>   Felix
> 
> > Alex
> >
> >> Regards,
> >>   Felix
> >>
> >>
> >> On 2017-08-25 11:56 AM, Deucher, Alexander wrote:
>  -Original Message-
>  From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
> >> Behalf
>  Of Russell, Kent
>  Sent: Friday, August 25, 2017 9:00 AM
>  To: StDenis, Tom; Christian König; amd-gfx@lists.freedesktop.org
>  Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> 
>  There is GPU Power usage reported through amdgpu_pm_info, which
> >> also
>  has some other information as well. I'd like that in sysfs, but I am
> unsure if
>  we are  allowed to due to the other information reported there.
> >>> For power and voltage, I believe there are standard hwmon interfaces.
> It
> >> would probably be best to expose it via those.  For clocks/pcie, I think 
> >> you
> >> can already determine them via the existing pp sysfs interfaces.
> >>> Alex
> >>>
>   Kent
> 
>  -Original Message-
>  From: StDenis, Tom
>  Sent: Friday, August 25, 2017 8:58 AM
>  To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
>  Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> 
>  On 25/08/17 08:56 AM, Christian König wrote:
> > Hi Kent,
> >
> > agree on the VBIOS dump file, that clearly belongs to debugfs.
> >
> > The power usage stuff I can't say much about cause I'm not deeply
> into
> > this, but keep in mind the restriction for sysfs:
> > 1. It's a stable interface. So it must be very well designed.
> > 2. Only one value per file. I think the power stuff doesn't fulfill
> > that requirement at the moment.
>  What "power" stuff are we talking about?  The sensors interface or the
>  pm_info or something else?
> 
>  Keep in mind umr uses the sensors debugfs file in --top mode.
> 
>  Tom
>  ___
>  amd-gfx mailing list
>  amd-gfx@lists.freedesktop.org
>  https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>> ___
> >>> amd-gfx mailing list
> >>> amd-gfx@lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >> ___
> >> amd-gfx mailing list
> >> amd-gfx@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Felix Kuehling
On 2017-08-25 03:40 PM, Deucher, Alexander wrote:
>> -Original Message-
>> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
>> Of Felix Kuehling
>> Sent: Friday, August 25, 2017 3:34 PM
>> To: amd-gfx@lists.freedesktop.org
>> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>
>> I think the power measurement is a bit of a hack right now. It requires
>> pinging the SMU twice to start and stop the measurement, and waiting for
>> some time (but not too long) in between. I think if we want to expose
>> this via hwmon, we'll need to have some kind of timer that polls it in
>> regular intervals in the background and updates a value that can be
>> queried through HWMon without delay.
> I don't know that hwmon has any requirements with respect to delay.  I think 
> a slight delay in reading it back through hwmon is preferable to a background 
> thread polling it. Regular polling may also have negative impacts on 
> performance or stability, depending on how it was validated.

If an application is reading hwmon data from 4 GPUs and each query for
power usage from each GPU takes one second, that would seriously limit
the update frequency of the application.

Maybe the first read could block long enough to get useful data. But
subsequent reads could probably avoid blocking if we keep track of the
time stamp of the last query. If it was very recently, we can return the
last value. If it was long enough ago, we can get an updated value from
the SMU. If the last query was too long ago, we start over and block for
a second.

Regards,
  Felix

> Alex
>
>> Regards,
>>   Felix
>>
>>
>> On 2017-08-25 11:56 AM, Deucher, Alexander wrote:
 -Original Message-
 From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
>> Behalf
 Of Russell, Kent
 Sent: Friday, August 25, 2017 9:00 AM
 To: StDenis, Tom; Christian König; amd-gfx@lists.freedesktop.org
 Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

 There is GPU Power usage reported through amdgpu_pm_info, which
>> also
 has some other information as well. I'd like that in sysfs, but I am 
 unsure if
 we are  allowed to due to the other information reported there.
>>> For power and voltage, I believe there are standard hwmon interfaces.  It
>> would probably be best to expose it via those.  For clocks/pcie, I think you
>> can already determine them via the existing pp sysfs interfaces.
>>> Alex
>>>
  Kent

 -Original Message-
 From: StDenis, Tom
 Sent: Friday, August 25, 2017 8:58 AM
 To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
 Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

 On 25/08/17 08:56 AM, Christian König wrote:
> Hi Kent,
>
> agree on the VBIOS dump file, that clearly belongs to debugfs.
>
> The power usage stuff I can't say much about cause I'm not deeply into
> this, but keep in mind the restriction for sysfs:
> 1. It's a stable interface. So it must be very well designed.
> 2. Only one value per file. I think the power stuff doesn't fulfill
> that requirement at the moment.
 What "power" stuff are we talking about?  The sensors interface or the
 pm_info or something else?

 Keep in mind umr uses the sensors debugfs file in --top mode.

 Tom
 ___
 amd-gfx mailing list
 amd-gfx@lists.freedesktop.org
 https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>> ___
>>> amd-gfx mailing list
>>> amd-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>> ___
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx



RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Felix Kuehling
> Sent: Friday, August 25, 2017 3:34 PM
> To: amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> 
> I think the power measurement is a bit of a hack right now. It requires
> pinging the SMU twice to start and stop the measurement, and waiting for
> some time (but not too long) in between. I think if we want to expose
> this via hwmon, we'll need to have some kind of timer that polls it in
> regular intervals in the background and updates a value that can be
> queried through HWMon without delay.

I don't know that hwmon has any requirements with respect to delay.  I think a 
slight delay in reading it back through hwmon is preferable to a background 
thread polling it. Regular polling may also have negative impacts on 
performance or stability, depending on how it was validated.

Alex

> 
> Regards,
>   Felix
> 
> 
> On 2017-08-25 11:56 AM, Deucher, Alexander wrote:
> >> -Original Message-
> >> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On
> Behalf
> >> Of Russell, Kent
> >> Sent: Friday, August 25, 2017 9:00 AM
> >> To: StDenis, Tom; Christian König; amd-gfx@lists.freedesktop.org
> >> Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> >>
> >> There is GPU Power usage reported through amdgpu_pm_info, which
> also
> >> has some other information as well. I'd like that in sysfs, but I am 
> >> unsure if
> >> we are  allowed to due to the other information reported there.
> > For power and voltage, I believe there are standard hwmon interfaces.  It
> would probably be best to expose it via those.  For clocks/pcie, I think you
> can already determine them via the existing pp sysfs interfaces.
> >
> > Alex
> >
> >>  Kent
> >>
> >> -Original Message-
> >> From: StDenis, Tom
> >> Sent: Friday, August 25, 2017 8:58 AM
> >> To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
> >> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
> >>
> >> On 25/08/17 08:56 AM, Christian König wrote:
> >>> Hi Kent,
> >>>
> >>> agree on the VBIOS dump file, that clearly belongs to debugfs.
> >>>
> >>> The power usage stuff I can't say much about cause I'm not deeply into
> >>> this, but keep in mind the restriction for sysfs:
> >>> 1. It's a stable interface. So it must be very well designed.
> >>> 2. Only one value per file. I think the power stuff doesn't fulfill
> >>> that requirement at the moment.
> >> What "power" stuff are we talking about?  The sensors interface or the
> >> pm_info or something else?
> >>
> >> Keep in mind umr uses the sensors debugfs file in --top mode.
> >>
> >> Tom
> >> ___
> >> amd-gfx mailing list
> >> amd-gfx@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Tom St Denis

On 25/08/17 03:33 PM, Felix Kuehling wrote:

I think the power measurement is a bit of a hack right now. It requires
pinging the SMU twice to start and stop the measurement, and waiting for
some time (but not too long) in between. I think if we want to expose
this via hwmon, we'll need to have some kind of timer that polls it in
regular intervals in the background and updates a value that can be
queried through HWMon without delay.


We had this debate about 8 months ago :-)

I don't know that adding an ever-present kthread just for this is a great
use of resources, though.


In umr I read the sensors in a separate thread from the main UI so that
the slow reads don't stall the interface.
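The umr approach Tom describes can be sketched in plain C; the names and the fake sensor value below are hypothetical, not actual umr code. A worker thread samples at its own pace, and the UI only ever reads the latest cached value:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

/* Latest sensor sample, shared lock-free between poller and UI. */
static _Atomic unsigned int latest_power_uw;
static _Atomic int stop_polling;

/* Stand-in for the slow sensor read (umr would hit debugfs here). */
static unsigned int read_sensor(void) { return 42; }

/* Background poller: samples at a fixed period until asked to stop. */
static void *poll_thread(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop_polling)) {
		atomic_store(&latest_power_uw, read_sensor());
		usleep(100000);   /* 100 ms sampling period */
	}
	return NULL;
}

/* UI side: never blocks on the sensor itself. */
unsigned int ui_get_power(void)
{
	return atomic_load(&latest_power_uw);
}
```

This is essentially the same trade-off as the proposed in-kernel timer, just made in userspace where the polling cost is only paid while the tool is running.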


Tom


Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Felix Kuehling
I think the power measurement is a bit of a hack right now. It requires
pinging the SMU twice to start and stop the measurement, and waiting for
some time (but not too long) in between. I think if we want to expose
this via hwmon, we'll need to have some kind of timer that polls it in
regular intervals in the background and updates a value that can be
queried through HWMon without delay.

Regards,
  Felix


On 2017-08-25 11:56 AM, Deucher, Alexander wrote:
>> -Original Message-
>> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
>> Of Russell, Kent
>> Sent: Friday, August 25, 2017 9:00 AM
>> To: StDenis, Tom; Christian König; amd-gfx@lists.freedesktop.org
>> Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>
>> There is GPU Power usage reported through amdgpu_pm_info, which also
>> has some other information as well. I'd like that in sysfs, but I am unsure 
>> if
>> we are  allowed to due to the other information reported there.
> For power and voltage, I believe there are standard hwmon interfaces.  It 
> would probably be best to expose it via those.  For clocks/pcie, I think you 
> can already determine them via the existing pp sysfs interfaces.
>
> Alex
>
>>  Kent
>>
>> -Original Message-
>> From: StDenis, Tom
>> Sent: Friday, August 25, 2017 8:58 AM
>> To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
>> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>
>> On 25/08/17 08:56 AM, Christian König wrote:
>>> Hi Kent,
>>>
>>> agree on the VBIOS dump file, that clearly belongs to debugfs.
>>>
>>> The power usage stuff I can't say much about cause I'm not deeply into
>>> this, but keep in mind the restriction for sysfs:
>>> 1. It's a stable interface. So it must be very well designed.
>>> 2. Only one value per file. I think the power stuff doesn't fulfill
>>> that requirement at the moment.
>> What "power" stuff are we talking about?  The sensors interface or the
>> pm_info or something else?
>>
>> Keep in mind umr uses the sensors debugfs file in --top mode.
>>
>> Tom
>> ___
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx



Re: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

2017-08-25 Thread Alex Deucher
On Fri, Aug 25, 2017 at 3:13 PM, Christian König
 wrote:
> On 25.08.2017 at 16:41, Alex Deucher wrote:
>>
>> On Fri, Aug 25, 2017 at 9:43 AM, Kent Russell 
>> wrote:
>>>
>>> sysfs is more stable, and doesn't require root to access
>>>
>>> Signed-off-by: Kent Russell 
>>
>> Might as well move the vbios binary itself to sysfs as well.  It
>> should be a stable interface :)
>
>
> Yeah, but that is nothing a "normal" client application should be interested
> in.
>
> In addition to that, I'm still not sure whether a large binary BIOS qualifies
> as a single value per file.

Well, you can read PCI ROMs out of sysfs, so there is precedent.  For
our purposes, debugfs is fine.

Alex

>
> Christian.
>
>
>>
>> Alex
>>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 54
>>> +-
>>>   1 file changed, 23 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index c041496..77929e4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -67,7 +67,6 @@ static int amdgpu_debugfs_regs_init(struct
>>> amdgpu_device *adev);
>>>   static void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
>>>   static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device
>>> *adev);
>>>   static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev);
>>> -static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device
>>> *adev);
>>>
>>>   static const char *amdgpu_asic_name[] = {
>>>  "TAHITI",
>>> @@ -890,6 +889,20 @@ static uint32_t cail_ioreg_read(struct card_info
>>> *info, uint32_t reg)
>>>  return r;
>>>   }
>>>
>>> +static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
>>> +struct device_attribute
>>> *attr,
>>> +char *buf)
>>> +{
>>> +   struct drm_device *ddev = dev_get_drvdata(dev);
>>> +   struct amdgpu_device *adev = ddev->dev_private;
>>> +   struct atom_context *ctx = adev->mode_info.atom_context;
>>> +
>>> +   return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
>>> +}
>>> +
>>> +static DEVICE_ATTR(vbios_version, 0444,
>>> amdgpu_atombios_get_vbios_version,
>>> +  NULL);
>>> +
>>>   /**
>>>* amdgpu_atombios_fini - free the driver info and callbacks for
>>> atombios
>>>*
>>> @@ -909,6 +922,7 @@ static void amdgpu_atombios_fini(struct amdgpu_device
>>> *adev)
>>>  adev->mode_info.atom_context = NULL;
>>>  kfree(adev->mode_info.atom_card_info);
>>>  adev->mode_info.atom_card_info = NULL;
>>> +   device_remove_file(adev->dev, _attr_vbios_version);
>>>   }
>>>
>>>   /**
>>> @@ -925,6 +939,7 @@ static int amdgpu_atombios_init(struct amdgpu_device
>>> *adev)
>>>   {
>>>  struct card_info *atom_card_info =
>>>  kzalloc(sizeof(struct card_info), GFP_KERNEL);
>>> +   int ret;
>>>
>>>  if (!atom_card_info)
>>>  return -ENOMEM;
>>> @@ -961,6 +976,13 @@ static int amdgpu_atombios_init(struct amdgpu_device
>>> *adev)
>>>  amdgpu_atombios_scratch_regs_init(adev);
>>>  amdgpu_atombios_allocate_fb_scratch(adev);
>>>  }
>>> +
>>> +   ret = device_create_file(adev->dev, _attr_vbios_version);
>>> +   if (ret) {
>>> +   DRM_ERROR("Failed to create device file for VBIOS
>>> version\n");
>>> +   return ret;
>>> +   }
>>> +
>>>  return 0;
>>>   }
>>>
>>> @@ -2256,10 +2278,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>>>  if (r)
>>>  DRM_ERROR("Creating vbios dump debugfs failed (%d).\n",
>>> r);
>>>
>>> -   r = amdgpu_debugfs_vbios_version_init(adev);
>>> -   if (r)
>>> -   DRM_ERROR("Creating vbios version debugfs failed
>>> (%d).\n", r);
>>> -
>>>  if ((amdgpu_testing & 1)) {
>>>  if (adev->accel_working)
>>>  amdgpu_test_moves(adev);
>>> @@ -3851,39 +3869,17 @@ static int amdgpu_debugfs_get_vbios_dump(struct
>>> seq_file *m, void *data)
>>>  return 0;
>>>   }
>>>
>>> -static int amdgpu_debugfs_get_vbios_version(struct seq_file *m, void
>>> *data)
>>> -{
>>> -   struct drm_info_node *node = (struct drm_info_node *) m->private;
>>> -   struct drm_device *dev = node->minor->dev;
>>> -   struct amdgpu_device *adev = dev->dev_private;
>>> -   struct atom_context *ctx = adev->mode_info.atom_context;
>>> -
>>> -   seq_printf(m, "%s\n", ctx->vbios_version);
>>> -   return 0;
>>> -}
>>> -
>>>   static const struct drm_info_list amdgpu_vbios_dump_list[] = {
>>>  {"amdgpu_vbios",
>>>   amdgpu_debugfs_get_vbios_dump,
>>>   0, NULL},
>>>   };
>>>
>>> -static const struct 

Re: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

2017-08-25 Thread Christian König

On 25.08.2017 at 16:41, Alex Deucher wrote:

On Fri, Aug 25, 2017 at 9:43 AM, Kent Russell  wrote:

sysfs is more stable, and doesn't require root to access

Signed-off-by: Kent Russell 

Might as well move the vbios binary itself to sysfs as well.  It
should be a stable interface :)


Yeah, but that is nothing a "normal" client application should be 
interested in.


In addition to that, I'm still not sure whether a large binary BIOS
qualifies as a single value per file.


Christian.



Alex


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 54 +-
  1 file changed, 23 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index c041496..77929e4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -67,7 +67,6 @@ static int amdgpu_debugfs_regs_init(struct amdgpu_device 
*adev);
  static void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
  static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev);
  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev);
-static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev);

  static const char *amdgpu_asic_name[] = {
 "TAHITI",
@@ -890,6 +889,20 @@ static uint32_t cail_ioreg_read(struct card_info *info, 
uint32_t reg)
 return r;
  }

+static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
+struct device_attribute *attr,
+char *buf)
+{
+   struct drm_device *ddev = dev_get_drvdata(dev);
+   struct amdgpu_device *adev = ddev->dev_private;
+   struct atom_context *ctx = adev->mode_info.atom_context;
+
+   return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
+}
+
+static DEVICE_ATTR(vbios_version, 0444, amdgpu_atombios_get_vbios_version,
+  NULL);
+
  /**
   * amdgpu_atombios_fini - free the driver info and callbacks for atombios
   *
@@ -909,6 +922,7 @@ static void amdgpu_atombios_fini(struct amdgpu_device *adev)
 adev->mode_info.atom_context = NULL;
 kfree(adev->mode_info.atom_card_info);
 adev->mode_info.atom_card_info = NULL;
+   device_remove_file(adev->dev, _attr_vbios_version);
  }

  /**
@@ -925,6 +939,7 @@ static int amdgpu_atombios_init(struct amdgpu_device *adev)
  {
 struct card_info *atom_card_info =
 kzalloc(sizeof(struct card_info), GFP_KERNEL);
+   int ret;

 if (!atom_card_info)
 return -ENOMEM;
@@ -961,6 +976,13 @@ static int amdgpu_atombios_init(struct amdgpu_device *adev)
 amdgpu_atombios_scratch_regs_init(adev);
 amdgpu_atombios_allocate_fb_scratch(adev);
 }
+
+   ret = device_create_file(adev->dev, _attr_vbios_version);
+   if (ret) {
+   DRM_ERROR("Failed to create device file for VBIOS version\n");
+   return ret;
+   }
+
 return 0;
  }

@@ -2256,10 +2278,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 if (r)
 DRM_ERROR("Creating vbios dump debugfs failed (%d).\n", r);

-   r = amdgpu_debugfs_vbios_version_init(adev);
-   if (r)
-   DRM_ERROR("Creating vbios version debugfs failed (%d).\n", r);
-
 if ((amdgpu_testing & 1)) {
 if (adev->accel_working)
 amdgpu_test_moves(adev);
@@ -3851,39 +3869,17 @@ static int amdgpu_debugfs_get_vbios_dump(struct 
seq_file *m, void *data)
 return 0;
  }

-static int amdgpu_debugfs_get_vbios_version(struct seq_file *m, void *data)
-{
-   struct drm_info_node *node = (struct drm_info_node *) m->private;
-   struct drm_device *dev = node->minor->dev;
-   struct amdgpu_device *adev = dev->dev_private;
-   struct atom_context *ctx = adev->mode_info.atom_context;
-
-   seq_printf(m, "%s\n", ctx->vbios_version);
-   return 0;
-}
-
  static const struct drm_info_list amdgpu_vbios_dump_list[] = {
 {"amdgpu_vbios",
  amdgpu_debugfs_get_vbios_dump,
  0, NULL},
  };

-static const struct drm_info_list amdgpu_vbios_version_list[] = {
-   {"amdgpu_vbios_version",
-amdgpu_debugfs_get_vbios_version,
-0, NULL},
-};
-
  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev)
  {
 return amdgpu_debugfs_add_files(adev,
 amdgpu_vbios_dump_list, 1);
  }
-static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev)
-{
-   return amdgpu_debugfs_add_files(adev,
-   amdgpu_vbios_version_list, 1);
-}
  #else
  static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev)
  {
@@ -3897,9 +3893,5 @@ static int 

Re: [PATCH 9/9] drm/amdgpu: WIP add IOCTL interface for per VM BOs

2017-08-25 Thread Marek Olšák
On Fri, Aug 25, 2017 at 3:00 PM, Christian König
 wrote:
> Am 25.08.2017 um 12:32 schrieb zhoucm1:
>>
>>
>>
>> On 2017-08-25 17:38, Christian König wrote:
>>>
>>> From: Christian König 
>>>
>>> Add the IOCTL interface so that applications can allocate per VM BOs.
>>>
>>> Still WIP since not all corner cases are tested yet, but this reduces
>>> average
>>> CS overhead for 10K BOs from 21ms down to 48us.
>>
>> Wow, cheers, eventually you get per-VM BOs into the same reservation as the
>> PD/PTs; that indeed saves a lot of BO list overhead.
>
>
> Don't cheer too loudly yet, that is a completely constructed test case.
>
> So far I wasn't able to achieve any improvements with any real game on this
> with Mesa.
>
> BTW: Marek can you take a look with some CPU bound tests? I can provide a
> kernel branch if necessary.

Do you have a branch that works on Raven? This patch series doesn't,
and I didn't investigate why.

Marek

>
> Regards,
> Christian.
>
>
>> overall looks good, I will take a detailed check for this tomorrow.
>>
>> Regards,
>> David Zhou
>>>
>>>
>>> Signed-off-by: Christian König 
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  7 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   | 59
>>> ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  3 +-
>>>   include/uapi/drm/amdgpu_drm.h |  2 ++
>>>   5 files changed, 51 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index b1e817c..21cab36 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -457,9 +457,10 @@ struct amdgpu_sa_bo {
>>>*/
>>>   void amdgpu_gem_force_release(struct amdgpu_device *adev);
>>>   int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long
>>> size,
>>> -int alignment, u32 initial_domain,
>>> -u64 flags, bool kernel,
>>> -struct drm_gem_object **obj);
>>> + int alignment, u32 initial_domain,
>>> + u64 flags, bool kernel,
>>> + struct reservation_object *resv,
>>> + struct drm_gem_object **obj);
>>> int amdgpu_mode_dumb_create(struct drm_file *file_priv,
>>>   struct drm_device *dev,
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
>>> index 0e907ea..7256f83 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
>>> @@ -144,7 +144,7 @@ static int amdgpufb_create_pinned_object(struct
>>> amdgpu_fbdev *rfbdev,
>>>  AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
>>>  AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
>>>  AMDGPU_GEM_CREATE_VRAM_CLEARED,
>>> -   true, &gobj);
>>> +   true, NULL, &gobj);
>>>   if (ret) {
>>>   pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
>>>   return -ENOMEM;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> index d028806..b8e8d67 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
>>> @@ -44,11 +44,12 @@ void amdgpu_gem_object_free(struct drm_gem_object
>>> *gobj)
>>>   }
>>> int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned
>>> long size,
>>> -int alignment, u32 initial_domain,
>>> -u64 flags, bool kernel,
>>> -struct drm_gem_object **obj)
>>> + int alignment, u32 initial_domain,
>>> + u64 flags, bool kernel,
>>> + struct reservation_object *resv,
>>> + struct drm_gem_object **obj)
>>>   {
>>> -struct amdgpu_bo *robj;
>>> +struct amdgpu_bo *bo;
>>>   int r;
>>> *obj = NULL;
>>> @@ -59,7 +60,7 @@ int amdgpu_gem_object_create(struct amdgpu_device
>>> *adev, unsigned long size,
>>> retry:
>>>   r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
>>> - flags, NULL, NULL, 0, &robj);
>>> + flags, NULL, resv, 0, &bo);
>>>   if (r) {
>>>   if (r != -ERESTARTSYS) {
>>>   if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
>>> @@ -71,7 +72,7 @@ int amdgpu_gem_object_create(struct amdgpu_device
>>> *adev, unsigned long size,
>>>   }
>>>   return r;
>>>   }
>>> -*obj = &robj->gem_base;
>>> +*obj = &bo->gem_base;
>>> return 0;
>>>   }
>>> @@ -136,13 +137,14 @@ void amdgpu_gem_object_close(struct drm_gem_object
>>> *obj,
>>>   struct amdgpu_vm *vm = >vm;
>>> struct amdgpu_bo_list_entry vm_pd;
>>> -struct list_head list;
>>> +struct list_head list, duplicates;
>>>   struct 

RE: [PATCH umr 2/2] Add SI/CIK to read_vram.

2017-08-25 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Tom St Denis
> Sent: Friday, August 25, 2017 8:56 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: StDenis, Tom
> Subject: [PATCH umr 2/2] Add SI/CIK to read_vram.
> 
> Tested with my Tahiti device.
> 
> Signed-off-by: Tom St Denis 

Series is:
Reviewed-by: Alex Deucher 

> ---
>  src/lib/read_vram.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/src/lib/read_vram.c b/src/lib/read_vram.c
> index f2c3a15c27fe..c254f5a2e406 100644
> --- a/src/lib/read_vram.c
> +++ b/src/lib/read_vram.c
> @@ -815,6 +815,8 @@ int umr_read_vram(struct umr_asic *asic, uint32_t
> vmid, uint64_t address, uint32
>   }
> 
>   switch (asic->family) {
> + case FAMILY_SI:
> + case FAMILY_CIK:
>   case FAMILY_VI:
>   return umr_read_vram_vi(asic, vmid, address, size,
> dst);
>   case FAMILY_RV:
> --
> 2.12.0
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 2/4] drm/amd/powerplay: Remove obsolete code of reduced refresh rate featur

2017-08-25 Thread Deucher, Alexander
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Friday, August 25, 2017 5:27 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex
> Subject: [PATCH 2/4] drm/amd/powerplay: Remove obsolete code of
> reduced refresh rate featur

Typo in patch title:  featur -> feature

With that fixed:
Reviewed-by: Alex Deucher 

> 
> This feature was not supported on Linux and is obsolete.
> 
> Change-Id: I7434e9370e4a29489bff7feb1421e028710fbe14
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/include/pptable.h |  6 --
>  drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c | 18 -
> -
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c  |  1 -
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c|  1 -
>  drivers/gpu/drm/amd/powerplay/inc/power_state.h   |  4 
>  5 files changed, 30 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/include/pptable.h
> b/drivers/gpu/drm/amd/include/pptable.h
> index 0b6a057..1dda72a 100644
> --- a/drivers/gpu/drm/amd/include/pptable.h
> +++ b/drivers/gpu/drm/amd/include/pptable.h
> @@ -285,12 +285,6 @@
>  #define ATOM_PPLIB_PCIE_LINK_WIDTH_MASK0x00F8
>  #define ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT   3
> 
> -// lookup into reduced refresh-rate table
> -#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK  0x0F00
> -#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT 8
> -
> -#define ATOM_PPLIB_LIMITED_REFRESHRATE_UNLIMITED0
> -#define ATOM_PPLIB_LIMITED_REFRESHRATE_50HZ 1
>  // 2-15 TBD as needed.
> 
>  #define ATOM_PPLIB_SOFTWARE_DISABLE_LOADBALANCING
> 0x1000
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
> index 707809b..f974832 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
> @@ -680,7 +680,6 @@ static int init_non_clock_fields(struct pp_hwmgr
> *hwmgr,
>   uint8_t version,
>const ATOM_PPLIB_NONCLOCK_INFO
> *pnon_clock_info)
>  {
> - unsigned long rrr_index;
>   unsigned long tmp;
> 
>   ps->classification.ui_label = (le16_to_cpu(pnon_clock_info-
> >usClassification) &
> @@ -709,23 +708,6 @@ static int init_non_clock_fields(struct pp_hwmgr
> *hwmgr,
> 
>   ps->display.disableFrameModulation = false;
> 
> - rrr_index = (le32_to_cpu(pnon_clock_info->ulCapsAndSettings) &
> -
>   ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK) >>
> - ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT;
> -
> - if (rrr_index != ATOM_PPLIB_LIMITED_REFRESHRATE_UNLIMITED) {
> - static const uint8_t
> look_up[(ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK >>
> ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT) + 1] = \
> - { 0, 50, 0 };
> -
> - ps->display.refreshrateSource =
> PP_RefreshrateSource_Explicit;
> - ps->display.explicitRefreshrate = look_up[rrr_index];
> - ps->display.limitRefreshrate = true;
> -
> - if (ps->display.explicitRefreshrate == 0)
> - ps->display.limitRefreshrate = false;
> - } else
> - ps->display.limitRefreshrate = false;
> -
>   tmp = le32_to_cpu(pnon_clock_info->ulCapsAndSettings) &
>   ATOM_PPLIB_ENABLE_VARIBRIGHT;
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> index 736f193..27bd1a0 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> @@ -2990,7 +2990,6 @@ static int
> smu7_get_pp_table_entry_callback_func_v1(struct pp_hwmgr *hwmgr,
>   power_state->pcie.lanes = 0;
> 
>   power_state->display.disableFrameModulation = false;
> - power_state->display.limitRefreshrate = false;
>   power_state->display.enableVariBright =
>   (0 != (le32_to_cpu(state_entry->ulCapsAndSettings)
> &
> 
>   ATOM_Tonga_ENABLE_VARIBRIGHT));
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> index 29e44c3..f20758f 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> @@ -3021,7 +3021,6 @@ static int
> vega10_get_pp_table_entry_callback_func(struct pp_hwmgr *hwmgr,
>   ATOM_Vega10_DISALLOW_ON_DC)
> != 0);
> 
>   power_state->display.disableFrameModulation = false;
> - power_state->display.limitRefreshrate = false;
>   power_state->display.enableVariBright =
>   ((le32_to_cpu(state_entry->ulCapsAndSettings) &
> 
>   ATOM_Vega10_ENABLE_VARIBRIGHT) != 0);
> diff --git 

RE: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

2017-08-25 Thread Russell, Kent
Christian was saying that the VBIOS binary should remain in debugfs (on the 
other thread). I'll let you two discuss, since I have zero vested interest in 
the location of the VBIOS binary :)

 Kent

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Friday, August 25, 2017 10:41 AM
To: Russell, Kent
Cc: amd-gfx list
Subject: Re: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

On Fri, Aug 25, 2017 at 9:43 AM, Kent Russell  wrote:
> sysfs is more stable, and doesn't require root to access
>
> Signed-off-by: Kent Russell 

Might as well move the vbios binary itself to sysfs as well.  It should be a 
stable interface :)

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 54 
> +-
>  1 file changed, 23 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c041496..77929e4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -67,7 +67,6 @@ static int amdgpu_debugfs_regs_init(struct amdgpu_device *adev);
>  static void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
>  static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev);
>  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev);
> -static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev);
>
>  static const char *amdgpu_asic_name[] = {
> "TAHITI",
> @@ -890,6 +889,20 @@ static uint32_t cail_ioreg_read(struct card_info *info, 
> uint32_t reg)
> return r;
>  }
>
> +static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
> +struct device_attribute 
> *attr,
> +char *buf) {
> +   struct drm_device *ddev = dev_get_drvdata(dev);
> +   struct amdgpu_device *adev = ddev->dev_private;
> +   struct atom_context *ctx = adev->mode_info.atom_context;
> +
> +   return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
> +}
> +
> +static DEVICE_ATTR(vbios_version, 0444, amdgpu_atombios_get_vbios_version,
> +  NULL);
> +
>  /**
>   * amdgpu_atombios_fini - free the driver info and callbacks for atombios
>   *
> @@ -909,6 +922,7 @@ static void amdgpu_atombios_fini(struct amdgpu_device 
> *adev)
> adev->mode_info.atom_context = NULL;
> kfree(adev->mode_info.atom_card_info);
> adev->mode_info.atom_card_info = NULL;
> +   device_remove_file(adev->dev, &dev_attr_vbios_version);
>  }
>
>  /**
> @@ -925,6 +939,7 @@ static int amdgpu_atombios_init(struct 
> amdgpu_device *adev)  {
> struct card_info *atom_card_info =
> kzalloc(sizeof(struct card_info), GFP_KERNEL);
> +   int ret;
>
> if (!atom_card_info)
> return -ENOMEM;
> @@ -961,6 +976,13 @@ static int amdgpu_atombios_init(struct amdgpu_device 
> *adev)
> amdgpu_atombios_scratch_regs_init(adev);
> amdgpu_atombios_allocate_fb_scratch(adev);
> }
> +
> +   ret = device_create_file(adev->dev, &dev_attr_vbios_version);
> +   if (ret) {
> +   DRM_ERROR("Failed to create device file for VBIOS version\n");
> +   return ret;
> +   }
> +
> return 0;
>  }
>
> @@ -2256,10 +2278,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
> if (r)
> DRM_ERROR("Creating vbios dump debugfs failed (%d).\n", r);
>
> -   r = amdgpu_debugfs_vbios_version_init(adev);
> -   if (r)
> -   DRM_ERROR("Creating vbios version debugfs failed (%d).\n", r);
> -
> if ((amdgpu_testing & 1)) {
> if (adev->accel_working)
> amdgpu_test_moves(adev);
> @@ -3851,39 +3869,17 @@ static int amdgpu_debugfs_get_vbios_dump(struct seq_file *m, void *data)
> return 0;
>  }
>
> -static int amdgpu_debugfs_get_vbios_version(struct seq_file *m, void *data)
> -{
> -   struct drm_info_node *node = (struct drm_info_node *) m->private;
> -   struct drm_device *dev = node->minor->dev;
> -   struct amdgpu_device *adev = dev->dev_private;
> -   struct atom_context *ctx = adev->mode_info.atom_context;
> -
> -   seq_printf(m, "%s\n", ctx->vbios_version);
> -   return 0;
> -}
> -
>  static const struct drm_info_list amdgpu_vbios_dump_list[] = {
> {"amdgpu_vbios",
>  amdgpu_debugfs_get_vbios_dump,
>  0, NULL},
>  };
>
> -static const struct drm_info_list amdgpu_vbios_version_list[] = {
> -   {"amdgpu_vbios_version",
> -amdgpu_debugfs_get_vbios_version,
> -0, NULL},
> -};
> -
>  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev)  
> {
> return amdgpu_debugfs_add_files(adev,
> 

RE: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

2017-08-25 Thread Russell, Kent
That's right, it'll be in there, or in the sym-linked /sys/class/drm/card0/device,
depending on how you like to find the amdgpu device sysfs files.

 Kent

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Tom 
St Denis
Sent: Friday, August 25, 2017 9:59 AM
To: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

Acked-by: Tom St Denis 

Seems the easiest way to find this is via

/sys/bus/pci/devices/${devname}/vbios_version

Tom

On 25/08/17 09:43 AM, Kent Russell wrote:
> sysfs is more stable, and doesn't require root to access
> 
> Signed-off-by: Kent Russell 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 54 
> +-
>   1 file changed, 23 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c041496..77929e4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -67,7 +67,6 @@ static int amdgpu_debugfs_regs_init(struct amdgpu_device 
> *adev);
>   static void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
>   static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev);
>   static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev);
> -static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev);
>   
>   static const char *amdgpu_asic_name[] = {
>   "TAHITI",
> @@ -890,6 +889,20 @@ static uint32_t cail_ioreg_read(struct card_info *info, 
> uint32_t reg)
>   return r;
>   }
>   
> +static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
> +  struct device_attribute *attr,
> +  char *buf)
> +{
> + struct drm_device *ddev = dev_get_drvdata(dev);
> + struct amdgpu_device *adev = ddev->dev_private;
> + struct atom_context *ctx = adev->mode_info.atom_context;
> +
> + return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
> +}
> +
> +static DEVICE_ATTR(vbios_version, 0444, amdgpu_atombios_get_vbios_version,
> +NULL);
> +
>   /**
>* amdgpu_atombios_fini - free the driver info and callbacks for atombios
>*
> @@ -909,6 +922,7 @@ static void amdgpu_atombios_fini(struct amdgpu_device 
> *adev)
>   adev->mode_info.atom_context = NULL;
>   kfree(adev->mode_info.atom_card_info);
>   adev->mode_info.atom_card_info = NULL;
> + device_remove_file(adev->dev, &dev_attr_vbios_version);
>   }
>   
>   /**
> @@ -925,6 +939,7 @@ static int amdgpu_atombios_init(struct amdgpu_device 
> *adev)
>   {
>   struct card_info *atom_card_info =
>   kzalloc(sizeof(struct card_info), GFP_KERNEL);
> + int ret;
>   
>   if (!atom_card_info)
>   return -ENOMEM;
> @@ -961,6 +976,13 @@ static int amdgpu_atombios_init(struct amdgpu_device 
> *adev)
>   amdgpu_atombios_scratch_regs_init(adev);
>   amdgpu_atombios_allocate_fb_scratch(adev);
>   }
> +
> + ret = device_create_file(adev->dev, &dev_attr_vbios_version);
> + if (ret) {
> + DRM_ERROR("Failed to create device file for VBIOS version\n");
> + return ret;
> + }
> +
>   return 0;
>   }
>   
> @@ -2256,10 +2278,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   if (r)
>   DRM_ERROR("Creating vbios dump debugfs failed (%d).\n", r);
>   
> - r = amdgpu_debugfs_vbios_version_init(adev);
> - if (r)
> - DRM_ERROR("Creating vbios version debugfs failed (%d).\n", r);
> -
>   if ((amdgpu_testing & 1)) {
>   if (adev->accel_working)
>   amdgpu_test_moves(adev);
> @@ -3851,39 +3869,17 @@ static int amdgpu_debugfs_get_vbios_dump(struct seq_file *m, void *data)
>   return 0;
>   }
>   
> -static int amdgpu_debugfs_get_vbios_version(struct seq_file *m, void *data)
> -{
> - struct drm_info_node *node = (struct drm_info_node *) m->private;
> - struct drm_device *dev = node->minor->dev;
> - struct amdgpu_device *adev = dev->dev_private;
> - struct atom_context *ctx = adev->mode_info.atom_context;
> -
> - seq_printf(m, "%s\n", ctx->vbios_version);
> - return 0;
> -}
> -
>   static const struct drm_info_list amdgpu_vbios_dump_list[] = {
>   {"amdgpu_vbios",
>amdgpu_debugfs_get_vbios_dump,
>0, NULL},
>   };
>   
> -static const struct drm_info_list amdgpu_vbios_version_list[] = {
> - {"amdgpu_vbios_version",
> -  amdgpu_debugfs_get_vbios_version,
> -  0, NULL},
> -};
> -
>   static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev)
>   {
>   return amdgpu_debugfs_add_files(adev,
>   amdgpu_vbios_dump_list, 1);
>   }
> -static int 

Re: [PATCH] drm/amd/powerplay: delete pp dead code on raven

2017-08-25 Thread Alex Deucher
On Fri, Aug 25, 2017 at 6:18 AM, Rex Zhu  wrote:
> Change-Id: Ib2a3cd06c540a90eb33fc9e4ce0f3122c5f2c0d3
> Signed-off-by: Rex Zhu 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 66 
> --
>  1 file changed, 66 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
> index a5fa546..ebbee5f 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
> @@ -49,29 +49,6 @@
>  int rv_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
> struct pp_display_clock_request *clock_req);
>
> -struct phm_vq_budgeting_record rv_vqtable[] = {
> -/* CUs, SSP low, SSP High, Display Configuration, AWD/non-AWD,
> - * Sustainable GFXCLK, Sustainable FCLK, Sustainable CUs,
> - * unused, unused, unused
> - */
> -   { 11, 30, 60, VQ_DisplayConfig_NoneAWD,  8, 16, 11, 0, 0, 0 },
> -   { 11, 30, 60, VQ_DisplayConfig_AWD,  8, 16, 11, 0, 0, 0 },
> -
> -   {  8, 30, 60, VQ_DisplayConfig_NoneAWD, 10, 16,  8, 0, 0, 0 },
> -   {  8, 30, 60, VQ_DisplayConfig_AWD, 10, 16,  8, 0, 0, 0 },
> -
> -   { 10, 12, 30, VQ_DisplayConfig_NoneAWD,  4, 12, 10, 0, 0, 0 },
> -   { 10, 12, 30, VQ_DisplayConfig_AWD,  4, 12, 10, 0, 0, 0 },
> -
> -   {  8, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  8, 0, 0, 0 },
> -   {  8, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  8, 0, 0, 0 },
> -
> -   {  6, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  6, 0, 0, 0 },
> -   {  6, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  6, 0, 0, 0 },
> -
> -   {  3, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  3, 0, 0, 0 },
> -   {  3, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  3, 0, 0, 0 },
> -};
>
>  static struct rv_power_state *cast_rv_ps(struct pp_hw_power_state *hw_ps)
>  {
> @@ -90,42 +67,6 @@ static const struct rv_power_state *cast_const_rv_ps(
> return (struct rv_power_state *)hw_ps;
>  }
>
> -static int rv_init_vq_budget_table(struct pp_hwmgr *hwmgr)
> -{
> -   uint32_t table_size, i;
> -   struct phm_vq_budgeting_table *ptable;
> -   uint32_t num_entries = ARRAY_SIZE(rv_vqtable);
> -
> -   if (hwmgr->dyn_state.vq_budgeting_table != NULL)
> -   return 0;
> -
> -   table_size = sizeof(struct phm_vq_budgeting_table) +
> -   sizeof(struct phm_vq_budgeting_record) * (num_entries - 1);
> -
> -   ptable = kzalloc(table_size, GFP_KERNEL);
> -   if (NULL == ptable)
> -   return -ENOMEM;
> -
> -   ptable->numEntries = (uint8_t) num_entries;
> -
> -   for (i = 0; i < ptable->numEntries; i++) {
> -   ptable->entries[i].ulCUs = rv_vqtable[i].ulCUs;
> -   ptable->entries[i].ulSustainableSOCPowerLimitLow = rv_vqtable[i].ulSustainableSOCPowerLimitLow;
> -   ptable->entries[i].ulSustainableSOCPowerLimitHigh = rv_vqtable[i].ulSustainableSOCPowerLimitHigh;
> -   ptable->entries[i].ulMinSclkLow = rv_vqtable[i].ulMinSclkLow;
> -   ptable->entries[i].ulMinSclkHigh = rv_vqtable[i].ulMinSclkHigh;
> -   ptable->entries[i].ucDispConfig = rv_vqtable[i].ucDispConfig;
> -   ptable->entries[i].ulDClk = rv_vqtable[i].ulDClk;
> -   ptable->entries[i].ulEClk = rv_vqtable[i].ulEClk;
> -   ptable->entries[i].ulSustainableSclk = rv_vqtable[i].ulSustainableSclk;
> -   ptable->entries[i].ulSustainableCUs = rv_vqtable[i].ulSustainableCUs;
> -   }
> -
> -   hwmgr->dyn_state.vq_budgeting_table = ptable;
> -
> -   return 0;
> -}
> -
>  static int rv_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
>  {
> struct rv_hwmgr *rv_hwmgr = (struct rv_hwmgr *)(hwmgr->backend);
> @@ -589,8 +530,6 @@ static int rv_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
>
> hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
>
> -   rv_init_vq_budget_table(hwmgr);
> -
> return result;
>  }
>
> @@ -635,11 +574,6 @@ static int rv_hwmgr_backend_fini(struct pp_hwmgr *hwmgr)
> hwmgr->dyn_state.vddc_dep_on_dal_pwrl = NULL;
> }
>
> -   if (NULL != hwmgr->dyn_state.vq_budgeting_table) {
> -   kfree(hwmgr->dyn_state.vq_budgeting_table);
> -   hwmgr->dyn_state.vq_budgeting_table = NULL;
> -   }
> -
> kfree(hwmgr->backend);
> hwmgr->backend = NULL;
>
> --
> 1.9.1
>


Re: [PATCH] drm/amdgpu: Move VBIOS version to sysfs

2017-08-25 Thread Alex Deucher
On Fri, Aug 25, 2017 at 9:43 AM, Kent Russell  wrote:
> sysfs is more stable, and doesn't require root to access
>
> Signed-off-by: Kent Russell 

Might as well move the vbios binary itself to sysfs as well.  It
should be a stable interface :)

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 54 
> +-
>  1 file changed, 23 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c041496..77929e4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -67,7 +67,6 @@ static int amdgpu_debugfs_regs_init(struct amdgpu_device 
> *adev);
>  static void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
>  static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev);
>  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev);
> -static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev);
>
>  static const char *amdgpu_asic_name[] = {
> "TAHITI",
> @@ -890,6 +889,20 @@ static uint32_t cail_ioreg_read(struct card_info *info, 
> uint32_t reg)
> return r;
>  }
>
> +static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
> +struct device_attribute *attr,
> +char *buf)
> +{
> +   struct drm_device *ddev = dev_get_drvdata(dev);
> +   struct amdgpu_device *adev = ddev->dev_private;
> +   struct atom_context *ctx = adev->mode_info.atom_context;
> +
> +   return snprintf(buf, PAGE_SIZE, "%s\n", ctx->vbios_version);
> +}
> +
> +static DEVICE_ATTR(vbios_version, 0444, amdgpu_atombios_get_vbios_version,
> +  NULL);
> +
>  /**
>   * amdgpu_atombios_fini - free the driver info and callbacks for atombios
>   *
> @@ -909,6 +922,7 @@ static void amdgpu_atombios_fini(struct amdgpu_device 
> *adev)
> adev->mode_info.atom_context = NULL;
> kfree(adev->mode_info.atom_card_info);
> adev->mode_info.atom_card_info = NULL;
> +   device_remove_file(adev->dev, &dev_attr_vbios_version);
>  }
>
>  /**
> @@ -925,6 +939,7 @@ static int amdgpu_atombios_init(struct amdgpu_device 
> *adev)
>  {
> struct card_info *atom_card_info =
> kzalloc(sizeof(struct card_info), GFP_KERNEL);
> +   int ret;
>
> if (!atom_card_info)
> return -ENOMEM;
> @@ -961,6 +976,13 @@ static int amdgpu_atombios_init(struct amdgpu_device 
> *adev)
> amdgpu_atombios_scratch_regs_init(adev);
> amdgpu_atombios_allocate_fb_scratch(adev);
> }
> +
> +   ret = device_create_file(adev->dev, &dev_attr_vbios_version);
> +   if (ret) {
> +   DRM_ERROR("Failed to create device file for VBIOS version\n");
> +   return ret;
> +   }
> +
> return 0;
>  }
>
> @@ -2256,10 +2278,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
> if (r)
> DRM_ERROR("Creating vbios dump debugfs failed (%d).\n", r);
>
> -   r = amdgpu_debugfs_vbios_version_init(adev);
> -   if (r)
> -   DRM_ERROR("Creating vbios version debugfs failed (%d).\n", r);
> -
> if ((amdgpu_testing & 1)) {
> if (adev->accel_working)
> amdgpu_test_moves(adev);
> @@ -3851,39 +3869,17 @@ static int amdgpu_debugfs_get_vbios_dump(struct seq_file *m, void *data)
> return 0;
>  }
>
> -static int amdgpu_debugfs_get_vbios_version(struct seq_file *m, void *data)
> -{
> -   struct drm_info_node *node = (struct drm_info_node *) m->private;
> -   struct drm_device *dev = node->minor->dev;
> -   struct amdgpu_device *adev = dev->dev_private;
> -   struct atom_context *ctx = adev->mode_info.atom_context;
> -
> -   seq_printf(m, "%s\n", ctx->vbios_version);
> -   return 0;
> -}
> -
>  static const struct drm_info_list amdgpu_vbios_dump_list[] = {
> {"amdgpu_vbios",
>  amdgpu_debugfs_get_vbios_dump,
>  0, NULL},
>  };
>
> -static const struct drm_info_list amdgpu_vbios_version_list[] = {
> -   {"amdgpu_vbios_version",
> -amdgpu_debugfs_get_vbios_version,
> -0, NULL},
> -};
> -
>  static int amdgpu_debugfs_vbios_dump_init(struct amdgpu_device *adev)
>  {
> return amdgpu_debugfs_add_files(adev,
> amdgpu_vbios_dump_list, 1);
>  }
> -static int amdgpu_debugfs_vbios_version_init(struct amdgpu_device *adev)
> -{
> -   return amdgpu_debugfs_add_files(adev,
> -   amdgpu_vbios_version_list, 1);
> -}
>  #else
>  static int amdgpu_debugfs_test_ib_ring_init(struct amdgpu_device *adev)
>  {
> @@ -3897,9 +3893,5 @@ static int amdgpu_debugfs_vbios_dump_init(struct 
> amdgpu_device 

Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Alex Deucher
On Fri, Aug 25, 2017 at 8:59 AM, Russell, Kent  wrote:
> There is GPU power usage reported through amdgpu_pm_info, which also reports
> some other information. I'd like that in sysfs, but I am unsure whether we are
> allowed to, given the other information reported there.
>

We can add a sysfs file for power info, but I'd like the keep the
debugfs one as well since we regularly change the format as we add
features and such.  Depending on what info you want to get, you may
already have what you need via the sysfs pp files.

Alex


>  Kent
>
> -Original Message-
> From: StDenis, Tom
> Sent: Friday, August 25, 2017 8:58 AM
> To: Christian König; Russell, Kent; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>
> On 25/08/17 08:56 AM, Christian König wrote:
>> Hi Kent,
>>
>> agree on the VBIOS dump file, that clearly belongs to debugsf.
>>
>> The power usage stuff I can't say much about cause I'm not deeply into
>> this, but keep in mind the restriction for sysfs:
>> 1. It's a stable interface. So it must be very well designed.
>> 2. Only one value per file. I think the power stuff doesn't fulfill
>> that requirement at the moment.
>
> What "power" stuff are we talking about?  The sensors interface or the 
> pm_info or something else?
>
> Keep in mind umr uses the sensors debugfs file in --top mode.
>
> Tom


[PATCH umr] Add vbios_version to config data and print out with 'umr -c'

2017-08-25 Thread Tom St Denis
Signed-off-by: Tom St Denis 
---
 src/app/print_config.c |  8 ++--
 src/lib/discover.c | 23 +++
 src/lib/scan_config.c  |  9 +
 src/umr.h  |  1 +
 4 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/src/app/print_config.c b/src/app/print_config.c
index 051666d52e02..1e481906a3be 100644
--- a/src/app/print_config.c
+++ b/src/app/print_config.c
@@ -100,8 +100,12 @@ void umr_print_config(struct umr_asic *asic)
 {
int r, x;
 
-   printf("\tasic.instance == %d\n\n", asic->instance);
-   printf("\tumr.version == %s\n\n", UMR_BUILD_REV);
+   printf("\tasic.instance == %d\n", asic->instance);
+   printf("\tasic.devname == %s\n", asic->options.pci.name);
+
+   printf("\n\tumr.version == %s\n\n", UMR_BUILD_REV);
+
+   printf("\tvbios.version == %s\n\n", asic->config.vbios_version);
 
for (r = 0; asic->config.fw[r].name[0]; r++) {
printf("\tfw.%s == .feature==%lu .firmware==0x%08lx\n",
diff --git a/src/lib/discover.c b/src/lib/discover.c
index 31ff25b06f9f..dcc212fc39e4 100644
--- a/src/lib/discover.c
+++ b/src/lib/discover.c
@@ -101,7 +101,7 @@ struct umr_asic *umr_discover_asic(struct umr_options *options)
unsigned did;
struct umr_asic *asic;
long trydid = options->forcedid;
-   int busmatch = 0, parsed_did;
+   int busmatch = 0, parsed_did, need_config_scan = 0;
 
// Try to map to instance if we have a specific pci device
if (options->pci.domain || options->pci.bus ||
@@ -169,13 +169,16 @@ struct umr_asic *umr_discover_asic(struct umr_options 
*options)
if (strstr(name, "dev="))
memmove(name, name+4, strlen(name)-3);
 
-   // read the PCI info
-   strcpy(options->pci.name, name);
-   sscanf(name, "%04x:%02x:%02x.%x",
-   &options->pci.domain,
-   &options->pci.bus,
-   &options->pci.slot,
-   &options->pci.func);
+   if (!strlen(options->pci.name)) {
+   // read the PCI info
+   strcpy(options->pci.name, name);
+   sscanf(name, "%04x:%02x:%02x.%x",
+   &options->pci.domain,
+   &options->pci.bus,
+   &options->pci.slot,
+   &options->pci.func);
+   need_config_scan = 1;
+   }
}
 
if (trydid < 0) {
@@ -321,6 +324,10 @@ struct umr_asic *umr_discover_asic(struct umr_options 
*options)
asic->pci.mem = pcimem_v;
}
}
+
+   if (need_config_scan)
+   umr_scan_config(asic);
+
return asic;
 err_pci:
umr_close_asic(asic);
diff --git a/src/lib/scan_config.c b/src/lib/scan_config.c
index 5d2975f33e34..ade2d6e032bf 100644
--- a/src/lib/scan_config.c
+++ b/src/lib/scan_config.c
@@ -87,6 +87,15 @@ int umr_scan_config(struct umr_asic *asic)
if (asic->options.no_kernel)
return -1;
 
+   // read vbios version
+   snprintf(fname, sizeof(fname)-1, "/sys/bus/pci/devices/%s/vbios_version", asic->options.pci.name);
+   f = fopen(fname, "r");
+   if (f) {
+   fgets(asic->config.vbios_version, sizeof(asic->config.vbios_version)-1, f);
+   asic->config.vbios_version[strlen(asic->config.vbios_version)-1] = 0; // remove newline...
+   fclose(f);
+   }
+
/* process FW block */
snprintf(fname, sizeof(fname)-1, "/sys/kernel/debug/dri/%d/amdgpu_firmware_info", asic->instance);
f = fopen(fname, "r");
diff --git a/src/umr.h b/src/umr.h
index fa89b92a4b8c..b6fe4ee876a7 100644
--- a/src/umr.h
+++ b/src/umr.h
@@ -232,6 +232,7 @@ struct umr_asic {
struct umr_gfx_config gfx;
struct umr_fw_config fw[UMR_MAX_FW];
struct umr_pci_config pci;
+   char vbios_version[128];
} config;
struct {
int mmio,
-- 
2.12.0
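One note on the vbios_version read added in scan_config.c: it indexes the buffer at strlen()-1 without checking that fgets() actually returned a non-empty line. A stand-alone sketch of the same sysfs read with that guard (the helper name and test path are just for illustration, not part of the patch):

```c
#include <stdio.h>
#include <string.h>

/* Read one line from a sysfs attribute into buf and strip the trailing
 * newline.  Returns 0 on success, -1 if the file can't be opened or read.
 * Unlike the hunk above, this checks for an empty read before touching
 * buf[strlen(buf)-1]. */
static int read_sysfs_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");
	size_t n;

	if (!f)
		return -1;
	if (!fgets(buf, (int)len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	n = strlen(buf);
	if (n && buf[n - 1] == '\n')
		buf[n - 1] = 0;
	return 0;
}
```

Called with something like "/sys/bus/pci/devices/0003:01:00.0/vbios_version" this yields the bare version string with the newline removed.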



[PATCH umr 2/2] Add SI/CIK to read_vram.

2017-08-25 Thread Tom St Denis
Tested with my Tahiti device.

Signed-off-by: Tom St Denis 
---
 src/lib/read_vram.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/lib/read_vram.c b/src/lib/read_vram.c
index f2c3a15c27fe..c254f5a2e406 100644
--- a/src/lib/read_vram.c
+++ b/src/lib/read_vram.c
@@ -815,6 +815,8 @@ int umr_read_vram(struct umr_asic *asic, uint32_t vmid, 
uint64_t address, uint32
}
 
switch (asic->family) {
+   case FAMILY_SI:
+   case FAMILY_CIK:
case FAMILY_VI:
return umr_read_vram_vi(asic, vmid, address, size, dst);
case FAMILY_RV:
-- 
2.12.0



[PATCH umr 1/2] Fix UVD/VCN IB detection.

2017-08-25 Thread Tom St Denis
Since we add the IP name to the register names, we need to strip that off
again.

Signed-off-by: Tom St Denis 
---
 src/lib/ring_decode.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/lib/ring_decode.c b/src/lib/ring_decode.c
index 9d6e366e4594..52855ab4fb62 100644
--- a/src/lib/ring_decode.c
+++ b/src/lib/ring_decode.c
@@ -748,6 +748,10 @@ static void print_decode_pm4(struct umr_asic *asic, struct 
umr_ring_decoder *dec
name = umr_reg_name(asic, decoder->pm4.next_write_mem.addr_lo);
printf("   word (%lu): %s(0x%lx) <= 0x%lx", (unsigned long)decoder->pm4.cur_word++, name, (unsigned long)decoder->pm4.next_write_mem.addr_lo, (unsigned long)ib);
 
+   // strip off IP name
+   name = strstr(name, ".");
+   if (name[0] == '.') ++name;
+
// detect VCN/UVD IBs and chain them once all
// 4 pieces of information are found
if (!strcmp(name, "mmUVD_LMI_RBC_IB_VMID")) {
-- 
2.12.0
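A side note on the strip: it assumes every name returned by umr_reg_name() contains a '.', so if one ever doesn't, strstr() returns NULL and the name[0] dereference crashes. A defensive sketch of the same idea (an illustrative helper, not what the patch applies):

```c
#include <string.h>

/* Return the register name with any leading "ipname." prefix stripped,
 * or the name unchanged when it carries no prefix.  strchr() is used in
 * place of the patch's strstr(name, "."), and the no-dot case falls
 * back safely instead of dereferencing NULL. */
static const char *strip_ip_prefix(const char *name)
{
	const char *dot = strchr(name, '.');

	return dot ? dot + 1 : name;
}
```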



RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Russell, Kent
Moving the VBIOS version to sysfs has been decided. I didn't get any 
confirmation regarding moving the GPU power usage, or the actual VBIOS dump; 
the dump seems to make more sense in debugfs. I'd like to move power usage to 
sysfs because the SMI tool currently needs root privileges just to read the 
value, which makes people cranky. I just want to make sure with the 
powers-that-be that it's alright to do so.

 Kent
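For reference, a sysfs power read needs no root and parses trivially. This sketch assumes a plain integer attribute in microwatts (hwmon style); both the unit and the path layout are assumptions about an eventual sysfs file, not a description of the current debugfs interface:

```c
#include <stdio.h>

/* Read an integer power value (assumed microwatts, hwmon style) from a
 * sysfs attribute.  Returns 0 on success and stores the raw value in
 * *uw, -1 if the file is missing or unparsable.  The attribute layout
 * is hypothetical; today's interface lives in debugfs. */
static int read_power_uw(const char *path, long *uw)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (fscanf(f, "%ld", uw) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}
```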

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Tom 
St Denis
Sent: Friday, August 25, 2017 8:37 AM
To: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

Hi Kent,

Loosely following this thread.  Was there a decision on whether to leave this 
in debugfs or sysfs?

I'd like to throw the version string in umr's config output (umr -c) :-)

Cheers,
Tom

On 24/08/17 05:30 PM, Russell, Kent wrote:
> The plan is for the vbios version to be available through the ROCM-SMI 
> utility. We have the GPU power usage listed in the debugfs currently. 
> If they are both to be used for a userspace utility, should we be 
> moving both of those to sysfs instead?
> 
> Kent
> 
> KENT RUSSELL
> Software Engineer | Vertical Workstation/Compute
> 1 Commerce Valley Drive East
> Markham, ON L3T 7X6
> Canada
> O +(1) 289-695-2122   | Ext x72122
> 
> 
> --
> --
> *From:* Christian König 
> *Sent:* Thursday, August 24, 2017 2:10:35 PM
> *To:* Russell, Kent; Alex Deucher; Kuehling, Felix
> *Cc:* amd-gfx@lists.freedesktop.org
> *Subject:* Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS Actually 
> the main difference is not root vs. user, but rather unstable vs 
> stable interface.
> 
> If you want a stable interface for an userspace tool you should use 
> sysfs, if you don't care about that you should use debugfs.
> 
> Christian.
> 
> Am 24.08.2017 um 18:37 schrieb Russell, Kent:
>> We already access the GPU Power usage via debugfs, which is why I didn't 
>> think it was a huge deal to switch it over, since we already need root 
>> access for the SMI due to that.
>>
>>   Kent
>>
>> -Original Message-
>> From: Alex Deucher [mailto:alexdeuc...@gmail.com]
>> Sent: Thursday, August 24, 2017 11:39 AM
>> To: Kuehling, Felix
>> Cc: Russell, Kent; Christian König; amd-gfx@lists.freedesktop.org
>> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>
>> On Thu, Aug 24, 2017 at 11:35 AM, Kuehling, Felix  
>> wrote:
>>> Debugfs is only accessible by Root. The BIOS version is already reported in 
>>> the kernel log, which is visible to everyone. So why hide this away in 
>>> debugfs?
>>>
>>> I think Kent's intention is to add VBIOS version reporting to the rocm-smi 
>>> tool. I'd prefer using a stable interface like sysfs, and one that doesn't 
>>> require root access in a tool like rocm-smi.
>>>
>> Ah, ok, sysfs is fine in that case.  I thought this was just general 
>> debugging stuff.
>>
>> Alex
>>
>>> Regards,
>>>Felix
>>> 
>>> From: amd-gfx  on behalf of 
>>> Russell, Kent 
>>> Sent: Thursday, August 24, 2017 9:06:22 AM
>>> To: Alex Deucher
>>> Cc: Christian König; amd-gfx@lists.freedesktop.org
>>> Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>>
>>> I can definitely add that as well.
>>>
>>>   Kent
>>>
>>> -Original Message-
>>> From: Alex Deucher [mailto:alexdeuc...@gmail.com]
>>> Sent: Thursday, August 24, 2017 8:56 AM
>>> To: Russell, Kent
>>> Cc: Christian König; amd-gfx@lists.freedesktop.org
>>> Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
>>>
>>> On Thu, Aug 24, 2017 at 5:58 AM, Russell, Kent  wrote:
 No real reason for sysfs instead of debugfs, I just picked the one that I 
 was more familiar with. I can definitely move it to debugfs instead. I 
 will also clean up the commit message per Michel's comments. Thank you!

>>> While you are at it, can you expose the vbios binary itself via debugfs?  
>>> That's been on my todo list for a while.
>>>
>>> Alex
>>>
   Kent

 -Original Message-
 From: Christian König [mailto:deathsim...@vodafone.de]
 Sent: Thursday, August 24, 2017 2:22 AM
 To: Russell, Kent; amd-gfx@lists.freedesktop.org
 Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

 Am 23.08.2017 um 20:12 schrieb Kent Russell:
> This won't change after initialization, so we can add the 
> information when we parse the atombios information. This ensures 
> that we can find out the VBIOS, even when the dmesg buffer fills 
> up, and makes it easier to associate which VBIOS is for which GPU 
> on mGPU configurations. Set the size to 20 characters in case of 
> some weird VBIOS suffix that exceeds the expected 17 character 
> format
> (3-8-3\0)

Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

2017-08-25 Thread Tom St Denis

Hi Kent,

Loosely following this thread.  Was there a decision on whether to leave 
this in debugfs or sysfs?


I'd like to throw the version string in umr's config output (umr -c) :-)

Cheers,
Tom

On 24/08/17 05:30 PM, Russell, Kent wrote:
The plan is for the vbios version to be available through the ROCM-SMI 
utility. We have the GPU power usage listed in the debugfs currently. If 
they are both to be used for a userspace utility, should we be moving 
both of those to sysfs instead?


Kent

KENT RUSSELL
Software Engineer | Vertical Workstation/Compute
1 Commerce Valley Drive East
Markham, ON L3T 7X6
Canada
O +(1) 289-695-2122   | Ext x72122



*From:* Christian König 
*Sent:* Thursday, August 24, 2017 2:10:35 PM
*To:* Russell, Kent; Alex Deucher; Kuehling, Felix
*Cc:* amd-gfx@lists.freedesktop.org
*Subject:* Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS
Actually the main difference is not root vs. user, but rather unstable
vs stable interface.

If you want a stable interface for an userspace tool you should use
sysfs, if you don't care about that you should use debugfs.

Christian.

Am 24.08.2017 um 18:37 schrieb Russell, Kent:

We already access the GPU Power usage via debugfs, which is why I didn't think 
it was a huge deal to switch it over, since we already need root access for the 
SMI due to that.

  Kent

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com]
Sent: Thursday, August 24, 2017 11:39 AM
To: Kuehling, Felix
Cc: Russell, Kent; Christian König; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

On Thu, Aug 24, 2017 at 11:35 AM, Kuehling, Felix  
wrote:

Debugfs is only accessible by Root. The BIOS version is already reported in the 
kernel log, which is visible to everyone. So why hide this away in debugfs?

I think Kent's intention is to add VBIOS version reporting to the rocm-smi 
tool. I'd prefer using a stable interface like sysfs, and one that doesn't 
require root access in a tool like rocm-smi.


Ah, ok, sysfs is fine in that case.  I thought this was just general debugging 
stuff.

Alex


Regards,
   Felix

From: amd-gfx  on behalf of
Russell, Kent 
Sent: Thursday, August 24, 2017 9:06:22 AM
To: Alex Deucher
Cc: Christian König; amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

I can definitely add that as well.

  Kent

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com]
Sent: Thursday, August 24, 2017 8:56 AM
To: Russell, Kent
Cc: Christian König; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

On Thu, Aug 24, 2017 at 5:58 AM, Russell, Kent  wrote:

No real reason for sysfs instead of debugfs, I just picked the one that I was 
more familiar with. I can definitely move it to debugfs instead. I will also 
clean up the commit message per Michel's comments. Thank you!


While you are at it, can you expose the vbios binary itself via debugfs?  
That's been on my todo list for a while.

Alex


  Kent

-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Thursday, August 24, 2017 2:22 AM
To: Russell, Kent; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add sysfs file for VBIOS

Am 23.08.2017 um 20:12 schrieb Kent Russell:

This won't change after initialization, so we can add the
information when we parse the atombios information. This ensures
that we can find out the VBIOS, even when the dmesg buffer fills up,
and makes it easier to associate which VBIOS is for which GPU on
mGPU configurations. Set the size to 20 characters in case of some
weird VBIOS suffix that exceeds the expected 17 character format
(3-8-3\0)

Is there any reason that needs to be sysfs? Sounds more like an use case for 
debugfs.

Christian.


Signed-off-by: Kent Russell 
---
   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 23 +++
   drivers/gpu/drm/amd/amdgpu/atom.c  |  5 -
   drivers/gpu/drm/amd/amdgpu/atom.h  |  1 +
   3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a1f9424..f40be71 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -888,6 +888,20 @@ static uint32_t cail_ioreg_read(struct card_info *info, 
uint32_t reg)
   return r;
   }

+static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
+  struct device_attribute *attr,
+  char *buf) {
+ struct drm_device *ddev = dev_get_drvdata(dev);
+ struct amdgpu_device *adev = 

RE: [PATCH] drm/amd/powerplay: delete pp dead code on raven

2017-08-25 Thread Zhang, Hawking
Reviewed-by: Hawking Zhang 

Regards,
Hawking
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Rex 
Zhu
Sent: Friday, August 25, 2017 18:18
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex 
Subject: [PATCH] drm/amd/powerplay: delete pp dead code on raven

Change-Id: Ib2a3cd06c540a90eb33fc9e4ce0f3122c5f2c0d3
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 66 --
 1 file changed, 66 deletions(-)

[diff body snipped; it is quoted verbatim from the patch posting below]



[PATCH] drm/amd/powerplay: delete pp dead code on raven

2017-08-25 Thread Rex Zhu
Change-Id: Ib2a3cd06c540a90eb33fc9e4ce0f3122c5f2c0d3
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 66 --
 1 file changed, 66 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
index a5fa546..ebbee5f 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
@@ -49,29 +49,6 @@
 int rv_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
struct pp_display_clock_request *clock_req);
 
-struct phm_vq_budgeting_record rv_vqtable[] = {
-/* CUs, SSP low, SSP High, Display Configuration, AWD/non-AWD,
- * Sustainable GFXCLK, Sustainable FCLK, Sustainable CUs,
- * unused, unused, unused
- */
-   { 11, 30, 60, VQ_DisplayConfig_NoneAWD,  8, 16, 11, 0, 0, 0 },
-   { 11, 30, 60, VQ_DisplayConfig_AWD,  8, 16, 11, 0, 0, 0 },
-
-   {  8, 30, 60, VQ_DisplayConfig_NoneAWD, 10, 16,  8, 0, 0, 0 },
-   {  8, 30, 60, VQ_DisplayConfig_AWD, 10, 16,  8, 0, 0, 0 },
-
-   { 10, 12, 30, VQ_DisplayConfig_NoneAWD,  4, 12, 10, 0, 0, 0 },
-   { 10, 12, 30, VQ_DisplayConfig_AWD,  4, 12, 10, 0, 0, 0 },
-
-   {  8, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  8, 0, 0, 0 },
-   {  8, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  8, 0, 0, 0 },
-
-   {  6, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  6, 0, 0, 0 },
-   {  6, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  6, 0, 0, 0 },
-
-   {  3, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  3, 0, 0, 0 },
-   {  3, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  3, 0, 0, 0 },
-};
 
 static struct rv_power_state *cast_rv_ps(struct pp_hw_power_state *hw_ps)
 {
@@ -90,42 +67,6 @@ static const struct rv_power_state *cast_const_rv_ps(
return (struct rv_power_state *)hw_ps;
 }
 
-static int rv_init_vq_budget_table(struct pp_hwmgr *hwmgr)
-{
-   uint32_t table_size, i;
-   struct phm_vq_budgeting_table *ptable;
-   uint32_t num_entries = ARRAY_SIZE(rv_vqtable);
-
-   if (hwmgr->dyn_state.vq_budgeting_table != NULL)
-   return 0;
-
-   table_size = sizeof(struct phm_vq_budgeting_table) +
-   sizeof(struct phm_vq_budgeting_record) * (num_entries - 1);
-
-   ptable = kzalloc(table_size, GFP_KERNEL);
-   if (NULL == ptable)
-   return -ENOMEM;
-
-   ptable->numEntries = (uint8_t) num_entries;
-
-   for (i = 0; i < ptable->numEntries; i++) {
-   ptable->entries[i].ulCUs = rv_vqtable[i].ulCUs;
-   ptable->entries[i].ulSustainableSOCPowerLimitLow = rv_vqtable[i].ulSustainableSOCPowerLimitLow;
-   ptable->entries[i].ulSustainableSOCPowerLimitHigh = rv_vqtable[i].ulSustainableSOCPowerLimitHigh;
-   ptable->entries[i].ulMinSclkLow = rv_vqtable[i].ulMinSclkLow;
-   ptable->entries[i].ulMinSclkHigh = rv_vqtable[i].ulMinSclkHigh;
-   ptable->entries[i].ucDispConfig = rv_vqtable[i].ucDispConfig;
-   ptable->entries[i].ulDClk = rv_vqtable[i].ulDClk;
-   ptable->entries[i].ulEClk = rv_vqtable[i].ulEClk;
-   ptable->entries[i].ulSustainableSclk = rv_vqtable[i].ulSustainableSclk;
-   ptable->entries[i].ulSustainableCUs = rv_vqtable[i].ulSustainableCUs;
-   }
-
-   hwmgr->dyn_state.vq_budgeting_table = ptable;
-
-   return 0;
-}
-
 static int rv_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
 {
struct rv_hwmgr *rv_hwmgr = (struct rv_hwmgr *)(hwmgr->backend);
@@ -589,8 +530,6 @@ static int rv_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
 
hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
 
-   rv_init_vq_budget_table(hwmgr);
-
return result;
 }
 
@@ -635,11 +574,6 @@ static int rv_hwmgr_backend_fini(struct pp_hwmgr *hwmgr)
hwmgr->dyn_state.vddc_dep_on_dal_pwrl = NULL;
}
 
-   if (NULL != hwmgr->dyn_state.vq_budgeting_table) {
-   kfree(hwmgr->dyn_state.vq_budgeting_table);
-   hwmgr->dyn_state.vq_budgeting_table = NULL;
-   }
-
kfree(hwmgr->backend);
hwmgr->backend = NULL;
 
-- 
1.9.1



Re: [PATCH 9/9] drm/amdgpu: WIP add IOCTL interface for per VM BOs

2017-08-25 Thread zhoucm1



On 2017年08月25日 17:38, Christian König wrote:

From: Christian König 

Add the IOCTL interface so that applications can allocate per VM BOs.

Still WIP since not all corner cases are tested yet, but this reduces average
CS overhead for 10K BOs from 21ms down to 48us.
Wow, cheers, you finally get the per-VM BOs into the same reservation 
object as the PD/PTs, which indeed saves a lot of BO list overhead.


Overall looks good; I will take a detailed look at this tomorrow.

Regards,
David Zhou


Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  7 ++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   | 59 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  3 +-
  include/uapi/drm/amdgpu_drm.h |  2 ++
  5 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b1e817c..21cab36 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -457,9 +457,10 @@ struct amdgpu_sa_bo {
   */
  void amdgpu_gem_force_release(struct amdgpu_device *adev);
  int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
-   int alignment, u32 initial_domain,
-   u64 flags, bool kernel,
-   struct drm_gem_object **obj);
+int alignment, u32 initial_domain,
+u64 flags, bool kernel,
+struct reservation_object *resv,
+struct drm_gem_object **obj);
  
  int amdgpu_mode_dumb_create(struct drm_file *file_priv,

struct drm_device *dev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 0e907ea..7256f83 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -144,7 +144,7 @@ static int amdgpufb_create_pinned_object(struct 
amdgpu_fbdev *rfbdev,
   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
   AMDGPU_GEM_CREATE_VRAM_CLEARED,
-  true, &gobj);
+  true, NULL, &gobj);
if (ret) {
pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index d028806..b8e8d67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -44,11 +44,12 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
  }
  
  int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,

-   int alignment, u32 initial_domain,
-   u64 flags, bool kernel,
-   struct drm_gem_object **obj)
+int alignment, u32 initial_domain,
+u64 flags, bool kernel,
+struct reservation_object *resv,
+struct drm_gem_object **obj)
  {
-   struct amdgpu_bo *robj;
+   struct amdgpu_bo *bo;
int r;
  
  	*obj = NULL;

@@ -59,7 +60,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
  
  retry:

r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
-flags, NULL, NULL, 0, &robj);
+flags, NULL, resv, 0, &bo);
if (r) {
if (r != -ERESTARTSYS) {
if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
@@ -71,7 +72,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, 
unsigned long size,
}
return r;
}
-   *obj = &robj->gem_base;
+   *obj = &bo->gem_base;
  
  	return 0;

  }
@@ -136,13 +137,14 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
struct amdgpu_vm *vm = &fpriv->vm;
  
  	struct amdgpu_bo_list_entry vm_pd;

-   struct list_head list;
+   struct list_head list, duplicates;
struct ttm_validate_buffer tv;
struct ww_acquire_ctx ticket;
struct amdgpu_bo_va *bo_va;
int r;
  
   INIT_LIST_HEAD(&list);

+   INIT_LIST_HEAD(&duplicates);
  
tv.bo = &bo->tbo;

tv.shared = true;
@@ -150,7 +152,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
  
   amdgpu_vm_get_pd_bo(vm, &list, &vm_pd);
  
-	r = ttm_eu_reserve_buffers(&ticket, &list, false, NULL);

+   r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates);
if (r) {
dev_err(adev->dev, "leaking bo va because "
"we fail to reserve bo (%d)\n", r);
@@ -185,9 +187,12 @@ int 

[PATCH 2/4] drm/amd/powerplay: Remove obsolete code of reduced refresh rate featur

2017-08-25 Thread Rex Zhu
This feature was never supported on Linux and is obsolete.

Change-Id: I7434e9370e4a29489bff7feb1421e028710fbe14
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/include/pptable.h |  6 --
 drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c | 18 --
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c  |  1 -
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c|  1 -
 drivers/gpu/drm/amd/powerplay/inc/power_state.h   |  4 
 5 files changed, 30 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/pptable.h 
b/drivers/gpu/drm/amd/include/pptable.h
index 0b6a057..1dda72a 100644
--- a/drivers/gpu/drm/amd/include/pptable.h
+++ b/drivers/gpu/drm/amd/include/pptable.h
@@ -285,12 +285,6 @@
 #define ATOM_PPLIB_PCIE_LINK_WIDTH_MASK0x00F8
 #define ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT   3
 
-// lookup into reduced refresh-rate table
-#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK  0x0F00
-#define ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT 8
-
-#define ATOM_PPLIB_LIMITED_REFRESHRATE_UNLIMITED0
-#define ATOM_PPLIB_LIMITED_REFRESHRATE_50HZ 1
 // 2-15 TBD as needed.
 
 #define ATOM_PPLIB_SOFTWARE_DISABLE_LOADBALANCING0x1000
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
index 707809b..f974832 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
@@ -680,7 +680,6 @@ static int init_non_clock_fields(struct pp_hwmgr *hwmgr,
uint8_t version,
 const ATOM_PPLIB_NONCLOCK_INFO *pnon_clock_info)
 {
-   unsigned long rrr_index;
unsigned long tmp;
 
ps->classification.ui_label = 
(le16_to_cpu(pnon_clock_info->usClassification) &
@@ -709,23 +708,6 @@ static int init_non_clock_fields(struct pp_hwmgr *hwmgr,
 
ps->display.disableFrameModulation = false;
 
-   rrr_index = (le32_to_cpu(pnon_clock_info->ulCapsAndSettings) &
-   ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK) >>
-   ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT;
-
-   if (rrr_index != ATOM_PPLIB_LIMITED_REFRESHRATE_UNLIMITED) {
-   static const uint8_t look_up[(ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_MASK >> ATOM_PPLIB_LIMITED_REFRESHRATE_VALUE_SHIFT) + 1] = \
-   { 0, 50, 0 };
-
-   ps->display.refreshrateSource = PP_RefreshrateSource_Explicit;
-   ps->display.explicitRefreshrate = look_up[rrr_index];
-   ps->display.limitRefreshrate = true;
-
-   if (ps->display.explicitRefreshrate == 0)
-   ps->display.limitRefreshrate = false;
-   } else
-   ps->display.limitRefreshrate = false;
-
tmp = le32_to_cpu(pnon_clock_info->ulCapsAndSettings) &
ATOM_PPLIB_ENABLE_VARIBRIGHT;
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 736f193..27bd1a0 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -2990,7 +2990,6 @@ static int 
smu7_get_pp_table_entry_callback_func_v1(struct pp_hwmgr *hwmgr,
power_state->pcie.lanes = 0;
 
power_state->display.disableFrameModulation = false;
-   power_state->display.limitRefreshrate = false;
power_state->display.enableVariBright =
(0 != (le32_to_cpu(state_entry->ulCapsAndSettings) &
ATOM_Tonga_ENABLE_VARIBRIGHT));
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index 29e44c3..f20758f 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -3021,7 +3021,6 @@ static int vega10_get_pp_table_entry_callback_func(struct 
pp_hwmgr *hwmgr,
ATOM_Vega10_DISALLOW_ON_DC) != 0);
 
power_state->display.disableFrameModulation = false;
-   power_state->display.limitRefreshrate = false;
power_state->display.enableVariBright =
((le32_to_cpu(state_entry->ulCapsAndSettings) &
ATOM_Vega10_ENABLE_VARIBRIGHT) != 0);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/power_state.h 
b/drivers/gpu/drm/amd/powerplay/inc/power_state.h
index 827860f..44069f7 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/power_state.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/power_state.h
@@ -98,10 +98,6 @@ enum PP_RefreshrateSource {
 
 struct PP_StateDisplayBlock {
bool  disableFrameModulation;
-   bool  limitRefreshrate;
-   enum PP_RefreshrateSource refreshrateSource;
-   int

[PATCH] drm/amd/amdgpu: fix build warning

2017-08-25 Thread Roger He
Change-Id: I335123d3f77b11adc65b29463e12f825d19ca382
Signed-off-by: Roger He 
---
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
index a7351ba..6c8040e 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
@@ -124,7 +124,7 @@ static void gfxhub_v1_0_init_tlb_regs(struct amdgpu_device 
*adev)
 
 static void gfxhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
 {
-   uint32_t tmp, field;
+   uint32_t tmp;
 
/* Setup L2 cache */
tmp = RREG32_SOC15(GC, 0, mmVM_L2_CNTL);
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
index 2a6fa73..74cb647 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
@@ -138,7 +138,7 @@ static void mmhub_v1_0_init_tlb_regs(struct amdgpu_device 
*adev)
 
 static void mmhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
 {
-   uint32_t tmp, field;
+   uint32_t tmp;
 
/* Setup L2 cache */
tmp = RREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL);
-- 
2.7.4



Re: [PATCH] drm/amd/amdgpu: fix build warning

2017-08-25 Thread Christian König

Am 25.08.2017 um 11:45 schrieb Roger He:

Change-Id: I335123d3f77b11adc65b29463e12f825d19ca382
Signed-off-by: Roger He 


Reviewed-by: Christian König 






[PATCH 3/4] drm/amd/powerplay: refine pp code for raven

2017-08-25 Thread Rex Zhu
Delete useless code.

Signed-off-by: Rex Zhu 

Conflicts:
drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c

Change-Id: I403457762ee4cadb357f69ed0846adfa55a9dbea
---
 drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 111 +
 1 file changed, 37 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
index 4c7f430..5bbefdd 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
@@ -38,19 +38,39 @@
 #include "pp_soc15.h"
 
 #define RAVEN_MAX_DEEPSLEEP_DIVIDER_ID 5
-#define RAVEN_MINIMUM_ENGINE_CLOCK 800   //8Mhz, the low boundary of engine clock allowed on this chip
+#define RAVEN_MINIMUM_ENGINE_CLOCK 800   /* 8Mhz, the low boundary of engine clock allowed on this chip */
 #define SCLK_MIN_DIV_INTV_SHIFT 12
-#define RAVEN_DISPCLK_BYPASS_THRESHOLD 1 //100mhz
+#define RAVEN_DISPCLK_BYPASS_THRESHOLD 1 /* 100Mhz */
 #define SMC_RAM_END 0x4
 
 static const unsigned long PhwRaven_Magic = (unsigned long) PHM_Rv_Magic;
+
+
 int rv_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
struct pp_display_clock_request *clock_req);
 
 struct phm_vq_budgeting_record rv_vqtable[] = {
-	/* _TBD
-	 * CUs, SSP low, SSP High, Min Sclk Low, Min Sclk, High, AWD/non-AWD, DCLK, ECLK, Sustainable Sclk, Sustainable CUs */
-   { 8, 0, 45, 0, 0, VQ_DisplayConfig_NoneAWD, 8, 12, 4, 0 },
+/* CUs, SSP low, SSP High, Display Configuration, AWD/non-AWD,
+ * Sustainable GFXCLK, Sustainable FCLK, Sustainable CUs,
+ * unused, unused, unused
+ */
+   { 11, 30, 60, VQ_DisplayConfig_NoneAWD,  8, 16, 11, 0, 0, 0 },
+   { 11, 30, 60, VQ_DisplayConfig_AWD,  8, 16, 11, 0, 0, 0 },
+
+   {  8, 30, 60, VQ_DisplayConfig_NoneAWD, 10, 16,  8, 0, 0, 0 },
+   {  8, 30, 60, VQ_DisplayConfig_AWD, 10, 16,  8, 0, 0, 0 },
+
+   { 10, 12, 30, VQ_DisplayConfig_NoneAWD,  4, 12, 10, 0, 0, 0 },
+   { 10, 12, 30, VQ_DisplayConfig_AWD,  4, 12, 10, 0, 0, 0 },
+
+   {  8, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  8, 0, 0, 0 },
+   {  8, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  8, 0, 0, 0 },
+
+   {  6, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  6, 0, 0, 0 },
+   {  6, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  6, 0, 0, 0 },
+
+   {  3, 12, 30, VQ_DisplayConfig_NoneAWD,  45000, 12,  3, 0, 0, 0 },
+   {  3, 12, 30, VQ_DisplayConfig_AWD,  45000, 12,  3, 0, 0, 0 },
 };
 
 static struct rv_power_state *cast_rv_ps(struct pp_hw_power_state *hw_ps)
@@ -109,62 +129,21 @@ static int rv_init_vq_budget_table(struct pp_hwmgr *hwmgr)
 static int rv_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
 {
struct rv_hwmgr *rv_hwmgr = (struct rv_hwmgr *)(hwmgr->backend);
-   struct cgs_system_info sys_info = {0};
-   int result;
 
-   rv_hwmgr->ddi_power_gating_disabled = 0;
-   rv_hwmgr->bapm_enabled = 1;
rv_hwmgr->dce_slow_sclk_threshold = 3;
-   rv_hwmgr->disable_driver_thermal_policy = 1;
rv_hwmgr->thermal_auto_throttling_treshold = 0;
rv_hwmgr->is_nb_dpm_enabled = 1;
rv_hwmgr->dpm_flags = 1;
-   rv_hwmgr->disable_smu_acp_s3_handshake = 1;
-   rv_hwmgr->disable_notify_smu_vpu_recovery = 0;
rv_hwmgr->gfx_off_controled_by_driver = false;
 
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_DynamicM3Arbiter);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_UVDPowerGating);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_UVDDynamicPowerGating);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_VCEPowerGating);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_SamuPowerGating);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_ACP);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_SclkDeepSleep);
 
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_GFXDynamicMGPowerGating);
-
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_SclkThrottleLowNotification);
 
-   phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
-   PHM_PlatformCaps_DisableVoltageIsland);
-
phm_cap_set(hwmgr->platform_descriptor.platformCaps,
-   

[PATCH 4/4] drm/amd/powerplay: fix flicker issue when HDP enabled

2017-08-25 Thread Rex Zhu
Cherry-pick this change from Windows.
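The patch below also moves the deep-sleep DCEFCLK request into a dedicated callback that caches the last value sent to the SMU. A rough Python model of that "only message the SMU on change" pattern — the field names are taken from the diff, everything else is simplified and illustrative, not driver code:

```python
# Rough model of rv_set_deep_sleep_dcefclk's caching behaviour; the
# names come from the diff below, the rest is simplified.

messages_sent = []   # stand-in for smum_send_msg_to_smc_with_parameter

class RvHwmgr:
    def __init__(self):
        self.need_min_deep_sleep_dcefclk = True
        self.deep_sleep_dcefclk = 0   # last value handed to the SMU

def set_deep_sleep_dcefclk(data, clock):
    # Only notify the SMU when the requested clock actually changes
    if data.need_min_deep_sleep_dcefclk and data.deep_sleep_dcefclk != clock // 100:
        data.deep_sleep_dcefclk = clock // 100
        messages_sent.append(("SetMinDeepSleepDcefclk", data.deep_sleep_dcefclk))

data = RvHwmgr()
set_deep_sleep_dcefclk(data, 40000)
set_deep_sleep_dcefclk(data, 40000)   # same value: no redundant message
set_deep_sleep_dcefclk(data, 60000)
print(len(messages_sent))             # 2
```

The point of the cache is visible in the second call: a repeated request never reaches the SMU.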

Change-Id: If6eb8b096275e32a94e9ed1568fc1466b2417ecd
Signed-off-by: Rex Zhu 

Conflicts:
drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
---
 .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  | 14 
 drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 42 --
 drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.h |  4 ++-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h  |  2 ++
 4 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
index fcc722e..967f50f 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
@@ -323,6 +323,9 @@ int phm_check_states_equal(struct pp_hwmgr *hwmgr,
 int phm_store_dal_configuration_data(struct pp_hwmgr *hwmgr,
const struct amd_pp_display_configuration *display_config)
 {
+   int index = 0;
+   int number_of_active_display = 0;
+
PHM_FUNC_CHECK(hwmgr);
 
if (display_config == NULL)
@@ -330,6 +333,17 @@ int phm_store_dal_configuration_data(struct pp_hwmgr *hwmgr,
 
hwmgr->display_config = *display_config;
 
+	if (NULL != hwmgr->hwmgr_func->set_deep_sleep_dcefclk)
+		hwmgr->hwmgr_func->set_deep_sleep_dcefclk(hwmgr, hwmgr->display_config.min_dcef_deep_sleep_set_clk);
+
+	for (index = 0; index < hwmgr->display_config.num_path_including_non_display; index++) {
+		if (hwmgr->display_config.displays[index].controller_id != 0)
+			number_of_active_display++;
+	}
+
+	if (NULL != hwmgr->hwmgr_func->set_active_display_count)
+		hwmgr->hwmgr_func->set_active_display_count(hwmgr, number_of_active_display);
+
if (hwmgr->hwmgr_func->store_cc6_data == NULL)
return -EINVAL;
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
index 5bbefdd..a5fa546 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
@@ -135,6 +135,9 @@ static int rv_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
rv_hwmgr->is_nb_dpm_enabled = 1;
rv_hwmgr->dpm_flags = 1;
rv_hwmgr->gfx_off_controled_by_driver = false;
+   rv_hwmgr->need_min_deep_sleep_dcefclk = true;
+   rv_hwmgr->num_active_display = 0;
+   rv_hwmgr->deep_sleep_dcefclk = 0;
 
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_SclkDeepSleep);
@@ -225,17 +228,9 @@ static int rv_tf_set_clock_limit(struct pp_hwmgr *hwmgr, void *input,
clock_req.clock_type = amd_pp_dcf_clock;
clock_req.clock_freq_in_khz = clocks.dcefClock * 10;
 
-   if (clocks.dcefClock == 0 && clocks.dcefClockInSR == 0)
-   clock_req.clock_freq_in_khz = rv_data->dcf_actual_hard_min_freq;
-
	PP_ASSERT_WITH_CODE(!rv_display_clock_voltage_request(hwmgr, &clock_req),
				"Attempt to set DCF Clock Failed!", return -EINVAL);
 
-   if(rv_data->need_min_deep_sleep_dcefclk && 0 != clocks.dcefClockInSR)
-   smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
-   PPSMC_MSG_SetMinDeepSleepDcefclk,
-   clocks.dcefClockInSR / 100);
-
	if((hwmgr->gfx_arbiter.sclk_hard_min != 0) &&
		((hwmgr->gfx_arbiter.sclk_hard_min / 100) != rv_data->soc_actual_hard_min_freq)) {
smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
@@ -263,26 +258,35 @@ static int rv_tf_set_clock_limit(struct pp_hwmgr *hwmgr, void *input,
return 0;
 }
 
-
-static int rv_tf_set_num_active_display(struct pp_hwmgr *hwmgr, void *input,
-   void *output, void *storage, int result)
+static int rv_set_deep_sleep_dcefclk(struct pp_hwmgr *hwmgr, uint32_t clock)
 {
-   uint32_t  num_of_active_displays = 0;
-   struct cgs_display_info info = {0};
+   struct rv_hwmgr *rv_data = (struct rv_hwmgr *)(hwmgr->backend);
 
-	cgs_get_active_displays_info(hwmgr->device, &info);
-   num_of_active_displays = info.display_count;
+	if (rv_data->need_min_deep_sleep_dcefclk && rv_data->deep_sleep_dcefclk != clock/100) {
+		rv_data->deep_sleep_dcefclk = clock/100;
+   smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+   PPSMC_MSG_SetMinDeepSleepDcefclk,
+   rv_data->deep_sleep_dcefclk);
+   }
+   return 0;
+}
+
+static int rv_set_active_display_count(struct pp_hwmgr *hwmgr, uint32_t count)
+{
+   struct rv_hwmgr *rv_data = (struct rv_hwmgr *)(hwmgr->backend);
 
-   smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+   if (rv_data->num_active_display != 

[PATCH 1/4] drm/amd/powerplay: add dummy pp table for raven.

2017-08-25 Thread Rex Zhu
Change-Id: I235d31017ebc2801e57a60e7e6293172dbf1c7d7
Signed-off-by: Rex Zhu 
---
 .../gpu/drm/amd/powerplay/hwmgr/processpptables.c  | 62 +-
 1 file changed, 50 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
index 2716721..707809b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
@@ -678,7 +678,8 @@ static PP_StateClassificationFlags make_classification_flags(
 static int init_non_clock_fields(struct pp_hwmgr *hwmgr,
struct pp_power_state *ps,
uint8_t version,
-const ATOM_PPLIB_NONCLOCK_INFO *pnon_clock_info) {
+const ATOM_PPLIB_NONCLOCK_INFO *pnon_clock_info)
+{
unsigned long rrr_index;
unsigned long tmp;
 
@@ -790,6 +791,39 @@ static const ATOM_PPLIB_STATE_V2 *get_state_entry_v2(
return pstate;
 }
 
+static unsigned char soft_dummy_pp_table[] = {
+   0xe1, 0x01, 0x06, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x42, 0x00, 0x4a, 
0x00, 0x6c, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x42, 0x00, 0x02, 0x00, 0x00, 0x00, 0x13, 0x00, 0x00, 
0x80, 0x00, 0x00, 0x00,
+   0x00, 0x4e, 0x00, 0x88, 0x00, 0x00, 0x9e, 0x00, 0x17, 0x00, 0x00, 0x00, 
0x9e, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x02, 0x02, 0x00, 0x00, 0x01, 0x01, 0x01, 0x00, 0x08, 0x04, 
0x00, 0x00, 0x00, 0x00,
+   0x07, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 
0x01, 0x00, 0x00, 0x00,
+   0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 
0x02, 0x18, 0x05, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x1a, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xe1, 0x00, 0x43, 0x01, 
0x00, 0x00, 0x00, 0x00,
+   0x8e, 0x01, 0x00, 0x00, 0xb8, 0x01, 0x00, 0x00, 0x08, 0x30, 0x75, 0x00, 
0x80, 0x00, 0xa0, 0x8c,
+   0x00, 0x7e, 0x00, 0x71, 0xa5, 0x00, 0x7c, 0x00, 0xe5, 0xc8, 0x00, 0x70, 
0x00, 0x91, 0xf4, 0x00,
+   0x64, 0x00, 0x40, 0x19, 0x01, 0x5a, 0x00, 0x0e, 0x28, 0x01, 0x52, 0x00, 
0x80, 0x38, 0x01, 0x4a,
+   0x00, 0x00, 0x09, 0x30, 0x75, 0x00, 0x30, 0x75, 0x00, 0x40, 0x9c, 0x00, 
0x40, 0x9c, 0x00, 0x59,
+   0xd8, 0x00, 0x59, 0xd8, 0x00, 0x91, 0xf4, 0x00, 0x91, 0xf4, 0x00, 0x0e, 
0x28, 0x01, 0x0e, 0x28,
+   0x01, 0x90, 0x5f, 0x01, 0x90, 0x5f, 0x01, 0x00, 0x77, 0x01, 0x00, 0x77, 
0x01, 0xca, 0x91, 0x01,
+   0xca, 0x91, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x80, 0x00, 
0x00, 0x7e, 0x00, 0x01,
+   0x7c, 0x00, 0x02, 0x70, 0x00, 0x03, 0x64, 0x00, 0x04, 0x5a, 0x00, 0x05, 
0x52, 0x00, 0x06, 0x4a,
+   0x00, 0x07, 0x08, 0x08, 0x00, 0x08, 0x00, 0x01, 0x02, 0x02, 0x02, 0x01, 
0x02, 0x02, 0x02, 0x03,
+   0x02, 0x04, 0x02, 0x00, 0x08, 0x40, 0x9c, 0x00, 0x30, 0x75, 0x00, 0x74, 
0xb5, 0x00, 0xa0, 0x8c,
+   0x00, 0x60, 0xea, 0x00, 0x74, 0xb5, 0x00, 0x0e, 0x28, 0x01, 0x60, 0xea, 
0x00, 0x90, 0x5f, 0x01,
+   0x40, 0x19, 0x01, 0xb2, 0xb0, 0x01, 0x90, 0x5f, 0x01, 0xc0, 0xd4, 0x01, 
0x00, 0x77, 0x01, 0x5e,
+   0xff, 0x01, 0xca, 0x91, 0x01, 0x08, 0x80, 0x00, 0x00, 0x7e, 0x00, 0x01, 
0x7c, 0x00, 0x02, 0x70,
+   0x00, 0x03, 0x64, 0x00, 0x04, 0x5a, 0x00, 0x05, 0x52, 0x00, 0x06, 0x4a, 
0x00, 0x07, 0x00, 0x08,
+   0x80, 0x00, 0x30, 0x75, 0x00, 0x7e, 0x00, 0x40, 0x9c, 0x00, 0x7c, 0x00, 
0x59, 0xd8, 0x00, 0x70,
+   0x00, 0xdc, 0x0b, 0x01, 0x64, 0x00, 0x80, 0x38, 0x01, 0x5a, 0x00, 0x80, 
0x38, 0x01, 0x52, 0x00,
+   0x80, 0x38, 0x01, 0x4a, 0x00, 0x80, 0x38, 0x01, 0x08, 0x30, 0x75, 0x00, 
0x80, 0x00, 0xa0, 0x8c,
+   0x00, 0x7e, 0x00, 0x71, 0xa5, 0x00, 0x7c, 0x00, 0xe5, 0xc8, 0x00, 0x74, 
0x00, 0x91, 0xf4, 0x00,
+   0x66, 0x00, 0x40, 0x19, 0x01, 0x58, 0x00, 0x0e, 0x28, 0x01, 0x52, 0x00, 
0x80, 0x38, 0x01, 0x4a,
+   0x00
+};
 
 static const ATOM_PPLIB_POWERPLAYTABLE *get_powerplay_table(
 struct pp_hwmgr *hwmgr)
@@ -799,12 +833,17 @@ static const ATOM_PPLIB_POWERPLAYTABLE *get_powerplay_table(
uint16_t size;
 
if (!table_addr) {
-   table_addr = cgs_atom_get_data_table(hwmgr->device,
-   GetIndexIntoMasterTable(DATA, PowerPlayInfo),
-				&size, &frev, &crev);
-
-   hwmgr->soft_pp_table = table_addr;
-   hwmgr->soft_pp_table_size = size;
+   if (hwmgr->chip_id == CHIP_RAVEN) {
+			table_addr = &soft_dummy_pp_table[0];
+  

[PATCH 9/9] drm/amdgpu: WIP add IOCTL interface for per VM BOs

2017-08-25 Thread Christian König
From: Christian König 

Add the IOCTL interface so that applications can allocate per VM BOs.

Still WIP since not all corner cases are tested yet, but this reduces average
CS overhead for 10K BOs from 21ms down to 48us.
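Not driver code, but the locking economics behind that number can be sketched with a toy model (all names here are illustrative): classic GEM BOs each carry their own reservation object, so command submission must reserve every BO individually, while per-VM BOs share the VM's reservation object and are all covered by a single reservation.

```python
# Toy model of reservation-object sharing; names are illustrative,
# not the amdgpu API.

class ReservationObject:
    def __init__(self):
        self.reserve_count = 0

    def reserve(self):
        self.reserve_count += 1


class BufferObject:
    def __init__(self, resv=None):
        # No resv given: the BO gets its own reservation object
        # (classic GEM). Otherwise it shares the caller's (per-VM BO).
        self.resv = resv if resv is not None else ReservationObject()


def command_submission(bos):
    """Reserve each distinct reservation object once; return how many
    reservations the submission needed."""
    seen = set()
    for bo in bos:
        if id(bo.resv) not in seen:
            seen.add(id(bo.resv))
            bo.resv.reserve()
    return len(seen)


classic = [BufferObject() for _ in range(10000)]
vm_resv = ReservationObject()
per_vm = [BufferObject(resv=vm_resv) for _ in range(10000)]

print(command_submission(classic))  # 10000 per-BO reservations
print(command_submission(per_vm))   # 1 shared reservation
```

The real win in the patch is the same shape: one reservation covering 10K BOs instead of 10K individual ones.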

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  7 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   | 59 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  3 +-
 include/uapi/drm/amdgpu_drm.h |  2 ++
 5 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b1e817c..21cab36 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -457,9 +457,10 @@ struct amdgpu_sa_bo {
  */
 void amdgpu_gem_force_release(struct amdgpu_device *adev);
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
-   int alignment, u32 initial_domain,
-   u64 flags, bool kernel,
-   struct drm_gem_object **obj);
+int alignment, u32 initial_domain,
+u64 flags, bool kernel,
+struct reservation_object *resv,
+struct drm_gem_object **obj);
 
 int amdgpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 0e907ea..7256f83 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -144,7 +144,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
   AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
   AMDGPU_GEM_CREATE_VRAM_CLEARED,
-  true, );
+  true, NULL, &gobj);
if (ret) {
pr_err("failed to allocate framebuffer (%d)\n", aligned_size);
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index d028806..b8e8d67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -44,11 +44,12 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
 }
 
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
-   int alignment, u32 initial_domain,
-   u64 flags, bool kernel,
-   struct drm_gem_object **obj)
+int alignment, u32 initial_domain,
+u64 flags, bool kernel,
+struct reservation_object *resv,
+struct drm_gem_object **obj)
 {
-   struct amdgpu_bo *robj;
+   struct amdgpu_bo *bo;
int r;
 
*obj = NULL;
@@ -59,7 +60,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 
 retry:
r = amdgpu_bo_create(adev, size, alignment, kernel, initial_domain,
-			     flags, NULL, NULL, 0, &robj);
+			     flags, NULL, resv, 0, &bo);
if (r) {
if (r != -ERESTARTSYS) {
if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
@@ -71,7 +72,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
}
return r;
}
-	*obj = &robj->gem_base;
+	*obj = &bo->gem_base;
 
return 0;
 }
@@ -136,13 +137,14 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
struct amdgpu_vm *vm = >vm;
 
struct amdgpu_bo_list_entry vm_pd;
-   struct list_head list;
+   struct list_head list, duplicates;
struct ttm_validate_buffer tv;
struct ww_acquire_ctx ticket;
struct amdgpu_bo_va *bo_va;
int r;
 
	INIT_LIST_HEAD(&list);
+	INIT_LIST_HEAD(&duplicates);
 
	tv.bo = &bo->tbo;
tv.shared = true;
@@ -150,7 +152,7 @@ void amdgpu_gem_object_close(struct drm_gem_object *obj,
 
	amdgpu_vm_get_pd_bo(vm, &list, &vm_pd);
 
-	r = ttm_eu_reserve_buffers(&ticket, &list, false, NULL);
+	r = ttm_eu_reserve_buffers(&ticket, &list, false, &duplicates);
if (r) {
dev_err(adev->dev, "leaking bo va because "
"we fail to reserve bo (%d)\n", r);
@@ -185,9 +187,12 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
 {
struct amdgpu_device *adev = dev->dev_private;
+   struct amdgpu_fpriv *fpriv = filp->driver_priv;
	struct amdgpu_vm *vm = &fpriv->vm;
  

[PATCH 5/9] drm/amdgpu: rework moved handling in the VM

2017-08-25 Thread Christian König
From: Christian König 

Instead of using the vm_state use a separate flag to note
that the BO was moved.
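The bookkeeping change can be illustrated with a small Python model (structures simplified, names mirroring the patch but not the real amdgpu types): instead of inferring "this BO moved" from its vm_status list membership, the BO carries an explicit `moved` flag that invalidation sets and the update path consumes.

```python
# Minimal model of the reworked bookkeeping; names are borrowed from
# the patch, the structures are simplified stand-ins.

class BoVa:
    def __init__(self):
        self.moved = False      # set by invalidate, cleared by update
        self.cleared = False
        self.valids = ["mapping0"]
        self.invalids = []

def bo_invalidate(bo_va):
    bo_va.moved = True          # the BO was evicted/moved by TTM

def bo_update(bo_va, clear):
    # Splice valids into invalids so the mappings get rewritten in the PTs
    if not clear and bo_va.moved:
        bo_va.moved = False
        bo_va.invalids = bo_va.valids + bo_va.invalids
        bo_va.valids = []
    elif bo_va.cleared != clear:
        bo_va.invalids = bo_va.valids + bo_va.invalids
        bo_va.valids = []
    processed = list(bo_va.invalids)
    # After updating the page tables the mappings are valid again
    bo_va.valids += bo_va.invalids
    bo_va.invalids = []
    bo_va.cleared = clear
    return processed

bo = BoVa()
bo_invalidate(bo)
print(bo_update(bo, clear=False))  # ['mapping0'] gets re-written
print(bo.moved)                    # False again
```

The flag is protected by the BO's reservation in the real code, which is why the old "we access vm_status without the status lock" comment can go away.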

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  3 +++
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 16148ef..85189f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1787,13 +1787,13 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
else
flags = 0x0;
 
-	/* We access vm_status without the status lock here, but that is ok
-	 * because when we don't clear the BO is locked and so the status can't
-	 * change
-	 */
-	if ((!clear && !list_empty(&bo_va->base.vm_status)) ||
-	    bo_va->cleared != clear)
+	if (!clear && bo_va->base.moved) {
+		bo_va->base.moved = false;
+		list_splice_init(&bo_va->valids, &bo_va->invalids);
+
+	} else if (bo_va->cleared != clear) {
		list_splice_init(&bo_va->valids, &bo_va->invalids);
+	}
 
	list_for_each_entry(mapping, &bo_va->invalids, list) {
r = amdgpu_vm_bo_split_mapping(adev, exclusive, pages_addr, vm,
@@ -2418,6 +2418,7 @@ void amdgpu_vm_bo_invalidate(struct amdgpu_device *adev,
struct amdgpu_vm_bo_base *bo_base;
 
	list_for_each_entry(bo_base, &bo->va, bo_list) {
+		bo_base->moved = true;
		spin_lock(&bo_base->vm->status_lock);
		list_move(&bo_base->vm_status, &bo_base->vm->moved);
		spin_unlock(&bo_base->vm->status_lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index e705f0f..ff093d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -105,6 +105,9 @@ struct amdgpu_vm_bo_base {
 
/* protected by spinlock */
struct list_headvm_status;
+
+   /* protected by the BO being reserved */
+   boolmoved;
 };
 
 struct amdgpu_vm_pt {
-- 
2.7.4



[PATCH 7/9] drm/amdgpu: rework page directory filling v2

2017-08-25 Thread Christian König
From: Christian König 

Keep track of relocated PDs/PTs instead of walking and checking all PDs.

v2: better root PD handling
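The idea can be sketched with a toy model (names mirror the patch, the structures are simplified, not the real amdgpu ones): entries whose parent directory entry is stale sit on an explicit `relocated` list, so the update pass touches only those instead of recursing over the whole PD/PT hierarchy.

```python
# Toy model of the relocated list; names mirror the patch but the
# structures are simplified stand-ins.

class VM:
    def __init__(self):
        self.relocated = []        # PDs/PTs whose parent entry is stale


class PTEntry:
    def __init__(self, vm, name):
        self.vm = vm
        self.name = name
        self.addr = ~0             # like entry->addr = ~0ULL: not yet written

    def mark_relocated(self):
        # validation / allocation put entries on the list
        if self not in self.vm.relocated:
            self.vm.relocated.append(self)


def update_directories(vm):
    """Process only entries on the relocated list instead of walking
    every PD/PT in the hierarchy."""
    updated = []
    while vm.relocated:
        entry = vm.relocated.pop(0)
        entry.addr = 0             # pretend the parent PDE was rewritten
        updated.append(entry.name)
    return updated


vm = VM()
entries = [PTEntry(vm, "pt%d" % i) for i in range(100)]
entries[3].mark_relocated()
entries[42].mark_relocated()
print(update_directories(vm))      # ['pt3', 'pt42']
```

With 100 page tables and two dirty ones, the update loop does two iterations instead of a full walk, which is the point of the rework.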

Signed-off-by: Christian König 
Reviewed-by: Alex Deucher  (v1)
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 87 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  3 ++
 2 files changed, 61 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 592c3e7..b02451f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -196,7 +196,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
}
 
		spin_lock(&vm->status_lock);
-		list_del_init(&bo_base->vm_status);
+		list_move(&bo_base->vm_status, &vm->relocated);
	}
	spin_unlock(&vm->status_lock);
 
@@ -313,8 +313,10 @@ static int amdgpu_vm_alloc_levels(struct amdgpu_device *adev,
entry->base.vm = vm;
entry->base.bo = pt;
	list_add_tail(&entry->base.bo_list, &vm->va);
-	INIT_LIST_HEAD(&entry->base.vm_status);
-	entry->addr = 0;
+	spin_lock(&vm->status_lock);
+	list_add(&entry->base.vm_status, &vm->relocated);
+	spin_unlock(&vm->status_lock);
+	entry->addr = ~0ULL;
}
 
if (level < adev->vm_manager.num_level) {
@@ -999,18 +1001,17 @@ static int amdgpu_vm_wait_pd(struct amdgpu_device *adev, struct amdgpu_vm *vm,
  */
 static int amdgpu_vm_update_level(struct amdgpu_device *adev,
  struct amdgpu_vm *vm,
- struct amdgpu_vm_pt *parent,
- unsigned level)
+ struct amdgpu_vm_pt *parent)
 {
struct amdgpu_bo *shadow;
struct amdgpu_ring *ring = NULL;
uint64_t pd_addr, shadow_addr = 0;
-   uint32_t incr = amdgpu_vm_bo_size(adev, level + 1);
uint64_t last_pde = ~0, last_pt = ~0, last_shadow = ~0;
unsigned count = 0, pt_idx, ndw = 0;
struct amdgpu_job *job;
struct amdgpu_pte_update_params params;
struct dma_fence *fence = NULL;
+   uint32_t incr;
 
int r;
 
@@ -1058,12 +1059,17 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
 
/* walk over the address space and update the directory */
for (pt_idx = 0; pt_idx <= parent->last_entry_used; ++pt_idx) {
-   struct amdgpu_bo *bo = parent->entries[pt_idx].base.bo;
+		struct amdgpu_vm_pt *entry = &parent->entries[pt_idx];
+   struct amdgpu_bo *bo = entry->base.bo;
uint64_t pde, pt;
 
if (bo == NULL)
continue;
 
+		spin_lock(&vm->status_lock);
+		list_del_init(&entry->base.vm_status);
+		spin_unlock(&vm->status_lock);
+
pt = amdgpu_bo_gpu_offset(bo);
pt = amdgpu_gart_get_vm_pde(adev, pt);
/* Don't update huge pages here */
@@ -1074,6 +1080,7 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
parent->entries[pt_idx].addr = pt | AMDGPU_PTE_VALID;
 
pde = pd_addr + pt_idx * 8;
+   incr = amdgpu_bo_size(bo);
if (((last_pde + 8 * count) != pde) ||
((last_pt + incr * count) != pt) ||
(count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
@@ -1134,20 +1141,6 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
dma_fence_put(fence);
}
}
-   /*
-* Recurse into the subdirectories. This recursion is harmless because
-* we only have a maximum of 5 layers.
-*/
-   for (pt_idx = 0; pt_idx <= parent->last_entry_used; ++pt_idx) {
-		struct amdgpu_vm_pt *entry = &parent->entries[pt_idx];
-
-   if (!entry->base.bo)
-   continue;
-
-   r = amdgpu_vm_update_level(adev, vm, entry, level + 1);
-   if (r)
-   return r;
-   }
 
return 0;
 
@@ -1163,7 +1156,8 @@ static int amdgpu_vm_update_level(struct amdgpu_device *adev,
  *
  * Mark all PD level as invalid after an error.
  */
-static void amdgpu_vm_invalidate_level(struct amdgpu_vm_pt *parent)
+static void amdgpu_vm_invalidate_level(struct amdgpu_vm *vm,
+  struct amdgpu_vm_pt *parent)
 {
unsigned pt_idx;
 
@@ -1178,7 +1172,10 @@ static void amdgpu_vm_invalidate_level(struct amdgpu_vm_pt *parent)
continue;
 
entry->addr = ~0ULL;
-   amdgpu_vm_invalidate_level(entry);
+		spin_lock(&vm->status_lock);
+

RE: [PATCH] drm/amd/amdgpu: fix BANK_SELECT on Vega10

2017-08-25 Thread He, Roger
Ok. 

Thanks
Roger(Hongbo.He)
-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de] 
Sent: Friday, August 25, 2017 3:33 PM
To: He, Roger ; amd-gfx@lists.freedesktop.org
Cc: Koenig, Christian 
Subject: Re: [PATCH] drm/amd/amdgpu: fix BANK_SELECT on Vega10

Hi Roger,

there are a few warnings introduced by this commit:
> drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c: In function
> ‘gfxhub_v1_0_init_cache_regs’:
> drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c:127:16: warning: unused variable ‘field’ [-Wunused-variable]
>   uint32_t tmp, field;
> ^
> drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c: In function
> ‘mmhub_v1_0_init_cache_regs’:
> drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c:141:16: warning: unused variable ‘field’ [-Wunused-variable]
>   uint32_t tmp, field;
> ^

Please provide a fix.

Thanks,
Christian.
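For reference on the quoted change further down: with 8 (2^3) entries per L2 TLB cache line and the L2_CACHE_BIGK_FRAGMENT_SIZE of 6 that the code writes, BANK_SELECT = FRAGMENT_SIZE + 3 = 9. A trivial sanity check of that arithmetic (constants taken from the quoted diff):

```python
# Sanity check of BANK_SELECT = FRAGMENT_SIZE + 3 for Vega10;
# constants taken from the quoted diff.

L2_CACHE_BIGK_FRAGMENT_SIZE = 6   # value the quoted code writes
ENTRIES_PER_CACHE_LINE = 8        # 8-entry L2 TLB cache line, 2**3

shift = ENTRIES_PER_CACHE_LINE.bit_length() - 1   # log2(8) = 3
bank_select = L2_CACHE_BIGK_FRAGMENT_SIZE + shift

print(bank_select)  # 9, the value the patch hard-codes
```

The old code used adev->vm_manager.fragment_size directly, which only matches when that fragment size happens to equal 9.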

On 24.08.2017 at 10:19, Christian König wrote:
> On 24.08.2017 at 09:07, Roger He wrote:
>> BANK_SELECT should always be FRAGMENT_SIZE + 3 due to 8-entry (2^3) per cache line in L2 TLB for Vega10.
>>
>> Change-Id: I8cfcff197e2c571c1a547aaed959e492b4a6fe0e
>> Signed-off-by: Roger He 
>
> Reviewed-by: Christian König 
>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 3 +--
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 3 +--
>>   2 files changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> index 4f2788b..a7351ba 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> @@ -143,9 +143,8 @@ static void gfxhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
>>   tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>>   WREG32_SOC15(GC, 0, mmVM_L2_CNTL2, tmp);
>>   -field = adev->vm_manager.fragment_size;
>>   tmp = mmVM_L2_CNTL3_DEFAULT;
>> -tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, field);
>> +tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, 9);
>>   tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
>>   WREG32_SOC15(GC, 0, mmVM_L2_CNTL3, tmp);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> index 4395a4f..2a6fa73 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> @@ -157,9 +157,8 @@ static void mmhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
>>   tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>>   WREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL2, tmp);
>>   -field = adev->vm_manager.fragment_size;
>>   tmp = mmVM_L2_CNTL3_DEFAULT;
>> -tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, field);
>> +tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, BANK_SELECT, 9);
>>   tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3, L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
>>   WREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL3, tmp);
>
>
