Re: [PATCH v2 0/7] Allow dynamic allocation of software IO TLB bounce buffers

2023-07-10 Thread Mike Lothian
Hi

I was hoping this might land for 6.5-rc1, is there a new version that
might apply against 6.5?

Cheers

Mike

On Tue, 9 May 2023 at 08:32, Greg Kroah-Hartman
 wrote:
>
> On Tue, May 09, 2023 at 09:16:35AM +0200, Petr Tesařík wrote:
> > On Wed, 26 Apr 2023 14:44:39 +0200
> > Petr Tesařík  wrote:
> >
> > > Hi Greg,
> > >
> > > On Wed, 26 Apr 2023 14:26:36 +0200
> > > Greg Kroah-Hartman  wrote:
> > >
> > > > On Wed, Apr 26, 2023 at 02:15:20PM +0200, Petr Tesařík wrote:
> > > > > Hi,
> > > > >
> > > > > On Wed, 19 Apr 2023 12:03:52 +0200
> > > > > Petr Tesarik  wrote:
> > > > >
> > > > > > From: Petr Tesarik 
> > > > > >
> > > > > > The goal of my work is to provide more flexibility in the sizing of
> > > > > > SWIOTLB.
> > > > > >
> > > > > > The software IO TLB was designed with these assumptions:
> > > > > >
> > > > > > 1. It would not be used much, especially on 64-bit systems.
> > > > > > 2. A small fixed memory area (64 MiB by default) is sufficient to
> > > > > >handle the few cases which require a bounce buffer.
> > > > > > 3. 64 MiB is little enough that it has no impact on the rest of the
> > > > > >system.
> > > > > >
> > > > > > First, if SEV is active, all DMA must be done through shared
> > > > > > unencrypted pages, and SWIOTLB is used to make this happen without
> > > > > > changing device drivers. The software IO TLB size is increased to
> > > > > > 6% of total memory in sev_setup_arch(), but that is more of an
> > > > > > approximation. The actual requirements may vary depending on the
> > > > > > amount of I/O and which drivers are used. These factors may not be
> > > > > > known at boot time, i.e. when SWIOTLB is allocated.
> > > > > >
> > > > > > Second, other colleagues have noticed that they can reliably get
> > > > > > rid of occasional OOM kills on an Arm embedded device by reducing
> > > > > > the SWIOTLB size. This can be achieved with a kernel parameter, but
> > > > > > determining the right value puts additional burden on pre-release
> > > > > > testing, which could be avoided if SWIOTLB is allocated small and
> > > > > > grows only when necessary.
> > > > >
> > > > > Now that merging into 6.4 has begun, what about this patch series? I'm
> > > > > eager to get some feedback (positive or negative) and respin the next
> > > > > version.
> > > >
> > > > It's the merge window, we can't add new things that haven't been in
> > > > linux-next already.
> > >
> > > This is understood. I'm not asking for immediate inclusion.
> > >
> > > >   Please resubmit it after -rc1 is out.
> > >
> > > If you can believe that rebasing to -rc1 will be enough, then I will
> > > also try to believe I'm lucky. ;-)
> > >
> > > The kind of feedback I really want to get is e.g. about the extra
> > > per-device DMA-specific fields. If they cannot be added to struct
> > > device, then I'd rather start discussing an interim solution, because
> > > getting all existing DMA fields out of that struct will take a lot of
> > > time...
> >
> > All right, 6.4-rc1 is out now. The patch series still applies cleanly.
> >
> > Any comments what must be changed (if anything) to get it in?
>
> Try resending it, it's long out of my review queue...
>
> thanks,
>
> greg k-h


Re: [PATCH v2 0/7] Allow dynamic allocation of software IO TLB bounce buffers

2023-05-01 Thread Mike Lothian
Hi

I've not had any issues since using this, but I imagine most people
won't know to set swiotlb=dynamic if they start seeing this issue
(when it lands).

Any clue as to why this broke last cycle?

Thanks

Mike

On Fri, 28 Apr 2023 at 10:07, Petr Tesařík  wrote:
>
> On Fri, 28 Apr 2023 09:53:38 +0100
> Mike Lothian  wrote:
>
> > On Wed, 19 Apr 2023 at 11:05, Petr Tesarik  
> > wrote:
> > >
> > > From: Petr Tesarik 
> > >
> > > The goal of my work is to provide more flexibility in the sizing of
> > > SWIOTLB.
> > >
> > > The software IO TLB was designed with these assumptions:
> > >
> > > 1. It would not be used much, especially on 64-bit systems.
> > > 2. A small fixed memory area (64 MiB by default) is sufficient to
> > >handle the few cases which require a bounce buffer.
> > > 3. 64 MiB is little enough that it has no impact on the rest of the
> > >system.
> > >
> > > First, if SEV is active, all DMA must be done through shared
> > > unencrypted pages, and SWIOTLB is used to make this happen without
> > > changing device drivers. The software IO TLB size is increased to
> > > 6% of total memory in sev_setup_arch(), but that is more of an
> > > approximation. The actual requirements may vary depending on the
> > > amount of I/O and which drivers are used. These factors may not be
> > > known at boot time, i.e. when SWIOTLB is allocated.
> > >
> > > Second, other colleagues have noticed that they can reliably get
> > > rid of occasional OOM kills on an Arm embedded device by reducing
> > > the SWIOTLB size. This can be achieved with a kernel parameter, but
> > > determining the right value puts additional burden on pre-release
> > > testing, which could be avoided if SWIOTLB is allocated small and
> > > grows only when necessary.
> > >
> > > Changes from v1-devel-v7:
> > > - Add comments to acquire/release barriers
> > > - Fix whitespace issues reported by checkpatch.pl
> > >
> > > Changes from v1-devel-v6:
> > > - Provide long description of functions
> > > - Fix kernel-doc (Returns: to Return:)
> > > - Rename __lookup_dyn_slot() to lookup_dyn_slot_locked()
> > >
> > > Changes from RFC:
> > > - Track dynamic buffers per device instead of per swiotlb
> > > - Use a linked list instead of a maple tree
> > > - Move initialization of swiotlb fields of struct device to a
> > >   helper function
> > > - Rename __lookup_dyn_slot() to lookup_dyn_slot_locked()
> > > - Introduce per-device flag if dynamic buffers are in use
> > > - Add one more user of DMA_ATTR_MAY_SLEEP
> > > - Add kernel-doc comments for new (and some old) code
> > > - Properly escape '*' in dma-attributes.rst
> > >
> > > Petr Tesarik (7):
> > >   swiotlb: Use a helper to initialize swiotlb fields in struct device
> > >   swiotlb: Move code around in preparation for dynamic bounce buffers
> > >   dma-mapping: introduce the DMA_ATTR_MAY_SLEEP attribute
> > >   swiotlb: Dynamically allocated bounce buffers
> > >   swiotlb: Add a boot option to enable dynamic bounce buffers
> > >   drm: Use DMA_ATTR_MAY_SLEEP from process context
> > >   swiotlb: per-device flag if there are dynamically allocated buffers
> > >
> > >  .../admin-guide/kernel-parameters.txt |   6 +-
> > >  Documentation/core-api/dma-attributes.rst |  10 +
> > >  drivers/base/core.c   |   4 +-
> > >  drivers/gpu/drm/drm_gem_shmem_helper.c|   2 +-
> > >  drivers/gpu/drm/drm_prime.c   |   2 +-
> > >  include/linux/device.h|  12 +
> > >  include/linux/dma-mapping.h   |   6 +
> > >  include/linux/swiotlb.h   |  54 ++-
> > >  kernel/dma/swiotlb.c  | 382 --
> > >  9 files changed, 443 insertions(+), 35 deletions(-)
> > >
> > > --
> > > 2.25.1
> > >
> >
> > Hi
> >
> > Is this a potential fix for
> > https://bugzilla.kernel.org/show_bug.cgi?id=217310 where I'm manually
> > setting bigger buffers to keep my wifi working?
>
> Yes. With these patches applied, your system should run just fine with
> swiotlb=dynamic. However, keep in mind that this implementation adds a
> bit of overhead. In short, it trades a bit of performance for not
> having to figure out the optimal swiotlb size at boot time.
>
> Petr T


Re: [PATCH v2 0/7] Allow dynamic allocation of software IO TLB bounce buffers

2023-04-28 Thread Mike Lothian
On Wed, 19 Apr 2023 at 11:05, Petr Tesarik  wrote:
>
> From: Petr Tesarik 
>
> The goal of my work is to provide more flexibility in the sizing of
> SWIOTLB.
>
> The software IO TLB was designed with these assumptions:
>
> 1. It would not be used much, especially on 64-bit systems.
> 2. A small fixed memory area (64 MiB by default) is sufficient to
>handle the few cases which require a bounce buffer.
> 3. 64 MiB is little enough that it has no impact on the rest of the
>system.
>
> First, if SEV is active, all DMA must be done through shared
> unencrypted pages, and SWIOTLB is used to make this happen without
> changing device drivers. The software IO TLB size is increased to
> 6% of total memory in sev_setup_arch(), but that is more of an
> approximation. The actual requirements may vary depending on the
> amount of I/O and which drivers are used. These factors may not be
> known at boot time, i.e. when SWIOTLB is allocated.
>
> Second, other colleagues have noticed that they can reliably get
> rid of occasional OOM kills on an Arm embedded device by reducing
> the SWIOTLB size. This can be achieved with a kernel parameter, but
> determining the right value puts additional burden on pre-release
> testing, which could be avoided if SWIOTLB is allocated small and
> grows only when necessary.
>
> Changes from v1-devel-v7:
> - Add comments to acquire/release barriers
> - Fix whitespace issues reported by checkpatch.pl
>
> Changes from v1-devel-v6:
> - Provide long description of functions
> - Fix kernel-doc (Returns: to Return:)
> - Rename __lookup_dyn_slot() to lookup_dyn_slot_locked()
>
> Changes from RFC:
> - Track dynamic buffers per device instead of per swiotlb
> - Use a linked list instead of a maple tree
> - Move initialization of swiotlb fields of struct device to a
>   helper function
> - Rename __lookup_dyn_slot() to lookup_dyn_slot_locked()
> - Introduce per-device flag if dynamic buffers are in use
> - Add one more user of DMA_ATTR_MAY_SLEEP
> - Add kernel-doc comments for new (and some old) code
> - Properly escape '*' in dma-attributes.rst
>
> Petr Tesarik (7):
>   swiotlb: Use a helper to initialize swiotlb fields in struct device
>   swiotlb: Move code around in preparation for dynamic bounce buffers
>   dma-mapping: introduce the DMA_ATTR_MAY_SLEEP attribute
>   swiotlb: Dynamically allocated bounce buffers
>   swiotlb: Add a boot option to enable dynamic bounce buffers
>   drm: Use DMA_ATTR_MAY_SLEEP from process context
>   swiotlb: per-device flag if there are dynamically allocated buffers
>
>  .../admin-guide/kernel-parameters.txt |   6 +-
>  Documentation/core-api/dma-attributes.rst |  10 +
>  drivers/base/core.c   |   4 +-
>  drivers/gpu/drm/drm_gem_shmem_helper.c|   2 +-
>  drivers/gpu/drm/drm_prime.c   |   2 +-
>  include/linux/device.h|  12 +
>  include/linux/dma-mapping.h   |   6 +
>  include/linux/swiotlb.h   |  54 ++-
>  kernel/dma/swiotlb.c  | 382 --
>  9 files changed, 443 insertions(+), 35 deletions(-)
>
> --
> 2.25.1
>

Hi

Is this a potential fix for
https://bugzilla.kernel.org/show_bug.cgi?id=217310 where I'm manually
setting bigger buffers to keep my wifi working?

Thanks

Mike


Re: [PATCH 10/13] drm/amdgpu: use scheduler dependencies for CS

2022-12-21 Thread Mike Lothian
On Wed, 21 Dec 2022 at 15:52, Luben Tuikov  wrote:
>
> On 2022-12-21 10:34, Mike Lothian wrote:
> > On Fri, 14 Oct 2022 at 09:47, Christian König
> >  wrote:
> >>
> >> Entirely remove the sync obj in the job.
> >>
> >> Signed-off-by: Christian König 
> >> ---
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++---
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h  |  2 ++
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  9 +
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  1 -
> >>  4 files changed, 13 insertions(+), 20 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >> index d45b86bcf7fa..0528c2b1db6e 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >> @@ -426,7 +426,7 @@ static int amdgpu_cs_p2_dependencies(struct 
> >> amdgpu_cs_parser *p,
> >> dma_fence_put(old);
> >> }
> >>
> >> -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> >> +   r = amdgpu_sync_fence(&p->sync, fence);
> >> dma_fence_put(fence);
> >> if (r)
> >> return r;
> >> @@ -448,7 +448,7 @@ static int amdgpu_syncobj_lookup_and_add(struct 
> >> amdgpu_cs_parser *p,
> >> return r;
> >> }
> >>
> >> -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> >> +   r = amdgpu_sync_fence(&p->sync, fence);
> >> if (r)
> >> goto error;
> >>
> >> @@ -1108,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct 
> >> amdgpu_cs_parser *p)
> >> if (r)
> >> return r;
> >>
> >> -   r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update);
> >> +   r = amdgpu_sync_fence(&p->sync, fpriv->prt_va->last_pt_update);
> >> if (r)
> >> return r;
> >>
> >> @@ -1119,7 +1119,7 @@ static int amdgpu_cs_vm_handling(struct 
> >> amdgpu_cs_parser *p)
> >> if (r)
> >> return r;
> >>
> >> -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> >> +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> >> if (r)
> >> return r;
> >> }
> >> @@ -1138,7 +1138,7 @@ static int amdgpu_cs_vm_handling(struct 
> >> amdgpu_cs_parser *p)
> >> if (r)
> >> return r;
> >>
> >> -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> >> +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> >> if (r)
> >> return r;
> >> }
> >> @@ -1151,7 +1151,7 @@ static int amdgpu_cs_vm_handling(struct 
> >> amdgpu_cs_parser *p)
> >> if (r)
> >> return r;
> >>
> >> -   r = amdgpu_sync_fence(&job->sync, vm->last_update);
> >> +   r = amdgpu_sync_fence(&p->sync, vm->last_update);
> >> if (r)
> >> return r;
> >>
> >> @@ -1183,7 +1183,6 @@ static int amdgpu_cs_vm_handling(struct 
> >> amdgpu_cs_parser *p)
> >>  static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
> >>  {
> >> struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> >> -   struct amdgpu_job *leader = p->gang_leader;
> >> struct amdgpu_bo_list_entry *e;
> >> unsigned int i;
> >> int r;
> >> @@ -1195,14 +1194,14 @@ static int amdgpu_cs_sync_rings(struct 
> >> amdgpu_cs_parser *p)
> >>
> >> sync_mode = amdgpu_bo_explicit_sync(bo) ?
> >> AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
> >> -   r = amdgpu_sync_resv(p->adev, &leader->sync, resv, 
> >> sync_mode,
> >> +   r = amdgpu_sync_resv(p->adev, &p->sync, resv, sync_mode,
> >>  &fpriv->vm);
> >> if (r)
> >>

Re: [PATCH 10/13] drm/amdgpu: use scheduler dependencies for CS

2022-12-21 Thread Mike Lothian
https://gitlab.freedesktop.org/drm/amd/-/issues/2309

On Wed, 21 Dec 2022 at 15:34, Mike Lothian  wrote:
>
> On Fri, 14 Oct 2022 at 09:47, Christian König
>  wrote:
> >
> > Entirely remove the sync obj in the job.
> >
> > Signed-off-by: Christian König 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h  |  2 ++
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  9 +
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  1 -
> >  4 files changed, 13 insertions(+), 20 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index d45b86bcf7fa..0528c2b1db6e 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -426,7 +426,7 @@ static int amdgpu_cs_p2_dependencies(struct 
> > amdgpu_cs_parser *p,
> > dma_fence_put(old);
> > }
> >
> > -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> > +   r = amdgpu_sync_fence(&p->sync, fence);
> > dma_fence_put(fence);
> > if (r)
> > return r;
> > @@ -448,7 +448,7 @@ static int amdgpu_syncobj_lookup_and_add(struct 
> > amdgpu_cs_parser *p,
> > return r;
> > }
> >
> > -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> > +   r = amdgpu_sync_fence(&p->sync, fence);
> > if (r)
> > goto error;
> >
> > @@ -1108,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct 
> > amdgpu_cs_parser *p)
> > if (r)
> > return r;
> >
> > -   r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update);
> > +   r = amdgpu_sync_fence(&p->sync, fpriv->prt_va->last_pt_update);
> > if (r)
> > return r;
> >
> > @@ -1119,7 +1119,7 @@ static int amdgpu_cs_vm_handling(struct 
> > amdgpu_cs_parser *p)
> > if (r)
> > return r;
> >
> > -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> > +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> > if (r)
> > return r;
> > }
> > @@ -1138,7 +1138,7 @@ static int amdgpu_cs_vm_handling(struct 
> > amdgpu_cs_parser *p)
> > if (r)
> > return r;
> >
> > -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> > +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> > if (r)
> > return r;
> > }
> > @@ -1151,7 +1151,7 @@ static int amdgpu_cs_vm_handling(struct 
> > amdgpu_cs_parser *p)
> > if (r)
> > return r;
> >
> > -   r = amdgpu_sync_fence(&job->sync, vm->last_update);
> > +   r = amdgpu_sync_fence(&p->sync, vm->last_update);
> > if (r)
> > return r;
> >
> > @@ -1183,7 +1183,6 @@ static int amdgpu_cs_vm_handling(struct 
> > amdgpu_cs_parser *p)
> >  static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
> >  {
> > struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> > -   struct amdgpu_job *leader = p->gang_leader;
> > struct amdgpu_bo_list_entry *e;
> > unsigned int i;
> > int r;
> > @@ -1195,14 +1194,14 @@ static int amdgpu_cs_sync_rings(struct 
> > amdgpu_cs_parser *p)
> >
> > sync_mode = amdgpu_bo_explicit_sync(bo) ?
> > AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
> > -   r = amdgpu_sync_resv(p->adev, &leader->sync, resv, 
> > sync_mode,
> > +   r = amdgpu_sync_resv(p->adev, &p->sync, resv, sync_mode,
> >  &fpriv->vm);
> > if (r)
> > return r;
> > }
> >
> > -   for (i = 0; i < p->gang_size - 1; ++i) {
> > -   r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync);
> > +   for (i = 0; i < p->gang_size; ++i) {
> > +   r = amdgpu_sync_push_to_job(&p->sync, p->jobs[i]);
> >  

Re: [PATCH 10/13] drm/amdgpu: use scheduler dependencies for CS

2022-12-21 Thread Mike Lothian
On Fri, 14 Oct 2022 at 09:47, Christian König
 wrote:
>
> Entirely remove the sync obj in the job.
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h  |  2 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  9 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  1 -
>  4 files changed, 13 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index d45b86bcf7fa..0528c2b1db6e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -426,7 +426,7 @@ static int amdgpu_cs_p2_dependencies(struct 
> amdgpu_cs_parser *p,
> dma_fence_put(old);
> }
>
> -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> +   r = amdgpu_sync_fence(&p->sync, fence);
> dma_fence_put(fence);
> if (r)
> return r;
> @@ -448,7 +448,7 @@ static int amdgpu_syncobj_lookup_and_add(struct 
> amdgpu_cs_parser *p,
> return r;
> }
>
> -   r = amdgpu_sync_fence(&p->gang_leader->sync, fence);
> +   r = amdgpu_sync_fence(&p->sync, fence);
> if (r)
> goto error;
>
> @@ -1108,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct 
> amdgpu_cs_parser *p)
> if (r)
> return r;
>
> -   r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update);
> +   r = amdgpu_sync_fence(&p->sync, fpriv->prt_va->last_pt_update);
> if (r)
> return r;
>
> @@ -1119,7 +1119,7 @@ static int amdgpu_cs_vm_handling(struct 
> amdgpu_cs_parser *p)
> if (r)
> return r;
>
> -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> if (r)
> return r;
> }
> @@ -1138,7 +1138,7 @@ static int amdgpu_cs_vm_handling(struct 
> amdgpu_cs_parser *p)
> if (r)
> return r;
>
> -   r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update);
> +   r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
> if (r)
> return r;
> }
> @@ -1151,7 +1151,7 @@ static int amdgpu_cs_vm_handling(struct 
> amdgpu_cs_parser *p)
> if (r)
> return r;
>
> -   r = amdgpu_sync_fence(&job->sync, vm->last_update);
> +   r = amdgpu_sync_fence(&p->sync, vm->last_update);
> if (r)
> return r;
>
> @@ -1183,7 +1183,6 @@ static int amdgpu_cs_vm_handling(struct 
> amdgpu_cs_parser *p)
>  static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
>  {
> struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> -   struct amdgpu_job *leader = p->gang_leader;
> struct amdgpu_bo_list_entry *e;
> unsigned int i;
> int r;
> @@ -1195,14 +1194,14 @@ static int amdgpu_cs_sync_rings(struct 
> amdgpu_cs_parser *p)
>
> sync_mode = amdgpu_bo_explicit_sync(bo) ?
> AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER;
> -   r = amdgpu_sync_resv(p->adev, &leader->sync, resv, sync_mode,
> +   r = amdgpu_sync_resv(p->adev, &p->sync, resv, sync_mode,
>  &fpriv->vm);
> if (r)
> return r;
> }
>
> -   for (i = 0; i < p->gang_size - 1; ++i) {
> -   r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync);
> +   for (i = 0; i < p->gang_size; ++i) {
> +   r = amdgpu_sync_push_to_job(&p->sync, p->jobs[i]);
> if (r)
> return r;
> }
> @@ -1248,7 +1247,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> struct dma_fence *fence;
>
> fence = &p->jobs[i]->base.s_fence->scheduled;
> -   r = amdgpu_sync_fence(&leader->sync, fence);
> +   r = drm_sched_job_add_dependency(&leader->base, fence);
> if (r)
> goto error_cleanup;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> index cbaa19b2b8a3..207e801c24ed 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h
> @@ -75,6 +75,8 @@ struct amdgpu_cs_parser {
>
> unsignednum_post_deps;
> struct amdgpu_cs_post_dep   *post_deps;
> +
> +   struct amdgpu_sync  sync;
>  };
>
>  int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index ba98d65835b4..b8494c3b3b8a 100644
> --- a/drivers/gpu/drm/amd

Re: [PATCH] Revert "drm/amdgpu: add drm buddy support to amdgpu"

2022-08-04 Thread Mike Lothian
Hi

When is this relanding?

Cheers

Mike

On Mon, 18 Jul 2022 at 21:40, Dixit, Ashutosh  wrote:
>
> On Thu, 14 Jul 2022 08:00:32 -0700, Christian König wrote:
> >
> > Am 14.07.22 um 15:33 schrieb Alex Deucher:
> > > On Thu, Jul 14, 2022 at 9:09 AM Christian König
> > >  wrote:
> > >> Hi Mauro,
> > >>
> > >> well the last time I checked drm-tip was clean.
> > >>
> > >> The revert is necessary because we had some problems with the commit
> > >> which we couldn't fix in the 5.19 cycle.
> > > Would it be worth reverting the revert and applying the actual fix[1]?
> > >   It's a huge revert unfortunately while the actual fix is like 10
> > > lines of code.  I'm concerned there will be subtle fallout from the
> > > revert due to how extensive it is.
> >
> > We have other bug fixes and cleanups around that patch which didn't make it
> > into 5.19 either. I don't want to create an even greater mess.
> >
> > Real question is why building drm-tip work for me but not for others?
>
> Seeing this on latest drm-tip:
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c:54:1: error: redefinition of 
> ‘amdgpu_vram_mgr_first_block’
>54 | amdgpu_vram_mgr_first_block(struct list_head *list)
>   | ^~~
> In file included from drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h:29,
>  from drivers/gpu/drm/amd/amdgpu/amdgpu.h:73,
>  from drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c:28:
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h:57:1: note: previous definition 
> of ‘amdgpu_vram_mgr_first_block’ with type ‘struct drm_buddy_block *(struct 
> list_head *)’
>57 | amdgpu_vram_mgr_first_block(struct list_head *list)
>   | ^~~
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c:59:20: error: redefinition of 
> ‘amdgpu_is_vram_mgr_blocks_contiguous’
>59 | static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct 
> list_head *head)
>   |^~~~
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h:62:20: note: previous definition 
> of ‘amdgpu_is_vram_mgr_blocks_contiguous’ with type ‘bool(struct list_head 
> *)’ {aka ‘_Bool(struct list_head *)’}
>62 | static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct 
> list_head *head)
>   |^~~~
> make[4]: *** [scripts/Makefile.build:249: 
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.o] Error 1
> make[4]: *** Waiting for unfinished jobs
> make[3]: *** [scripts/Makefile.build:466: drivers/gpu/drm/amd/amdgpu] Error 2
> make[2]: *** [scripts/Makefile.build:466: drivers/gpu/drm] Error 2
> make[1]: *** [scripts/Makefile.build:466: drivers/gpu] Error 2


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-06-23 Thread Mike Lothian
Hi

The buddy allocator is still causing me issues in 5.19-rc3
(https://gitlab.freedesktop.org/drm/amd/-/issues/2059)

I'm no longer seeing null pointers though, so I think the bulk move
fix did its bit

Let me know if there's anything I can help with, now there aren't
freezes I can offer remote access to debug if it'll help

Cheers

Mike


Re: [PATCH] drm/amdgpu: Fix GTT size reporting in amdgpu_ioctl

2022-06-14 Thread Mike Lothian
On Mon, 13 Jun 2022 at 10:11, Michel Dänzer  wrote:
>
> On 2022-06-11 09:19, Christian König wrote:
> > Am 10.06.22 um 15:54 schrieb Michel Dänzer:
> >> From: Michel Dänzer 
> >>
> >> The commit below changed the TTM manager size unit from pages to
> >> bytes, but failed to adjust the corresponding calculations in
> >> amdgpu_ioctl.
> >>
> >> Fixes: dfa714b88eb0 ("drm/amdgpu: remove GTT accounting v2")
> >> Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1930
> >> Bug: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6642
> >> Signed-off-by: Michel Dänzer 
> >
> > Ah, WTF! You won't believe how long I have been searching for this one.
>
> Don't feel too bad, similar things have happened to me before. :) Even after 
> Marek's GitLab comment got me on the right track, it still took a fair amount 
> of trial, error & head scratching to put this one to bed.
>
>
> --
> Earthling Michel Dänzer|  https://redhat.com
> Libre software enthusiast  | Mesa and Xwayland developer

That's indeed fixed the Tomb Raider / vulkan issue

I'm still seeing some strange warnings and errors on Linus's tree, so
I've updated https://gitlab.freedesktop.org/drm/amd/-/issues/1992 and
https://gitlab.freedesktop.org/drm/amd/-/issues/2034. I'm not sure
whether that's the buddy allocator, TTM bulk moves, or a whole new issue.

If it's not too late feel free to add my tested by


Re: [PATCH] drm/amdgpu: Fix GTT size reporting in amdgpu_ioctl

2022-06-11 Thread Mike Lothian
Thanks for finding this

I'll have access to my machine on Monday and will close those issues off
once I've tested things

Cheers

Mike

On Sat, 11 Jun 2022, 09:19 Christian König, <
ckoenig.leichtzumer...@gmail.com> wrote:

> Am 10.06.22 um 15:54 schrieb Michel Dänzer:
> > From: Michel Dänzer 
> >
> > The commit below changed the TTM manager size unit from pages to
> > bytes, but failed to adjust the corresponding calculations in
> > amdgpu_ioctl.
> >
> > Fixes: dfa714b88eb0 ("drm/amdgpu: remove GTT accounting v2")
> > Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1930
> > Bug: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6642
> > Signed-off-by: Michel Dänzer 
>
> Ah, WTF! You won't believe how long I have been searching for this one.
>
> Reviewed-by: Christian König 
>
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 2 --
> >   1 file changed, 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > index 801f6fa692e9..6de63ea6687e 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > @@ -642,7 +642,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void
> *data, struct drm_file *filp)
> >   atomic64_read(&adev->visible_pin_size),
> >   vram_gtt.vram_size);
> >   vram_gtt.gtt_size = ttm_manager_type(&adev->mman.bdev,
> TTM_PL_TT)->size;
> > - vram_gtt.gtt_size *= PAGE_SIZE;
> >   vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
> >   return copy_to_user(out, &vram_gtt,
> >   min((size_t)size, sizeof(vram_gtt))) ?
> -EFAULT : 0;
> > @@ -675,7 +674,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void
> *data, struct drm_file *filp)
> >   mem.cpu_accessible_vram.usable_heap_size * 3 / 4;
> >
> >   mem.gtt.total_heap_size = gtt_man->size;
> > - mem.gtt.total_heap_size *= PAGE_SIZE;
> >   mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
> >   atomic64_read(&adev->gart_pin_size);
> >   mem.gtt.heap_usage = ttm_resource_manager_usage(gtt_man);
>
>


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-05-28 Thread Mike Lothian
On Sat, 28 May 2022 at 08:44, Paneer Selvam, Arunpravin
 wrote:
>
> [Public]
>
> Hi,
>
> After investigating this issue for quite some time, I found that the freeze
> problem is not in the amdgpu part of the buddy allocator patch, as that patch
> doesn't throw any issues when applied separately on top of the stable base of
> drm-next. After digging more into this issue, the patch below seems to be the
> cause of the problem:
>
> drm/ttm: rework bulk move handling v5
> https://cgit.freedesktop.org/drm/drm/commit/?id=fee2ede155423b0f7a559050a39750b98fe9db69
>
> When this patch is applied on top of the stable (working) version of drm-next
> without the buddy allocator patch, we see multiple issues, one thrown at
> random on every GravityMark run:
> 1. general protection fault in ttm_lru_bulk_move_tail()
> 2. NULL pointer dereference in ttm_lru_bulk_move_tail()
> 3. NULL pointer dereference in ttm_resource_init()
>
> Regards,
> Arun.

Thanks for tracking it down, fee2ede155423b0f7a559050a39750b98fe9db69
isn't trivial to revert

Hopefully Christian can figure it out


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-05-16 Thread Mike Lothian
Hi

The merge window for 5.19 will probably be opening next week, has
there been any progress with this bug?

Thanks

Mike

On Mon, 2 May 2022 at 17:31, Mike Lothian  wrote:
>
> On Mon, 2 May 2022 at 16:54, Arunpravin Paneer Selvam
>  wrote:
> >
> >
> >
> > On 5/2/2022 8:41 PM, Mike Lothian wrote:
> > > On Wed, 27 Apr 2022 at 12:55, Mike Lothian  wrote:
> > >> On Tue, 26 Apr 2022 at 17:36, Christian König  
> > >> wrote:
> > >>> Hi Mike,
> > >>>
> > >>> sounds like somehow stitching together the SG table for PRIME doesn't
> > >>> work any more with this patch.
> > >>>
> > >>> Can you try with P2P DMA disabled?
> > >> -CONFIG_PCI_P2PDMA=y
> > >> +# CONFIG_PCI_P2PDMA is not set
> > >>
> > >> If that's what you're meaning, then there's no difference, I'll upload
> > >> my dmesg to the gitlab issue
> > >>
> > >>> Apart from that can you take a look Arun?
> > >>>
> > >>> Thanks,
> > >>> Christian.
> > > Hi
> > >
> > > Have you had any success in replicating this?
> > Hi Mike,
> > I couldn't replicate on my Raven APU machine. I see you have 2 cards
> > initialized, one is Renoir
> > and the other is Navy Flounder. Could you give some more details, are
> > you running Gravity Mark
> > on Renoir and what is your system RAM configuration?
> > >
> > > Cheers
> > >
> > > Mike
> >
> Hi
>
> It's a PRIME laptop. It failed on the RENOIR too, causing a lockup,
> but systemd managed to capture it. I'll attach it to the issue
>
> I've got 64GB RAM, the 6800M has 12GB VRAM
>
> Cheers
>
> Mike


[PATCH 3/3] drm/amdgpu/gfx11: Add missing break

2022-05-04 Thread Mike Lothian
This stops clang complaining:

drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:5895:2: warning: unannotated 
fall-through between switch labels [-Wimplicit-fallthrough]
default:
^
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:5895:2: note: insert 'break;' to avoid 
fall-through
default:
^
break;

Signed-off-by: Mike Lothian 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index b6fc39edc862..e26f97f77db6 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -5892,6 +5892,7 @@ static int gfx_v11_0_set_priv_inst_fault_state(struct 
amdgpu_device *adev,
WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0,
   PRIV_INSTR_INT_ENABLE,
   state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+   break;
default:
break;
}
-- 
2.35.1
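The warning class fixed by the patch above can be reproduced outside the kernel. Below is a minimal sketch (illustrative names, not the driver's actual code) of the same shape: a case label that computes a value and then runs straight into `default:` unless the `break;` the patch adds is present.

```c
/* Hypothetical handler mirroring the structure of
 * gfx_v11_0_set_priv_inst_fault_state(): without the added "break;",
 * the ENABLE/DISABLE case would fall through into "default:", which
 * clang flags with -Wimplicit-fallthrough. */
enum irq_state { IRQ_STATE_DISABLE, IRQ_STATE_ENABLE, IRQ_STATE_OTHER };

static int set_fault_state(enum irq_state state)
{
    int enabled = -1;

    switch (state) {
    case IRQ_STATE_DISABLE:
    case IRQ_STATE_ENABLE:
        enabled = (state == IRQ_STATE_ENABLE) ? 1 : 0;
        break;          /* the missing statement the patch adds */
    default:
        break;
    }
    return enabled;
}
```

Compiling the version without the inner `break;` under `clang -Wimplicit-fallthrough` produces the same "unannotated fall-through between switch labels" diagnostic quoted above.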



[PATCH 2/3] drm/amdgpu/gfx11: Initialise index

2022-05-04 Thread Mike Lothian
This stops clang complaining:

drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:376:6: warning: variable 'index' is used 
uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (ring->is_mes_queue) {
^~
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:433:30: note: uninitialized use occurs 
here
amdgpu_device_wb_free(adev, index);
^
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:376:2: note: remove the 'if' if its 
condition is always false
if (ring->is_mes_queue) {
^
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c:364:16: note: initialize the variable 
'index' to silence this warning
unsigned index;
  ^
       = 0

Signed-off-by: Mike Lothian 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 184bf554acca..b6fc39edc862 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -361,7 +361,7 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, 
long timeout)
struct amdgpu_device *adev = ring->adev;
struct amdgpu_ib ib;
struct dma_fence *f = NULL;
-   unsigned index;
+   unsigned index = 0;
uint64_t gpu_addr;
volatile uint32_t *cpu_ptr;
long r;
-- 
2.35.1
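The `-Wsometimes-uninitialized` pattern fixed in the two "Initialise index" patches can be modelled in a few lines. In the sketch below (hypothetical names, not the driver's code), a value is only assigned on one branch but consumed unconditionally afterwards, which is exactly what clang complains about; initialising it to 0, as the patches do, makes the later use well-defined.

```c
/* Sketch of the warning: "index" is only assigned when the ring is not
 * an MES queue, yet the cleanup path passes it to a free routine in
 * both cases. Initialising to 0 (the patch's fix) silences clang and
 * gives the cleanup a defined value to act on. */
static int test_ib(int is_mes_queue, unsigned *freed_index)
{
    unsigned index = 0;   /* the fix: without "= 0" this may be garbage */

    if (!is_mes_queue)
        index = 42;       /* slot allocated only on this branch */

    /* error/cleanup path: consumes "index" regardless of branch taken */
    *freed_index = index;
    return -1;
}
```

Whether 0 is a *safe* slot number to hand to the free routine is a separate question the warning does not answer; the patch only guarantees the value is no longer indeterminate.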



[PATCH 1/3] drm/amdgpu/gfx10: Initialise index

2022-05-04 Thread Mike Lothian
This stops clang complaining:

drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c:3846:6: warning: variable 'index' is 
used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (ring->is_mes_queue) {
^~
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c:3903:30: note: uninitialized use occurs 
here
amdgpu_device_wb_free(adev, index);
^
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c:3846:2: note: remove the 'if' if its 
condition is always false
if (ring->is_mes_queue) {
^
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c:3839:16: note: initialize the variable 
'index' to silence this warning
unsigned index;
  ^
       = 0

Signed-off-by: Mike Lothian 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index fc289ee54a47..7ce62b12e5b4 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -3836,7 +3836,7 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring 
*ring, long timeout)
struct amdgpu_device *adev = ring->adev;
struct amdgpu_ib ib;
struct dma_fence *f = NULL;
-   unsigned index;
+   unsigned index = 0;
uint64_t gpu_addr;
volatile uint32_t *cpu_ptr;
long r;
-- 
2.35.1



[PATCH 0/3] amdgpu: A few fixes for clang warnings

2022-05-04 Thread Mike Lothian
Just a few simple fixes to get rid of warnings seen with clang 14

Mike Lothian (3):
  drm/amdgpu/gfx10: Initialise index
  drm/amdgpu/gfx11: Initialise index
  drm/amdgpu/gfx11: Add missing break

 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

-- 
2.35.1



Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-05-02 Thread Mike Lothian
On Mon, 2 May 2022 at 16:54, Arunpravin Paneer Selvam
 wrote:
>
>
>
> On 5/2/2022 8:41 PM, Mike Lothian wrote:
> > On Wed, 27 Apr 2022 at 12:55, Mike Lothian  wrote:
> >> On Tue, 26 Apr 2022 at 17:36, Christian König  
> >> wrote:
> >>> Hi Mike,
> >>>
> >>> sounds like somehow stitching together the SG table for PRIME doesn't
> >>> work any more with this patch.
> >>>
> >>> Can you try with P2P DMA disabled?
> >> -CONFIG_PCI_P2PDMA=y
> >> +# CONFIG_PCI_P2PDMA is not set
> >>
> >> If that's what you're meaning, then there's no difference, I'll upload
> >> my dmesg to the gitlab issue
> >>
> >>> Apart from that can you take a look Arun?
> >>>
> >>> Thanks,
> >>> Christian.
> > Hi
> >
> > Have you had any success in replicating this?
> Hi Mike,
> I couldn't replicate on my Raven APU machine. I see you have 2 cards
> initialized, one is Renoir
> and the other is Navy Flounder. Could you give some more details, are
> you running Gravity Mark
> on Renoir and what is your system RAM configuration?
> >
> > Cheers
> >
> > Mike
>
Hi

It's a PRIME laptop, it failed on the RENOIR too, it caused a lockup,
but systemd managed to capture it, I'll attach it to the issue

I've got 64GB RAM, the 6800M has 12GB VRAM

Cheers

Mike


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-05-02 Thread Mike Lothian
On Wed, 27 Apr 2022 at 12:55, Mike Lothian  wrote:
>
> On Tue, 26 Apr 2022 at 17:36, Christian König  
> wrote:
> >
> > Hi Mike,
> >
> > sounds like somehow stitching together the SG table for PRIME doesn't
> > work any more with this patch.
> >
> > Can you try with P2P DMA disabled?
>
> -CONFIG_PCI_P2PDMA=y
> +# CONFIG_PCI_P2PDMA is not set
>
> If that's what you're meaning, then there's no difference, I'll upload
> my dmesg to the gitlab issue
>
> >
> > Apart from that can you take a look Arun?
> >
> > Thanks,
> > Christian.

Hi

Have you had any success in replicating this?

Cheers

Mike


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-04-27 Thread Mike Lothian
On Tue, 26 Apr 2022 at 17:36, Christian König  wrote:
>
> Hi Mike,
>
> sounds like somehow stitching together the SG table for PRIME doesn't
> work any more with this patch.
>
> Can you try with P2P DMA disabled?

-CONFIG_PCI_P2PDMA=y
+# CONFIG_PCI_P2PDMA is not set

If that's what you're meaning, then there's no difference, I'll upload
my dmesg to the gitlab issue

>
> Apart from that can you take a look Arun?
>
> Thanks,
> Christian.


Re: [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-04-26 Thread Mike Lothian
Hi

I'm having issues with this patch on my PRIME system and vulkan workloads

https://gitlab.freedesktop.org/drm/amd/-/issues/1992

Is there any chance you could take a look?

Cheers

Mike


Re: [PATCH 06/11] drm/amdgpu: remove GTT accounting v2

2022-03-09 Thread Mike Lothian
Hi

This patch seems to be causing me problems

https://gitlab.freedesktop.org/drm/amd/-/issues/1927

There are 3 issues I'm experiencing, two kernel bugs and a mesa bug

Cheers

Mike

On Mon, 14 Feb 2022 at 09:34, Christian König
 wrote:
>
> This is provided by TTM now.
>
> Also switch man->size to bytes instead of pages and fix the double
> printing of size and usage in debugfs.
>
> v2: fix size checking as well
>
> Signed-off-by: Christian König 
> Tested-by: Bas Nieuwenhuizen 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 49 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  8 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  2 -
>  4 files changed, 16 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> index e0c7fbe01d93..3bcd27ae379d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> @@ -60,7 +60,7 @@ static ssize_t amdgpu_mem_info_gtt_total_show(struct device 
> *dev,
> struct ttm_resource_manager *man;
>
> man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
> -   return sysfs_emit(buf, "%llu\n", man->size * PAGE_SIZE);
> +   return sysfs_emit(buf, "%llu\n", man->size);
>  }
>
>  /**
> @@ -77,8 +77,9 @@ static ssize_t amdgpu_mem_info_gtt_used_show(struct device 
> *dev,
>  {
> struct drm_device *ddev = dev_get_drvdata(dev);
> struct amdgpu_device *adev = drm_to_adev(ddev);
> +   struct ttm_resource_manager *man = &adev->mman.gtt_mgr.manager;
>
> -   return sysfs_emit(buf, "%llu\n", 
> amdgpu_gtt_mgr_usage(&adev->mman.gtt_mgr));
> +   return sysfs_emit(buf, "%llu\n", ttm_resource_manager_usage(man));
>  }
>
>  static DEVICE_ATTR(mem_info_gtt_total, S_IRUGO,
> @@ -130,20 +131,17 @@ static int amdgpu_gtt_mgr_new(struct 
> ttm_resource_manager *man,
> struct amdgpu_gtt_node *node;
> int r;
>
> -   if (!(place->flags & TTM_PL_FLAG_TEMPORARY) &&
> -   atomic64_add_return(num_pages, &mgr->used) >  man->size) {
> -   atomic64_sub(num_pages, &mgr->used);
> -   return -ENOSPC;
> -   }
> -
> node = kzalloc(struct_size(node, base.mm_nodes, 1), GFP_KERNEL);
> -   if (!node) {
> -   r = -ENOMEM;
> -   goto err_out;
> -   }
> +   if (!node)
> +   return -ENOMEM;
>
> node->tbo = tbo;
> ttm_resource_init(tbo, place, &node->base.base);
> +   if (!(place->flags & TTM_PL_FLAG_TEMPORARY) &&
> +   ttm_resource_manager_usage(man) > man->size) {
> +   r = -ENOSPC;
> +   goto err_free;
> +   }
>
> if (place->lpfn) {
> spin_lock(&mgr->lock);
> @@ -169,11 +167,6 @@ static int amdgpu_gtt_mgr_new(struct 
> ttm_resource_manager *man,
>  err_free:
> ttm_resource_fini(man, &node->base.base);
> kfree(node);
> -
> -err_out:
> -   if (!(place->flags & TTM_PL_FLAG_TEMPORARY))
> -   atomic64_sub(num_pages, &mgr->used);
> -
> return r;
>  }
>
> @@ -196,25 +189,10 @@ static void amdgpu_gtt_mgr_del(struct 
> ttm_resource_manager *man,
> drm_mm_remove_node(&node->base.mm_nodes[0]);
> spin_unlock(&mgr->lock);
>
> -   if (!(res->placement & TTM_PL_FLAG_TEMPORARY))
> -   atomic64_sub(res->num_pages, &mgr->used);
> -
> ttm_resource_fini(man, res);
> kfree(node);
>  }
>
> -/**
> - * amdgpu_gtt_mgr_usage - return usage of GTT domain
> - *
> - * @mgr: amdgpu_gtt_mgr pointer
> - *
> - * Return how many bytes are used in the GTT domain
> - */
> -uint64_t amdgpu_gtt_mgr_usage(struct amdgpu_gtt_mgr *mgr)
> -{
> -   return atomic64_read(&mgr->used) * PAGE_SIZE;
> -}
> -
>  /**
>   * amdgpu_gtt_mgr_recover - re-init gart
>   *
> @@ -260,9 +238,6 @@ static void amdgpu_gtt_mgr_debug(struct 
> ttm_resource_manager *man,
> spin_lock(&mgr->lock);
> drm_mm_print(&mgr->mm, printer);
> spin_unlock(&mgr->lock);
> -
> -   drm_printf(printer, "man size:%llu pages,  gtt used:%llu pages\n",
> -  man->size, atomic64_read(&mgr->used));
>  }
>
>  static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
> @@ -288,14 +263,12 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, 
> uint64_t gtt_size)
> man->use_tt = true;
> man->func = &amdgpu_gtt_mgr_func;
>
> -   ttm_resource_manager_init(man, &adev->mman.bdev,
> - gtt_size >> PAGE_SHIFT);
> +   ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
>
> start = AMDGPU_GTT_MAX_TRANSFER_SIZE * 
> AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
> size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
> drm_mm_init(&mgr->mm, start, size);
> spin_lock_init(&mgr->lock);
> -   atomic64_set(&mgr->used, 0);
>
>  
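The accounting the patch above removes followed a common reserve-then-rollback pattern: bump an atomic counter, and undo the bump if the result exceeds the manager's capacity. A userspace C sketch of that old scheme (illustrative sizes; the kernel uses `atomic64_add_return()` and returns `-ENOSPC`) shows why it became redundant once TTM tracked usage centrally via `ttm_resource_manager_usage()`:

```c
#include <stdatomic.h>

static _Atomic long used_pages;       /* mirrors the removed mgr->used */
#define MAN_SIZE 1024                 /* manager size in pages, illustrative */

/* Old-style reservation: optimistically add, roll back on overflow. */
static int reserve_pages(long num_pages)
{
    if (atomic_fetch_add(&used_pages, num_pages) + num_pages > MAN_SIZE) {
        atomic_fetch_sub(&used_pages, num_pages);  /* rollback */
        return -1;                                 /* -ENOSPC in the kernel */
    }
    return 0;
}

static void release_pages(long num_pages)
{
    atomic_fetch_sub(&used_pages, num_pages);
}
```

With TTM providing the same information for every resource manager, keeping a second, driver-local counter in sync (including the rollback paths the diff deletes) was pure duplication.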

Re: [Intel-gfx] [PATCH 0/3] Update firmware to v62.0.0

2021-06-16 Thread Mike Lothian
Hi

Is there a place where we can download these new firmware images?

Cheers

Mike

On Wed, 16 Jun 2021 at 00:55, Matthew Brost  wrote:

> As part of enabling GuC submission [1] we need to update to the latest
> and greatest firmware. This series does that. All backwards
> compatibility breaking changes squashed into a single patch #2. Same
> series sent to trybot [2] forcing GuC to be enabled to ensure we haven't
> broke something.
>
> v2: Address comments, looking for remaining RBs so patches can be
> squashed and sent for CI
> v3: Delete a few unused defines, squash patches
> v4: Add values back into kernel doc, fix docs warning
>
> Signed-off-by: Matthew Brost 
>
> [1] https://patchwork.freedesktop.org/series/89844
> [2] https://patchwork.freedesktop.org/series/91341
>
> Michal Wajdeczko (3):
>   drm/i915/guc: Introduce unified HXG messages
>   drm/i915/guc: Update firmware to v62.0.0
>   drm/i915/doc: Include GuC ABI documentation
>
>  Documentation/gpu/i915.rst|   8 +
>  .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  | 107 ++
>  .../gt/uc/abi/guc_communication_ctb_abi.h | 126 +--
>  .../gt/uc/abi/guc_communication_mmio_abi.h|  65 ++--
>  .../gpu/drm/i915/gt/uc/abi/guc_messages_abi.h | 213 +++
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c| 107 --
>  drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c|  45 +--
>  drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 356 +-
>  drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |   6 +-
>  drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |  75 +---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_log.c|  29 +-
>  drivers/gpu/drm/i915/gt/uc/intel_guc_log.h|   6 +-
>  drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c  |  26 +-
>  13 files changed, 748 insertions(+), 421 deletions(-)
>
> --
> 2.28.0
>
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
>


Re: [PATCH] drm/ttm: Remove pinned bos from LRU in ttm_bo_move_to_lru_tail() v2

2021-01-08 Thread Mike Lothian
Hi

This breaks things for me on my Prime system

https://gitlab.freedesktop.org/drm/misc/-/issues/23

Cheers

Mike
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: amdgpu does not support 3840x2160@30Hz on kaveri apu

2021-01-03 Thread Mike Lothian
Hi

You're probably best reporting the bug here:

https://gitlab.freedesktop.org/drm/amd/-/issues

Attach the output of dmesg from both Radeon and AMDGPU and the compositor /
Wayland logs (as you're not using X)

Cheers

Mike



On Sun, 3 Jan 2021, 06:32 Davide Corrado,  wrote:

> hello, I'd like to report this issue that I am having since I updated my
> display (samsung U28E590). The amdgpu does not support the native
> resolution of my new monitor, which is 3840x2160. Using an HDMI or DVI
> connection (I tried both, same results), the maximum supported refresh is
> 30Hz, so I'm stuck with that (don't have a displayport). The radeon module
> works fine, I'm having this issue just when I use amdgpu (which I'd like
> to, because performance is better).
>
> Some info of my hardware:
>
> cpu: AMD A10-7870K Radeon R7, 12 Compute Cores 4C+8G
> kernel version (I tried different ones and different linux distros, same
> results!): 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020
> x86_64 x86_64 x86_64 GNU/Linux
> Monitor: Samsung U28E590.
>
> description:
> If I boot the system using amdgpu and no video mode selection, the system
> boots but I don't get a screen during boot and in wayland. I can connect
> using ssh, so the system is running fine, just no display; If I force a
> full HD resolution with "video=" in the kernel line, I can see the boot
> process but the screen disappears when wayland starts (because the default
> resolution is 3840x2160@30Hz). Using a full HD monitor results in no
> issues, so it must be related to this very 4k resolution.
>
> As I have already stated, radeon module works with the same
> software/hardware configuration.
> thank you so much for your time :-)
>
> --
> Davide Corrado
> UNIX Team Leader
>
> Via Abramo Lincoln, 25
> 20129 Milano
>
> Tel +39 3474259950
> ___
> amd-gfx mailing list
> amd-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>


Re: [PATCH] drm/amdgpu: fix check order in amdgpu_bo_move

2020-12-14 Thread Mike Lothian
Tested-by: Mike Lothian 

Fixes https://gitlab.freedesktop.org/drm/amd/-/issues/1405

Can we make sure this gets into rc1?


On Tue, 17 Nov 2020 at 15:02, Nirmoy  wrote:
>
> Tested-by: Nirmoy Das 
> Tested on commit 96fb3cbef165db97c999a02efe2287ba4b8c1ceb (HEAD,
> drm-misc/drm-misc-next)
>
>
>
> On 11/16/20 8:14 PM, Christian König wrote:
> > Reorder the code to fix checking if blitting is available.
> >
> > Signed-off-by: Christian König 
> > Fixes: f5a89a5cae81 drm/amdgpu/ttm: use multihop
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 53 +++--
> >   1 file changed, 23 insertions(+), 30 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index 676fb520e044..c438d290a6db 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -551,25 +551,12 @@ static int amdgpu_bo_move(struct ttm_buffer_object 
> > *bo, bool evict,
> >   struct ttm_resource *old_mem = &bo->mem;
> >   int r;
> >
> > - if ((old_mem->mem_type == TTM_PL_SYSTEM &&
> > -  new_mem->mem_type == TTM_PL_VRAM) ||
> > - (old_mem->mem_type == TTM_PL_VRAM &&
> > -  new_mem->mem_type == TTM_PL_SYSTEM)) {
> > - hop->fpfn = 0;
> > - hop->lpfn = 0;
> > - hop->mem_type = TTM_PL_TT;
> > - hop->flags = 0;
> > - return -EMULTIHOP;
> > - }
> > -
> >   if (new_mem->mem_type == TTM_PL_TT) {
> >   r = amdgpu_ttm_backend_bind(bo->bdev, bo->ttm, new_mem);
> >   if (r)
> >   return r;
> >   }
> >
> > - amdgpu_bo_move_notify(bo, evict, new_mem);
> > -
> >   /* Can't move a pinned BO */
> >   abo = ttm_to_amdgpu_bo(bo);
> >   if (WARN_ON_ONCE(abo->tbo.pin_count > 0))
> > @@ -579,24 +566,23 @@ static int amdgpu_bo_move(struct ttm_buffer_object 
> > *bo, bool evict,
> >
> >   if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
> >   ttm_bo_move_null(bo, new_mem);
> > - return 0;
> > + goto out;
> >   }
> >   if (old_mem->mem_type == TTM_PL_SYSTEM &&
> >   new_mem->mem_type == TTM_PL_TT) {
> >   ttm_bo_move_null(bo, new_mem);
> > - return 0;
> > + goto out;
> >   }
> > -
> >   if (old_mem->mem_type == TTM_PL_TT &&
> >   new_mem->mem_type == TTM_PL_SYSTEM) {
> >   r = ttm_bo_wait_ctx(bo, ctx);
> >   if (r)
> > - goto fail;
> > + return r;
> >
> >   amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm);
> >   ttm_resource_free(bo, &bo->mem);
> >   ttm_bo_assign_mem(bo, new_mem);
> > - return 0;
> > + goto out;
> >   }
> >
> >   if (old_mem->mem_type == AMDGPU_PL_GDS ||
> > @@ -607,27 +593,37 @@ static int amdgpu_bo_move(struct ttm_buffer_object 
> > *bo, bool evict,
> >   new_mem->mem_type == AMDGPU_PL_OA) {
> >   /* Nothing to save here */
> >   ttm_bo_move_null(bo, new_mem);
> > - return 0;
> > + goto out;
> >   }
> >
> > - if (!adev->mman.buffer_funcs_enabled) {
> > + if (adev->mman.buffer_funcs_enabled) {
> > + if (((old_mem->mem_type == TTM_PL_SYSTEM &&
> > +   new_mem->mem_type == TTM_PL_VRAM) ||
> > +  (old_mem->mem_type == TTM_PL_VRAM &&
> > +   new_mem->mem_type == TTM_PL_SYSTEM))) {
> > + hop->fpfn = 0;
> > + hop->lpfn = 0;
> > + hop->mem_type = TTM_PL_TT;
> > + hop->flags = 0;
> > + return -EMULTIHOP;
> > + }
> > +
> > + r = amdgpu_move_blit(bo, evict, new_mem, old_mem);
> > + } else {
> >   r = -ENODEV;
> > - goto memcpy;
> >   }
> >
> > - r = amdgpu_move_blit(bo, evict, new_mem, old_mem);
> >   if (r) {
> > -memcpy:
> >   /* Check that all memory is CPU accessible */
> >   if (!am

Re: [PATCH 2/4] drm/amdgpu/ttm: use multihop

2020-12-12 Thread Mike Lothian
Hi

This patch is causing issues for me on both a Raven system and a Tonga
(PRIME) system

https://gitlab.freedesktop.org/drm/amd/-/issues/1405

It's only recently been merged into agd5f's tree - which is why I'm
only just noticing it

I realise this has now been submitted for rc1 - please can someone take a look

Thanks

Mike

On Mon, 9 Nov 2020 at 00:54, Dave Airlie  wrote:
>
> From: Dave Airlie 
>
> This removes the code to move resources directly between
> SYSTEM and VRAM in favour of using the core ttm mulithop code.
>
> Signed-off-by: Dave Airlie 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 136 +++-
>  1 file changed, 13 insertions(+), 123 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index ce0d82802333..e1458d575aa9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -512,119 +512,6 @@ static int amdgpu_move_blit(struct ttm_buffer_object 
> *bo,
> return r;
>  }
>
> -/**
> - * amdgpu_move_vram_ram - Copy VRAM buffer to RAM buffer
> - *
> - * Called by amdgpu_bo_move().
> - */
> -static int amdgpu_move_vram_ram(struct ttm_buffer_object *bo, bool evict,
> -   struct ttm_operation_ctx *ctx,
> -   struct ttm_resource *new_mem)
> -{
> -   struct ttm_resource *old_mem = &bo->mem;
> -   struct ttm_resource tmp_mem;
> -   struct ttm_place placements;
> -   struct ttm_placement placement;
> -   int r;
> -
> -   /* create space/pages for new_mem in GTT space */
> -   tmp_mem = *new_mem;
> -   tmp_mem.mm_node = NULL;
> -   placement.num_placement = 1;
> -   placement.placement = &placements;
> -   placement.num_busy_placement = 1;
> -   placement.busy_placement = &placements;
> -   placements.fpfn = 0;
> -   placements.lpfn = 0;
> -   placements.mem_type = TTM_PL_TT;
> -   placements.flags = 0;
> -   r = ttm_bo_mem_space(bo, &placement, &tmp_mem, ctx);
> -   if (unlikely(r)) {
> -   pr_err("Failed to find GTT space for blit from VRAM\n");
> -   return r;
> -   }
> -
> -   r = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
> -   if (unlikely(r))
> -   goto out_cleanup;
> -
> -   /* Bind the memory to the GTT space */
> -   r = amdgpu_ttm_backend_bind(bo->bdev, bo->ttm, &tmp_mem);
> -   if (unlikely(r)) {
> -   goto out_cleanup;
> -   }
> -
> -   /* blit VRAM to GTT */
> -   r = amdgpu_move_blit(bo, evict, &tmp_mem, old_mem);
> -   if (unlikely(r)) {
> -   goto out_cleanup;
> -   }
> -
> -   r = ttm_bo_wait_ctx(bo, ctx);
> -   if (unlikely(r))
> -   goto out_cleanup;
> -
> -   amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm);
> -   ttm_resource_free(bo, &bo->mem);
> -   ttm_bo_assign_mem(bo, new_mem);
> -out_cleanup:
> -   ttm_resource_free(bo, &tmp_mem);
> -   return r;
> -}
> -
> -/**
> - * amdgpu_move_ram_vram - Copy buffer from RAM to VRAM
> - *
> - * Called by amdgpu_bo_move().
> - */
> -static int amdgpu_move_ram_vram(struct ttm_buffer_object *bo, bool evict,
> -   struct ttm_operation_ctx *ctx,
> -   struct ttm_resource *new_mem)
> -{
> -   struct ttm_resource *old_mem = &bo->mem;
> -   struct ttm_resource tmp_mem;
> -   struct ttm_placement placement;
> -   struct ttm_place placements;
> -   int r;
> -
> -   /* make space in GTT for old_mem buffer */
> -   tmp_mem = *new_mem;
> -   tmp_mem.mm_node = NULL;
> -   placement.num_placement = 1;
> -   placement.placement = &placements;
> -   placement.num_busy_placement = 1;
> -   placement.busy_placement = &placements;
> -   placements.fpfn = 0;
> -   placements.lpfn = 0;
> -   placements.mem_type = TTM_PL_TT;
> -   placements.flags = 0;
> -   r = ttm_bo_mem_space(bo, &placement, &tmp_mem, ctx);
> -   if (unlikely(r)) {
> -   pr_err("Failed to find GTT space for blit to VRAM\n");
> -   return r;
> -   }
> -
> -   /* move/bind old memory to GTT space */
> -   r = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
> -   if (unlikely(r))
> -   return r;
> -
> -   r = amdgpu_ttm_backend_bind(bo->bdev, bo->ttm, &tmp_mem);
> -   if (unlikely(r)) {
> -   goto out_cleanup;
> -   }
> -
> -   ttm_bo_assign_mem(bo, &tmp_mem);
> -   /* copy to VRAM */
> -   r = amdgpu_move_blit(bo, evict, new_mem, old_mem);
> -   if (unlikely(r)) {
> -   goto out_cleanup;
> -   }
> -out_cleanup:
> -   ttm_resource_free(bo, &tmp_mem);
> -   return r;
> -}
> -
>  /**
>   * amdgpu_mem_visible - Check that memory can be accessed by 
> ttm_bo_move_memcpy
>   *
> @@ -664,6 +551,17 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, 
> bool e
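The multihop mechanism this series introduces can be summarised in a small model. A driver that cannot move a buffer directly between SYSTEM and VRAM fills in a "hop" placement (GTT/TT) and returns `-EMULTIHOP`; the TTM core then retries the move through that intermediate. The sketch below uses illustrative types, not the real `ttm_resource`/`ttm_place` structures:

```c
#include <errno.h>

enum mem_type { PL_SYSTEM, PL_TT, PL_VRAM };

struct hop { enum mem_type mem_type; };

/* Driver callback: direct SYSTEM<->VRAM moves are refused with a hop
 * request, mirroring the -EMULTIHOP block in amdgpu_bo_move(). */
static int driver_move(enum mem_type *cur, enum mem_type new_mem,
                       struct hop *hop)
{
    if ((*cur == PL_SYSTEM && new_mem == PL_VRAM) ||
        (*cur == PL_VRAM && new_mem == PL_SYSTEM)) {
        hop->mem_type = PL_TT;       /* ask the core to bounce via GTT */
        return -EMULTIHOP;
    }
    *cur = new_mem;                  /* direct move is fine */
    return 0;
}

/* Core: on -EMULTIHOP, retry through the intermediate placement. */
static int core_move(enum mem_type *cur, enum mem_type new_mem)
{
    struct hop hop;
    int r = driver_move(cur, new_mem, &hop);

    if (r == -EMULTIHOP) {
        r = driver_move(cur, hop.mem_type, &hop);  /* to TT first */
        if (!r)
            r = driver_move(cur, new_mem, &hop);   /* then to target */
    }
    return r;
}
```

This is why the patch can delete `amdgpu_move_vram_ram()`/`amdgpu_move_ram_vram()` wholesale: the two-step bounce they hand-coded is now a generic core loop.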

Re: [pull] amdgpu, amdkfd, radeon drm-next-5.6

2019-12-12 Thread Mike Lothian
Hi

Please can amdgpu/raven_ta.bin be published somewhere

Thanks

Mike

On Wed, 11 Dec 2019 at 22:30, Alex Deucher  wrote:
>
> Hi Dave, Daniel,
>
> Kicking off 5.6 with new stuff from AMD.  There is a UAPI addition.  We
> added a new firmware for display, and this just adds the version query
> to our existing firmware query interface.  UMDs like mesa use this interface 
> to
> query things like CP or UVD firmware versions to see what features are
> supported.
>
> The following changes since commit 622b2a0ab647d2755f2c1f1000d3403e86a69763:
>
>   drm/amdgpu/vcn: finish delay work before release resources (2019-11-13 
> 15:29:42 -0500)
>
> are available in the Git repository at:
>
>   git://people.freedesktop.org/~agd5f/linux tags/drm-next-5.6-2019-12-11
>
> for you to fetch changes up to ad808910be68dcf8da5d837d4511d00ad5d3678a:
>
>   drm/amdgpu: fix license on Kconfig and Makefiles (2019-12-11 15:22:08 -0500)
>
> 
> drm-next-5.6-2019-12-11:
>
> amdgpu:
> - Add MST atomic routines
> - Add support for DMCUB (new helper microengine for displays)
> - Add OEM i2c support in DC
> - Use vstartup for vblank events on DCN
> - Simplify Kconfig for DC
> - Renoir fixes for DC
> - Clean up function pointers in DC
> - Initial support for HDCP 2.x
> - Misc code cleanups
> - GFX10 fixes
> - Rework JPEG engine handling for VCN
> - Add clock and power gating support for JPEG
> - BACO support for Arcturus
> - Cleanup PSP ring handling
> - Add framework for using BACO with runtime pm to save power
> - Move core pci state handling out of the driver for pm ops
> - Allow guest power control in 1 VF case with SR-IOV
> - SR-IOV fixes
> - RAS fixes
> - Support for power metrics on renoir
> - Golden settings updates for gfx10
> - Enable gfxoff on supported navi10 skus
> - Update MAINTAINERS
>
> amdkfd:
> - Clean up generational gfx code
> - Fixes for gfx10
> - DIQ fixes
> - Share more code with amdgpu
>
> radeon:
> - PPC DMA fix
> - Register checker fixes for r1xx/r2xx
> - Misc cleanups
>
> 
> Alex Deucher (34):
>   drm/amdgpu/display: fix the build when CONFIG_DRM_AMD_DC_DCN is not set
>   drm/amdgpu/display: fix warning when CONFIG_DRM_AMD_DC_DCN is not set
>   drm/amdgpu/soc15: move struct definition around to align with other 
> soc15 asics
>   drm/amdgpu/nv: add asic func for fetching vbios from rom directly
>   drm/amdgpu/powerplay: properly set PP_GFXOFF_MASK (v2)
>   drm/amdgpu: disable gfxoff when using register read interface
>   drm/amdgpu: remove experimental flag for Navi14
>   drm/amdgpu: disable gfxoff on original raven
>   Revert "drm/amd/display: enable S/G for RAVEN chip"
>   drm/amdgpu: add asic callback for BACO support
>   drm/amdgpu: add supports_baco callback for soc15 asics. (v2)
>   drm/amdgpu: add supports_baco callback for SI asics.
>   drm/amdgpu: add supports_baco callback for CIK asics.
>   drm/amdgpu: add supports_baco callback for VI asics.
>   drm/amdgpu: add supports_baco callback for NV asics.
>   drm/amdgpu: add a amdgpu_device_supports_baco helper
>   drm/amdgpu: rename amdgpu_device_is_px to amdgpu_device_supports_boco 
> (v2)
>   drm/amdgpu: add additional boco checks to runtime suspend/resume (v2)
>   drm/amdgpu: split swSMU baco_reset into enter and exit
>   drm/amdgpu: add helpers for baco entry and exit
>   drm/amdgpu: add baco support to runtime suspend/resume
>   drm/amdgpu: start to disentangle boco from runtime pm
>   drm/amdgpu: disentangle runtime pm and vga_switcheroo
>   drm/amdgpu: enable runtime pm on BACO capable boards if runpm=1
>   drm/amdgpu: simplify runtime suspend
>   drm/amd/display: add default clocks if not able to fetch them
>   MAINTAINERS: Drop Rex Zhu for amdgpu powerplay
>   drm/amdgpu: move pci handling out of pm ops
>   drm/amdgpu: flag vram lost on baco reset for VI/CIK
>   drm/amd/display: re-enable wait in pipelock, but add timeout
>   drm/radeon: fix r1xx/r2xx register checker for POT textures
>   drm/amdgpu: add header line for power profile on Arcturus
>   drm/amdgpu/display: add fallthrough comment
>   drm/amdgpu: fix license on Kconfig and Makefiles
>
> Alex Sierra (2):
>   drm/amdgpu: add flag to indicate amdgpu vm context
>   amd/amdgpu: force to trigger a no-retry-fault after a retry-fault
>
> Alvin Lee (1):
>   drm/amd/display: Changes in dc to allow full update in some cases
>
> Amanda Liu (1):
>   drm/amd/display: Fix screen tearing on vrr tests
>
> Andrey Grodzovsky (1):
>   drm/amdgpu: Fix BACO entry failure in NAVI10.
>
> Anthony Koo (8):
>   drm/amd/display: set MSA MISC1 bit 6 while sending colorimetry in VSC 
> SDP
>   drm/amd/display: Clean up some code with unused registers
>   drm/amd/display: cleanup of construct and destruct funcs
>   drm/amd/d

Re: [PATCH 0/7] gem_bo.resv prime unification, leftovers

2019-06-26 Thread Mike Lothian
I'll try testing this on my Skylake/Tonga setup tonight

On Tue, 25 Jun 2019 at 21:42, Daniel Vetter  wrote:
>
> Hi all,
>
> Here's the unmerged leftovers from my big prime cleanup series:
> - using the prepare_fb helper in vc4&msm, now hopefully fixed up. The
>   helper should be now even more useful.
>
> - amd&nv driver ->gem_prime_res_obj callback removal. I think this one
>   might have functional conflicts with Gerd's patch series to embed
>   drm_gem_object in ttm_bo, or at least needs to be re-reviewed before we
>   merge the 2nd series.
>
> Comments, testing, feedback as usual very much welcome.
>
> Thanks, Daniel
>
> Daniel Vetter (7):
>   drm/fb-helper: use gem_bo.resv, not dma_buf.resv in prepare_fb
>   drm/msm: Use drm_gem_fb_prepare_fb
>   drm/vc4: Use drm_gem_fb_prepare_fb
>   drm/radeon: Fill out gem_object->resv
>   drm/nouveau: Fill out gem_object->resv
>   drm/amdgpu: Fill out gem_object->resv
>   drm/prime: Ditch gem_prime_res_obj hook
>
>  Documentation/gpu/todo.rst   |  9 --
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c  | 17 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h  |  1 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c  |  1 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c   |  2 ++
>  drivers/gpu/drm/drm_gem_framebuffer_helper.c | 29 ++--
>  drivers/gpu/drm/drm_prime.c  |  3 --
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c|  8 ++
>  drivers/gpu/drm/msm/msm_atomic.c |  8 ++
>  drivers/gpu/drm/nouveau/nouveau_bo.c |  2 ++
>  drivers/gpu/drm/nouveau/nouveau_drm.c|  1 -
>  drivers/gpu/drm/nouveau/nouveau_gem.h|  1 -
>  drivers/gpu/drm/nouveau/nouveau_prime.c  |  7 -
>  drivers/gpu/drm/radeon/radeon_drv.c  |  2 --
>  drivers/gpu/drm/radeon/radeon_object.c   |  1 +
>  drivers/gpu/drm/radeon/radeon_prime.c|  7 -
>  drivers/gpu/drm/vc4/vc4_plane.c  |  5 ++--
>  include/drm/drm_drv.h| 12 
>  18 files changed, 26 insertions(+), 90 deletions(-)
>
> --
> 2.20.1
>
> ___
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

Re: [PATCH v2 0/3] A DRM API for adaptive sync and variable refresh rate support

2018-10-03 Thread Mike Lothian
Hi

I'm curious to know whether this will/could work over PRIME

If the discrete card is rendering slower than the display's frequency, could
the frequency be dropped on the integrated display? I think there are laptops
that have freesync now

Cheers

Mike

On Mon, 1 Oct 2018 at 08:15 Daniel Vetter  wrote:

> On Mon, Sep 24, 2018 at 02:15:34PM -0400, Nicholas Kazlauskas wrote:
> > These patches are part of a proposed new interface for supporting
> variable refresh rate via DRM properties.
> >
> > === Changes from v1 ===
> >
> > For drm:
> >
> > * The variable_refresh_capable property is now flagged as
> DRM_MODE_PROP_IMMUTABLE
> >
> > For drm/gpu/amd/display:
> >
> > * Patches no longer pull in IOCTL/FreeSync refactoring code
> > * FreeSync enable/disable behavior has been modified to reflect changes
> in userspace behavior from xf86-video-amdgpu and mesa
> >
> > === Adaptive sync and variable refresh rate ===
> >
> > Adaptive sync is part of the DisplayPort spec and allows for graphics
> adapters to drive displays with varying frame timings.
> >
> > Variable refresh rate (VRR) is essentially the same, but defined for
> HDMI.
> >
> > === Use cases for variable refresh rate ===
> >
> > Variable frame (flip) timings don't align well with fixed refresh rate
> displays. This results in stuttering, tearing and/or input lag. By
> adjusting the display refresh rate dynamically these issues can be reduced
> or eliminated.
> >
> > However, not all content is suitable for dynamic refresh adaptation.
> Content that is flipped infrequently or at random intervals tends to fare
> poorly. Multiple clients trying to flip under the same screen can similarly
> interfere with prediction.
> >
> > Userland needs a way to let the driver know when the content on the
> screen is suitable for variable refresh rate and if the user wishes to have
> the feature enabled.
> >
> > === DRM API to support variable refresh rates ===
> >
> > This patch introduces a new API via atomic properties on the DRM
> connector and CRTC.
> >
> > The connector has two new optional properties:
> >
> > * bool variable_refresh_capable - set by the driver if the hardware is
> capable of supporting variable refresh tech
> >
> > * bool variable_refresh_enabled - set by the user to enable variable
> refresh adjustment over the connector
> >
> > The CRTC has one additional default property:
> >
> > * bool variable_refresh - a content hint to the driver specifying that
> the CRTC contents are suitable for variable refresh adjustment
> >
> > === Overview for DRM driver developers ===
> >
> > Driver developers can attach the optional connector properties via
> drm_connector_attach_variable_refresh_properties on connectors that support
> variable refresh (typically DP or HDMI).
> >
> > The variable_refresh_capable property should be managed as the output on
> the connector changes. The property is read only from userspace.
> >
> > The variable_refresh_enabled property is intended to be a property
> controlled by userland as a global on/off switch for variable refresh
> technology. It should be checked before enabling variable refresh rate.
> >
> > === Overview for Userland developers ===
> >
> > The variable_refresh property on the CRTC should be set to true when the
> CRTCs are suitable for variable refresh rate. In practice this is probably
> an application like a game - a single window that covers the whole CRTC
> surface and is the only client issuing flips.
> >
> > To demonstrate the suitability of the API for variable refresh and
> dynamic adaptation there are additional patches using this API that
> implement adaptive variable refresh across kernel and userland projects:
> >
> > * DRM (dri-devel)
> > * amdgpu DRM kernel driver (amd-gfx)
> > * xf86-video-amdgpu (amd-gfx)
> > * mesa (mesa-dev)
> >
> > These patches enable adaptive variable refresh on X for AMD hardware
> provided that the user sets the variable_refresh_enabled property to true
> on supported connectors (ie. using xrandr --set-prop).
> >
> > The patches have been tested as working on upstream userland with the
> GNOME desktop environment under a single monitor setup. They also work on
> KDE in single monitor setup if the compositor is disabled.
> >
> > The patches require that the application window can issue screen flips
> via the Present extension to xf86-video-amdgpu. Due to Present extension
> limitations some desktop environments and multi-monitor setups are
> currently not compatible.
> >
> > Full implementation details for these changes can be reviewed in their
> respective mailing lists.
> >
> > === Previous discussions ===
> >
> > These patches are based upon feedback from patches and feedback from two
> previous threads on the subject which are linked below for reference:
> >
> > https://lists.freedesktop.org/archives/amd-gfx/2018-April/021047.html
> >
> https://lists.freedesktop.org/archives/dri-devel/2017-October/155207.html
> >
> https://lists.freedesktop.org/archives/dri-devel/2018-Sept

Re: [PATCH 2/2] drm/ttm: once more fix ttm_bo_bulk_move_lru_tail v2

2018-09-13 Thread Mike Lothian
Hi

This fixes a boot issue I had on Raven (
https://bugs.freedesktop.org/show_bug.cgi?id=107922)

Feel free to add to both patches:

Tested-by: Mike Lothian 

Cheers

Mike

On Thu, 13 Sep 2018 at 12:22 Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:

> While cutting the lists we sometimes accidentally added a list_head from
> the stack to the LRUs, effectively corrupting the list.
>
> Remove the list cutting and use explicit list manipulation instead.
>
> v2: separate out new list_bulk_move_tail helper
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/ttm/ttm_bo.c | 46
> +++-
>  1 file changed, 20 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 138c98902033..26b889f86670 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -247,47 +247,38 @@ void ttm_bo_move_to_lru_tail(struct
> ttm_buffer_object *bo,
>  }
>  EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
>
> -static void ttm_bo_bulk_move_helper(struct ttm_lru_bulk_move_pos *pos,
> -   struct list_head *lru, bool is_swap)
> -{
> -   struct list_head *list;
> -   LIST_HEAD(entries);
> -   LIST_HEAD(before);
> -
> -   reservation_object_assert_held(pos->last->resv);
> -   list = is_swap ? &pos->last->swap : &pos->last->lru;
> -   list_cut_position(&entries, lru, list);
> -
> -   reservation_object_assert_held(pos->first->resv);
> -   list = is_swap ? pos->first->swap.prev : pos->first->lru.prev;
> -   list_cut_position(&before, &entries, list);
> -
> -   list_splice(&before, lru);
> -   list_splice_tail(&entries, lru);
> -}
> -
>  void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk)
>  {
> unsigned i;
>
> for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
> +   struct ttm_lru_bulk_move_pos *pos = &bulk->tt[i];
> struct ttm_mem_type_manager *man;
>
> -   if (!bulk->tt[i].first)
> +   if (!pos->first)
> continue;
>
> -   man = &bulk->tt[i].first->bdev->man[TTM_PL_TT];
> -   ttm_bo_bulk_move_helper(&bulk->tt[i], &man->lru[i], false);
> +   reservation_object_assert_held(pos->first->resv);
> +   reservation_object_assert_held(pos->last->resv);
> +
> +   man = &pos->first->bdev->man[TTM_PL_TT];
> +   list_bulk_move_tail(&man->lru[i], &pos->first->lru,
> +   &pos->last->lru);
> }
>
> for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
> +   struct ttm_lru_bulk_move_pos *pos = &bulk->vram[i];
> struct ttm_mem_type_manager *man;
>
> -   if (!bulk->vram[i].first)
> +   if (!pos->first)
> continue;
>
> -   man = &bulk->vram[i].first->bdev->man[TTM_PL_VRAM];
> -   ttm_bo_bulk_move_helper(&bulk->vram[i], &man->lru[i],
> false);
> +   reservation_object_assert_held(pos->first->resv);
> +   reservation_object_assert_held(pos->last->resv);
> +
> +   man = &pos->first->bdev->man[TTM_PL_VRAM];
> +   list_bulk_move_tail(&man->lru[i], &pos->first->lru,
> +   &pos->last->lru);
> }
>
> for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
> @@ -297,8 +288,11 @@ void ttm_bo_bulk_move_lru_tail(struct
> ttm_lru_bulk_move *bulk)
> if (!pos->first)
> continue;
>
> +   reservation_object_assert_held(pos->first->resv);
> +   reservation_object_assert_held(pos->last->resv);
> +
> lru = &pos->first->bdev->glob->swap_lru[i];
> -   ttm_bo_bulk_move_helper(&bulk->swap[i], lru, true);
> +   list_bulk_move_tail(lru, &pos->first->swap,
> &pos->last->swap);
> }
>  }
>  EXPORT_SYMBOL(ttm_bo_bulk_move_lru_tail);
> --
> 2.14.1
>
>


Re: [PATCH v5 0/5] drm/ttm, amdgpu: Introduce LRU bulk move functionality

2018-09-02 Thread Mike Lothian
Hi

Is there an updated series? These no longer apply for me

Thanks

Mike

On Wed, 22 Aug 2018 at 09:42 Huang Rui  wrote:

> On Wed, Aug 22, 2018 at 04:24:02PM +0800, Christian König wrote:
> > Please commit patches #1, #2 and #3, doesn't make much sense to send
> > them out even more often.
> >
> > Jerry's comments on patch #4 sound valid to me as well, but with those
> > minor issues fixes/commented I think we can commit it.
> >
> > Thanks for taking care of this,
> > Christian.
>
> OK. Thanks to your time.
>
> Thanks,
> Ray
>
> >
> > Am 22.08.2018 um 09:52 schrieb Huang Rui:
> > > The idea and proposal is originally from Christian, and I continue to
> work to
> > > deliver it.
> > >
> > > Background:
> > > The amdgpu driver moves all PD/PT and per-VM BOs to the idle list, then
> > > moves all of them to the end of the LRU list one by one. That causes many
> > > BOs to be moved to the end of the LRU and seriously impacts performance.
> > >
> > > Then Christian provided a workaround to not move PD/PT BOs on LRU with
> below
> > > patch:
> > > Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
> > > validating VM PTs")
> > >
> > > However, the final solution should bulk move all PD/PT and PerVM BOs
> on the LRU
> > > instead of one by one.
> > >
> > > Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which
> need to be
> > > validated we move all BOs together to the end of the LRU without
> dropping the
> > > lock for the LRU.
> > >
> > > While doing so we note the beginning and end of this block in the LRU
> list.
> > >
> > > Now when amdgpu_vm_validate_pt_bos() is called and we don't have
> anything to do,
> > > we don't move every BO one by one, but instead cut the LRU list into
> pieces so
> > > that we bulk move everything to the end in just one operation.
> > >
> > > Test data:
> > >
> > > +--------------+-----------------+-----------+---------------------------------------+
> > > |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> > > |              |Principle(Vulkan)|           |                                       |
> > > +--------------+-----------------+-----------+---------------------------------------+
> > > |              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
> > > | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> > > +--------------+-----------------+-----------+---------------------------------------+
> > > | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> > > |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> > > |PT BOs on LRU)|                 |           |                                       |
> > > +--------------+-----------------+-----------+---------------------------------------+
> > > | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> > > |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> > > +--------------+-----------------+-----------+---------------------------------------+
> > >
> > > After testing with the three benchmarks above, including Vulkan and
> > > OpenCL, we can see a visible improvement over the original, and even
> > > better results than the original with the workaround.
> > >
> > > Changes from V1 -> V2:
> > > - Fix missed BOs in relocated/moved that should also be moved to the
> > >   end of the LRU.
> > >
> > > Changes from V2 -> V3:
> > > - Remove unused parameter and use list_for_each_entry instead of the
> one with
> > >save entry.
> > >
> > > Changes from V3 -> V4:
> > > - Move the amdgpu_vm_move_to_lru_tail after command submission, at
> that time,
> > >all bo will be back on idle list.
> > >
> > > Changes from V4 -> V5:
> > > - Remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable
> instread of
> > >validated, and move ttm_bo_bulk_move_lru_tail() also into
> > >amdgpu_vm_move_to_lru_tail().
> > >
> > > Thanks,
> > > Ray
> > >
> > > Christian König (2):
> > >drm/ttm: add helper structures for bulk moves on lru list
> > >drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
> > >
> > > Huang Rui (3):
> > >drm/ttm: add bulk move function on LRU
> > >drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
> > >drm/amdgpu: move PD/PT bos on LRU again
> > >
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 10 +
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 68
> +++--
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 -
> > >   drivers/gpu/drm/ttm/ttm_bo.c   | 78
> +-
> > >   include/drm/ttm/ttm_bo_api.h   | 16 ++-
> > >   include/drm/ttm/ttm_bo_driver.h| 28 
> > >   6 files changed, 186 insertions(+), 25 deletions(-)
> > >
> >

Re: [PATCH v2 0/5] drm/ttm, amdgpu: Introduce LRU bulk move functionality

2018-08-12 Thread Mike Lothian
Hi

I've been testing these patches over the weekend on my Tonga and Raven
systems

Tested-by: Mike Lothian 

Cheers

Mike

On Fri, 10 Aug 2018 at 12:56 Huang Rui  wrote:

> The idea and proposal is originally from Christian, and I continue to work
> to
> deliver it.
>
> Background:
> The amdgpu driver moves all PD/PT and per-VM BOs to the idle list, then
> moves all of them to the end of the LRU list one by one. That causes many
> BOs to be moved to the end of the LRU and seriously impacts performance.
>
> Then Christian provided a workaround to not move PD/PT BOs on LRU with
> below
> patch:
> "drm/amdgpu: band aid validating VM PTs"
> Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae
>
> However, the final solution should bulk move all PD/PT and PerVM BOs on
> the LRU
> instead of one by one.
>
> Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need
> to be
> validated we move all BOs together to the end of the LRU without dropping
> the
> lock for the LRU.
>
> While doing so we note the beginning and end of this block in the LRU list.
>
> Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything
> to do,
> we don't move every BO one by one, but instead cut the LRU list into
> pieces so
> that we bulk move everything to the end in just one operation.
>
> Test data:
>
> +--------------+-----------------+-----------+---------------------------------------+
> |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> |              |Principle(Vulkan)|           |                                       |
> +--------------+-----------------+-----------+---------------------------------------+
> |              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
> | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> +--------------+-----------------+-----------+---------------------------------------+
> | Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> |PT BOs on LRU)|                 |           |                                       |
> +--------------+-----------------+-----------+---------------------------------------+
> | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> +--------------+-----------------+-----------+---------------------------------------+
>
> After testing with the three benchmarks above, including Vulkan and OpenCL,
> we can see a visible improvement over the original, and even better results
> than the original with the workaround.
>
> Changes from V1 -> V2:
> - Fix missed BOs in relocated/moved that should also be moved to the end
>   of the LRU.
>
> Thanks,
> Rui
>
> Christian König (2):
>   drm/ttm: add helper structures for bulk moves on lru list
>   drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
>
> Huang Rui (3):
>   drm/ttm: add bulk move function on LRU
>   drm/amdgpu: use bulk moves for efficient VM LRU handling (v2)
>   drm/amdgpu: move PD/PT bos on LRU again
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 77
> +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
>  drivers/gpu/drm/ttm/ttm_bo.c   | 78
> +-
>  include/drm/ttm/ttm_bo_api.h   | 16 ++-
>  include/drm/ttm/ttm_bo_driver.h| 28 
>  5 files changed, 184 insertions(+), 19 deletions(-)
>
> --
> 2.7.4
>
>


Re: [PATCH] MAINTAINERS: update drm tree

2018-07-06 Thread Mike Lothian
Hi

Any chance of this moving to https, or to the GitLab instance where it's now
hosted by default?

Cheers

Mike

On Fri, 6 Jul 2018 at 16:25 Alex Deucher  wrote:

> On Fri, Jul 6, 2018 at 3:28 AM, Daniel Vetter 
> wrote:
> > Mail to dri-devel went out, linux-next was updated, but we forgot this
> > one here.
> >
> > Cc: David Airlie 
> > Signed-off-by: Daniel Vetter 
>
> Acked-by: Alex Deucher 
>
> > ---
> >  MAINTAINERS | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 07d1576fc766..8edb34d28f23 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -581,7 +581,7 @@ W:  https://www.infradead.org/~dhowells/kafs/
> >
> >  AGPGART DRIVER
> >  M: David Airlie 
> > -T: git git://people.freedesktop.org/~airlied/linux (part of drm
> maint)
> > +T: git git://anongit.freedesktop.org/drm/drm
> >  S: Maintained
> >  F: drivers/char/agp/
> >  F: include/linux/agp*
> > @@ -4630,7 +4630,7 @@ F:include/uapi/drm/vmwgfx_drm.h
> >  DRM DRIVERS
> >  M: David Airlie 
> >  L: dri-devel@lists.freedesktop.org
> > -T: git git://people.freedesktop.org/~airlied/linux
> > +T: git git://anongit.freedesktop.org/drm/drm
> >  B: https://bugs.freedesktop.org/
> >  C: irc://chat.freenode.net/dri-devel
> >  S: Maintained
> > --
> > 2.18.0
> >
>


Re: [PATCH 0/7] Modernize vga_switcheroo by using device link for HDA

2018-02-25 Thread Mike Lothian
Hi

I've not seen anything untoward with these patches with my AMD PRIME system here

Tested-by: Mike Lothian 

Cheers

Mike


Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

2018-02-12 Thread Mike Lothian
On 12 February 2018 at 03:39, Lukas Wunner  wrote:
> On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
>> I've not been able to reproduce the original problem you're trying to
>> solve on amdgpu, that's with or without your patch set and the above
>> "trigger" too
>>
>> Is anything else required to trigger it, I started multiple DRI_PRIME
>> glxgears, in parallel, serial waiting the 12 seconds and serial within
>> the 12 seconds and I couldn't reproduce it
>
> The discrete GPU needs to runtime suspend, that's the trigger,
> so no DRI_PRIME executables should be running.  Just let it
> autosuspend after boot.  Do you see "waiting 12 sec" messages
> in dmesg?  If not it's not autosuspending.
>
> Thanks,
>
> Lukas

Hi

Yes, I'm seeing those messages; I'm just not seeing the hangs

I've attached the dmesg in case you're interested

Regards

Mike


dmesg.nohangs
Description: Binary data


Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

2018-02-11 Thread Mike Lothian
Hi

I've not been able to reproduce the original problem you're trying to
solve on amdgpu, that's with or without your patch set and the above
"trigger" too

Is anything else required to trigger it, I started multiple DRI_PRIME
glxgears, in parallel, serial waiting the 12 seconds and serial within
the 12 seconds and I couldn't reproduce it

Regards

Mike


Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

2018-02-11 Thread Mike Lothian
On 11 February 2018 at 09:38, Lukas Wunner  wrote:
> Fix a deadlock on hybrid graphics laptops that's been present since 2013:
>
> DRM drivers poll connectors in 10 sec intervals.  The poll worker is
> stopped on ->runtime_suspend with cancel_delayed_work_sync().  However
> the poll worker invokes the DRM drivers' ->detect callbacks, which call
> pm_runtime_get_sync().  If the poll worker starts after runtime suspend
> has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> with the intention of runtime resuming the device afterwards.  The result
> is a circular wait between poll worker and autosuspend worker.
>
> I'm seeing this deadlock so often it's not funny anymore. I've also found
> 3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
> [4/5].)
>
> One theoretical solution would be to add a flag to the ->detect callback
> to tell it that it's running in the poll worker's context.  In that case
> it doesn't need to call pm_runtime_get_sync() because the poll worker is
> only enabled while runtime active and we know that ->runtime_suspend
> waits for it to finish.  But this approach would require touching every
> single DRM driver's ->detect hook.  Moreover the ->detect hook is called
> from numerous other places, both in the DRM library as well as in each
> driver, making it necessary to trace every possible call chain and check
> if it's coming from the poll worker or not.  I've tried to do that for
> nouveau (see the call sites listed in the commit message of patch [3/5])
> and concluded that this approach is a nightmare to implement.
>
> Another reason for the infeasibility of this approach is that ->detect
> is called from within the DRM library without driver involvement, e.g.
> via DRM's sysfs interface.  In those cases, pm_runtime_get_sync() would
> have to be called by the DRM library, but the DRM library is supposed to
> stay generic, whereas runtime PM is driver-specific.
>
> So instead, I've come up with this much simpler solution which gleans
> from the current task's flags if it's a workqueue worker, retrieves the
> work_struct and checks if it's the poll worker.  All that's needed is
> one small helper in the workqueue code and another small helper in the
> DRM library.  This solution lends itself nicely to stable-backporting.
>
> The patches for radeon and amdgpu are compile-tested only, I only have a
> MacBook Pro with an Nvidia GK107 to test.  To test the patches, add an
> "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> This ensures that the poll worker runs after ->runtime_suspend has begun.
> Wait 12 sec after the GPU has begun runtime suspend, then check
> /sys/bus/pci/devices/:01:00.0/power/runtime_status.  Without this
> series, the status will be stuck at "suspending" and you'll get hung task
> errors in dmesg after a few minutes.
>
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend.  But this results in the GPU bouncing back and forth
> between D0 and D3 continuously.  That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy.  (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable.  msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend.  cc += Archit, Liviu, intel-gfx)
>
> Please review.  Thanks,
>
> Lukas
>
> Lukas Wunner (5):
>   workqueue: Allow retrieval of current task's work struct
>   drm: Allow determining if current task is output poll worker
>   drm/nouveau: Fix deadlock on runtime suspend
>   drm/radeon: Fix deadlock on runtime suspend
>   drm/amdgpu: Fix deadlock on runtime suspend
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +---
>  drivers/gpu/drm/drm_probe_helper.c | 14 +
>  drivers/gpu/drm/nouveau/nouveau_connector.c| 18 +--
>  drivers/gpu/drm/radeon/radeon_connectors.c | 74 
> +-
>  include/drm/drm_crtc_helper.h  |  1 +
>  include/linux/workqueue.h  |  1 +
>  kernel/workqueue.c | 16 ++
>  7 files changed, 132 insertions(+), 50 deletions(-)
>
> --
> 2.15.1
>


I wasn't quite sure where to add that msleep. I've tested the patches
as is on top of agd5f's wip branch without ill effects

I've had a radeon and now an amdgpu PRIME setup and don't believe I've
ever seen this issue

If you could pop a patch together for the msleep I'll give it a test on amdgpu

Cheers

Mike


Re: [PATCH] drm: fix gpu scheduler link order

2018-01-24 Thread Mike Lothian
On 24 January 2018 at 10:46, Christian König
 wrote:
> It should initialize before the drivers using it.
>
> Signed-off-by: Christian König 
> Bug: https://bugs.freedesktop.org/show_bug.cgi?id=104736
> ---
>  drivers/gpu/drm/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index dd5ae67f8e2b..50093ff4479b 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -50,6 +50,7 @@ obj-$(CONFIG_DRM_MIPI_DSI) += drm_mipi_dsi.o
>  obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
>  obj-$(CONFIG_DRM_ARM)  += arm/
>  obj-$(CONFIG_DRM_TTM)  += ttm/
> +obj-$(CONFIG_DRM_SCHED)+= scheduler/
>  obj-$(CONFIG_DRM_TDFX) += tdfx/
>  obj-$(CONFIG_DRM_R128) += r128/
>  obj-y  += amd/lib/
> @@ -102,4 +103,3 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
>  obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
>  obj-$(CONFIG_DRM_PL111) += pl111/
>  obj-$(CONFIG_DRM_TVE200) += tve200/
> -obj-$(CONFIG_DRM_SCHED)+= scheduler/
> --
> 2.14.1
>

Reviewed-by: Mike Lothian 


Re: Initial release of AMD Open Source Driver for Vulkan

2017-12-22 Thread Mike Lothian
Hi

No, I'm not using ICC; however, that section prevents you from using Clang,
since it basically assumes that any compiler other than GCC is Intel's

Cheers

Mike

On Fri, 22 Dec 2017, 3:04 pm Mao, David,  wrote:

> Hi Lothian,
> Thanks for testing out out driver!
> Officially we recommend you to stick to GCC5 for now, however, we do have
> a fix for the constexpr issue mentioned below that just didn’t make it to
> this first release.
> According to your diff, are you using ICC?
> Could you let us know the compiler version as well as your distro?
>
> Thanks.
> Best Regards,
> David
>
>
> On Dec 22, 2017, at 9:48 PM, Mike Lothian  wrote:
>
> Congratulations on getting this out the door
>
> It didn't compile for me without these changes:
>
> In pal:
>
> diff --git a/src/util/math.cpp b/src/util/math.cpp
> index 46e9ede..3af4259 100644
> --- a/src/util/math.cpp
> +++ b/src/util/math.cpp
> @@ -54,7 +54,7 @@ static uint32 Float32ToFloatN(float f, const
> NBitFloatInfo& info);
>  static float FloatNToFloat32(uint32 fBits, const NBitFloatInfo& info);
>
>  // Initialize the descriptors for various N-bit floating point
> representations:
> -static constexpr NBitFloatInfo Float16Info =
> +static NBitFloatInfo Float16Info =
>  {
>  16,   // numBits
>  10,   //
> numFracBits
> @@ -72,7 +72,7 @@ static constexpr NBitFloatInfo Float16Info =
>  (23 - 10),//
> fracBitsDiff
>  };
>
> -static constexpr NBitFloatInfo Float11Info =
> +static NBitFloatInfo Float11Info =
>  {
>  11,   // numBits
>  6,//
> numFracBits
> @@ -90,7 +90,7 @@ static constexpr NBitFloatInfo Float11Info =
>  23 - 6,   //
> fracBitsDiff
>  };
>
> -static constexpr NBitFloatInfo Float10Info =
> +static NBitFloatInfo Float10Info =
>  {
>  10,   // numBits
>  5,//
> numFracBits
>
> In xgl:
>
> diff --git a/icd/CMakeLists.txt b/icd/CMakeLists.txt
> index 4e4d669..5006184 100644
> --- a/icd/CMakeLists.txt
> +++ b/icd/CMakeLists.txt
> @@ -503,16 +503,16 @@ if (UNIX)
>
>  target_link_libraries(xgl PRIVATE c stdc++ ${CMAKE_DL_LIBS} pthread)
>
> -if(NOT ICD_USE_GCC)
> -message(WARNING "Intel ICC untested in CMake.")
> -target_link_libraries(xgl PRIVATE -fabi-version=0 -static-intel)
> -endif()
> +#if(NOT ICD_USE_GCC)
> +#message(WARNING "Intel ICC untested in CMake.")
> +#target_link_libraries(xgl PRIVATE -fabi-version=0 -static-intel)
> +#endif()
>
>  if(CMAKE_BUILD_TYPE_RELEASE)
>  if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
>  execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion
> OUTPUT_VARIABLE GCC_VERSION)
>  if (GCC_VERSION VERSION_GREATER 5.3 OR GCC_VERSION
> VERSION_EQUAL 5.3)
> -target_link_libraries(xgl PRIVATE -flto=4
> -fuse-linker-plugin -Wno-odr)
> +target_link_libraries(xgl PRIVATE -Wno-odr)
>  message(WARNING "LTO enabled for Linking")
>  endif()
>  endif()
> @@ -530,17 +530,17 @@ if (UNIX)
>
>  # CMAKE-TODO: What is whole-archive used for?
>  #target_link_libraries(xgl -Wl,--whole-archive ${ICD_LIBS}
> -Wl,--no-whole-archive)
> -if(CMAKE_BUILD_TYPE_RELEASE)
> -execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion
> OUTPUT_VARIABLE GCC_VERSION)
> -if (GCC_VERSION VERSION_GREATER 5.3 OR GCC_VERSION VERSION_EQUAL
> 5.3)
> -target_link_libraries(xgl PRIVATE -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/src/libpal.a -Wl,--no-whole-archive)
> -target_link_libraries(xgl PUBLIC -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/metrohash/libmetrohash.a -Wl,--no-whole-archive)
> -target_link_libraries(xgl PUBLIC -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/gpuopen/libgpuopen.a -Wl,--no-whole-archive)
> -target_link_libraries(xgl PUBLIC -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/vam/libvam.a -Wl,--no-whole-archive)
> -target_link_libraries(xgl PUBLIC -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/addrlib/libaddrlib.a -Wl,--no-whole-archive)
> -target_link_libraries(xgl PUBLIC -Wl,--whole-archive
> ${PROJECT_BINARY_DIR}/pal/jemalloc/libjemalloc.a 

Re: Initial release of AMD Open Source Driver for Vulkan

2017-12-22 Thread Mike Lothian
Congratulations on getting this out the door

It didn't compile for me without these changes:

In pal:

diff --git a/src/util/math.cpp b/src/util/math.cpp
index 46e9ede..3af4259 100644
--- a/src/util/math.cpp
+++ b/src/util/math.cpp
@@ -54,7 +54,7 @@ static uint32 Float32ToFloatN(float f, const
NBitFloatInfo& info);
 static float FloatNToFloat32(uint32 fBits, const NBitFloatInfo& info);

 // Initialize the descriptors for various N-bit floating point
representations:
-static constexpr NBitFloatInfo Float16Info =
+static NBitFloatInfo Float16Info =
 {
 16,   // numBits
 10,   //
numFracBits
@@ -72,7 +72,7 @@ static constexpr NBitFloatInfo Float16Info =
 (23 - 10),//
fracBitsDiff
 };

-static constexpr NBitFloatInfo Float11Info =
+static NBitFloatInfo Float11Info =
 {
 11,   // numBits
 6,//
numFracBits
@@ -90,7 +90,7 @@ static constexpr NBitFloatInfo Float11Info =
 23 - 6,   //
fracBitsDiff
 };

-static constexpr NBitFloatInfo Float10Info =
+static NBitFloatInfo Float10Info =
 {
 10,   // numBits
 5,//
numFracBits

In xgl:

diff --git a/icd/CMakeLists.txt b/icd/CMakeLists.txt
index 4e4d669..5006184 100644
--- a/icd/CMakeLists.txt
+++ b/icd/CMakeLists.txt
@@ -503,16 +503,16 @@ if (UNIX)

 target_link_libraries(xgl PRIVATE c stdc++ ${CMAKE_DL_LIBS} pthread)

-if(NOT ICD_USE_GCC)
-message(WARNING "Intel ICC untested in CMake.")
-target_link_libraries(xgl PRIVATE -fabi-version=0 -static-intel)
-endif()
+#if(NOT ICD_USE_GCC)
+#message(WARNING "Intel ICC untested in CMake.")
+#target_link_libraries(xgl PRIVATE -fabi-version=0 -static-intel)
+#endif()

 if(CMAKE_BUILD_TYPE_RELEASE)
 if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
 execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion
OUTPUT_VARIABLE GCC_VERSION)
 if (GCC_VERSION VERSION_GREATER 5.3 OR GCC_VERSION
VERSION_EQUAL 5.3)
-target_link_libraries(xgl PRIVATE -flto=4
-fuse-linker-plugin -Wno-odr)
+target_link_libraries(xgl PRIVATE -Wno-odr)
 message(WARNING "LTO enabled for Linking")
 endif()
 endif()
@@ -530,17 +530,17 @@ if (UNIX)

 # CMAKE-TODO: What is whole-archive used for?
 #target_link_libraries(xgl -Wl,--whole-archive ${ICD_LIBS} -Wl,--no-whole-archive)
-if(CMAKE_BUILD_TYPE_RELEASE)
-execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion OUTPUT_VARIABLE GCC_VERSION)
-if (GCC_VERSION VERSION_GREATER 5.3 OR GCC_VERSION VERSION_EQUAL 5.3)
-target_link_libraries(xgl PRIVATE -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/src/libpal.a -Wl,--no-whole-archive)
-target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/metrohash/libmetrohash.a -Wl,--no-whole-archive)
-target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/gpuopen/libgpuopen.a -Wl,--no-whole-archive)
-target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/vam/libvam.a -Wl,--no-whole-archive)
-target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/addrlib/libaddrlib.a -Wl,--no-whole-archive)
-target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/jemalloc/libjemalloc.a -Wl,--no-whole-archive)
-endif()
-endif()
+#if(CMAKE_BUILD_TYPE_RELEASE)
+#execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion OUTPUT_VARIABLE GCC_VERSION)
+#if (GCC_VERSION VERSION_GREATER 5.3 OR GCC_VERSION VERSION_EQUAL 5.3)
+#target_link_libraries(xgl PRIVATE -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/src/libpal.a -Wl,--no-whole-archive)
+#target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/metrohash/libmetrohash.a -Wl,--no-whole-archive)
+#target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/gpuopen/libgpuopen.a -Wl,--no-whole-archive)
+#target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/vam/libvam.a -Wl,--no-whole-archive)
+#target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/addrlib/libaddrlib.a -Wl,--no-whole-archive)
+#target_link_libraries(xgl PUBLIC -Wl,--whole-archive ${PROJECT_BINARY_DIR}/pal/jemalloc/libjemalloc.a -Wl,--no-whole-archive)
+#endif()
+#endif()

 #${ICD_TARGET}.so${SO_VERSION_NUMBER} : ${filter-out -Wl%,$(LLLIBS})


diff --git a/icd/api/llpc/util/llpcDebug.cpp b/

Re: [RFC PATCH 0/7] drm/i915: add support for DisplayPort CEC-Tunneling-over-AUX

2017-05-25 Thread Mike Lothian
Hi

Sorry if this is off topic, I've got a Skylake Dell laptop with a USB-C
connector and no DisplayPort

Which USB-C to HDMI 2.0 adapter do you recommend for stuff just working,
based on your testing?

I've been putting off buying one until I knew 4K@60Hz would work; CEC would
be nice to have too

Thanks

Mike

On Thu, 25 May 2017 at 16:06 Hans Verkuil  wrote:

> From: Hans Verkuil 
>
> This patch series adds support for the DisplayPort CEC-Tunneling-over-AUX
> protocol.
>
> This patch series is based on v4.12-rc2.
>
> The first four patches add support for a new CEC capability which is
> needed for these devices and for two helper functions.
>
> Then the DP CEC registers are added using Clint's patch.
>
> The core CEC tunneling support is added to drm_dp_cec.c.
>
> And finally this is hooked up in the i915 driver.
>
> Ideally the cec driver is created and destroyed whenever the DP-to-HDMI
> adapter is connected/disconnected, but I have not been able to find
> a way to distinguish between connecting/disconnecting the HDMI cable
> and connecting/disconnecting the actual DP-to-HDMI adapter.
>
> My current approach is to check the CEC tunneling support whenever a new
> display is connected:
>
> - if CEC tunneling is supported, but no CEC adapter exists, then create
> one.
> - if CEC tunneling is not supported, then unregister the CEC adapter if one
>   was created earlier.
> - if CEC tunneling is supported and the capabilities are identical to the
>   existing CEC adapter, then leave it be.
> - if CEC tunneling is supported and the capabilities are different to the
>   existing CEC adapter, then unregister that CEC adapter and register a
>   new one.
>
> This works well, but it would be much nicer if I just knew when the
> DP adapter is disconnected, as opposed to when the HDMI cable is
> disconnected from the adapter. Suggestions are welcome.
>
> The other remaining problem is that there are DP/USB-C to HDMI adapters
> that support CEC tunneling in the chipset, but where the CEC pin is simply
> never hooked up. From the point of view of the driver CEC is supported,
> but you'll never see any other devices.
>
> I am considering sending a CEC POLL message to logical address 0 (the TV)
> to detect if the CEC pin is connected, but this is not 100% guaranteed to
> work. This can be put under a kernel config option, though.
>
> I think I need to do something for this since of the 5 USB-C to HDMI
> adapters I've tested that set the CEC tunneling capability, only 2 have
> the CEC pin hooked up. So this seems to be quite common.
>
> I have tested this with my Intel NUC7i5BNK and with the two working
> USB-C to HDMI adapters that I have found:
>
> a Samsung EE-PW700 adapter and a Kramer ADC-U31C/HF adapter (I think that's
> the model, I need to confirm this).
>
> As usual the specifications of these adapters never, ever tell you whether
> this is supported or not :-( It's trial and error to find one that works.
> In fact, of the 10 USB-C to HDMI adapters I tested 5 didn't support CEC
> tunneling at all, and of the remaining 5 only two had the CEC pin hooked
> up and so actually worked.
>
> BTW, all adapters that supported CEC tunneling used the Parade PS176 chip.
>
> Output of cec-ctl -S (discovers the CEC topology):
>
> $ cec-ctl -S
> Driver Info:
> Driver Name: i915
> Adapter Name   : DPDDC-C
> Capabilities   : 0x007e
> Logical Addresses
> Transmit
> Passthrough
> Remote Control Support
> Monitor All
> Driver version : 4.12.0
> Available Logical Addresses: 4
> Physical Address   : 3.0.0.0
> Logical Address Mask   : 0x0010
> CEC Version: 2.0
> Vendor ID  : 0x000c03 (HDMI)
> OSD Name   : 'Playback'
> Logical Addresses  : 1 (Allow RC Passthrough)
>
>   Logical Address  : 4 (Playback Device 1)
> Primary Device Type: Playback
> Logical Address Type   : Playback
> All Device Types   : Playback
> RC TV Profile  : None
> Device Features:
> None
>
> System Information for device 0 (TV) from device 4 (Playback
> Device 1):
> CEC Version: 1.4
> Physical Address   : 0.0.0.0
> Primary Device Type: TV
> Vendor ID  : 0xf0 (Samsung)
> OSD Name   : TV
> Menu Language  : eng
> Power Status   : On
>
> Topology:
>
> 0.0.0.0: TV
> 3.0.0.0: Playback Device 1
>
> Regards,
>
> Hans
>
> Clint Taylor (1):
>   drm/cec: Add CEC over Aux register definitions
>
> Hans Verkuil (6):
> 

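Hans's four-case hotplug heuristic above (create / unregister / keep / replace the CEC adapter when a new display is detected) can be modelled outside the kernel. A toy Python sketch of just the decision logic — purely illustrative; the function name and the capability representation are invented here, not part of the i915 or CEC framework code:

```python
def cec_hotplug_action(tunneling_supported, existing_caps, new_caps):
    """Decide what to do with the CEC adapter on display (re)connect.

    existing_caps is None when no CEC adapter has been created yet;
    new_caps is the capability set read from the newly connected sink.
    Mirrors the four cases in the cover letter above.
    """
    if not tunneling_supported:
        # No CEC tunneling: unregister an adapter created earlier, if any.
        return "unregister" if existing_caps is not None else "nothing"
    if existing_caps is None:
        # Tunneling supported but no adapter yet: create one.
        return "create"
    if existing_caps == new_caps:
        # Identical capabilities: leave the existing adapter be.
        return "keep"
    # Different capabilities: unregister the old adapter, register a new one.
    return "replace"
```

As the cover letter notes, the open problem is that this runs on every hotplug, because the driver cannot distinguish unplugging the HDMI cable from unplugging the DP-to-HDMI adapter itself.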
Re: [Intel-gfx] [PATCH 02/15] drm: Remove drm_modeset_(un)lock_crtc

2017-04-14 Thread Mike Lothian
Hi

X no longer starts for me and I've bisected it back to this commit

During bisect I ended up on commits where my laptop would lockup during boot

Regards

Mike

On Tue, 4 Apr 2017 at 06:39 Daniel Vetter  wrote:

> On Tue, Apr 4, 2017 at 12:13 AM, kbuild test robot  wrote:
> > [if your patch is applied to the wrong git tree, please drop us a note
> > to help improve the system]
>
> It should compile just fine on latest linux-next (if there is one)
> where this code in vmwgfx is already removed. Well you just need the
> latest drm-next from Dave Airlie.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 <+41%2079%20365%2057%2048> - http://blog.ffwll.ch
> ___
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>


[PATCH] drm/amdgpu: don't add files at control minor debugfs directory

2016-12-06 Thread Mike Lothian
Feel free to add a tested by from myself

Thanks for the fix

On Mon, 5 Dec 2016 at 20:33 Deucher, Alexander 
wrote:

> > -Original Message-
> > From: Nicolai Stange [mailto:nicstange at gmail.com]
> > Sent: Monday, December 05, 2016 3:30 PM
> > To: Daniel Vetter
> > Cc: Deucher, Alexander; Koenig, Christian; Michel Dänzer; linux-
> > kernel at vger.kernel.org; dri-devel at lists.freedesktop.org; Nicolai 
> > Stange
> > Subject: [PATCH] drm/amdgpu: don't add files at control minor debugfs
> > directory
> >
> > Since commit 8a357d10043c ("drm: Nerf DRM_CONTROL nodes"), a
> > struct drm_device's ->control member is always NULL.
> >
> > In the case of CONFIG_DEBUG_FS=y, amdgpu_debugfs_add_files() accesses
> > ->control->debugfs_root though. This results in a NULL pointer
> > dereference.
> >
> > Fix this by omitting the drm_debugfs_create_files() call for the
> > control minor debugfs directory which is now non-existent anyway.
> >
> > Fixes: 8a357d10043c ("drm: Nerf DRM_CONTROL nodes")
> > Signed-off-by: Nicolai Stange 
>
> Please add the bugzilla:
> https://bugs.freedesktop.org/show_bug.cgi?id=98915
> With that,
> Reviewed-by: Alex Deucher 
>
> > ---
> > Applicable to next-20161202. Compile-only tested.
> >
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 --
> >  1 file changed, 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index deee2db..0cb3e82 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -2493,9 +2493,6 @@ int amdgpu_debugfs_add_files(struct
> > amdgpu_device *adev,
> >   adev->debugfs_count = i;
> >  #if defined(CONFIG_DEBUG_FS)
> >   drm_debugfs_create_files(files, nfiles,
> > -  adev->ddev->control->debugfs_root,
> > -  adev->ddev->control);
> > - drm_debugfs_create_files(files, nfiles,
> >adev->ddev->primary->debugfs_root,
> >adev->ddev->primary);
> >  #endif
> > @@ -2510,9 +2507,6 @@ static void amdgpu_debugfs_remove_files(struct
> > amdgpu_device *adev)
> >   for (i = 0; i < adev->debugfs_count; i++) {
> >   drm_debugfs_remove_files(adev->debugfs[i].files,
> >adev->debugfs[i].num_files,
> > -  adev->ddev->control);
> > - drm_debugfs_remove_files(adev->debugfs[i].files,
> > -  adev->debugfs[i].num_files,
> >adev->ddev->primary);
> >   }
> >  #endif
> > --
> > 2.10.2
>
>



[PATCH] [RFC] drm: Nerf DRM_CONTROL nodes

2016-12-06 Thread Mike Lothian
Thank you!

On Tue, 6 Dec 2016 at 01:47 Michel Dänzer  wrote:

> On 06/12/16 10:42 AM, Mike Lothian wrote:
> > Hi
> >
> > This patch (in drm-next and drm-intel-nightly) stops my system from
> > booting, I don't see any errors, just a black screen and a reboot after
> > the kernel has been selected
> >
> > I have confirmed that reverting this patch gets those two branches
> > working again
> >
> > Sorry to be the bearer of bad news - I'm guessing this is PRIME related
>
> It's not. You can choose between
>
> https://patchwork.freedesktop.org/patch/125754/
>
> or
>
> https://patchwork.freedesktop.org/patch/125644/
>
> for the fix. :)
>
>
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer
>


[PATCH] [RFC] drm: Nerf DRM_CONTROL nodes

2016-12-06 Thread Mike Lothian
Hi

This patch (in drm-next and drm-intel-nightly) stops my system from
booting, I don't see any errors, just a black screen and a reboot after the
kernel has been selected

I have confirmed that reverting this patch gets those two branches working
again

Sorry to be the bearer of bad news - I'm guessing this is PRIME related

Mike

On Thu, 17 Nov 2016 at 07:42 Daniel Vetter  wrote:

On Fri, Oct 28, 2016 at 10:10:50AM +0200, Daniel Vetter wrote:
> Looking at the ioctl permission checks I noticed that it's impossible
> to import gem buffers into a control node, and fd2handle/handle2fd
> also don't work, so no joy with dma-bufs.
>
> The only way to do anything with a control node is by drawing stuff
> into a dumb buffer and displaying that. I suspect control nodes are an
> entirely unused thing, and a cursory check shows that there does not
> seem to be any callers of drmOpenControl nor of the other drmOpen
> functions using DRM_MODE_CONTROL.
>
> Since I don't like dead uabi, let's remove it. But since this would be
> a really big change I think it's better to start out small by simply
> not registering anything. We can garbage-collect the dead code later
> on, once we're sure it's really not used anywhere.
>
> Signed-off-by: Daniel Vetter 

Applied with Dave's irc-ack to drm-misc.
-Daniel

> ---
>  drivers/gpu/drm/drm_drv.c | 6 --
>  1 file changed, 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index 6efdba4993fc..f085e28ffc6f 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -517,12 +517,6 @@ int drm_dev_init(struct drm_device *dev,
>   goto err_free;
>   }
>
> - if (drm_core_check_feature(dev, DRIVER_MODESET)) {
> - ret = drm_minor_alloc(dev, DRM_MINOR_CONTROL);
> - if (ret)
> - goto err_minors;
> - }
> -
>   if (drm_core_check_feature(dev, DRIVER_RENDER)) {
>   ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
>   if (ret)
> --
> 2.10.1
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
dri-devel mailing list
dri-devel at lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel



[PATCH] drm/amdgpu: Attach exclusive fence to prime exported bo's. (v5)

2016-11-13 Thread Mike Lothian
Hi

I've tested this version now - works great

Thanks

Mike

On Wed, 9 Nov 2016 at 01:26 Mario Kleiner 
wrote:

> External clients which import our bo's wait only
> for exclusive dmabuf-fences, not on shared ones,
> ditto for bo's which we import from external
> providers and write to.
>
> Therefore attach exclusive fences on prime shared buffers
> if our exported buffer gets imported by an external
> client, or if we import a buffer from an external
> exporter.
>
> See discussion in thread:
> https://lists.freedesktop.org/archives/dri-devel/2016-October/122370.html
>
> Prime export tested on Intel iGPU + AMD Tonga dGPU as
> DRI3/Present Prime render offload, and with the Tonga
> standalone as primary gpu.
>
> v2: Add a wait for all shared fences before prime export,
> as suggested by Christian Koenig.
>
> v3: - Mark buffer prime_exported in amdgpu_gem_prime_pin,
> so we only use the exclusive fence when exporting a
> bo to external clients like a separate iGPU, but not
> when exporting/importing from/to ourselves as part of
> regular DRI3 fd passing.
>
> - Propagate failure of reservation_object_wait_rcu back
> to caller.
>
> v4: - Switch to a prime_shared_count counter instead of a
>   flag, which gets in/decremented on prime_pin/unpin, so
>   we can switch back to shared fences if all clients
>   detach from our exported bo.
>
> - Also switch to exclusive fence for prime imported bo's.
>
> v5: - Drop lret, instead use int ret -> long ret, as proposed
>   by Christian.
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95472
> Tested-by: Mike Lothian  (v1)
> Signed-off-by: Mario Kleiner 
> Reviewed-by: Christian König .
> Cc: Christian König 
> Cc: Michel Dänzer 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h |  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c   | 20 +++-
>  3 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 039b57e..496f72b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -459,6 +459,7 @@ struct amdgpu_bo {
> u64 metadata_flags;
> void*metadata;
> u32 metadata_size;
> +   unsignedprime_shared_count;
> /* list of all virtual address to which this bo
>  * is associated to
>  */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> index 651115d..c02db01f6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> @@ -132,7 +132,7 @@ static int amdgpu_bo_list_set(struct amdgpu_device *adev,
> entry->priority = min(info[i].bo_priority,
>   AMDGPU_BO_LIST_MAX_PRIORITY);
> entry->tv.bo = &entry->robj->tbo;
> -   entry->tv.shared = true;
> +   entry->tv.shared = !entry->robj->prime_shared_count;
>
> if (entry->robj->prefered_domains == AMDGPU_GEM_DOMAIN_GDS)
> gds_obj = entry->robj;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> index 7700dc2..3826d5a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> @@ -74,20 +74,36 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
> if (ret)
> return ERR_PTR(ret);
>
> +   bo->prime_shared_count = 1;
> return &bo->gem_base;
>  }
>
>  int amdgpu_gem_prime_pin(struct drm_gem_object *obj)
>  {
> struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -   int ret = 0;
> +   long ret = 0;
>
> ret = amdgpu_bo_reserve(bo, false);
> if (unlikely(ret != 0))
> return ret;
>
> +   /*
> +* Wait for all shared fences to complete before we switch to future
> +* use of exclusive fence on this prime shared bo.
> +*/
> +   ret = reservation_object_wait_timeout_rcu(bo->tbo.resv, true, false,
> + MAX_SCHEDULE_TIMEOUT);
> +   if (unlikely(ret < 0)) {
> +   DRM_DEBUG_PRIME("Fence wait failed: %li\n", ret);
> +   amdgpu_bo_unreserve(bo);
> +   return ret;

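The v4 note above — replacing a boolean flag with a prime_shared_count that is in/decremented on prime pin/unpin, so the driver can fall back to shared fences once the last external client detaches — can be modelled in a few lines. This is an illustrative Python toy, not driver code; the real counter lives on struct amdgpu_bo and is manipulated in amdgpu_prime.c:

```python
class BufferObject:
    """Toy model of a BO tracking how many external prime clients hold it."""

    def __init__(self):
        self.prime_shared_count = 0

    def prime_pin(self):
        # An external client imported our BO: switch to exclusive fencing.
        self.prime_shared_count += 1

    def prime_unpin(self):
        # A client detached; counts must stay balanced.
        assert self.prime_shared_count > 0
        self.prime_shared_count -= 1

    def use_shared_fence(self):
        # Mirrors the hunk above: entry->tv.shared = !robj->prime_shared_count;
        # shared fences are fine only while nobody external holds the BO.
        return self.prime_shared_count == 0
```

The point of the counter over a flag is visible in the model: after two pins and one unpin the BO still needs exclusive fences; only when the count drops back to zero do shared fences become safe again.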
[PATCH] drm/i915: Before pageflip, also wait for shared dmabuf fences.

2016-10-29 Thread Mike Lothian
I turned on vsync and everything works great in tomb raider

:D

Thanks again to everyone who made this possible

On Fri, 28 Oct 2016 at 19:37 Mike Lothian  wrote:

> Hi Mario
>
> That fixes the tearing, it's been replaced with a strange stutter, like
> it's only showing half the number of frames being reported - it's really
> noticeable in tomb raider
>
> Thanks for your work on this, the stutter is much more manageable than the
> tearing was
>
> I've attached the patch that applies cleanly to 4.10-wip
>
>
>
> On Fri, 28 Oct 2016 at 18:37 Mario Kleiner 
> wrote:
>
>
>
> On 10/28/2016 03:34 AM, Michel Dänzer wrote:
> > On 27/10/16 10:33 PM, Mike Lothian wrote:
> >>
> >> Just another gentle ping to see where you are with this?
> >
> > I haven't got a chance to look into this any further.
> >
> >
>
> Fwiw., as a proof of concept, the attached experimental patch does work
> as tested on Intel HD Haswell + AMD R9 380 Tonga under amdgpu and
> DRI3/Present when applied to drm-next (updated from a few days ago).
> With DRI_PRIME=1 tearing for page-flipped fullscreen windows is gone
> under all loads. The tearing with "windowed" windows now looks as
> expected for regular tearing not related to Prime.
>
> ftrace confirms the i915 driver's pageflip function is waiting on the
> fence in reservation_object_wait_timeout_rcu() as it should.
>
> That entry->tv.shared needs to be set false for such buffers in
> amdgpu_bo_list_set() makes sense to me, as that is part of the buffer
> validation for command stream submission. There are other places in the
> driver where tv.shared is set, which I didn't check so far.
>
> I don't know which of these would need to be updated with an "exported
> bo" check as well, e.g., for video decoding or maybe gpu compute? Adding
> or removing the check to amdgpu_gem_va_update_vm(), e.g., made no
> difference. I assume that makes sense because that function seems to
> deal with amdgpu internal vm page tables or page table entries for such
> a bo, not with something visible to external clients?
>
> All i can say is it fixes 3D rendering under DRI3 + Prime + pageflipping
> without causing any obvious new problems.
>
> -mario
>
>


[PATCH] drm/i915: Before pageflip, also wait for shared dmabuf fences.

2016-10-28 Thread Mike Lothian
Hi Mario

That fixes the tearing, it's been replaced with a strange stutter, like
it's only showing half the number of frames being reported - it's really
noticeable in tomb raider

Thanks for your work on this, the stutter is much more manageable than the
tearing was

I've attached the patch that applies cleanly to 4.10-wip



On Fri, 28 Oct 2016 at 18:37 Mario Kleiner 
wrote:

>
>
> On 10/28/2016 03:34 AM, Michel Dänzer wrote:
> > On 27/10/16 10:33 PM, Mike Lothian wrote:
> >>
> >> Just another gentle ping to see where you are with this?
> >
> > I haven't got a chance to look into this any further.
> >
> >
>
> Fwiw., as a proof of concept, the attached experimental patch does work
> as tested on Intel HD Haswell + AMD R9 380 Tonga under amdgpu and
> DRI3/Present when applied to drm-next (updated from a few days ago).
> With DRI_PRIME=1 tearing for page-flipped fullscreen windows is gone
> under all loads. The tearing with "windowed" windows now looks as
> expected for regular tearing not related to Prime.
>
> ftrace confirms the i915 driver's pageflip function is waiting on the
> fence in reservation_object_wait_timeout_rcu() as it should.
>
> That entry->tv.shared needs to be set false for such buffers in
> amdgpu_bo_list_set() makes sense to me, as that is part of the buffer
> validation for command stream submission. There are other places in the
> driver where tv.shared is set, which I didn't check so far.
>
> I don't know which of these would need to be updated with an "exported
> bo" check as well, e.g., for video decoding or maybe gpu compute? Adding
> or removing the check to amdgpu_gem_va_update_vm(), e.g., made no
> difference. I assume that makes sense because that function seems to
> deal with amdgpu internal vm page tables or page table entries for such
> a bo, not with something visible to external clients?
>
> All i can say is it fixes 3D rendering under DRI3 + Prime + pageflipping
> without causing any obvious new problems.
>
> -mario
>
-- next part --
A non-text attachment was scrubbed...
Name: 0001-drm-amdgpu-Attach-exclusive-fence-to-prime-exported-.patch
Type: text/x-patch
Size: 2235 bytes
Desc: not available
URL: 
<https://lists.freedesktop.org/archives/dri-devel/attachments/20161028/73d5e496/attachment.bin>


[PATCH] drm/i915: Before pageflip, also wait for shared dmabuf fences.

2016-10-27 Thread Mike Lothian
Hi

Just another gentle ping to see where you are with this?

Cheers

Mike

On Wed, 12 Oct 2016 at 01:40 Michel Dänzer  wrote:

> On 11/10/16 09:04 PM, Christian König wrote:
> > Am 11.10.2016 um 05:58 schrieb Michel Dänzer:
> >> On 07/10/16 09:34 PM, Mike Lothian wrote:
> >>> This discussion has gone a little quiet
> >>>
> >>> Was there any agreement about what needed doing to get this working
> >>> for i965/amdgpu?
> >> Christian, do you see anything which could prevent the solution I
> >> outlined from working?
> >
> > I thought about that approach as well, but unfortunately it also has a
> > couple of downsides. Especially keeping the exclusive fence set while we
> > actually don't need it isn't really clean either.
>
> I was wondering if it's possible to have a singleton pseudo exclusive
> fence which is used for all BOs. That might keep the overhead acceptably
> low.
>
>
> > I'm currently a bit busy with other tasks and so put Nayan on a road to
> > get a bit into the kernel driver (he asked for that anyway).
> >
> > Implementing the simple workaround to sync when we export the DMA-buf
> > should be something like 20 lines of code and fortunately Nayan has an
> > I+A system and so can actually test it.
> >
> > If it turns out to be more problematic or somebody really starts to need
> > it I'm going to hack on that myself a bit more.
>
> If you mean only syncing when a DMA-buf is exported, that's not enough,
> as I explained before. The BOs being shared are long-lived, and
> synchronization between GPUs is required for every command stream
> submission.
>
>
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer
>


[PATCH] drm/i915: Before pageflip, also wait for shared dmabuf fences.

2016-10-07 Thread Mike Lothian
Hi

This discussion has gone a little quiet

Was there any agreement about what needed doing to get this working
for i965/amdgpu?

Regards

Mike

On Mon, 26 Sep 2016 at 09:04 Daniel Vetter  wrote:
>
> On Mon, Sep 26, 2016 at 09:48:37AM +0900, Michel Dänzer wrote:
> > On 23/09/16 09:09 PM, Daniel Vetter wrote:
> > > On Fri, Sep 23, 2016 at 07:00:25PM +0900, Michel Dänzer wrote:
> > >> On 22/09/16 10:22 PM, Christian König wrote:
> > >>> Am 22.09.2016 um 15:05 schrieb Daniel Vetter:
> > 
> >  But the current approach in amdgpu_sync.c of declaring a fence as
> >  exclusive after the fact (if owners don't match) just isn't how
> >  reservation_object works. You can of course change that, but that
> >  means you must change all drivers implementing support for implicit
> >  fencing of dma-buf. Fixing amdgpu will be easier ;-)
> > >>>
> > >>> Well as far as I can see there is no way I can fix amdgpu in this case.
> > >>>
> > >>> The handling clearly needs to be changed on the receiving side of the
> > >>> reservation objects if I don't completely want to disable concurrent
> > >>> access to BOs in amdgpu.
> > >>
> > >> Anyway, we need a solution for this between radeon and amdgpu, and I
> > >> don't think a solution which involves those drivers using reservation
> > >> object semantics between them which are different from all other drivers
> > >> is a good idea.
> > >
> > > Afaik there's also amd+intel machines out there,
> >
> > Sure, what I meant was that even if we didn't care about those (which we
> > do), we'd still need a solution between our own drivers.
> >
> >
> > > so really the only option is to either fix amdgpu to correctly set
> > > exclusive fences on shared buffers (with the help of userspace hints).
> > > Or change all the existing drivers.
> >
> > I got some fresh perspective on the problem while taking a walk, and I'm
> > now fairly convinced that neither amdgpu userspace nor other drivers
> > need to be modified:
> >
> > It occurred to me that all the information we need is already there in
> > the exclusive and shared fences set by amdgpu. We just need to use it
> > differently to match the expectations of other drivers.
> >
> > We should be able to set some sort of "pseudo" fence as the exclusive
> > fence of the reservation object. When we are asked by another driver to
> > wait for this fence to signal, we take the current "real" exclusive
> > fence (which we can keep track of e.g. in our BO struct) and shared
> > fences, and wait for all of those to signal; then we mark the "pseudo"
> > exclusive fence as signalled.
> >
> > Am I missing anything which would prevent this from working?
>
> One thing to make sure is that you don't change the (real, private stored)
> fences you're waiting on over the lifetime of the public exclusive fence.
> That might lead to some hilarity wrt potentially creating fence dependency
> loops. But I think as long as you guarantee that the private internal
> fences are always amdgpu ones, and never anything imported from a
> different driver, even that should be fine. Not because this would break
> the loops, but since amdgpu has a hangcheck it can still guarantee that the
> fence eventually fires even if there is a loop.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

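Michel's "pseudo exclusive fence" idea from this thread can be sketched as a toy model: a single public fence that fires only once the BO's real exclusive fence and all of its shared fences have fired, so an external waiter only ever deals with one exclusive fence. This is illustrative Python, not kernel code — real fences are struct dma_fence objects waited on in the kernel, and the class names here are invented:

```python
class Fence:
    """Minimal stand-in for a one-shot fence."""

    def __init__(self):
        self.signalled = False

    def signal(self):
        self.signalled = True


class PseudoExclusiveFence(Fence):
    """Aggregates the BO's real exclusive fence plus all shared fences.

    Signals only when every underlying fence has signalled, which is the
    behaviour the thread proposes exposing to other drivers.
    """

    def __init__(self, real_exclusive, shared):
        super().__init__()
        # Track the private "real" fences; None means no exclusive fence set.
        self._deps = [f for f in [real_exclusive, *shared] if f is not None]

    def poll(self):
        # Toy stand-in for a blocking wait: mark ourselves signalled once
        # all dependencies have fired, and report our state.
        if all(f.signalled for f in self._deps):
            self.signal()
        return self.signalled
```

Daniel's caveat above maps onto the model directly: the `_deps` list must not change over the lifetime of the public fence, and keeping it to driver-internal fences avoids creating cross-driver dependency loops.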

[PATCH v12 5/7] drm/i915/skl: Ensure pipes with changed wms get added to the state

2016-09-26 Thread Mike Lothian
Hi

Is there any chance this could be removed from the upcoming drm-4.9
pull, at least until this issue has been fixed

Regards

Mike

On 21 September 2016 at 12:34, Mike Lothian  wrote:
> I've raised https://bugs.freedesktop.org/show_bug.cgi?id=97888 I'll
> attach the info you requested once I get back to my machine
>
> On 21 September 2016 at 07:56, Maarten Lankhorst
>  wrote:
>> Hey,
>>
>> Op 20-09-16 om 20:45 schreef Mike Lothian:
>>> Hi
>>>
>>> I've bisected back to this commit in the drm-intel-nightly branch
>>>
>>> 05a76d3d6ad1ee9f9814f88949cc9305fc165460 is the first bad commit
>>> commit 05a76d3d6ad1ee9f9814f88949cc9305fc165460
>>> Author: Lyude 
>>> Date:   Wed Aug 17 15:55:57 2016 -0400
>>>
>>>drm/i915/skl: Ensure pipes with changed wms get added to the state
>>>
>>>If we're enabling a pipe, we'll need to modify the watermarks on all
>>>active planes. Since those planes won't be added to the state on
>>>their own, we need to add them ourselves.
>>>
>>>Signed-off-by: Lyude 
>>>Reviewed-by: Matt Roper 
>>>Cc: stable at vger.kernel.org
>>>Cc: Ville Syrjälä 
>>>Cc: Daniel Vetter 
>>>Cc: Radhakrishna Sripada 
>>>Cc: Hans de Goede 
>>>Signed-off-by: Maarten Lankhorst 
>>>Link: 
>>> http://patchwork.freedesktop.org/patch/msgid/1471463761-26796-6-git-send-email-cpaul
>>>  at redhat.com
>>>
>>> The symptoms I'm seeing look like tearing at the top of the screen and
>>> it's especially noticeable in Chrome - reverting this commit makes the
>>> issue go away
>>>
>>> Let me know if you'd like me to raise a bug
>> Please do so, it's nice to refer to when making a fix for it.
>>
>> Could you attach the contents of /sys/kernel/debug/dri/0/i915_ddb_info for 
>> working and not-working in it?
>>
>> ~Maarten


[PATCH] drm/amdgpu: Removed unneeded variables adev and dev

2016-09-23 Thread Mike Lothian
Since commit 62336cc1ca1b96f33e3344ca6e910d30adca749a the variables adev
and dev are no longer required

Signed-off-by: Mike Lothian 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 5b592af..2be20b4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -480,9 +480,6 @@ amdgpu_pci_remove(struct pci_dev *pdev)
 static void
 amdgpu_pci_shutdown(struct pci_dev *pdev)
 {
-   struct drm_device *dev = pci_get_drvdata(pdev);
-   struct amdgpu_device *adev = dev->dev_private;
-
/* if we are running in a VM, make sure the device
 * torn down properly on reboot/shutdown.
 * unfortunately we can't detect certain
-- 
2.10.0



[PATCH] drm/amdgpu Remove now unneeded variable adev

2016-09-23 Thread Mike Lothian
Since commit 62336cc1ca1b96f33e3344ca6e910d30adca749a, adev is no longer
required in amdgpu_drv.c

Signed-off-by: Mike Lothian 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 5b592af..a5ec1f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -481,7 +481,6 @@ static void
 amdgpu_pci_shutdown(struct pci_dev *pdev)
 {
struct drm_device *dev = pci_get_drvdata(pdev);
-   struct amdgpu_device *adev = dev->dev_private;

/* if we are running in a VM, make sure the device
 * torn down properly on reboot/shutdown.
-- 
2.10.0



[PATCH] drm/amdgpu Remove now unneeded variable adev

2016-09-23 Thread Mike Lothian
Sorry please ignore this - follow up momentarily

On Fri, 23 Sep 2016 at 21:14 Mike Lothian  wrote:

> Since commit 62336cc1ca1b96f33e3344ca6e910d30adca749a adev is now no
> longer required in amdgpu_drv.c
>
> Signed-off-by: Mike Lothian 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 5b592af..a5ec1f9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -481,7 +481,6 @@ static void
>  amdgpu_pci_shutdown(struct pci_dev *pdev)
>  {
> struct drm_device *dev = pci_get_drvdata(pdev);
> -   struct amdgpu_device *adev = dev->dev_private;
>
> /* if we are running in a VM, make sure the device
>  * torn down properly on reboot/shutdown.
> --
> 2.10.0
>
>


[PATCH v12 5/7] drm/i915/skl: Ensure pipes with changed wms get added to the state

2016-09-21 Thread Mike Lothian
I've raised https://bugs.freedesktop.org/show_bug.cgi?id=97888 I'll
attach the info you requested once I get back to my machine

On 21 September 2016 at 07:56, Maarten Lankhorst
 wrote:
> Hey,
>
> Op 20-09-16 om 20:45 schreef Mike Lothian:
>> Hi
>>
>> I've bisected back to this commit in the drm-intel-nightly branch
>>
>> 05a76d3d6ad1ee9f9814f88949cc9305fc165460 is the first bad commit
>> commit 05a76d3d6ad1ee9f9814f88949cc9305fc165460
>> Author: Lyude 
>> Date:   Wed Aug 17 15:55:57 2016 -0400
>>
>>drm/i915/skl: Ensure pipes with changed wms get added to the state
>>
>>If we're enabling a pipe, we'll need to modify the watermarks on all
>>active planes. Since those planes won't be added to the state on
>>their own, we need to add them ourselves.
>>
>>Signed-off-by: Lyude 
>>Reviewed-by: Matt Roper 
>>Cc: stable at vger.kernel.org
>>Cc: Ville Syrjälä 
>>Cc: Daniel Vetter 
>>Cc: Radhakrishna Sripada 
>>Cc: Hans de Goede 
>>Signed-off-by: Maarten Lankhorst 
>>Link: 
>> http://patchwork.freedesktop.org/patch/msgid/1471463761-26796-6-git-send-email-cpaul
>>  at redhat.com
>>
>> The symptoms I'm seeing look like tearing at the top of the screen and
>> it's especially noticeable in Chrome - reverting this commit makes the
>> issue go away
>>
>> Let me know if you'd like me to raise a bug
> Please do so, it's nice to refer to when making a fix for it.
>
> Could you attach the contents of /sys/kernel/debug/dri/0/i915_ddb_info for 
> working and not-working in it?
>
> ~Maarten


[PATCH v12 5/7] drm/i915/skl: Ensure pipes with changed wms get added to the state

2016-09-21 Thread Mike Lothian
Will do.

On Wed, 21 Sep 2016 at 07:56 Maarten Lankhorst <
maarten.lankhorst at linux.intel.com> wrote:

> Hey,
>
> Op 20-09-16 om 20:45 schreef Mike Lothian:
> > Hi
> >
> > I've bisected back to this commit in the drm-intel-nightly branch
> >
> > 05a76d3d6ad1ee9f9814f88949cc9305fc165460 is the first bad commit
> > commit 05a76d3d6ad1ee9f9814f88949cc9305fc165460
> > Author: Lyude 
> > Date:   Wed Aug 17 15:55:57 2016 -0400
> >
> >drm/i915/skl: Ensure pipes with changed wms get added to the state
> >
> >If we're enabling a pipe, we'll need to modify the watermarks on all
> >active planes. Since those planes won't be added to the state on
> >their own, we need to add them ourselves.
> >
> >Signed-off-by: Lyude 
> >Reviewed-by: Matt Roper 
> >Cc: stable at vger.kernel.org
> >Cc: Ville Syrjälä 
> >Cc: Daniel Vetter 
> >Cc: Radhakrishna Sripada 
> >Cc: Hans de Goede 
> >Signed-off-by: Maarten Lankhorst 
> >Link:
> http://patchwork.freedesktop.org/patch/msgid/1471463761-26796-6-git-send-email-cpaul
>  at redhat.com
> >
> > The symptoms I'm seeing look like tearing at the top of the screen and
> > it's especially noticeable in Chrome - reverting this commit makes the
> > issue go away
> >
> > Let me know if you'd like me to raise a bug
> Please do so, it's nice to refer to when making a fix for it.
>
> Could you attach the contents of /sys/kernel/debug/dri/0/i915_ddb_info for
> working and not-working in it?
>
> ~Maarten
>


[PATCH v12 5/7] drm/i915/skl: Ensure pipes with changed wms get added to the state

2016-09-20 Thread Mike Lothian
Hi

I've bisected back to this commit in the drm-intel-nightly branch

05a76d3d6ad1ee9f9814f88949cc9305fc165460 is the first bad commit
commit 05a76d3d6ad1ee9f9814f88949cc9305fc165460
Author: Lyude 
Date:   Wed Aug 17 15:55:57 2016 -0400

   drm/i915/skl: Ensure pipes with changed wms get added to the state

   If we're enabling a pipe, we'll need to modify the watermarks on all
   active planes. Since those planes won't be added to the state on
   their own, we need to add them ourselves.

   Signed-off-by: Lyude 
   Reviewed-by: Matt Roper 
   Cc: stable at vger.kernel.org
   Cc: Ville Syrjälä 
   Cc: Daniel Vetter 
   Cc: Radhakrishna Sripada 
   Cc: Hans de Goede 
   Signed-off-by: Maarten Lankhorst 
   Link: 
http://patchwork.freedesktop.org/patch/msgid/1471463761-26796-6-git-send-email-cpaul
 at redhat.com

The symptoms I'm seeing look like tearing at the top of the screen and
it's especially noticeable in Chrome - reverting this commit makes the
issue go away

Let me know if you'd like me to raise a bug

Cheers

Mike

(Re-sending from Gmail - as Inbox doesn't let me send as plain text)

On 17 August 2016 at 20:55, Lyude  wrote:
> If we're enabling a pipe, we'll need to modify the watermarks on all
> active planes. Since those planes won't be added to the state on
> their own, we need to add them ourselves.
>
> Signed-off-by: Lyude 
> Reviewed-by: Matt Roper 
> Cc: stable at vger.kernel.org
> Cc: Ville Syrjälä 
> Cc: Daniel Vetter 
> Cc: Radhakrishna Sripada 
> Cc: Hans de Goede 
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 849f039..a3d24cb 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4117,6 +4117,10 @@ skl_compute_ddb(struct drm_atomic_state *state)
> ret = skl_allocate_pipe_ddb(cstate, ddb);
> if (ret)
> return ret;
> +
> +   ret = drm_atomic_add_affected_planes(state, 
> &intel_crtc->base);
> +   if (ret)
> +   return ret;
> }
>
> return 0;
> --
> 2.7.4
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel




[PATCH] drm/prime: fix error path deadlock fail

2016-06-09 Thread Mike Lothian
If you let me know how, I'll test it

On Thu, 9 Jun 2016 at 20:29 Rob Clark  wrote:

> There were a couple messed up things about this fail path.
> (1) it would drop object_name_lock twice
> (2) drm_gem_handle_delete() (in drm_gem_remove_prime_handles())
> needs to grab prime_lock
>
> Reported-by: Alex Deucher 
> Signed-off-by: Rob Clark 
> ---
> Untested, but I think this should fix the potential deadlock that Alex
> reported.  Would be nice if someone with setup to reproduce could test
> this.
>
>  drivers/gpu/drm/drm_prime.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index aab0f3f..780589b 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -593,7 +593,7 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
> get_dma_buf(dma_buf);
> }
>
> -   /* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
> +   /* _handle_create_tail unconditionally unlocks
> dev->object_name_lock. */
> ret = drm_gem_handle_create_tail(file_priv, obj, handle);
> drm_gem_object_unreference_unlocked(obj);
> if (ret)
> @@ -601,11 +601,10 @@ int drm_gem_prime_fd_to_handle(struct drm_device
> *dev,
>
> ret = drm_prime_add_buf_handle(&file_priv->prime,
> dma_buf, *handle);
> +   mutex_unlock(&file_priv->prime.lock);
> if (ret)
> goto fail;
>
> -   mutex_unlock(&file_priv->prime.lock);
> -
> dma_buf_put(dma_buf);
>
> return 0;
> @@ -615,11 +614,14 @@ fail:
>  * to detach.. which seems ok..
>  */
> drm_gem_handle_delete(file_priv, *handle);
> +   dma_buf_put(dma_buf);
> +   return ret;
> +
>  out_unlock:
> mutex_unlock(&dev->object_name_lock);
>  out_put:
> -   dma_buf_put(dma_buf);
> mutex_unlock(&file_priv->prime.lock);
> +   dma_buf_put(dma_buf);
> return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_prime_fd_to_handle);
> --
> 2.5.5
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
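Rob's fix above moves the mutex_unlock() so that drm_gem_handle_delete() is never called while prime.lock is still held. The hazard can be sketched in miniature with a toy non-recursive lock (illustrative names only, not the real DRM locks):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy non-recursive lock: acquiring it twice from the same path would
 * deadlock in real code, so we model that as a failure. */
struct toy_lock { bool held; };

static bool toy_lock_acquire(struct toy_lock *l)
{
    if (l->held)
        return false;   /* would deadlock */
    l->held = true;
    return true;
}

static void toy_lock_release(struct toy_lock *l)
{
    l->held = false;
}

/* Models drm_gem_handle_delete(), which needs to take prime_lock itself. */
static bool handle_delete(struct toy_lock *prime_lock)
{
    if (!toy_lock_acquire(prime_lock))
        return false;   /* caller still holds the lock: deadlock */
    toy_lock_release(prime_lock);
    return true;
}

/* The fixed error path: drop prime_lock *before* cleaning up the handle,
 * mirroring the patch moving mutex_unlock() ahead of 'goto fail'. */
static bool fd_to_handle_fail_path(struct toy_lock *prime_lock)
{
    toy_lock_release(prime_lock);
    return handle_delete(prime_lock);  /* now safe to re-acquire */
}
```

With the buggy ordering (calling handle_delete() while the lock is held), the cleanup path blocks on the lock the caller already owns, which is exactly the deadlock Alex reported.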


[PATCH 00/12] Improve PX support in radeon and amdgpu

2016-06-02 Thread Mike Lothian
Are these in a branch somewhere? If not I'll try apply the mbox from
patchwork.freedesktop.org

On Wed, 1 Jun 2016 at 21:53 Alex Deucher  wrote:

> This patch set cleans up and attempts to make runtime pm more
> reliable in radeon and amdgpu on PX systems.  If you have a PX
> system that requires setting the runpm=0 module parameter for
> stability, please try this patch set.
>
> The main fix is that a minimum of 200ms of delay is required between
> a dGPU power down and a power up.
>
> This patch also properly handles the detection of the ATPX dGPU
> power control method properly and handles dGPU power control for
> platforms that do not support the ATPX dGPU power control method.
>
> Alex Deucher (12):
>   drm/amdgpu: disable power control on hybrid laptops
>   drm/amdgpu: clean up atpx power control handling
>   drm/amdgpu: add a delay after ATPX dGPU power off
>   drm/amdgpu/atpx: add a query for ATPX dGPU power control
>   drm/amdgpu: use PCI_D3hot for PX systems without dGPU power control
>   drm/amdgpu/atpx: drop forcing of dGPU power control
>   drm/radeon: disable power control on hybrid laptops
>   drm/radeon: clean up atpx power control handling
>   drm/radeon: add a delay after ATPX dGPU power off
>   drm/radeon/atpx: add a query for ATPX dGPU power control
>   drm/radeon: use PCI_D3hot for PX systems without dGPU power control
>   drm/radeon/atpx: drop forcing of dGPU power control
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h  |  2 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c | 50
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c  |  5 ++-
>  drivers/gpu/drm/radeon/radeon_atpx_handler.c | 49
> +++
>  drivers/gpu/drm/radeon/radeon_drv.c  |  7 +++-
>  5 files changed, 77 insertions(+), 36 deletions(-)
>
> --
> 2.5.5
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>


[PATCH 3.5/6] drm/amd/powerplay: fix bugs of checking if dpm is running on Tonga

2016-05-13 Thread Mike Lothian
That did the trick thanks

On Fri, 13 May 2016 at 22:36 Alex Deucher  wrote:

> From: Eric Huang 
>
> Fixes OD failures on Tonga.
>
> Reviewed-by: Alex Deucher 
> Signed-off-by: Eric Huang 
> Signed-off-by: Alex Deucher 
> ---
>
> This fixes OD failures on Tonga in some cases.
>
>  drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
> index cb28335..7c3f82b 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
> @@ -5331,7 +5331,7 @@ static int tonga_freeze_sclk_mclk_dpm(struct
> pp_hwmgr *hwmgr)
> (data->need_update_smu7_dpm_table &
> (DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK))) {
> PP_ASSERT_WITH_CODE(
> -   true == tonga_is_dpm_running(hwmgr),
> +   0 == tonga_is_dpm_running(hwmgr),
> "Trying to freeze SCLK DPM when DPM is disabled",
> );
> PP_ASSERT_WITH_CODE(
> @@ -5344,7 +5344,7 @@ static int tonga_freeze_sclk_mclk_dpm(struct
> pp_hwmgr *hwmgr)
> if ((0 == data->mclk_dpm_key_disabled) &&
> (data->need_update_smu7_dpm_table &
>  DPMTABLE_OD_UPDATE_MCLK)) {
> -   PP_ASSERT_WITH_CODE(true == tonga_is_dpm_running(hwmgr),
> +   PP_ASSERT_WITH_CODE(0 == tonga_is_dpm_running(hwmgr),
> "Trying to freeze MCLK DPM when DPM is disabled",
> );
> PP_ASSERT_WITH_CODE(
> @@ -5647,7 +5647,7 @@ static int tonga_unfreeze_sclk_mclk_dpm(struct
> pp_hwmgr *hwmgr)
> (data->need_update_smu7_dpm_table &
> (DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK))) {
>
> -   PP_ASSERT_WITH_CODE(true == tonga_is_dpm_running(hwmgr),
> +   PP_ASSERT_WITH_CODE(0 == tonga_is_dpm_running(hwmgr),
> "Trying to Unfreeze SCLK DPM when DPM is disabled",
> );
> PP_ASSERT_WITH_CODE(
> @@ -5661,7 +5661,7 @@ static int tonga_unfreeze_sclk_mclk_dpm(struct
> pp_hwmgr *hwmgr)
> (data->need_update_smu7_dpm_table &
> DPMTABLE_OD_UPDATE_MCLK)) {
>
> PP_ASSERT_WITH_CODE(
> -   true == tonga_is_dpm_running(hwmgr),
> +   0 == tonga_is_dpm_running(hwmgr),
> "Trying to Unfreeze MCLK DPM when DPM is
> disabled",
> );
> PP_ASSERT_WITH_CODE(
> --
> 2.5.5
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
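The change above swaps `true ==` for `0 ==` in the assertions, which only makes sense if tonga_is_dpm_running() was converted from returning a bool to returning a 0-on-success result code; comparing such a code against `true` inverts the test. A minimal illustration (hypothetical helper, not the real driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper modelled on tonga_is_dpm_running() after the fix:
 * returns 0 when DPM is running (success), non-zero otherwise. */
static int is_dpm_running(void)
{
    return 0; /* DPM is running */
}

/* The corrected check from the patch. */
static bool dpm_check_ok(void)
{
    return 0 == is_dpm_running();
}

/* The old, broken check: 0 never compares equal to true. */
static bool dpm_check_broken(void)
{
    return true == is_dpm_running();
}
```

With the old comparison, the assertion would fire ("Trying to freeze SCLK DPM when DPM is disabled") even though DPM is actually running, matching the symptom Mike saw.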


[PATCH 0/6] Initial sclk OD support for amdgpu

2016-05-13 Thread Mike Lothian
Hi

I gave this a spin but I just get:

[ 1073.096585] Trying to freeze SCLK DPM when DPM is disabled
[ 1073.097667] Trying to Unfreeze SCLK DPM when DPM is disabled
[ 1073.100118] Trying to freeze SCLK DPM when DPM is disabled
[ 1073.101618] Trying to Unfreeze SCLK DPM when DPM is disabled

DPM is enabled

Cheers

Mike

On Fri, 13 May 2016 at 19:56 Alex Deucher  wrote:

> On Fri, May 13, 2016 at 2:54 PM, Mike Lothian  wrote:
> > Sounds fancy but what does it do?
>
> Whoops, meant to define OD in the cover letter, the patches have the
> details.  OD = Overclocking.
>
> Alex
>
> >
> > On Fri, 13 May 2016 at 19:49 Alex Deucher  wrote:
> >>
> >> This adds initial OverDrive (OD) support for the gfx engine
> >> clock (sclk).  It's enabled by selecting a percentage (0-20)
> >> and writing it to a new sysfs file.  It's currently available
> >> on Tonga, Fiji, and Polaris.
> >>
> >> Eric Huang (6):
> >>   drm/amd/powerplay: fix a bug on updating sclk for Fiji
> >>   drm/amd/powerplay: fix a bug on updating sclk for Tonga
> >>   drm/amdgpu: add powerplay sclk OD support through sysfs
> >>   drm/amd/powerplay: add sclk OD support on Fiji
> >>   drm/amd/powerplay: add sclk OD support on Tonga
> >>   drm/amd/powerplay: add sclk OD support on Polaris10
> >>
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  6 +++
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 49
> >> ++
> >>  drivers/gpu/drm/amd/powerplay/amd_powerplay.c  | 44
> >> +++
> >>  drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c   | 45
> >> +++-
> >>  .../gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c  | 44
> >> +++
> >>  drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c  | 46
> >> +++-
> >>  drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |  2 +
> >>  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h  |  2 +
> >>  8 files changed, 236 insertions(+), 2 deletions(-)
> >>
> >> --
> >> 2.5.5
> >>
> >> ___
> >> dri-devel mailing list
> >> dri-devel at lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
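For reference, the OD percentage described in the cover letter scales the stock engine clock. Assuming the straightforward interpretation (the 0-20% value boosts the default sclk proportionally; the exact formula and clamping below are assumptions, not taken from the patches), the arithmetic looks like this:

```c
#include <assert.h>

/* Effective sclk in kHz for a given OverDrive percentage.
 * Assumed formula: stock clock scaled by (100 + od) / 100,
 * with the percentage clamped to the advertised 0-20 range. */
static unsigned int sclk_with_od(unsigned int stock_khz, unsigned int od_pct)
{
    if (od_pct > 20)
        od_pct = 20; /* clamp to the documented maximum */
    return stock_khz * (100u + od_pct) / 100u;
}
```

So writing 10 to the sysfs file would raise a 1 GHz stock sclk to roughly 1.1 GHz, under this assumed interpretation.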


[PATCH 0/6] Initial sclk OD support for amdgpu

2016-05-13 Thread Mike Lothian
Sounds fancy but what does it do?

On Fri, 13 May 2016 at 19:49 Alex Deucher  wrote:

> This adds initial OverDrive (OD) support for the gfx engine
> clock (sclk).  It's enabled by selecting a percentage (0-20)
> and writing it to a new sysfs file.  It's currently available
> on Tonga, Fiji, and Polaris.
>
> Eric Huang (6):
>   drm/amd/powerplay: fix a bug on updating sclk for Fiji
>   drm/amd/powerplay: fix a bug on updating sclk for Tonga
>   drm/amdgpu: add powerplay sclk OD support through sysfs
>   drm/amd/powerplay: add sclk OD support on Fiji
>   drm/amd/powerplay: add sclk OD support on Tonga
>   drm/amd/powerplay: add sclk OD support on Polaris10
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  6 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 49
> ++
>  drivers/gpu/drm/amd/powerplay/amd_powerplay.c  | 44
> +++
>  drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c   | 45
> +++-
>  .../gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c  | 44
> +++
>  drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c  | 46
> +++-
>  drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |  2 +
>  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h  |  2 +
>  8 files changed, 236 insertions(+), 2 deletions(-)
>
> --
> 2.5.5
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>


[RFC v2 0/8] drm: explicit fencing support

2016-04-25 Thread Mike Lothian
Hi

Out of interest, will this allow tear-free output with PRIME?

Thanks

Mike

On Tue, 26 Apr 2016, 12:33 a.m. Gustavo Padovan, 
wrote:

> From: Gustavo Padovan 
>
> Hi,
>
> Currently the Linux kernel only has an implicit fencing mechanism,
> where fences are attached directly to buffers and userspace is unaware
> of what is happening. Explicit fencing, on the other hand, is not yet
> supported by Linux; it exposes fences to userspace so that fencing
> between producer and consumer can be handled explicitly.
>
> For that we use the Android Sync Framework[1], an explicit fencing mechanism
> that helps userspace handle fences directly. It has the concept of a
> sync_file (called sync_fence in Android) that exposes the driver's fences to
> userspace via file descriptors. File descriptors are useful because we can
> pass them around between processes.
>
> The Sync Framework is currently in the staging tree and on the process to
> be de-staged[2].
>
> With explicit fencing we have a global mechanism that optimizes the flow of
> buffers between consumers and producers, avoiding a lot of waiting. So instead
> of waiting for a buffer to be processed by the GPU before sending it to DRM
> in an Atomic IOCTL we can get a sync_file fd from the GPU driver at the
> moment
> we submit the buffer processing. The compositor then passes these fds to
> DRM in
> a atomic commit request, that will not be displayed until the fences
> signal,
> i.e, the GPU finished processing the buffer and it is ready to display. In
> DRM
> the fences we wait on before displaying a buffer are called in-fences.
>
> Vice versa, we have out-fences to synchronize the return of buffers to the
> GPU (producer) to be processed again. When DRM receives an atomic request
> with a special flag set, it generates one fence per crtc and attaches it
> to a per-crtc
> sync_file.  It then returns the array of sync_file fds to userspace as an
> atomic_ioctl out arg. With the fences available userspace can forward these
> fences to the GPU, where it will wait the fence to signal before starting
> to
> process on buffer again.
>
> Explicit fencing with the Sync Framework allows buffer suballocation.
> Userspace gets a large buffer, divides it into small ones, and submits
> requests to process them; each subbuffer gets a sync_file fd and can be
> processed in parallel. This is not even possible with implicit fencing.
>
> While these are out-fences in DRM (the consumer) they become in-fences once
> they get to the GPU (the producer).
>
> DRM explicit fences are opt-in, as the default will still be implicit
> fencing.
> To enable explicit in-fences one just need to pass a sync_file fd in the
> FENCE_FD plane property. *In-fences are per-plane*, i.e., per framebuffer.
>
> For out-fences, just enabling DRM_MODE_ATOMIC_OUT_FENCE flag is enough.
> *Out-fences are per-crtc*.
>
> In-fences
> -
>
> In the first discussions on #dri-devel on IRC we decided to hide the Sync
> Framework from DRM drivers to reduce complexity, so as soon as we get the fd
> via the FENCE_FD plane property we convert the sync_file fd to a struct fence.
> However a sync_file might contain more than one fence, so we created the
> fence_collection concept. struct fence_collection is a subclass of struct
> fence and stores a group of fences that needs to be waited together, in
> other words, all the fences in the sync_file.
>
> Then we just use the already-in-place fence support to wait on those
> fences. Once the producer calls fence_signal() for all fences we wait on,
> we can proceed with the atomic commit and display the framebuffers. DRM
> drivers only need to be converted to struct fence to make use of this
> feature.
>
> Out-fences
> --
>
> Passing the DRM_MODE_ATOMIC_OUT_FENCE flag to an atomic request enables
> out-fences. The kernel then creates a fence, attaches it to a sync_file,
> and installs this file on an unused fd for each crtc. Userspace gets the
> fences back as an array of per-crtc sync_file fds.
>
> The DRM core uses the already-in-place drm_event infrastructure to help
> signal fences; we've added a fence pointer to struct drm_pending_event. If
> the atomic update received requested a PAGE_FLIP_EVENT, we just use the
> same drm_pending_event and set our fence there; otherwise we create an
> event with a NULL file_priv to set our fence. On vblank we just call
> fence_signal() to signal that the buffer related to this fence is *now*
> on the screen.
> Note that this is exactly the opposite behaviour from Android, where the
> fences
> are signaled when they are not on display anymore, so free to be reused.
>
> No changes are required to DRM drivers to have out-fences support, apart
> from
> atomic support of course.
>
> Kernel tree
> ---
>
> For those who want all patches on this RFC are in my tree. The tree
> includes
> all sync frameworks patches needed at the moment:
>
>
> https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/log/?h=fences
>
> I also hacked some po
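Gustavo's fence_collection is essentially an AND over fences: the collection counts as signalled only once every member fence in the sync_file has signalled. A toy model of that semantics (illustrative only, not the kernel's struct fence API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy fence: signalled exactly once by the producer. */
struct toy_fence { bool signaled; };

/* Toy collection over a fixed array of fences, since a sync_file may
 * contain more than one fence. */
struct toy_collection {
    struct toy_fence *fences;
    size_t count;
};

static void toy_fence_signal(struct toy_fence *f)
{
    f->signaled = true;
}

/* The atomic commit may proceed only when every fence has signalled. */
static bool toy_collection_signaled(const struct toy_collection *c)
{
    for (size_t i = 0; i < c->count; i++)
        if (!c->fences[i].signaled)
            return false;
    return true;
}
```

In the RFC this is what gates the display of the framebuffers: the commit waits until toy_collection_signaled()-style logic reports that all producers have finished with their buffers.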

[PATCH 00/29] Enabling new DAL display driver for amdgpu on Carrizo and Tonga

2016-02-11 Thread Mike Lothian
Hi

Does that mean Tonga is capable of HDMI 2.0, or is it only Carrizo?

Cheers

Mike

On Thu, 11 Feb 2016 at 17:20 Harry Wentland  wrote:

> This set of patches enables the new DAL display driver for amdgpu on
> Carrizo
> Tonga, and Fiji ASICs. This driver will allow us going forward to bring
> display features on the open amdgpu driver (mostly) on par with the
> Catalyst
> driver.
>
> This driver adds support for
> - Atomic KMS API
> - MST
> - HDMI 2.0
> - Better powerplay integration
> - Support of HW bandwidth formula on Carrizo
> - Better multi-display support and handling of co-functionality
> - Broader support of display dongles
> - Timing synchronization between DP and HDMI
>
> This patch series is based on Alex Deucher's drm-next-4.6-wip tree.
>
>
>
> Andrey Grodzovsky (1):
>   drm/amd/dal: Force bw programming for DCE 10 until we start calculate
> BW.
>
> Harry Wentland (27):
>   drm/amd/dal: Add dal headers
>   drm/amd/dal: Add DAL Basic Types and Logger
>   drm/amd/dal: Fixed point arithmetic
>   drm/amd/dal: Asic Capabilities
>   drm/amd/dal: GPIO (General Purpose IO)
>   drm/amd/dal: Adapter Service
>   drm/amd/dal: BIOS Parser
>   drm/amd/dal: I2C Aux Manager
>   drm/amd/dal: IRQ Service
>   drm/amd/dal: GPU
>   drm/amd/dal: Audio
>   drm/amd/dal: Bandwidth calculations
>   drm/amd/dal: Add encoder HW programming
>   drm/amd/dal: Add clock source HW programming
>   drm/amd/dal: Add timing generator HW programming
>   drm/amd/dal: Add surface HW programming
>   drm/amd/dal: Add framebuffer compression HW programming
>   drm/amd/dal: Add input pixel processing HW programming
>   drm/amd/dal: Add output pixel processing HW programming
>   drm/amd/dal: Add transform & scaler HW programming
>   drm/amd/dal: Add Carrizo HW sequencer and resource
>   drm/amd/dal: Add Tonga/Fiji HW sequencer and resource
>   drm/amd/dal: Add empty encoder programming for virtual HW
>   drm/amd/dal: Add display core
>   drm/amd/dal: Adding amdgpu_dm for dal
>   drm/amdgpu: Use dal driver for Carrizo, Tonga, and Fiji
>   drm/amd/dal: Correctly interpret rotation as bit set
>
> Mykola Lysenko (1):
>   drm/amd/dal: fix flip clean-up state
>
>  drivers/gpu/drm/amd/amdgpu/Kconfig |3 +
>  drivers/gpu/drm/amd/amdgpu/Makefile|   17 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|   10 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   69 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|4 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c |5 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c|   20 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h   |   54 +-
>  drivers/gpu/drm/amd/amdgpu/vi.c|  250 +
>  drivers/gpu/drm/amd/dal/Kconfig|   48 +
>  drivers/gpu/drm/amd/dal/Makefile   |   21 +
>  drivers/gpu/drm/amd/dal/amdgpu_dm/Makefile |   17 +
>  drivers/gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm.c  | 1468 ++
>  drivers/gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm.h  |  168 +
>  .../gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_helpers.c  |  474 ++
>  drivers/gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_irq.c  |  820 
>  drivers/gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_irq.h  |  122 +
>  .../drm/amd/dal/amdgpu_dm/amdgpu_dm_mst_types.c|  480 ++
>  .../drm/amd/dal/amdgpu_dm/amdgpu_dm_mst_types.h|   36 +
>  .../gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_services.c |  457 ++
>  .../gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_types.c| 2577 ++
>  .../gpu/drm/amd/dal/amdgpu_dm/amdgpu_dm_types.h|  100 +
>  drivers/gpu/drm/amd/dal/dal_services.h |  266 ++
>  drivers/gpu/drm/amd/dal/dal_services_types.h   |   62 +
>  drivers/gpu/drm/amd/dal/dc/Makefile|   28 +
>  drivers/gpu/drm/amd/dal/dc/adapter/Makefile|   24 +
>  .../gpu/drm/amd/dal/dc/adapter/adapter_service.c   | 2089 
>  .../gpu/drm/amd/dal/dc/adapter/adapter_service.h   |   71 +
>  .../adapter/dce110/hw_ctx_adapter_service_dce110.c |  304 ++
>  .../adapter/dce110/hw_ctx_adapter_service_dce110.h |   40 +
>  .../diagnostics/hw_ctx_adapter_service_diag.c  |  133 +
>  .../diagnostics/hw_ctx_adapter_service_diag.h  |   33 +
>  .../amd/dal/dc/adapter/hw_ctx_adapter_service.c|  164 +
>  .../amd/dal/dc/adapter/hw_ctx_adapter_service.h|   86 +
>  .../drm/amd/dal/dc/adapter/wireless_data_source.c  |  208 +
>  .../drm/amd/dal/dc/adapter/wireless_data_source.h  |   80 +
>  .../gpu/drm/amd/dal/dc/asic_capability/Makefile|   35 +
>  .../amd/dal/dc/asic_capability/asic_capability.c   |  190 +
>  .../dc/asic_capability/carrizo_asic_capability.c   |  147 +
>  .../dc/asic_capability/carrizo_asic_capability.h   |   36 +
>  .../dal/dc/asic_capability/tonga_asic_capability.c |  146 +
>  .../dal/dc/asic_capability/tonga_asic_capability.h |   36 +
>  drivers/gpu/drm/amd/dal/dc/audio/Makefile  |   22 +
>  drivers/gpu/drm/amd/dal/dc/audio/audio.h   |  195 +
>  drivers/gpu/drm/amd/dal

[PATCH] drm/radeon Make CIK support optional

2016-02-08 Thread Mike Lothian
Hi

My logic was that it's not a good idea to have two drivers trying to take
ownership of the same hardware. I was thinking of making the CIK option in
radeon depend on staging, and likewise not allowing CIK support in amdgpu
unless radeon isn't built or doesn't have CIK support itself.

That should mean you aren't reliant on linking order or on blacklisting
radeon, and people won't get the option to fiddle unless staging is
switched on (should that be the same for amdgpu too?)

Let me know what you think

Cheers

Mike
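Christian's last point in the quoted thread below is that the CIK PCI IDs in include/drm/drm_pciids.h must also be guarded by the new option, otherwise radeon still probes CIK devices and fails. The pattern can be sketched like this (macro name and table layout are illustrative, not the real drm_pciids.h contents):

```c
#include <assert.h>
#include <stddef.h>

#define CONFIG_DRM_RADEON_CIK 1   /* set by Kconfig in a real build */

struct toy_pci_id { unsigned int vendor, device; };

/* When the option is off, the CIK entries expand to nothing and the
 * driver never claims those devices, so amdgpu can bind instead. */
#ifdef CONFIG_DRM_RADEON_CIK
#define RADEON_CIK_IDS \
    { 0x1002, 0x9832 }, /* Kabini, as in Mike's dmesg */ \
    { 0x1002, 0x9833 },
#else
#define RADEON_CIK_IDS
#endif

static const struct toy_pci_id radeon_ids[] = {
    { 0x1002, 0x6600 },   /* a non-CIK part */
    RADEON_CIK_IDS
    { 0, 0 }              /* table terminator */
};
```

With the config symbol defined the table carries the CIK entries; without it, the preprocessor drops them entirely, which avoids the "Fatal error during GPU init" probe Mike hit when radeon still matched the device.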

On Mon, 8 Feb 2016, 10:10 a.m. Mike Lothian  wrote:

> Thanks for the feedback
>
> I'll take a look at the PCI ID's tonight
>
> On Mon, 8 Feb 2016, 9:54 a.m. Christian König 
> wrote:
>
>> Am 08.02.2016 um 03:45 schrieb Mike Lothian:
>> > This will allow us to disable CIK support in the radeon driver, so both
>> > radeon and amdgpu can be around at the same time without conflicting
>> >
>> > Signed-off-by: Mike Lothian 
>> > ---
>> >
>> > I've tested this on my Kabini system; radeon doesn't initialise when
>> compiled in but
>> > I do get these messages in my dmesg:
>> >
>> > [drm] radeon kernel modesetting enabled.
>> > [drm] initializing kernel modesetting (KABINI 0x1002:0x9832
>> 0x1025:0x0800).
>> > radeon :00:01.0: Fatal error during GPU init
>> > radeon: probe of :00:01.0 failed with error -22
>> >
>> > Am I going down the right route with this?
>>
>> Well, probably not, but it's at least a start.
>>
>> First of all the CIK support in AMDGPU isn't really mature at the
>> moment. We only used it for bringup of the initial driver and it still
>> has some bugs. So at least currently we don't want to encourage people
>> to use amdgpu over radeon for CIK parts.
>>
>> In addition, the amdgpu support compiles perfectly fine even when
>> radeon has CIK support, so a Kconfig dependency between the two is
>> clearly not what we want.
>>
>> Last, but not least you need to make the PCI IDs in
>> include/drm/drm_pciids.h for CIK parts depend on the new configuration
>> option as well. This is why you run into an error with your patch.
>>
>> Regards,
>> Christian.
>>
>> >
>> >   drivers/gpu/drm/amd/amdgpu/Kconfig |  1 +
>> >   drivers/gpu/drm/radeon/Kconfig | 11 +++
>> >   drivers/gpu/drm/radeon/Makefile| 11 +++
>> >   drivers/gpu/drm/radeon/atombios_encoders.c |  5 +
>> >   drivers/gpu/drm/radeon/evergreen.c | 24
>> 
>> >   drivers/gpu/drm/radeon/radeon_asic.c   | 13 +
>> >   6 files changed, 61 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig
>> b/drivers/gpu/drm/amd/amdgpu/Kconfig
>> > index b30fcfa..bb58f17 100644
>> > --- a/drivers/gpu/drm/amd/amdgpu/Kconfig
>> > +++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
>> > @@ -1,6 +1,7 @@
>> >   config DRM_AMDGPU_CIK
>> >   bool "Enable amdgpu support for CIK parts"
>> >   depends on DRM_AMDGPU
>> > + depends on !DRM_RADEON_CIK
>> >   help
>> > Choose this option if you want to enable experimental support
>> > for CIK asics.
>> > diff --git a/drivers/gpu/drm/radeon/Kconfig
>> b/drivers/gpu/drm/radeon/Kconfig
>> > index 9909f5c..32bc77e 100644
>> > --- a/drivers/gpu/drm/radeon/Kconfig
>> > +++ b/drivers/gpu/drm/radeon/Kconfig
>> > @@ -1,3 +1,14 @@
>> > +config DRM_RADEON_CIK
>> > + bool "Enable radeon support for CIK parts"
>> > + depends on DRM_RADEON
>> > + default y
>> > + help
>> > +   Choose this option if you want to enable support for CIK
>> > +   asics.
>> > +
>> > +   Consider disabling this option if you wish to enable CIK
>> > +   in the amdgpu driver.
>> > +
>> >   config DRM_RADEON_USERPTR
>> >   bool "Always enable userptr support"
>> >   depends on DRM_RADEON
>> > diff --git a/drivers/gpu/drm/radeon/Makefile
>> b/drivers/gpu/drm/radeon/Makefile
>> > index 08bd17d..6c43901 100644
>> > --- a/drivers/gpu/drm/radeon/Makefile
>> > +++ b/drivers/gpu/drm/radeon/Makefile
>> > @@ -72,13 +72,15 @@ radeon-y += radeon_device.o radeon_asic.o
>> radeon_kms.o \
>> >   evergreen.o evergreen_cs.o evergreen_blit_shaders.o \
>> &

[PATCH] drm/radeon Make CIK support optional

2016-02-08 Thread Mike Lothian
Thanks for the feedback

I'll take a look at the PCI ID's tonight

On Mon, 8 Feb 2016, 9:54 a.m. Christian König 
wrote:

> Am 08.02.2016 um 03:45 schrieb Mike Lothian:
> > This will allow us to disable CIK support in the radeon driver, so both
> > radeon and amdgpu can be around at the same time without conflicting
> >
> > Signed-off-by: Mike Lothian 
> > ---
> >
> > I've tested this on my Kabini system; radeon doesn't initialise when
> compiled in but
> > I do get these messages in my dmesg:
> >
> > [drm] radeon kernel modesetting enabled.
> > [drm] initializing kernel modesetting (KABINI 0x1002:0x9832
> 0x1025:0x0800).
> > radeon :00:01.0: Fatal error during GPU init
> > radeon: probe of :00:01.0 failed with error -22
> >
> > Am I going down the right route with this?
>
> Well, probably not, but it's at least a start.
>
> First of all, the CIK support in AMDGPU isn't really mature at the
> moment. We only used it for bringup of the initial driver and it still
> has some bugs. So at least currently we don't want to encourage people
> to use amdgpu over radeon for CIK parts.
>
> In addition to that, the amdgpu support compiles perfectly fine even when
> radeon has CIK support, so a Kconfig dependency between the two is
> clearly not what we want.
>
> Last but not least, you need to make the PCI IDs in
> include/drm/drm_pciids.h for CIK parts depend on the new configuration
> option as well. This is why you run into an error with your patch.
>
> Regards,
> Christian.
>
> >
> >   drivers/gpu/drm/amd/amdgpu/Kconfig |  1 +
> >   drivers/gpu/drm/radeon/Kconfig | 11 +++
> >   drivers/gpu/drm/radeon/Makefile| 11 +++
> >   drivers/gpu/drm/radeon/atombios_encoders.c |  5 +
> >   drivers/gpu/drm/radeon/evergreen.c | 24
> 
> >   drivers/gpu/drm/radeon/radeon_asic.c   | 13 +
> >   6 files changed, 61 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig
> b/drivers/gpu/drm/amd/amdgpu/Kconfig
> > index b30fcfa..bb58f17 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/Kconfig
> > +++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
> > @@ -1,6 +1,7 @@
> >   config DRM_AMDGPU_CIK
> >   bool "Enable amdgpu support for CIK parts"
> >   depends on DRM_AMDGPU
> > + depends on !DRM_RADEON_CIK
> >   help
> > Choose this option if you want to enable experimental support
> > for CIK asics.
> > diff --git a/drivers/gpu/drm/radeon/Kconfig
> b/drivers/gpu/drm/radeon/Kconfig
> > index 9909f5c..32bc77e 100644
> > --- a/drivers/gpu/drm/radeon/Kconfig
> > +++ b/drivers/gpu/drm/radeon/Kconfig
> > @@ -1,3 +1,14 @@
> > +config DRM_RADEON_CIK
> > + bool "Enable radeon support for CIK parts"
> > + depends on DRM_RADEON
> > + default y
> > + help
> > +   Choose this option if you want to enable support for CIK
> > +   asics.
> > +
> > +   Consider disabling this option if you wish to enable CIK
> > +   in the amdgpu driver.
> > +
> >   config DRM_RADEON_USERPTR
> >   bool "Always enable userptr support"
> >   depends on DRM_RADEON
> > diff --git a/drivers/gpu/drm/radeon/Makefile
> b/drivers/gpu/drm/radeon/Makefile
> > index 08bd17d..6c43901 100644
> > --- a/drivers/gpu/drm/radeon/Makefile
> > +++ b/drivers/gpu/drm/radeon/Makefile
> > @@ -72,13 +72,15 @@ radeon-y += radeon_device.o radeon_asic.o
> radeon_kms.o \
> >   evergreen.o evergreen_cs.o evergreen_blit_shaders.o \
> >   evergreen_hdmi.o radeon_trace_points.o ni.o cayman_blit_shaders.o \
> >   atombios_encoders.o radeon_semaphore.o radeon_sa.o atombios_i2c.o
> si.o \
> > - si_blit_shaders.o radeon_prime.o cik.o cik_blit_shaders.o \
> > + si_blit_shaders.o radeon_prime.o \
> >   r600_dpm.o rs780_dpm.o rv6xx_dpm.o rv770_dpm.o rv730_dpm.o
> rv740_dpm.o \
> >   rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o
> trinity_dpm.o \
> > - trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o
> ci_smc.o \
> > - ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
> > + trinity_smc.o ni_dpm.o si_smc.o si_dpm.o \
> > + dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
> >   radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o
> >
> > +radeon-$(CONFIG_DRM_RADEON_CIK) += cik.o cik_blit_shaders.o kv_smc.o
> kv_dpm.o ci_smc.o ci_dpm.o

[PATCH] drm/radeon Make CIK support optional

2016-02-08 Thread Mike Lothian
This will allow us to disable CIK support in the radeon driver, so both 
radeon and amdgpu can be around at the same time without conflicting

Signed-off-by: Mike Lothian 
---

I've tested this on my Kabini system; radeon doesn't initialise when compiled in,
but I do get these messages in my dmesg:

[drm] radeon kernel modesetting enabled.
[drm] initializing kernel modesetting (KABINI 0x1002:0x9832 0x1025:0x0800).
radeon :00:01.0: Fatal error during GPU init
radeon: probe of :00:01.0 failed with error -22

Am I going down the right route with this?

 drivers/gpu/drm/amd/amdgpu/Kconfig |  1 +
 drivers/gpu/drm/radeon/Kconfig | 11 +++
 drivers/gpu/drm/radeon/Makefile| 11 +++
 drivers/gpu/drm/radeon/atombios_encoders.c |  5 +
 drivers/gpu/drm/radeon/evergreen.c | 24 
 drivers/gpu/drm/radeon/radeon_asic.c   | 13 +
 6 files changed, 61 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig 
b/drivers/gpu/drm/amd/amdgpu/Kconfig
index b30fcfa..bb58f17 100644
--- a/drivers/gpu/drm/amd/amdgpu/Kconfig
+++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
@@ -1,6 +1,7 @@
 config DRM_AMDGPU_CIK
bool "Enable amdgpu support for CIK parts"
depends on DRM_AMDGPU
+   depends on !DRM_RADEON_CIK
help
  Choose this option if you want to enable experimental support
  for CIK asics.
diff --git a/drivers/gpu/drm/radeon/Kconfig b/drivers/gpu/drm/radeon/Kconfig
index 9909f5c..32bc77e 100644
--- a/drivers/gpu/drm/radeon/Kconfig
+++ b/drivers/gpu/drm/radeon/Kconfig
@@ -1,3 +1,14 @@
+config DRM_RADEON_CIK
+   bool "Enable radeon support for CIK parts"
+   depends on DRM_RADEON
+   default y
+   help
+ Choose this option if you want to enable support for CIK
+ asics.
+
+ Consider disabling this option if you wish to enable CIK
+ in the amdgpu driver.
+
 config DRM_RADEON_USERPTR
bool "Always enable userptr support"
depends on DRM_RADEON
diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
index 08bd17d..6c43901 100644
--- a/drivers/gpu/drm/radeon/Makefile
+++ b/drivers/gpu/drm/radeon/Makefile
@@ -72,13 +72,15 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
evergreen.o evergreen_cs.o evergreen_blit_shaders.o \
evergreen_hdmi.o radeon_trace_points.o ni.o cayman_blit_shaders.o \
atombios_encoders.o radeon_semaphore.o radeon_sa.o atombios_i2c.o si.o \
-   si_blit_shaders.o radeon_prime.o cik.o cik_blit_shaders.o \
+   si_blit_shaders.o radeon_prime.o \
r600_dpm.o rs780_dpm.o rv6xx_dpm.o rv770_dpm.o rv730_dpm.o rv740_dpm.o \
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o 
\
-   trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
-   ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
+   trinity_smc.o ni_dpm.o si_smc.o si_dpm.o \
+   dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o

+radeon-$(CONFIG_DRM_RADEON_CIK) += cik.o cik_blit_shaders.o kv_smc.o kv_dpm.o 
ci_smc.o ci_dpm.o
+
 radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o

 # add async DMA block
@@ -88,7 +90,8 @@ radeon-y += \
evergreen_dma.o \
ni_dma.o \
si_dma.o \
-   cik_sdma.o \
+
+radeon-$(CONFIG_DRM_RADEON_CIK) += cik_sdma.o

 # add UVD block
 radeon-y += \
diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
b/drivers/gpu/drm/radeon/atombios_encoders.c
index 01b20e1..2bb81d2 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -2506,10 +2506,15 @@ static void radeon_atom_encoder_prepare(struct 
drm_encoder *encoder)
/* this is needed for the pll/ss setup to work correctly in some cases 
*/
atombios_set_encoder_crtc_source(encoder);
/* set up the FMT blocks */
+#ifdef CONFIG_DRM_RADEON_CIK
if (ASIC_IS_DCE8(rdev))
dce8_program_fmt(encoder);
else if (ASIC_IS_DCE4(rdev))
dce4_program_fmt(encoder);
+#else
+   if (ASIC_IS_DCE4(rdev))
+   dce4_program_fmt(encoder);
+#endif
else if (ASIC_IS_DCE3(rdev))
dce3_program_fmt(encoder);
else if (ASIC_IS_AVIVO(rdev))
diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index 2ad4628..f431946 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -209,12 +209,19 @@ extern void cayman_cp_int_cntl_setup(struct radeon_device 
*rdev,
 int ring, u32 cp_int_cntl);
 extern void cayman_vm_decode_fault(struct radeon_device *rdev,
   u32 status, u32 addr);
+
+#ifdef CONFIG_DRM_RADEON_CIK
 void cik_init_cp_pg_table(struct ra

[PATCH] drm/radeon: only init fbdev if we have connectors

2016-01-25 Thread Mike Lothian
Is something similar required for AMDGPU too?

On Mon, 25 Jan 2016 at 23:06 Rob Clark  wrote:

> This fixes an issue that was noticed on an optimus/prime laptop with
> a kernel that was old enough to not support the integrated intel gfx
> (which was driving all the outputs), but did have support for the
> discrete radeon gpu.  The end result was not falling back to VESA and
> leaving the user with a black screen.
>
> (Plus it is kind of silly to create a framebuffer device if there
> are no outputs hooked up to the gpu.)
>
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/radeon/radeon_display.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_display.c
> b/drivers/gpu/drm/radeon/radeon_display.c
> index 1eca0ac..f8e776c 100644
> --- a/drivers/gpu/drm/radeon/radeon_display.c
> +++ b/drivers/gpu/drm/radeon/radeon_display.c
> @@ -1670,8 +1670,10 @@ int radeon_modeset_init(struct radeon_device *rdev)
> /* setup afmt */
> radeon_afmt_init(rdev);
>
> -   radeon_fbdev_init(rdev);
> -   drm_kms_helper_poll_init(rdev->ddev);
> +   if (!list_empty(&rdev->ddev->mode_config.connector_list)) {
> +   radeon_fbdev_init(rdev);
> +   drm_kms_helper_poll_init(rdev->ddev);
> +   }
>
> /* do pm late init */
> ret = radeon_pm_late_init(rdev);
> --
> 2.5.0
>
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>
-- next part --
An HTML attachment was scrubbed...
URL: 



No HDMI Audio with Radeon HD7750 on Acube Sam460ex AMCC powerpc 460ex board

2015-03-09 Thread Mike Lothian
git bisect skip should get you past any iterations that won't build
On 9 Mar 2015 09:47, "Julian Margetson"  wrote:

>  The issues get worse with the Kernel 4.0-rc1-3 versions.
> Both Radeon HD6750 and HD7750 oops if HDMI is active but boot with DVI.
> I had tried a bisect on 4.0.0-rc1 but could not get it finished.
>
> Kernel won't compile after the 10th bisect
>
> drivers/built-in.o: In function `drm_mode_atomic_ioctl':
> (.text+0x865dc): undefined reference to `__get_user_bad'
> make: *** [vmlinux] Error 1
> root at julian-VirtualBox:/usr/src/linux# git bisect log
> git bisect start
> # bad: [c517d838eb7d07bbe9507871fab3931deccff539] Linux 4.0-rc1
> git bisect bad c517d838eb7d07bbe9507871fab3931deccff539
> # good: [bfa76d49576599a4b9f9b7a71f23d73d6dcff735] Linux 3.19
> git bisect good bfa76d49576599a4b9f9b7a71f23d73d6dcff735
> # good: [02f1f2170d2831b3233e91091c60a66622f29e82] kernel.h: remove ancient __FUNCTION__ hack
> git bisect good 02f1f2170d2831b3233e91091c60a66622f29e82
> # bad: [796e1c55717e9a6ff5c81b12289ffa1ffd919b6f] Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux
> git bisect bad 796e1c55717e9a6ff5c81b12289ffa1ffd919b6f
> # good: [9682ec9692e5ac11c6caebd079324e727b19e7ce] Merge tag 'driver-core-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
> git bisect good 9682ec9692e5ac11c6caebd079324e727b19e7ce
> # good: [a9724125ad014decf008d782e60447c811391326] Merge tag 'tty-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
> git bisect good a9724125ad014decf008d782e60447c811391326
> # good: [f43dff0ee00a259f524ce17ba4f8030553c66590] Merge tag 'drm-amdkfd-next-fixes-2015-01-25' of git://people.freedesktop.org/~gabbayo/linux into drm-next
> git bisect good f43dff0ee00a259f524ce17ba4f8030553c66590
> # bad: [cffe1e89dc9bf541a39d9287ced7c5addff07084] drm: sti: HDMI add audio infoframe
> git bisect bad cffe1e89dc9bf541a39d9287ced7c5addff07084
> # good: [2f5b4ef15c60bc5292a3f006c018acb3da53737b] Merge tag 'drm/tegra/for-3.20-rc1' of git://anongit.freedesktop.org/tegra/linux into drm-next
> git bisect good 2f5b4ef15c60bc5292a3f006c018acb3da53737b
> # bad: [86588ce80ccd714793e9ba4140d7ae214229] drm/udl: optimize udl_compress_hline16 (v2)
> git bisect bad 86588ce80ccd714793e9ba4140d7ae214229
> # bad: [d47df63393ed81977e0f6435988d9cbd70c867f7] drm/panel: simple: Add AVIC TM070DDH03 panel support
> git bisect bad d47df63393ed81977e0f6435988d9cbd70c867f7
> # bad: [9469244d869623e8b54d9f3d4d00737e377af273] drm/atomic: Fix potential use of state after free
> git bisect bad 9469244d869623e8b54d9f3d4d00737e377af273
> git bisect skip
> There are only 'skip'ped commits left to test. The first bad commit could be any of:
> b486e0e6d599b9ca8667fb9a7d49b7383ee963c7
> eab3bbeffd152125ae0f90863b8e9bc8eef49423
> 960cd9d4fef6dd9e235c0e5c0d4ed027f8a48025
> f02ad907cd9e7fe3a6405d2d005840912f1ed258
> 6a425c2a9b37ca3d2c37e3c1cdf973dba53eaa79
> ee0a89cf3c2c550e6d877dda21dd2947afb90cb6
> 92890583627ee2a0518e55b063fcff86826fef96
> 95d6eb3b134e1826ed04cc92b224d93de13e281f
> 9469244d869623e8b54d9f3d4d00737e377af273
> We cannot bisect more!
>
> [6.221759] snd_hda_intel 0001:81:00.1: enabling device ( -> 0002)
> [6.249169] snd_hda_intel 0001:81:00.1: Force to snoop mode by module 
> option
> [6.276522] ppc4xx_setup_msi_irqs: fail allocating msi interrupt
>
>  * Setting sensors limits
>  * Setting up X socket directories...
> [   28.336533] Unable to handle kernel paging request for data at address 
> 0x0008
> [   28.352724] Faulting instruction address: 0xc04a4d10
> [   30.386494] Oops: Kernel access of bad area, sig: 11 [#1]
> [   30.392213] PREEMPT Canyonlands
> [   30.395536] Modules linked in:
> [   30.398771] CPU: 0 PID: 2372 Comm: Xorg Not tainted 4.0.0-rc3-Sam460ex #1
> [   30.405912] task: eda9f580 ti: ee6d task.ti: ee6d
> [   30.411596] NIP: c04a4d10 LR: c03e6818 CTR: c03d7938
> [   30.416822] REGS: ee6d1c50 TRAP: 0300   Not tainted  (4.0.0-rc3-Sam460ex)
> [   30.423964] MSR: 00029000   CR: 24004442  XER: 
> [   30.430501] DEAR: 0008 ESR: 
> GPR00: c03e6818 ee6d1d00 eda9f580 eea9c000  000f ee6d1be8 
> GPR08: f60b  eeac8400 ee6d1cc0 28004422 b7b95ff8 b8214258 b8214590
> GPR16:    ee6d1e18 eeac8578 ee4ed300 0001 4000
> GPR24: 4000 c075acc8 fff2  eea9c000 0001 0001 eeb6c000
> [   30.465032] NIP [c04a4d10] radeon_audio_enable+0x4/0x18
> [   30.470538] LR [c03e6818] radeon_dvi_detect+0x388/0x3ac
> [   30.476030] Call Trace:
> [   30.487799] [ee6d1d00] [c03e6818] radeon_dvi_detect+0x388/0x3ac 
> (unreliable)
> [   30.504497] [ee6d1d30] [c0391a90] 
> drm_helper_probe_single_connector_modes_merge_bits+0xf4/0x434
> [   30.523217] [ee6d1d70] [c03adc98] drm_mode_getconnector+0xe4/0x330
> [   30.539561] [ee6d1e10] [c03a0988] drm_ioctl+0x348/0x464
> [   30.554828] [ee6d1ed0] [c00d0aac] do

[PATCH] Fallback to std DRI2CopyRegion when DRI2UpdatePrime fails

2014-10-06 Thread Mike Lothian
Is this the issue in KDE that when I start a game I have to toggle
compositing a couple of times to get it rendering?
On 6 Oct 2014 07:39, "Chris Wilson"  wrote:

> On Mon, Oct 06, 2014 at 11:04:51AM +0900, Michel Dänzer wrote:
> > On 05.10.2014 16:06, Chris Wilson wrote:
> > >I was looking at a bug report today of intel/ati prime and noticed a
> > >number of sna_share_pixmap_backing() failures (called from
> > >DRI2UpdatePrime). These were failing as the request was for the scanout
> > >buffer (which is tiled and so we refuse to share it, and since it is
> > >already on the scanout we refuse to change tiling).
> > >
> > >But looking at radeon_dri2_copy_region2(), if DRI2UpdatePrime() fails,
> > >the copy is aborted and the update lost. If the copy is made to the
> > >normal window drawable is that enough for it to be propagated back
> > >through damage tracking?
> >
> > Have you asked the reporter of that bug to test your patch?
>
> They didn't notice the issue, presumably because it only happens quite
> early in the DE startup. However, I do remember Dave mentioning what
> seemed to be a similar issue: a persistent blank window. Hence the
> inquisitive nature of the patch.
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre



[no subject]

2014-08-24 Thread Mike Lothian
Thanks for this

Good work
On 24 Aug 2014 14:15, "Christian König"  wrote:

> Hello everyone,
>
> the following patches add UVD support for older ASICs (RV6xx, RS[78]80,
> RV7[79]0). For everybody wanting to test it I've also uploaded a branch to
> FDO:
> http://cgit.freedesktop.org/~deathsimple/linux/log/?h=uvd-r600-release
>
> Additionally to the patches you need UVD firmware as well, which can be
> found at the usual location:
> http://people.freedesktop.org/~agd5f/radeon_ucode/
>
> A small Mesa patch is needed as well, cause the older hardware doesn't
> support field based output of video frames. So unfortunately VDPAU/OpenGL
> interop won't work either.
>
> We can only provide best effort support for those older ASICs, but at
> least on my RS[78]80 based laptop it seems to work perfectly fine.
>
> Happy testing,
> Christian.
>



[PATCH 00/17] Convert TTM to the new fence interface.

2014-07-09 Thread Mike Lothian
Hi Maarten

Will this stop the stuttering I've been seeing with DRI3 and PRIME? Or will
other patches / plumbing be required?

Cheers

Mike
On 9 Jul 2014 13:29, "Maarten Lankhorst" 
wrote:

> This series applies on top of the driver-core-next branch of
> git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
>
> Before converting ttm to the new fence interface I had to fix some
> drivers to require a reservation before poking with fence_obj.
> After flipping the switch RCU becomes available instead, and
> the extra reservations can be dropped again. :-)
>
> I've done at least basic testing on all the drivers I've converted
> at some point, but more testing is definitely welcomed!
>
> ---
>
> Maarten Lankhorst (17):
>   drm/ttm: add interruptible parameter to ttm_eu_reserve_buffers
>   drm/ttm: kill off some members to ttm_validate_buffer
>   drm/nouveau: add reservation to nouveau_gem_ioctl_cpu_prep
>   drm/nouveau: require reservations for nouveau_fence_sync and
> nouveau_bo_fence
>   drm/ttm: call ttm_bo_wait while inside a reservation
>   drm/ttm: kill fence_lock
>   drm/nouveau: rework to new fence interface
>   drm/radeon: add timeout argument to radeon_fence_wait_seq
>   drm/radeon: use common fence implementation for fences
>   drm/qxl: rework to new fence interface
>   drm/vmwgfx: get rid of different types of fence_flags entirely
>   drm/vmwgfx: rework to new fence interface
>   drm/ttm: flip the switch, and convert to dma_fence
>   drm/nouveau: use rcu in nouveau_gem_ioctl_cpu_prep
>   drm/radeon: use rcu waits in some ioctls
>   drm/vmwgfx: use rcu in vmw_user_dmabuf_synccpu_grab
>   drm/ttm: use rcu in core ttm
>
>  drivers/gpu/drm/nouveau/core/core/event.c |4
>  drivers/gpu/drm/nouveau/nouveau_bo.c  |   59 +---
>  drivers/gpu/drm/nouveau/nouveau_display.c |   25 +-
>  drivers/gpu/drm/nouveau/nouveau_fence.c   |  431
> +++--
>  drivers/gpu/drm/nouveau/nouveau_fence.h   |   22 +
>  drivers/gpu/drm/nouveau/nouveau_gem.c |   55 +---
>  drivers/gpu/drm/nouveau/nv04_fence.c  |4
>  drivers/gpu/drm/nouveau/nv10_fence.c  |4
>  drivers/gpu/drm/nouveau/nv17_fence.c  |2
>  drivers/gpu/drm/nouveau/nv50_fence.c  |2
>  drivers/gpu/drm/nouveau/nv84_fence.c  |   11 -
>  drivers/gpu/drm/qxl/Makefile  |2
>  drivers/gpu/drm/qxl/qxl_cmd.c |7
>  drivers/gpu/drm/qxl/qxl_debugfs.c |   16 +
>  drivers/gpu/drm/qxl/qxl_drv.h |   20 -
>  drivers/gpu/drm/qxl/qxl_fence.c   |   91 --
>  drivers/gpu/drm/qxl/qxl_kms.c |1
>  drivers/gpu/drm/qxl/qxl_object.c  |2
>  drivers/gpu/drm/qxl/qxl_object.h  |6
>  drivers/gpu/drm/qxl/qxl_release.c |  172 ++--
>  drivers/gpu/drm/qxl/qxl_ttm.c |   93 --
>  drivers/gpu/drm/radeon/radeon.h   |   15 -
>  drivers/gpu/drm/radeon/radeon_cs.c|   10 +
>  drivers/gpu/drm/radeon/radeon_device.c|   60 
>  drivers/gpu/drm/radeon/radeon_display.c   |   21 +
>  drivers/gpu/drm/radeon/radeon_fence.c |  283 +++
>  drivers/gpu/drm/radeon/radeon_gem.c   |   19 +
>  drivers/gpu/drm/radeon/radeon_object.c|8 -
>  drivers/gpu/drm/radeon/radeon_ttm.c   |   34 --
>  drivers/gpu/drm/radeon/radeon_uvd.c   |   10 -
>  drivers/gpu/drm/radeon/radeon_vm.c|   16 +
>  drivers/gpu/drm/ttm/ttm_bo.c  |  187 ++---
>  drivers/gpu/drm/ttm/ttm_bo_util.c |   28 --
>  drivers/gpu/drm/ttm/ttm_bo_vm.c   |3
>  drivers/gpu/drm/ttm/ttm_execbuf_util.c|  146 +++---
>  drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c|   47 ---
>  drivers/gpu/drm/vmwgfx/vmwgfx_drv.h   |1
>  drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c   |   24 --
>  drivers/gpu/drm/vmwgfx/vmwgfx_fence.c |  329 --
>  drivers/gpu/drm/vmwgfx/vmwgfx_fence.h |   35 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_resource.c  |   43 +--
>  include/drm/ttm/ttm_bo_api.h  |7
>  include/drm/ttm/ttm_bo_driver.h   |   29 --
>  include/drm/ttm/ttm_execbuf_util.h|   22 +
>  44 files changed, 1256 insertions(+), 1150 deletions(-)
>  delete mode 100644 drivers/gpu/drm/qxl/qxl_fence.c
>
> --
> Signature



UVD on RS880

2014-01-09 Thread Mike Lothian
Fingers crossed this happens soon, especially now that Blu-rays can be played
on Linux

1080p VC1 does not play well on a Phenom II X4 even when multi-threaded
On 9 Jan 2014 12:16, "Christian König"  wrote:

> Hi,
>
> The code for the first generation UVD blocks (RV6xx, RS780, RS880 and
> RV790) is already implemented and I'm only waiting for the OK to release it.
>
> The only problem is that I don't know if and when we are getting this OK
> for release. Maybe tomorrow, maybe never. It just doesn't has a high
> priority for the reviewer because we don't really sell that old hardware
> any more.
>
> Cheers,
> Christian.
>
> On 09.01.2014 at 12:55, Boszormenyi Zoltan wrote:
>
>> Hi,
>>
>> according to http://wiki.x.org/wiki/RadeonFeature/#4 UVD on RS880 is a
>> TODO.
>> I would like to ask about whether there are any plans to implement it and
>> a
>> vague approximation for the release date, if it's planned.
>>
>> Thanks in advance,
>> Zoltán Böszörményi
>>
>
-- next part --
An HTML attachment was scrubbed...
URL: 



Problems with vgaswitcheroo and prime systems possibly due to ACPI

2013-12-19 Thread Mike Lothian
Hi Rafael

Ever since git commit bbd34fcdd1b201e996235731a7c98fd5197d9e51 I've been
having issues with vgaswitcheroo and the new runpm code on my laptop

I bisected it in bug https://bugs.freedesktop.org/show_bug.cgi?id=71930 and
with it being a regression I was hoping you could take a look

Thanks

Mike



Re: [Bug 50594] VDPAU checks version of wrong DRM driver

2013-10-12 Thread Mike Lothian
It was fixed for me a while back, but I've not tried it in a while since getting
vdpau working on i965
On 12 Oct 2013 23:50,  wrote:

> Comment #4 on bug 50594 from Axel:
>
> Is this bug still active or was the bug fix by Christian König implemented?
>
>  --
> You are receiving this mail because:
>
>- You are the assignee for the bug.
>
>


[PATCH 2/2] drm/radeon: add runtime PM support (v2)

2013-09-20 Thread Mike Lothian
It's probably easier if I just show you this:

http://pastebin.com/xpBJkZDw

Is that expected behaviour? I'm used to seeing something more definite when
echoing OFF into /sys/kernel/debug/vgaswitcheroo/switch

I've just confirmed with powertop that the laptop is using 26 watts when
idle and 47 watts with DRI_PRIME=1 glxgears running


On 20 September 2013 22:12, Alex Deucher  wrote:

> On Fri, Sep 20, 2013 at 6:10 PM, Mike Lothian  wrote:
> > Sorry that was a typo on my part. I'm using radeon.runpm=1
> >
> > I can see audio is switched off
> >
> > [drm] Disabling audio 0 support
> >
> > When I use DRI_PRIME=1 I see the DRM initialisation for my card each
> time I
> > fire up an application or run glxinfo
>
> Yup.  that's the card resuming.
>
> >
> > There aren't any explicit messages that the card is on or off, however
> >
> > How can I tell if I have a powerxpress card?
>
> If you have a laptop with an integrated GPU and a discrete GPU.  You
> should see a message about atpx.
>
> Alex
>
> >
> > Thanks again
> >
> > Mike
> >
> > On 20 Sep 2013 22:05, "Alex Deucher"  wrote:
> >>
> >> On Fri, Sep 20, 2013 at 4:25 PM, Mike Lothian 
> wrote:
> >> > Hi
> >> >
> >> > Is there an easy way to check this is on?
> >>
> >> It's on by default if your system is a powerxpress system (hybrid
> laptop).
> >>
> >> >
> >> > I have radeon.dynpm=1 in grub but usually when I use switcheroo I see
> >> > messages saying the card is now off; at the moment I can only see
> >> > messages saying when the card gets powered up
> >> >
> >>
> >> The option is radeon.runpm for this.  Note that only powerxpress
> >> systems are supported.  There is no support for powering down
> >> arbitrary cards yet.
> >>
> >> > Is it possible to have the on and off messages appearing?
> >>
> >> On a supported system, you will see suspend and resume messages
> >> when the card is powered down/up.
> >>
> >> Alex
> >>
> >> >
> >> > Cheers
> >> >
> >> > Mike
> >> >
> >> >
> >> > On 20 September 2013 18:18, Alex Deucher 
> wrote:
> >> >>
> >> >> From: Dave Airlie 
> >> >>
> >> >> This hooks radeon up to the runtime PM system to enable
> >> >> dynamic power management for secondary GPUs in switchable
> >> >> and powerxpress laptops.
> >> >>
> >> >> v2: agd5f: clean up, add module parameter
> >> >>
> >> >> Signed-off-by: Dave Airlie 
> >> >> Signed-off-by: Alex Deucher 
> >> >> ---
> >> >>  drivers/gpu/drm/radeon/radeon.h  |   8 +-
> >> >>  drivers/gpu/drm/radeon/radeon_atpx_handler.c |   4 +
> >> >>  drivers/gpu/drm/radeon/radeon_connectors.c   |  63 --
> >> >>  drivers/gpu/drm/radeon/radeon_device.c   |  52 +---
> >> >>  drivers/gpu/drm/radeon/radeon_display.c  |  47 ++-
> >> >>  drivers/gpu/drm/radeon/radeon_drv.c  | 122
> >> >> +--
> >> >>  drivers/gpu/drm/radeon/radeon_drv.h  |   3 +
> >> >>  drivers/gpu/drm/radeon/radeon_ioc32.c|   2 +-
> >> >>  drivers/gpu/drm/radeon/radeon_irq_kms.c  |   8 +-
> >> >>  drivers/gpu/drm/radeon/radeon_kms.c  |  26 +-
> >> >>  10 files changed, 299 insertions(+), 36 deletions(-)
> >> >>
> >> >> diff --git a/drivers/gpu/drm/radeon/radeon.h
> >> >> b/drivers/gpu/drm/radeon/radeon.h
> >> >> index 986100a..ad54525 100644
> >> >> --- a/drivers/gpu/drm/radeon/radeon.h
> >> >> +++ b/drivers/gpu/drm/radeon/radeon.h
> >> >> @@ -98,6 +98,7 @@ extern int radeon_lockup_timeout;
> >> >>  extern int radeon_fastfb;
> >> >>  extern int radeon_dpm;
> >> >>  extern int radeon_aspm;
> >> >> +extern int radeon_runtime_pm;
> >> >>
> >> >>  /*
> >> >>   * Copy from radeon_drv.h so we don't have to include both and have
> >> >> conflicting
> >> >> @@ -2212,6 +2213,9 @@ struct radeon_device {
> >> >> /* clock, powergating flags */
> >> >> u32 cg_flags;
> >> >

[PATCH 2/2] drm/radeon: add runtime PM support (v2)

2013-09-20 Thread Mike Lothian
Sorry that was a typo on my part. I'm using radeon.runpm=1

I can see audio is switched off

[drm] Disabling audio 0 support

When I use DRI_PRIME=1 I see the DRM initialisation for my card each time I
fire up an application or run glxinfo

There aren't any explicit messages that the card is on or off, however

How can I tell if I have a powerxpress card?

Thanks again

Mike
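
On supported kernels, the PowerXpress question can usually be answered from
/sys/kernel/debug/vgaswitcheroo/switch, which lists the integrated (IGD) and
discrete (DIS) GPUs along with their power state. A small sketch of parsing
that file's output — the colon-separated field layout below is an assumption
based on the usual vga_switcheroo debugfs format:

```python
# Sketch: parse /sys/kernel/debug/vgaswitcheroo/switch output.
# Assumed field layout: id:client(IGD/DIS):active(+ or space):power:PCI-address
def parse_switcheroo(text):
    gpus = []
    for line in text.strip().splitlines():
        num, client, active, power, *addr = line.split(":")
        gpus.append({
            "client": client,          # IGD = integrated, DIS = discrete
            "active": active == "+",   # '+' marks the GPU driving the display
            "power": power,            # e.g. Pwr, Off, DynOff, DynPwr
            "pci": ":".join(addr),
        })
    return gpus

# Hypothetical sample: two entries (IGD + DIS) suggest a PowerXpress/hybrid
# system; DynOff means the discrete GPU has been runtime-suspended.
sample = """0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:01:00.0"""

gpus = parse_switcheroo(sample)
print(len(gpus), gpus[1]["power"])   # → 2 DynOff
```

A single entry in that file (or no file at all) would indicate the system is
not a switchable/PowerXpress laptop.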
On 20 Sep 2013 22:05, "Alex Deucher"  wrote:

> On Fri, Sep 20, 2013 at 4:25 PM, Mike Lothian  wrote:
> > Hi
> >
> > Is there an easy way to check this is on?
>
> It's on by default if your system is a powerxpress system (hybrid laptop).
>
> >
> > I have radeon.dynpm=1 in grub, but usually when I use switcheroo I see
> > messages saying the card is now off; at the moment I can only see messages
> > saying when the card gets powered up
> >
>
> The option is radeon.runpm for this.  Note that only powerxpress
> systems are supported.  There is no support for powering down
> arbitrary cards yet.
>
> > Is it possible to have the on and off messages appearing?
>
> On a supported system, you will see suspend and resume messages when
> the card is powered down/up.
>
> Alex
>
> >
> > Cheers
> >
> > Mike
> >
> >
> > On 20 September 2013 18:18, Alex Deucher  wrote:
> >>
> >> From: Dave Airlie 
> >>
> >> This hooks radeon up to the runtime PM system to enable
> >> dynamic power management for secondary GPUs in switchable
> >> and powerxpress laptops.
> >>
> >> v2: agd5f: clean up, add module parameter
> >>
> >> Signed-off-by: Dave Airlie 
> >> Signed-off-by: Alex Deucher 
> >> ---
> >>  drivers/gpu/drm/radeon/radeon.h  |   8 +-
> >>  drivers/gpu/drm/radeon/radeon_atpx_handler.c |   4 +
> >>  drivers/gpu/drm/radeon/radeon_connectors.c   |  63 --
> >>  drivers/gpu/drm/radeon/radeon_device.c   |  52 +---
> >>  drivers/gpu/drm/radeon/radeon_display.c  |  47 ++-
> >>  drivers/gpu/drm/radeon/radeon_drv.c  | 122
> >> +--
> >>  drivers/gpu/drm/radeon/radeon_drv.h  |   3 +
> >>  drivers/gpu/drm/radeon/radeon_ioc32.c|   2 +-
> >>  drivers/gpu/drm/radeon/radeon_irq_kms.c  |   8 +-
> >>  drivers/gpu/drm/radeon/radeon_kms.c  |  26 +-
> >>  10 files changed, 299 insertions(+), 36 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/radeon/radeon.h
> >> b/drivers/gpu/drm/radeon/radeon.h
> >> index 986100a..ad54525 100644
> >> --- a/drivers/gpu/drm/radeon/radeon.h
> >> +++ b/drivers/gpu/drm/radeon/radeon.h
> >> @@ -98,6 +98,7 @@ extern int radeon_lockup_timeout;
> >>  extern int radeon_fastfb;
> >>  extern int radeon_dpm;
> >>  extern int radeon_aspm;
> >> +extern int radeon_runtime_pm;
> >>
> >>  /*
> >>   * Copy from radeon_drv.h so we don't have to include both and have
> >> conflicting
> >> @@ -2212,6 +2213,9 @@ struct radeon_device {
> >> /* clock, powergating flags */
> >> u32 cg_flags;
> >> u32 pg_flags;
> >> +
> >> +   struct dev_pm_domain vga_pm_domain;
> >> +   bool have_disp_power_ref;
> >>  };
> >>
> >>  int radeon_device_init(struct radeon_device *rdev,
> >> @@ -2673,8 +2677,8 @@ extern void
> radeon_ttm_placement_from_domain(struct
> >> radeon_bo *rbo, u32 domain);
> >>  extern bool radeon_ttm_bo_is_radeon_bo(struct ttm_buffer_object *bo);
> >>  extern void radeon_vram_location(struct radeon_device *rdev, struct
> >> radeon_mc *mc, u64 base);
> >>  extern void radeon_gtt_location(struct radeon_device *rdev, struct
> >> radeon_mc *mc);
> >> -extern int radeon_resume_kms(struct drm_device *dev, bool resume);
> >> -extern int radeon_suspend_kms(struct drm_device *dev, bool suspend);
> >> +extern int radeon_resume_kms(struct drm_device *dev, bool resume, bool
> >> fbcon);
> >> +extern int radeon_suspend_kms(struct drm_device *dev, bool suspend,
> bool
> >> fbcon);
> >>  extern void radeon_ttm_set_active_vram_size(struct radeon_device *rdev,
> >> u64 size);
> >>  extern void radeon_program_register_sequence(struct radeon_device
> *rdev,
> >>  const u32 *registers,
> >> diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
> >> b/drivers/gpu/drm/radeon/ra

[PATCH 2/2] drm/radeon: add runtime PM support (v2)

2013-09-20 Thread Mike Lothian
Hi

Is there an easy way to check this is on?

I have radeon.dynpm=1 in grub, but usually when I use switcheroo I see
messages saying the card is now off; at the moment I can only see messages
saying when the card gets powered up

Is it possible to have the on and off messages appearing?

Cheers

Mike


On 20 September 2013 18:18, Alex Deucher  wrote:

> From: Dave Airlie 
>
> This hooks radeon up to the runtime PM system to enable
> dynamic power management for secondary GPUs in switchable
> and powerxpress laptops.
>
> v2: agd5f: clean up, add module parameter
>
> Signed-off-by: Dave Airlie 
> Signed-off-by: Alex Deucher 
> ---
>  drivers/gpu/drm/radeon/radeon.h  |   8 +-
>  drivers/gpu/drm/radeon/radeon_atpx_handler.c |   4 +
>  drivers/gpu/drm/radeon/radeon_connectors.c   |  63 --
>  drivers/gpu/drm/radeon/radeon_device.c   |  52 +---
>  drivers/gpu/drm/radeon/radeon_display.c  |  47 ++-
>  drivers/gpu/drm/radeon/radeon_drv.c  | 122
> +--
>  drivers/gpu/drm/radeon/radeon_drv.h  |   3 +
>  drivers/gpu/drm/radeon/radeon_ioc32.c|   2 +-
>  drivers/gpu/drm/radeon/radeon_irq_kms.c  |   8 +-
>  drivers/gpu/drm/radeon/radeon_kms.c  |  26 +-
>  10 files changed, 299 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h
> b/drivers/gpu/drm/radeon/radeon.h
> index 986100a..ad54525 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -98,6 +98,7 @@ extern int radeon_lockup_timeout;
>  extern int radeon_fastfb;
>  extern int radeon_dpm;
>  extern int radeon_aspm;
> +extern int radeon_runtime_pm;
>
>  /*
>   * Copy from radeon_drv.h so we don't have to include both and have
> conflicting
> @@ -2212,6 +2213,9 @@ struct radeon_device {
> /* clock, powergating flags */
> u32 cg_flags;
> u32 pg_flags;
> +
> +   struct dev_pm_domain vga_pm_domain;
> +   bool have_disp_power_ref;
>  };
>
>  int radeon_device_init(struct radeon_device *rdev,
> @@ -2673,8 +2677,8 @@ extern void radeon_ttm_placement_from_domain(struct
> radeon_bo *rbo, u32 domain);
>  extern bool radeon_ttm_bo_is_radeon_bo(struct ttm_buffer_object *bo);
>  extern void radeon_vram_location(struct radeon_device *rdev, struct
> radeon_mc *mc, u64 base);
>  extern void radeon_gtt_location(struct radeon_device *rdev, struct
> radeon_mc *mc);
> -extern int radeon_resume_kms(struct drm_device *dev, bool resume);
> -extern int radeon_suspend_kms(struct drm_device *dev, bool suspend);
> +extern int radeon_resume_kms(struct drm_device *dev, bool resume, bool
> fbcon);
> +extern int radeon_suspend_kms(struct drm_device *dev, bool suspend, bool
> fbcon);
>  extern void radeon_ttm_set_active_vram_size(struct radeon_device *rdev,
> u64 size);
>  extern void radeon_program_register_sequence(struct radeon_device *rdev,
>  const u32 *registers,
> diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
> b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
> index d96070b..6153ec1 100644
> --- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
> +++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
> @@ -59,6 +59,10 @@ struct atpx_mux {
> u16 mux;
>  } __packed;
>
> +bool radeon_is_px(void) {
> +   return radeon_atpx_priv.atpx_detected;
> +}
> +
>  /**
>   * radeon_atpx_call - call an ATPX method
>   *
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c
> b/drivers/gpu/drm/radeon/radeon_connectors.c
> index 79159b5..5855b5b 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -31,6 +31,8 @@
>  #include "radeon.h"
>  #include "atom.h"
>
> +#include 
> +
>  extern void
>  radeon_combios_connected_scratch_regs(struct drm_connector *connector,
>   struct drm_encoder *encoder,
> @@ -626,6 +628,11 @@ radeon_lvds_detect(struct drm_connector *connector,
> bool force)
> struct radeon_connector *radeon_connector =
> to_radeon_connector(connector);
> struct drm_encoder *encoder =
> radeon_best_single_encoder(connector);
> enum drm_connector_status ret = connector_status_disconnected;
> +   int r;
> +
> +   r = pm_runtime_get_sync(connector->dev->dev);
> +   if (r < 0)
> +   return connector_status_disconnected;
>
> if (encoder) {
> struct radeon_encoder *radeon_encoder =
> to_radeon_encoder(encoder);
> @@ -651,6 +658,8 @@ radeon_lvds_detect(struct drm_connector *connector,
> bool force)
> /* check acpi lid status ??? */
>
> radeon_connector_update_scratch_regs(connector, ret);
> +   pm_runtime_mark_last_busy(connector->dev->dev);
> +   pm_runtime_put_autosuspend(connector->dev->dev);
> return ret;
>  }
>
> @@ -750,6 +759,11 @@ radeon_vga_detect(struct drm_connector *connector,
> bool force)
> struct drm_encoder_helper_funcs *enc

Re: [git pull] drm tree for 3.12-rc1

2013-09-10 Thread Mike Lothian
Hi

The regular flash plugin 10.2 doesn't have vaapi (Intel) support

The Chrome flash plugin (10.7?) doesn't have any vaapi or vdpau support

Your best option is to install the vdpau-to-vaapi wrapper and disable the
Chrome flash plugin https://github.com/i-rinat/libvdpau-va-gl

This will revert you to the older 10.2 flash version, which should work

I think I also had to override a few settings in the flash dot file to get
it to skip GPU validation along with following the instruction for the
above project

It's quite convoluted but at least it makes YouTube work

Cheers

Mike
[ Dave - your linux.ie email generates bounces for me, trying redhat
instead ]

On Mon, Sep 9, 2013 at 11:25 PM, Sean V Kelley 
wrote:
>>
>> I'm also a bit bummed that hw acceleration of video doesn't seem to
>> work on Haswell, meaning that full-screen is now a jerky mess. I fear
>> that that is user-space libraries/X.org, but I thought I'd mention it
>> in the hope of getting a "oh, it's working for us, you'll get a fix
>> for it soon".
>
> Can you give a little more detail about video not working?  Video
> accel should work fine with the current versions of libva/intel-driver
> available in Fedora 19 - assuming that's what you're using.

It is indeed F19.

Easy test: go to youtube, and watch things that are in 1080p HD. They
play fine in a window (using about 70% CPU), but full-screened to
2560x1440 they play at about one or two frames per second.

Non-HD content seems to be fine even full-screen. Either just because
it's so much easier to do, or because some level of scaling is
hw-accelerated.

It may well be that I'm using chrome (and chrome seems to tend to use
its own library versions), and firefox indeed seems to be a bit
better. But by "a bit better" I mean closer to full frame rate in
full-screen, but lots of tearing - and it was still using 70% CPU when
displaying in a window. So I think firefox is also still doing
everything in software but may be better about using threads for it.

My previous i5-670, which was inferior in almost every other way, didn't
have these problems. It had the same 2560x1440 display.

  Linus
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH] drm/radeon/dpm: Add #include to *_dpm.c files

2013-07-02 Thread Mike Lothian
Hi

I sent this to the wrong mailing list and it had the wrong patch format

Fixed thanks to glisse

Cheers

Mike


On 2 July 2013 21:34, Mike Lothian  wrote:

> Hi
>
> This patch allows me to compile on my GCC 4.7.3 system when using ld.bfd -
> it doesn't seem to be required on my GCC 4.8.1 system using ld.gold - I'm
> not sure why
>
> Cheers
>
> Mike
>
-- next part --
A non-text attachment was scrubbed...
Name: 0001-Add-include-linux-seq_file.h-to-_dpm.c-files.patch
Type: application/octet-stream
Size: 2924 bytes
Desc: not available
URL: 
<http://lists.freedesktop.org/archives/dri-devel/attachments/20130702/77461d2d/attachment.obj>


Re: [PATCH] drm/radeon/dpm: Add #include to *_dpm.c files

2013-07-02 Thread Mike Lothian
Hi

I sent this to the wrong mailing list and it had the wrong patch format

Fixed thanks to glisse

Cheers

Mike


On 2 July 2013 21:34, Mike Lothian  wrote:

> Hi
>
> This patch allows me to compile on my GCC 4.7.3 system when using ld.bfd -
> it doesn't seem to be required on my GCC 4.8.1 system using ld.gold - I'm
> not sure why
>
> Cheers
>
> Mike
>


0001-Add-include-linux-seq_file.h-to-_dpm.c-files.patch
Description: Binary data
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel