Re: DRM_UDL and GPU under Xserver

2018-04-13 Thread Daniel Vetter
On Mon, Apr 09, 2018 at 09:45:51AM +0000, Alexey Brodkin wrote:
> Hi Daniel,
> 
> On Mon, 2018-04-09 at 11:17 +0200, Daniel Vetter wrote:
> > On Mon, Apr 09, 2018 at 08:55:36AM +0000, Alexey Brodkin wrote:
> > > Hi Daniel,
> > > 
> > > On Mon, 2018-04-09 at 10:31 +0200, Daniel Vetter wrote:
> > > > On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> > > > > Hi Daniel, all,
> > > 
> > > [snip]
> > > 
> > > > > Ok it was quite some time ago so I forgot about that completely.
> > > > > I really made one trivial change in xf86-video-armada:
> > > > > >8--
> > > > > --- a/src/armada_module.c
> > > > > +++ b/src/armada_module.c
> > > > > @@ -26,7 +26,7 @@
> > > > >  #define ARMADA_NAME        "armada"
> > > > >  #define ARMADA_DRIVER_NAME "armada"
> > > > >  
> > > > > -#define DRM_MODULE_NAMES   "armada-drm", "imx-drm"
> > > > > +#define DRM_MODULE_NAMES   "armada-drm", "imx-drm", "udl"
> > > > >  #define DRM_DEFAULT_BUS_ID NULL
> > > > > >8--
> > > > > 
> > > > > Otherwise Xserver fails on start, which is expected given "imx-drm" is
> > > > > intentionally removed.
> > > 
> > > Here I meant that I explicitly disabled DRM_IMX in the kernel configuration
> > > so that it is not used at run-time.
> > > 
> > > > You need to keep imx-drm around. And then light up the udl display using
> > > > prime. Afaiui it should all just work (but with maybe a few disconnected
> > > > outputs from imx-drm around that you don't need, but that's not a
> > > > problem).
> > > 
> > > And given my comment above I don't really see any difference between
> > > DRM_IMX and DRM_UDL (except their HW implementation, which I guess should
> > > not bother upper layers), so why do we need to treat them differently?
> > > 
> > > Most probably I'm missing something, but my thought was that if we have
> > > 2 equally well-supported KMS devices we may easily swap them and still
> > > have the resulting setup functional.
> > 
> > armada is not a generic drm driver, but can only be used for armada-drm
> > and imx-drm. You can't just use it with any drm device, for that you need
> > a generic driver like -modesetting.
> 
> But "armada" is the name of xf86 "driver" only which then uses true 
> DRM_ETNAVIV
> kernel driver. That's why I'm a bit confused.
> 
> And from what I see, DRM_ETNAVIV happily works with either DRM_xxx framebuffer
> device, be it DRM_IMX or DRM_UDL.

Names are irrelevant and often just historical accidents. Armada was
originally only for armada, but was then extended to support the etnaviv 2d
core, and then extended to IMX.

That the kernel properly shares buffers between all of them is kinda
orthogonal to what armada-the-X11-driver supports.

Yes, graphics is complicated; that's why touching and changing random stuff
you don't fully understand is not a good idea :-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: DRM_UDL and GPU under Xserver

2018-04-10 Thread Alexey Brodkin
Hi Daniel,

On Mon, 2018-04-09 at 11:17 +0200, Daniel Vetter wrote:
> On Mon, Apr 09, 2018 at 08:55:36AM +0000, Alexey Brodkin wrote:
> > Hi Daniel,
> > 
> > On Mon, 2018-04-09 at 10:31 +0200, Daniel Vetter wrote:
> > > On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> > > > Hi Daniel, all,
> > 
> > [snip]
> > 
> > > > Ok it was quite some time ago so I forgot about that completely.
> > > > I really made one trivial change in xf86-video-armada:
> > > > >8--
> > > > --- a/src/armada_module.c
> > > > +++ b/src/armada_module.c
> > > > @@ -26,7 +26,7 @@
> > > >  #define ARMADA_NAME        "armada"
> > > >  #define ARMADA_DRIVER_NAME "armada"
> > > >  
> > > > -#define DRM_MODULE_NAMES   "armada-drm", "imx-drm"
> > > > +#define DRM_MODULE_NAMES   "armada-drm", "imx-drm", "udl"
> > > >  #define DRM_DEFAULT_BUS_ID NULL
> > > > >8--
> > > > 
> > > > Otherwise Xserver fails on start, which is expected given "imx-drm" is
> > > > intentionally removed.
> > 
> > Here I meant that I explicitly disabled DRM_IMX in the kernel configuration
> > so that it is not used at run-time.
> > 
> > > You need to keep imx-drm around. And then light up the udl display using
> > > prime. Afaiui it should all just work (but with maybe a few disconnected
> > > outputs from imx-drm around that you don't need, but that's not a
> > > problem).
> > 
> > And given my comment above I don't really see any difference between
> > DRM_IMX and DRM_UDL (except their HW implementation, which I guess should
> > not bother upper layers), so why do we need to treat them differently?
> > 
> > Most probably I'm missing something, but my thought was that if we have
> > 2 equally well-supported KMS devices we may easily swap them and still
> > have the resulting setup functional.
> 
> armada is not a generic drm driver, but can only be used for armada-drm
> and imx-drm. You can't just use it with any drm device, for that you need
> a generic driver like -modesetting.

But "armada" is the name of xf86 "driver" only which then uses true DRM_ETNAVIV
kernel driver. That's why I'm a bit confused.

And from what I see, DRM_ETNAVIV happily works with either DRM_xxx framebuffer
device, be it DRM_IMX or DRM_UDL.

-Alexey


Re: DRM_UDL and GPU under Xserver

2018-04-10 Thread Alexey Brodkin
Hi Daniel,

On Mon, 2018-04-09 at 10:31 +0200, Daniel Vetter wrote:
> On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> > Hi Daniel, all,

[snip]

> > Ok it was quite some time ago so I forgot about that completely.
> > I really made one trivial change in xf86-video-armada:
> > >8--
> > --- a/src/armada_module.c
> > +++ b/src/armada_module.c
> > @@ -26,7 +26,7 @@
> >  #define ARMADA_NAME        "armada"
> >  #define ARMADA_DRIVER_NAME "armada"
> >  
> > -#define DRM_MODULE_NAMES   "armada-drm", "imx-drm"
> > +#define DRM_MODULE_NAMES   "armada-drm", "imx-drm", "udl"
> >  #define DRM_DEFAULT_BUS_ID NULL
> > >8--
> > 
> > Otherwise Xserver fails on start, which is expected given "imx-drm" is
> > intentionally removed.

Here I meant that I explicitly disabled DRM_IMX in the kernel configuration
so that it is not used at run-time.

> You need to keep imx-drm around. And then light up the udl display using
> prime. Afaiui it should all just work (but with maybe a few disconnected
> outputs from imx-drm around that you don't need, but that's not a
> problem).

And given my comment above I don't really see any difference between
DRM_IMX and DRM_UDL (except their HW implementation, which I guess should
not bother upper layers), so why do we need to treat them differently?

Most probably I'm missing something, but my thought was that if we have
2 equally well-supported KMS devices we may easily swap them and still
have the resulting setup functional.

-Alexey


Re: DRM_UDL and GPU under Xserver

2018-04-09 Thread Daniel Vetter
On Mon, Apr 09, 2018 at 08:55:36AM +0000, Alexey Brodkin wrote:
> Hi Daniel,
> 
> On Mon, 2018-04-09 at 10:31 +0200, Daniel Vetter wrote:
> > On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> > > Hi Daniel, all,
> 
> [snip]
> 
> > > Ok it was quite some time ago so I forgot about that completely.
> > > I really made one trivial change in xf86-video-armada:
> > > >8--
> > > --- a/src/armada_module.c
> > > +++ b/src/armada_module.c
> > > @@ -26,7 +26,7 @@
> > >  #define ARMADA_NAME        "armada"
> > >  #define ARMADA_DRIVER_NAME "armada"
> > >  
> > > -#define DRM_MODULE_NAMES   "armada-drm", "imx-drm"
> > > +#define DRM_MODULE_NAMES   "armada-drm", "imx-drm", "udl"
> > >  #define DRM_DEFAULT_BUS_ID NULL
> > > >8--
> > > 
> > > Otherwise Xserver fails on start, which is expected given "imx-drm" is
> > > intentionally removed.
> 
> Here I meant that I explicitly disabled DRM_IMX in the kernel configuration
> so that it is not used at run-time.
> 
> > You need to keep imx-drm around. And then light up the udl display using
> > prime. Afaiui it should all just work (but with maybe a few disconnected
> > outputs from imx-drm around that you don't need, but that's not a
> > problem).
> 
> And given my comment above I don't really see any difference between
> DRM_IMX and DRM_UDL (except their HW implementation, which I guess should
> not bother upper layers), so why do we need to treat them differently?
> 
> Most probably I'm missing something, but my thought was that if we have
> 2 equally well-supported KMS devices we may easily swap them and still
> have the resulting setup functional.

armada is not a generic drm driver, but can only be used for armada-drm
and imx-drm. You can't just use it with any drm device, for that you need
a generic driver like -modesetting.

Just enable all the drivers you have for your hw, and it should work.
Hacking on drivers without knowing what they expect tends to not work so
well.
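
(For reference: once both KMS drivers are loaded and X runs -modesetting on
udl next to the SoC driver, the displays can be tied together through RandR
providers. A sketch with standard xrandr commands; the provider indices are
assumptions and will differ per setup:
>8--
# List the providers X registered (one per KMS device).
xrandr --listproviders

# Let the udl provider (assumed index 1) scan out buffers rendered
# by the SoC GPU's provider (assumed index 0).
xrandr --setprovideroutputsource 1 0
>8--)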
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: DRM_UDL and GPU under Xserver

2018-04-09 Thread Daniel Vetter
On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> Hi Daniel, all,
> 
> On Thu, 2018-04-05 at 15:44 +0200, Daniel Vetter wrote:
> > On Thu, Apr 05, 2018 at 11:10:03AM +0000, Alexey Brodkin wrote:
> > > Hi Daniel, Lucas,
> > > 
> > > On Thu, 2018-04-05 at 12:59 +0200, Daniel Vetter wrote:
> > > > On Thu, Apr 5, 2018 at 12:29 PM, Lucas Stach  
> > > > wrote:
> > > > > Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> > > > > > On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
> > > > > >  wrote:
> > > > > > > Hi Daniel,
> > > > > > > 
> > > > > > > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > > > > > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
> > > > > > > >  wrote:
> > > > > > > > > Hello,
> > > > > > > > > 
> > > > > > > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
> > > > > > > > > GPU-accelerated graphics.
> > > > > > > > > Hardware setup is as simple as a devboard + DisplayLink
> > > > > > > > > adapter.
> > > > > > > > > Devboards we use for this experiment are:
> > > > > > > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> > > > > > > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
> > > > > > > > > well)
> > > > > > > > > 
> > > > > > > > > I'm sure any other board with DRM supported GPU will work,
> > > > > > > > > those we just used
> > > > > > > > > as the very recent Linux kernels could be easily run on them
> > > > > > > > > both.
> > > > > > > > > 
> > > > > > > > > Basically the problem is UDL needs to be explicitly notified
> > > > > > > > > about new data
> > > > > > > > > to be rendered on the screen compared to typical bit-streamers
> > > > > > > > > that infinitely
> > > > > > > > > scan a dedicated buffer in memory.
> > > > > > > > > 
> > > > > > > > > In case of UDL there're just 2 ways for this notification:
> > > > > > > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> > > > > > > > > 
> > > > > > > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > > > > > > > > 
> > > > > > > > > But neither of these IOCTLs happens when we run Xserver with the
> > > > > > > > > xf86-video-armada driver
> > > > > > > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > > > > > > > 
> > > > > > > > > Is something missing in Xserver or in the UDL driver?
> > > > > > > > 
> > > > > > > > Use the -modesetting driver for UDL, that one works correctly.
> > > > > > > 
> > > > > > > If you're talking about the "modesetting" driver of Xserver [1] then indeed
> > > > > > > a picture is displayed on the screen. But there I guess won't be any
> > > > > > > 3D acceleration.
> > > > > > > 
> > > > > > > At least that's what was suggested to me earlier here [2] by Lucas:
> > > > > > > >8---
> > > > > > > For 3D acceleration to work under X you need the etnaviv-specific
> > > > > > > DDX driver, which can be found here:
> > > > > > > 
> > > > > > > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> > > > > > 
> > > > > > You definitely want to use -modesetting for UDL. And I thought with
> > > > > > glamor and the corresponding mesa work you should also get
> > > > > > acceleration. Insisting that you must use a driver-specific ddx is
> > > > > > broken; the world doesn't work like that anymore.
> > > > > 
> > > > > On etnaviv the world definitely still works like this. The etnaviv DDX
> > > > > uses the dedicated 2D hardware of the Vivante GPUs, which is much
> > > > > faster and more efficient than going through Glamor.
> > > > > Especially since almost all X accel operations are done on linear
> > > > > buffers, while the 3D GPU can only ever do tiled on both sampler and
> > > > > render, where some multi-pipe 3D cores can't even read the tiling they
> > > > > write out. So Glamor is an endless copy fest using the resolve engine
> > > > > on those.
> > > > 
> > > > Ah right, I've forgotten about the vivante 2d cores again.
> > > > 
> > > > If using etnaviv with UDL is a use-case that needs to be supported, one
> > > > would need to port the UDL specifics from -modesetting to the -armada DDX.

Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Alexey Brodkin
Hi Daniel, Lucas,

On Thu, 2018-04-05 at 12:59 +0200, Daniel Vetter wrote:
> On Thu, Apr 5, 2018 at 12:29 PM, Lucas Stach  wrote:
> > Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> > > On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
> > >  wrote:
> > > > Hi Daniel,
> > > > 
> > > > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
> > > > >  wrote:
> > > > > > Hello,
> > > > > > 
> > > > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
> > > > > > GPU-accelerated graphics.
> > > > > > Hardware setup is as simple as a devboard + DisplayLink
> > > > > > adapter.
> > > > > > Devboards we use for this experiment are:
> > > > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> > > > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
> > > > > > well)
> > > > > > 
> > > > > > I'm sure any other board with DRM supported GPU will work,
> > > > > > those we just used
> > > > > > as the very recent Linux kernels could be easily run on them
> > > > > > both.
> > > > > > 
> > > > > > Basically the problem is UDL needs to be explicitly notified
> > > > > > about new data
> > > > > > to be rendered on the screen compared to typical bit-streamers
> > > > > > that infinitely
> > > > > > scan a dedicated buffer in memory.
> > > > > > 
> > > > > > In case of UDL there're just 2 ways for this notification:
> > > > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> > > > > > 
> > > > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > > > > > 
> > > > > > But neither of these IOCTLs happens when we run Xserver with the
> > > > > > xf86-video-armada driver
> > > > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > > > > 
> > > > > > Is something missing in Xserver or in the UDL driver?
> > > > > 
> > > > > Use the -modesetting driver for UDL, that one works correctly.
> > > > 
> > > > If you're talking about the "modesetting" driver of Xserver [1] then indeed
> > > > a picture is displayed on the screen. But there I guess won't be any
> > > > 3D acceleration.
> > > > 
> > > > At least that's what was suggested to me earlier here [2] by Lucas:
> > > > >8---
> > > > For 3D acceleration to work under X you need the etnaviv-specific
> > > > DDX driver, which can be found here:
> > > > 
> > > > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> > > 
> > > You definitely want to use -modesetting for UDL. And I thought with
> > > glamor and the corresponding mesa work you should also get
> > > acceleration. Insisting that you must use a driver-specific ddx is
> > > broken; the world doesn't work like that anymore.
> > 
> > On etnaviv the world definitely still works like this. The etnaviv DDX
> > uses the dedicated 2D hardware of the Vivante GPUs, which is much
> > faster and more efficient than going through Glamor.
> > Especially since almost all X accel operations are done on linear
> > buffers, while the 3D GPU can only ever do tiled on both sampler and
> > render, where some multi-pipe 3D cores can't even read the tiling they
> > write out. So Glamor is an endless copy fest using the resolve engine
> > on those.
> 
> Ah right, I've forgotten about the vivante 2d cores again.
> 
> > If using etnaviv with UDL is a use-case that needs to be supported, one
> > would need to port the UDL specifics from -modesetting to the -armada
> > DDX.
> 
> I don't think this makes sense.

I'm not really sure it has anything to do with Etnaviv in particular.
Given UDL might be attached to any board with any GPU, that would mean we'd
need to add those "UDL specifics from -modesetting" to all xf86-video drivers,
right?

> > > Lucas, can you pls clarify? Also, why does -armada bind against all
> > > kms drivers, that's probably too much.
> > 
> > I think that's a local modification done by Alexey. The armada driver
> > only binds to armada and imx-drm by default.

Actually it all magically works without any modifications.
I just start X with the following xorg.conf [1]:
>8--
Section "Device"
Identifier  "Driver0"
Screen  0
Driver  "armada"

Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Alexey Brodkin
Hi Daniel, all,

On Thu, 2018-04-05 at 15:44 +0200, Daniel Vetter wrote:
> On Thu, Apr 05, 2018 at 11:10:03AM +0000, Alexey Brodkin wrote:
> > Hi Daniel, Lucas,
> > 
> > On Thu, 2018-04-05 at 12:59 +0200, Daniel Vetter wrote:
> > > On Thu, Apr 5, 2018 at 12:29 PM, Lucas Stach  
> > > wrote:
> > > > Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> > > > > On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
> > > > >  wrote:
> > > > > > Hi Daniel,
> > > > > > 
> > > > > > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > > > > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
> > > > > > >  wrote:
> > > > > > > > Hello,
> > > > > > > > 
> > > > > > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
> > > > > > > > GPU-accelerated graphics.
> > > > > > > > Hardware setup is as simple as a devboard + DisplayLink
> > > > > > > > adapter.
> > > > > > > > Devboards we use for this experiment are:
> > > > > > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> > > > > > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
> > > > > > > > well)
> > > > > > > > 
> > > > > > > > I'm sure any other board with DRM supported GPU will work,
> > > > > > > > those we just used
> > > > > > > > as the very recent Linux kernels could be easily run on them
> > > > > > > > both.
> > > > > > > > 
> > > > > > > > Basically the problem is UDL needs to be explicitly notified
> > > > > > > > about new data
> > > > > > > > to be rendered on the screen compared to typical bit-streamers
> > > > > > > > that infinitely
> > > > > > > > scan a dedicated buffer in memory.
> > > > > > > > 
> > > > > > > > In case of UDL there're just 2 ways for this notification:
> > > > > > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> > > > > > > > 
> > > > > > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > > > > > > > 
> > > > > > > > But neither of these IOCTLs happens when we run Xserver with the
> > > > > > > > xf86-video-armada driver
> > > > > > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > > > > > > 
> > > > > > > > Is something missing in Xserver or in the UDL driver?
> > > > > > > 
> > > > > > > Use the -modesetting driver for UDL, that one works correctly.
> > > > > > 
> > > > > > If you're talking about the "modesetting" driver of Xserver [1] then indeed
> > > > > > a picture is displayed on the screen. But there I guess won't be any
> > > > > > 3D acceleration.
> > > > > > 
> > > > > > At least that's what was suggested to me earlier here [2] by Lucas:
> > > > > > >8---
> > > > > > For 3D acceleration to work under X you need the etnaviv-specific
> > > > > > DDX driver, which can be found here:
> > > > > > 
> > > > > > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> > > > > 
> > > > > You definitely want to use -modesetting for UDL. And I thought with
> > > > > glamor and the corresponding mesa work you should also get
> > > > > acceleration. Insisting that you must use a driver-specific ddx is
> > > > > broken; the world doesn't work like that anymore.
> > > > 
> > > > On etnaviv the world definitely still works like this. The etnaviv DDX
> > > > uses the dedicated 2D hardware of the Vivante GPUs, which is much
> > > > faster and more efficient than going through Glamor.
> > > > Especially since almost all X accel operations are done on linear
> > > > buffers, while the 3D GPU can only ever do tiled on both sampler and
> > > > render, where some multi-pipe 3D cores can't even read the tiling they
> > > > write out. So Glamor is an endless copy fest using the resolve engine
> > > > on those.
> > > 
> > > Ah right, I've forgotten about the vivante 2d cores again.
> > > 
> > > > If using etnaviv with UDL is a use-case that needs to be supported, one
> > > > would need to port the UDL specifics from -modesetting to the -armada
> > > > DDX.
> > > 
> > > I don't think this makes sense.
> > 
> > I'm not really sure it has anything to do with Etnaviv in particular.
> > Given UDL might be attached to any board with any GPU, that would mean we'd
> > need to add those "UDL specifics from -modesetting" to all xf86-video drivers,
> > right?

Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Alexey Brodkin
Hi Daniel,

On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
>  wrote:
> > Hello,
> > 
> > We're trying to use DisplayLink USB2-to-HDMI adapter to render 
> > GPU-accelerated graphics.
> > Hardware setup is as simple as a devboard + DisplayLink adapter.
> > Devboards we use for this experiment are:
> >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as well)
> > 
> > I'm sure any other board with DRM supported GPU will work, those we just 
> > used
> > as the very recent Linux kernels could be easily run on them both.
> > 
> > Basically the problem is UDL needs to be explicitly notified about new data
> > to be rendered on the screen compared to typical bit-streamers that 
> > infinitely
> > scan a dedicated buffer in memory.
> > 
> > In case of UDL there're just 2 ways for this notification:
> >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > 
> > But neither of these IOCTLs happens when we run Xserver with the
> > xf86-video-armada driver
> > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > 
> > Is something missing in Xserver or in the UDL driver?
> 
> Use the -modesetting driver for UDL, that one works correctly.

If you're talking about the "modesetting" driver of Xserver [1] then indeed
a picture is displayed on the screen. But there I guess won't be any 3D
acceleration.

At least that's what was suggested to me earlier here [2] by Lucas:
>8---
For 3D acceleration to work under X you need the etnaviv specific DDX
driver, which can be found here:

http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
>8---

[1] 
https://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/drivers/modesetting
[2] 
http://lists.infradead.org/pipermail/linux-snps-arc/2017-November/003031.html

-Alexey


Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Daniel Vetter
On Thu, Apr 05, 2018 at 11:10:03AM +0000, Alexey Brodkin wrote:
> Hi Daniel, Lucas,
> 
> On Thu, 2018-04-05 at 12:59 +0200, Daniel Vetter wrote:
> > On Thu, Apr 5, 2018 at 12:29 PM, Lucas Stach  wrote:
> > > Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> > > > On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
> > > >  wrote:
> > > > > Hi Daniel,
> > > > > 
> > > > > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > > > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
> > > > > >  wrote:
> > > > > > > Hello,
> > > > > > > 
> > > > > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
> > > > > > > GPU-accelerated graphics.
> > > > > > > Hardware setup is as simple as a devboard + DisplayLink
> > > > > > > adapter.
> > > > > > > Devboards we use for this experiment are:
> > > > > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> > > > > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
> > > > > > > well)
> > > > > > > 
> > > > > > > I'm sure any other board with DRM supported GPU will work,
> > > > > > > those we just used
> > > > > > > as the very recent Linux kernels could be easily run on them
> > > > > > > both.
> > > > > > > 
> > > > > > > Basically the problem is UDL needs to be explicitly notified
> > > > > > > about new data
> > > > > > > to be rendered on the screen compared to typical bit-streamers
> > > > > > > that infinitely
> > > > > > > scan a dedicated buffer in memory.
> > > > > > > 
> > > > > > > In case of UDL there're just 2 ways for this notification:
> > > > > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> > > > > > > 
> > > > > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > > > > > > 
> > > > > > > But neither of these IOCTLs happens when we run Xserver with the
> > > > > > > xf86-video-armada driver
> > > > > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > > > > > 
> > > > > > > Is something missing in Xserver or in the UDL driver?
> > > > > > 
> > > > > > Use the -modesetting driver for UDL, that one works correctly.
> > > > > 
> > > > > If you're talking about the "modesetting" driver of Xserver [1] then indeed
> > > > > a picture is displayed on the screen. But there I guess won't be any
> > > > > 3D acceleration.
> > > > > 
> > > > > At least that's what was suggested to me earlier here [2] by Lucas:
> > > > > >8---
> > > > > For 3D acceleration to work under X you need the etnaviv-specific
> > > > > DDX driver, which can be found here:
> > > > > 
> > > > > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> > > > 
> > > > You definitely want to use -modesetting for UDL. And I thought with
> > > > glamor and the corresponding mesa work you should also get
> > > > acceleration. Insisting that you must use a driver-specific ddx is
> > > > broken; the world doesn't work like that anymore.
> > > 
> > > On etnaviv the world definitely still works like this. The etnaviv DDX
> > > uses the dedicated 2D hardware of the Vivante GPUs, which is much
> > > faster and more efficient than going through Glamor.
> > > Especially since almost all X accel operations are done on linear
> > > buffers, while the 3D GPU can only ever do tiled on both sampler and
> > > render, where some multi-pipe 3D cores can't even read the tiling they
> > > write out. So Glamor is an endless copy fest using the resolve engine
> > > on those.
> > 
> > Ah right, I've forgotten about the vivante 2d cores again.
> > 
> > > If using etnaviv with UDL is a use-case that needs to be supported, one
> > > would need to port the UDL specifics from -modesetting to the -armada
> > > DDX.
> > 
> > I don't think this makes sense.
> 
> I'm not really sure it has anything to do with Etnaviv in particular.
> Given UDL might be attached to any board with any GPU, that would mean we'd
> need to add those "UDL specifics from -modesetting" to all xf86-video drivers,
> right?

X server supports multiple drivers (for different devices) in parallel.
You should be using armada for the imx-drm thing, and modesetting for udl.
And through the magic of prime it should even figure out that the device

> > > > Lucas, can you pls clarify? Also, why does -armada bind against all
> > > > kms drivers, that's probably too much.

Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Daniel Vetter
On Thu, Apr 5, 2018 at 12:29 PM, Lucas Stach  wrote:
> Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
>> On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
>>  wrote:
>> > Hi Daniel,
>> >
>> > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
>> > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
>> > >  wrote:
>> > > > Hello,
>> > > >
>> > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
>> > > > GPU-accelerated graphics.
>> > > > Hardware setup is as simple as a devboard + DisplayLink
>> > > > adapter.
>> > > > Devboards we use for this experiment are:
>> > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
>> > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
>> > > > well)
>> > > >
>> > > > I'm sure any other board with DRM supported GPU will work,
>> > > > those we just used
>> > > > as the very recent Linux kernels could be easily run on them
>> > > > both.
>> > > >
>> > > > Basically the problem is UDL needs to be explicitly notified
>> > > > about new data
>> > > > to be rendered on the screen compared to typical bit-streamers
>> > > > that infinitely
>> > > > scan a dedicated buffer in memory.
>> > > >
>> > > > In case of UDL there're just 2 ways for this notification:
>> > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
>> > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
>> > > >
>> > > > But neither of these IOCTLs happens when we run Xserver with the
>> > > > xf86-video-armada driver
>> > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
>> > > >
>> > > > Is something missing in Xserver or in the UDL driver?
>> > >
>> > > Use the -modesetting driver for UDL, that one works correctly.
>> >
>> > If you're talking about the "modesetting" driver of Xserver [1] then indeed
>> > a picture is displayed on the screen. But there I guess won't be any
>> > 3D acceleration.
>> >
>> > At least that's what was suggested to me earlier here [2] by Lucas:
>> > >8---
>> > For 3D acceleration to work under X you need the etnaviv-specific
>> > DDX driver, which can be found here:
>> >
>> > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
>>
>> You definitely want to use -modesetting for UDL. And I thought with
>> glamor and the corresponding mesa work you should also get
>> acceleration. Insisting that you must use a driver-specific ddx is
>> broken; the world doesn't work like that anymore.
>
> On etnaviv the world definitely still works like this. The etnaviv DDX
> uses the dedicated 2D hardware of the Vivante GPUs, which is much
> faster and more efficient than going through Glamor.
> Especially since almost all X accel operations are done on linear
> buffers, while the 3D GPU can only ever do tiled on both sampler and
> render, where some multi-pipe 3D cores can't even read the tiling they
> write out. So Glamor is an endless copy fest using the resolve engine
> on those.

Ah right, I've forgotten about the vivante 2d cores again.

> If using etnaviv with UDL is a use-case that needs to be supported, one
> would need to port the UDL specifics from -modesetting to the -armada
> DDX.

I don't think this makes sense.

>> Lucas, can you pls clarify? Also, why does -armada bind against all
>> kms drivers, that's probably too much.
>
> I think that's a local modification done by Alexey. The armada driver
> only binds to armada and imx-drm by default.

Yeah, that sounds a lot more reasonable than trying to teach -armada
about all the things -modesetting already knows to be able to be a
generic kms driver.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Lucas Stach
Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
>  wrote:
> > Hi Daniel,
> > 
> > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
> > >  wrote:
> > > > Hello,
> > > > 
> > > > We're trying to use DisplayLink USB2-to-HDMI adapter to render
> > > > GPU-accelerated graphics.
> > > > Hardware setup is as simple as a devboard + DisplayLink
> > > > adapter.
> > > > Devboards we use for this experiment are:
> > > >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
> > > >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as
> > > > well)
> > > > 
> > > > I'm sure any other board with DRM supported GPU will work,
> > > > those we just used
> > > > as the very recent Linux kernels could be easily run on them
> > > > both.
> > > > 
> > > > Basically the problem is UDL needs to be explicitly notified
> > > > about new data
> > > > to be rendered on the screen compared to typical bit-streamers
> > > > that infinitely
> > > > scan a dedicated buffer in memory.
> > > > 
> > > > In case of UDL there're just 2 ways for this notification:
> > > >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
> > > >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
> > > > 
> > > > But neither of these IOCTLs happens when we run Xserver with the
> > > > xf86-video-armada driver
> > > > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > > 
> > > > Is something missing in Xserver or in the UDL driver?
> > > 
> > > Use the -modesetting driver for UDL, that one works correctly.
> > 
> > If you're talking about the "modesetting" driver of Xserver [1] then indeed
> > a picture is displayed on the screen. But there I guess won't be any
> > 3D acceleration.
> > 
> > At least that's what was suggested to me earlier here [2] by Lucas:
> > >8---
> > For 3D acceleration to work under X you need the etnaviv-specific
> > DDX driver, which can be found here:
> > 
> > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> 
> You definitely want to use -modesetting for UDL. And I thought with
> glamor and the corresponding mesa work you should also get
> acceleration. Insisting that you must use a driver-specific ddx is
> broken; the world doesn't work like that anymore.

On etnaviv the world definitely still works like this. The etnaviv DDX
uses the dedicated 2D hardware of the Vivante GPUs, which is much
faster and more efficient than going through Glamor.
Especially since almost all X accel operations are done on linear
buffers, while the 3D GPU can only ever do tiled on both sampler and
render, where some multi-pipe 3D cores can't even read the tiling they
write out. So Glamor is an endless copy fest using the resolve engine
on those.

If using etnaviv with UDL is a use-case that needs to be supported, one
would need to port the UDL specifics from -modesetting to the -armada
DDX.

> Lucas, can you pls clarify? Also, why does -armada bind against all
> kms drivers, that's probably too much.

I think that's a local modification done by Alexey. The armada driver
only binds to armada and imx-drm by default.

Regards,
Lucas


Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Jose Abreu
Hi Alexey, Daniel,

On 05-04-2018 10:32, Daniel Vetter wrote:
> On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
>  wrote:
>> Hi Daniel,
>>
>> On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
>>> On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
>>>  wrote:
>>>> Hello,
>>>>
>>>> We're trying to use DisplayLink USB2-to-HDMI adapter to render
>>>> GPU-accelerated graphics.
>>>> Hardware setup is as simple as a devboard + DisplayLink adapter.
>>>> Devboards we use for this experiment are:
>>>>  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
>>>>  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as well)
>>>>
>>>> I'm sure any other board with DRM supported GPU will work, those we just
>>>> used as the very recent Linux kernels could be easily run on them both.
>>>>
>>>> Basically the problem is UDL needs to be explicitly notified about new data
>>>> to be rendered on the screen compared to typical bit-streamers that
>>>> infinitely scan a dedicated buffer in memory.
>>>>
>>>> In case of UDL there're just 2 ways for this notification:
>>>>  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
>>>>  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
>>>>
>>>> But neither of these IOCTLs happens when we run Xserver with the
>>>> xf86-video-armada driver
>>>> (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
>>>>
>>>> Is something missing in Xserver or in the UDL driver?
>>> Use the -modesetting driver for UDL, that one works correctly.
>> If you're talking about the "modesetting" driver of Xserver [1] then indeed
>> a picture is displayed on the screen. But there I guess won't be any 3D
>> acceleration.
>>
>> At least that's what was suggested to me earlier here [2] by Lucas:
>> >8---
>> For 3D acceleration to work under X you need the etnaviv specific DDX
>> driver, which can be found here:
>>
>> http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
> You definitely want to use -modesetting for UDL. And I thought with
> glamor and the corresponding mesa work you should also get
> acceleration. Insisting that you must use a driver-specific ddx is
> broken; the world doesn't work like that anymore.

I think what Alexey wants to do is not supported by the -modesetting
driver. He wants to offload rendering to a Vivante GPU and then
display the result on *another* output ... For this I think full
PRIME support is needed, right? I see -modesetting has
drmPrimeFDToHandle but no drmPrimeHandleToFD support. In other
words -modesetting cannot export buffers to another -modesetting
driver; it can only import them (?)
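
(For what it's worth, the libdrm entry points named above pair up like this; a
minimal sketch of the PRIME export/import round-trip between two DRM devices,
with device setup and error reporting trimmed — not code from -modesetting:
>8--
#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>

/* Export a GEM handle from the render device as a dma-buf fd, then
 * import that fd as a GEM handle on the display (e.g. udl) device. */
int share_buffer(int render_fd, int display_fd,
                 uint32_t render_handle, uint32_t *display_handle)
{
    int prime_fd;

    /* GEM handle -> dma-buf file descriptor (the "export" side) */
    if (drmPrimeHandleToFD(render_fd, render_handle, DRM_CLOEXEC, &prime_fd))
        return -1;

    /* dma-buf file descriptor -> GEM handle (the "import" side) */
    return drmPrimeFDToHandle(display_fd, prime_fd, display_handle);
}
>8--)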

Thanks and Best Regards,
Jose Miguel Abreu

>
> Lucas, can you pls clarify? Also, why does -armada bind against all
> kms drivers, that's probaly too much.
> -Daniel
>
>> >8---
>>
>> [1] https://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/drivers/modesetting
>> [2] http://lists.infradead.org/pipermail/linux-snps-arc/2017-November/003031.html
>>
>> -Alexey
>
>



Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Daniel Vetter
On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin
 wrote:
> Hi Daniel,
>
> On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
>> On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
>>  wrote:
>> > Hello,
>> >
>> > We're trying to use DisplayLink USB2-to-HDMI adapter to render 
>> > GPU-accelerated graphics.
>> > Hardware setup is as simple as a devboard + DisplayLink adapter.
>> > Devboards we use for this experiment are:
>> >  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
>> >  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as well)
>> >
>> > I'm sure any other board with DRM supported GPU will work, those we just 
>> > used
>> > as the very recent Linux kernels could be easily run on them both.
>> >
>> > Basically the problem is UDL needs to be explicitly notified about new data
>> > to be rendered on the screen compared to typical bit-streamers that 
>> > infinitely
>> > scan a dedicated buffer in memory.
>> >
>> > In case of UDL there're just 2 ways for this notification:
>> >  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
>> >  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
>> >
>> > But neither of these IOCTLs happens when we run Xserver with the
>> > xf86-video-armada driver
>> > (see http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
>> >
>> > Is something missing in Xserver or in the UDL driver?
>>
>> Use the -modesetting driver for UDL, that one works correctly.
>
> If you're talking about the "modesetting" driver of Xserver [1] then indeed
> a picture is displayed on the screen. But there I guess won't be any 3D
> acceleration.
>
> At least that's what was suggested to me earlier here [2] by Lucas:
> >8---
> For 3D acceleration to work under X you need the etnaviv specific DDX
> driver, which can be found here:
>
> http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel

You definitely want to use -modesetting for UDL. And I thought with
glamor and the corresponding mesa work you should also get
acceleration. Insisting that you must use a driver-specific ddx is
broken; the world doesn't work like that anymore.

Lucas, can you pls clarify? Also, why does -armada bind against all
kms drivers, that's probably too much.
-Daniel

> >8---
>
> [1] 
> https://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/drivers/modesetting
> [2] 
> http://lists.infradead.org/pipermail/linux-snps-arc/2017-November/003031.html
>
> -Alexey



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


DRM_UDL and GPU under Xserver

2018-04-05 Thread Alexey Brodkin
Hello,

We're trying to use DisplayLink USB2-to-HDMI adapter to render GPU-accelerated 
graphics.
Hardware setup is as simple as a devboard + DisplayLink adapter.
Devboards we use for this experiment are:
 * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
 * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as well)

I'm sure any other board with DRM supported GPU will work, those we just used
as the very recent Linux kernels could be easily run on them both.

Basically the problem is that UDL needs to be explicitly notified about new data
to be rendered on the screen, unlike typical bit-streamers that infinitely
scan a dedicated buffer in memory.

In case of UDL there're just 2 ways for this notification:
 1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
 2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
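
(For illustration, the libdrm wrappers for these two paths; a minimal sketch
assuming fd, crtc_id, fb_id and the mode size come from the usual KMS setup:
>8--
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Nudge UDL to scan out new content; either path is enough. */
int notify_udl(int fd, uint32_t crtc_id, uint32_t fb_id,
               uint16_t width, uint16_t height)
{
    /* Path 1: schedule a page flip -> drm_crtc_funcs->page_flip() */
    if (drmModePageFlip(fd, crtc_id, fb_id, DRM_MODE_PAGE_FLIP_EVENT, NULL) == 0)
        return 0;

    /* Path 2: report the damaged region -> drm_framebuffer_funcs->dirty() */
    drmModeClip clip = { .x1 = 0, .y1 = 0, .x2 = width, .y2 = height };
    return drmModeDirtyFB(fd, fb_id, &clip, 1);
}
>8--)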

But neither of these IOCTLs happens when we run Xserver with the xf86-video-armada
driver (see
http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).

Is something missing in Xserver or in the UDL driver?

Regards,
Alexey







Re: DRM_UDL and GPU under Xserver

2018-04-05 Thread Daniel Vetter
On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin
 wrote:
> Hello,
>
> We're trying to use DisplayLink USB2-to-HDMI adapter to render 
> GPU-accelerated graphics.
> Hardware setup is as simple as a devboard + DisplayLink adapter.
> Devboards we use for this experiment are:
>  * Wandboard Quad (based on IMX6 SoC with Vivante GPU) or
>  * HSDK (based on Synopsys ARC HS38 SoC with Vivante GPU as well)
>
> I'm sure any other board with DRM supported GPU will work, those we just used
> as the very recent Linux kernels could be easily run on them both.
>
> Basically the problem is UDL needs to be explicitly notified about new data
> to be rendered on the screen compared to typical bit-streamers that infinitely
> scan a dedicated buffer in memory.
>
> In case of UDL there're just 2 ways for this notification:
>  1) DRM_IOCTL_MODE_PAGE_FLIP that calls drm_crtc_funcs->page_flip()
>  2) DRM_IOCTL_MODE_DIRTYFB that calls drm_framebuffer_funcs->dirty()
>
> But neither of these IOCTLs happens when we run Xserver with the xf86-video-armada
> driver (see
> http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
>
> Is something missing in Xserver or in the UDL driver?

Use the -modesetting driver for UDL, that one works correctly.
Kernel-driver-specific X drivers are kinda deprecated, and stuff like
this (and other bugfixes and improvements that don't propagate around)
is the reason for that.
-Daniel

>
> Regards,
> Alexey
>
>
>
>
>



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch