On 19.Dec.2016 09:48, Pekka Paalanen wrote:

On Fri, 16 Dec 2016 11:35:48 -0600
DRC <dcomman...@users.sourceforge.net> wrote:

On 12/16/16 3:06 AM, Pekka Paalanen wrote:
I should probably tell a little more, because what I explained above is
a simplification due to using a single path for all buffer types.
...
Thanks again.  This is all very new to me, and I guess I don't fully
understand where these buffer types would come into play.  Bearing in
mind that I really don't care whether non-OpenGL applications are
hardware-accelerated, does that simplify anything?  It would be
sufficient if only OpenGL applications could render using the GPU.  The
compositor itself doesn't necessarily need to.
I do not know of any OpenGL (EGL, actually) implementation that would
allow using the GPU while the compositor is not accepting hardware
(EGL-based) buffers. The reason is that it simply cannot be performant
in a fully local usage scenario. When I say this, I mean in a way that
is transparent to the application. Applications themselves could
initialize GPU support on an EGL platform other than Wayland, render
with the GPU, call glReadPixels, and then use wl_shm buffers for
sending the content to the compositor. However, this obviously needs
explicit coding in the application, and I would not expect anyone to do
it, because in the usual case of a fully local graphics stack without
any remoting, it would incur a huge performance hit.

Well, I suppose some proprietary implementations have used wl_shm for
hardware-rendered content, but we have always considered that a bug,
especially when no other transport has been implemented.

A bit of background:
http://ppaalanen.blogspot.fi/2012/11/on-supporting-wayland-gl-clients-and.html

Oh, this might be a nice reading before that one:
http://ppaalanen.blogspot.fi/2012/03/what-does-egl-do-in-wayland-stack.html

Lastly, and I believe this is the saddest part for you, the NVIDIA
proprietary drivers do not work (the way we would like).

NVIDIA has been proposing for years a solution that is completely
different from anything explained above: EGLStreams, and for the same
number of years, the community has been unimpressed with the design.
Anyway, NVIDIA did implement their design and even wrote patches for
Weston which we have not merged. Other compositors (e.g. Mutter) may
choose to support EGLStreams as a temporary solution.
I guess I was hoping to take advantage of the EGL_PLATFORM_DEVICE_EXT
extension that allows for off-screen OpenGL rendering.  It currently
works with NVIDIA's drivers:
https://gist.github.com/dcommander/ee1247362201552b2532
Right. You can do that, but then you need to write the application to
use it...

Hmm, indeed, maybe it would be possible if you are imposing your own
EGL middle-man library between the application and the real EGL library.

That's definitely an idea to look into. I cannot say off-hand why it
would not work, so maybe it can work. :-)

To summarize, with that approach, you would have the client send only
wl_shm buffers to the compositor, and the compositor never needs to
touch EGL at all. It also has the benefit that the read-back cost
(glReadPixels) is completely in the client process, so the compositor
will not stall on it, and you do not need the things I explained above
in the compositor. And you get support for the proprietary drivers!

Sorry for not realizing the "wrap libEGL.so" approach earlier.


Thanks,
pq


Yeah, and what does this look like when put in context with Waltham?



Regards
Christian Stroetmann
_______________________________________________
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
