Re: RFC - GLX Extension to control GLXVND dispatching for PRIME GPU offloading

2019-04-24 Thread Kyle Brenneman

On 4/23/19 4:28 PM, Aaron Plattner wrote:

On 4/17/19 8:51 AM, Kyle Brenneman wrote:
For GPU offloading in libglvnd, where individual clients can run with 
an alternate GPU and client-side vendor library, we'd need some way 
for that alternate vendor library to communicate with its server-side 
counterpart. Normally, the server's GLXVND layer would dispatch any 
GLX requests to whichever driver is running an X screen. This is a 
GLX extension that allows a client to tell the server to send GLX 
requests to a different driver instead.


The basic idea is that the server keeps a separate (screen -> 
GLXServerVendor) mapping for each client. The current global mapping 
is used as the default for each new client, but the client can send a 
request to change its own mapping. That way, if the client uses a 
different vendor library, then the client-side vendor can arrange for 
any GLX requests to go to the matching server-side driver.


The extension uses Atoms as an ID to identify each GLXServerVendor, 
using a string provided by the driver. That way, the client-side 
driver can know which Atom it needs to use without having to define 
an extra query. The client can send a request with a screen number 
and a vendor ID to tell the server to dispatch any GLX requests for 
that screen to the specified vendor. A client can also send None as a 
vendor ID to revert to whatever GLXServerVendor would handle that 
screen by default.


I also added a GLXVendorPrivate/GLXVendorPrivateWithReply-style 
request, which sends a request to a specific vendor based on a vendor 
ID, without having to worry about which vendor is assigned to a 
screen at the moment. Strictly speaking, a vendor library could get 
the same result by adding a regular GLXVendorPrivate request, and 
providing a dispatch function that always routes the request to 
itself, but that seems like it's more of an implementation detail of 
GLXVND.


Also, this extension doesn't define any errors or queries to check 
whether a GLXServerVendor can support a given screen. These requests 
would be sent by a client-side vendor library (not by libglvnd or an 
application), so each driver would be responsible for figuring out on 
its own which screens it can support.


Anyway, I've got a draft of the extension spec here, and I've written 
up a series of patches for the X server to implement it here:
https://gitlab.freedesktop.org/kbrenneman/xserver/tree/GLX_EXT_server_vendor_select 



Hi Kyle,

Have you gotten any feedback on the commits there? It might help to 
create an xorg/xserver merge request for them. You can use the "WIP:" 
prefix on the MR title if you just want to request feedback without 
actually getting it merged.

Not a bad idea. I've posted a merge request with the patches here, and
attached the extension spec:

https://gitlab.freedesktop.org/xorg/xserver/merge_requests/179

-Kyle


Comments and questions welcome.

-Kyle Brenneman


Name

 EXT_server_vendor_select

Name Strings

 GLX_EXT_server_vendor_select

Contact

 Kyle Brenneman, NVIDIA, kbrenneman at nvidia.com

Contributors

 Kyle Brenneman

Status

 XXX - Not complete yet!!!

Version

 Last Modified Date: April 11, 2019
 Revision: 1

Number

 OpenGL Extension #???

Dependencies

 GLX version 1.2 is required.

 This specification is written against the wording of the GLX 1.3
 Protocol Encoding Specification.

Overview

 In multi-GPU systems, a client may decide at runtime which device
 and driver to use for GLX, for example to choose between a
 high-performance and low-power device.

 This extension defines a set of requests that allow a client to
 specify which server-side driver should handle GLX requests from the
 sending client for a particular screen.

IP Status

 No known IP claims.

New Procedures and Functions

 None

New Tokens

 None

Additions to the GLX Specification

 None. These requests are intended to be used by a client-side GLX
 implementation, not by an application. Therefore, this extension
 does not define any new functions or changes to the GLX
 specification.

GLX Protocol

 Get a List of Server-Side Drivers

 Name: glXQueryServerVendorIDsEXT

 Description:
 This request fetches a list of available server-side
 drivers, and the current vendor ID selected for each screen.

 Each driver is identified by an Atom, with a string chosen
 by the driver.

 The reply contains a list of the currently selected vendors
 first, with one Atom for each screen. This will be the
 vendor selected with the glXSelectScreenServerVendorIDEXT
 request, or the default vendor if the client has not sent a
 glXSelectScreenServerVendorIDEXT request for a screen.

 If a screen is using the default vendor, and the vendor does
 not have a vendor ID, then the corresponding Atom in the
 reply will be None.

RFC - GLX Extension to control GLXVND dispatching for PRIME GPU offloading

2019-04-17 Thread Kyle Brenneman
For GPU offloading in libglvnd, where individual clients can run with an 
alternate GPU and client-side vendor library, we'd need some way for 
that alternate vendor library to communicate with its server-side 
counterpart. Normally, the server's GLXVND layer would dispatch any GLX 
requests to whichever driver is running an X screen. This is a GLX 
extension that allows a client to tell the server to send GLX requests 
to a different driver instead.


The basic idea is that the server keeps a separate (screen -> 
GLXServerVendor) mapping for each client. The current global mapping is 
used as the default for each new client, but the client can send a 
request to change its own mapping. That way, if the client uses a 
different vendor library, then the client-side vendor can arrange for 
any GLX requests to go to the matching server-side driver.
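
Roughly, the per-client lookup I have in mind looks something like this
(only a sketch; the table layout and helper names below are made up for
illustration, not the actual patch):

    /* Sketch of a per-client (screen -> vendor) override table.
     * GlxServerVendor is the GLXVND vendor handle; GetGlobalScreenVendor
     * is a hypothetical helper standing in for the existing global map. */
    typedef struct GlxServerVendor GlxServerVendor;   /* opaque GLXVND handle */

    typedef struct {
        GlxServerVendor **vendors;   /* one slot per screen; NULL = use default */
    } ClientVendorMap;

    extern GlxServerVendor *GetGlobalScreenVendor(int screen);  /* hypothetical */

    static GlxServerVendor *
    LookupDispatchVendor(const ClientVendorMap *map, int screen)
    {
        /* A per-client override takes precedence... */
        if (map != NULL && map->vendors[screen] != NULL)
            return map->vendors[screen];
        /* ...otherwise fall back to the global per-screen mapping. */
        return GetGlobalScreenVendor(screen);
    }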


The extension uses Atoms as an ID to identify each GLXServerVendor, 
using a string provided by the driver. That way, the client-side driver 
can know which Atom it needs to use without having to define an extra 
query. The client can send a request with a screen number and a vendor 
ID to tell the server to dispatch any GLX requests for that screen to 
the specified vendor. A client can also send None as a vendor ID to 
revert to whatever GLXServerVendor would handle that screen by default.
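
Continuing that sketch, handling the request on the server side is
basically just updating or clearing the per-client slot (again, the
helper names are made up):

    /* Sketch of handling a "select vendor for screen" request: a screen
     * number plus a vendor-ID Atom, where None clears the override.
     * Assumes X/Xserver headers for Atom, None, Success, BadValue;
     * LookupVendorByID is hypothetical. */
    extern GlxServerVendor *LookupVendorByID(Atom vendorID);  /* hypothetical */

    static int
    HandleSelectScreenVendor(ClientVendorMap *map, int screen, Atom vendorID)
    {
        if (vendorID == None) {
            /* Revert to whatever vendor handles this screen by default. */
            map->vendors[screen] = NULL;
            return Success;
        }

        GlxServerVendor *vendor = LookupVendorByID(vendorID);
        if (vendor == NULL)
            return BadValue;   /* no vendor registered that ID */

        map->vendors[screen] = vendor;
        return Success;
    }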


I also added a GLXVendorPrivate/GLXVendorPrivateWithReply-style request, 
which sends a request to a specific vendor based on a vendor ID, without 
having to worry about which vendor is assigned to a screen at the 
moment. Strictly speaking, a vendor library could get the same result by 
adding a regular GLXVendorPrivate request, and providing a dispatch 
function that always routes the request to itself, but that seems like 
it's more of an implementation detail of GLXVND.
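
The vendor-ID-addressed request then reduces to resolving the Atom and
forwarding, without consulting the screen mapping at all (sketch only;
ForwardToVendor is a placeholder for the GLXVND forwarding path):

    /* Sketch: route a GLXVendorPrivate-style request by vendor ID instead
     * of by screen.  ClientPtr is the usual Xserver client handle. */
    extern int ForwardToVendor(GlxServerVendor *vendor, ClientPtr client);  /* placeholder */

    static int
    DispatchByVendorID(ClientPtr client, Atom vendorID)
    {
        GlxServerVendor *vendor = LookupVendorByID(vendorID);
        if (vendor == NULL)
            return BadValue;

        /* No (client, screen) lookup at all -- the Atom alone selects the
         * handler, independent of which vendor currently owns the screen. */
        return ForwardToVendor(vendor, client);
    }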


Also, this extension doesn't define any errors or queries to check 
whether a GLXServerVendor can support a given screen. These requests 
would be sent by a client-side vendor library (not by libglvnd or an 
application), so each driver would be responsible for figuring out on 
its own which screens it can support.


Anyway, I've got a draft of the extension spec here, and I've written up 
a series of patches for the X server to implement it here:

https://gitlab.freedesktop.org/kbrenneman/xserver/tree/GLX_EXT_server_vendor_select

Comments and questions welcome.

-Kyle Brenneman


Name

EXT_server_vendor_select

Name Strings

GLX_EXT_server_vendor_select

Contact

    Kyle Brenneman, NVIDIA, kbrenneman at nvidia.com

Contributors

    Kyle Brenneman

Status

XXX - Not complete yet!!!

Version

Last Modified Date: April 11, 2019
Revision: 1

Number

OpenGL Extension #???

Dependencies

GLX version 1.2 is required.

This specification is written against the wording of the GLX 1.3
Protocol Encoding Specification.

Overview

In multi-GPU systems, a client may decide at runtime which device
and driver to use for GLX, for example to choose between a
high-performance and low-power device.

This extension defines a set of requests that allow a client to
specify which server-side driver should handle GLX requests from the
sending client for a particular screen.

IP Status

No known IP claims.

New Procedures and Functions

None

New Tokens

None

Additions to the GLX Specification

None. These requests are intended to be used by a client-side GLX
implementation, not by an application. Therefore, this extension
does not define any new functions or changes to the GLX
specification.

GLX Protocol

Get a List of Server-Side Drivers

Name: glXQueryServerVendorIDsEXT

Description:
This request fetches a list of available server-side
drivers, and the current vendor ID selected for each screen.
Each driver is identified by an Atom, with a string chosen
by the driver.

The reply contains a list of the currently selected vendors
first, with one Atom for each screen. This will be the
vendor selected with the glXSelectScreenServerVendorIDEXT
request, or the default vendor if the client has not sent a
glXSelectScreenServerVendorIDEXT request for a screen.

If a screen is using the default vendor, and the vendor does
not have a vendor ID, then the corresponding Atom in the
reply will be None.

After the currently selected vendors, the reply will contain
a list of all available vendor ID's.

Note that the list of available vendors is global, not
per-screen. The client-side driver is responsible for
determining which screens it can support.

Encoding:
1   CARD8   opcode (X assigned)
1   17  GLX opcode (glXVendorPrivateWithReply)
2   3   requ

Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-13 Thread Kyle Brenneman via xorg-devel

On 2/13/19 2:32 PM, Andy Ritger wrote:

On Wed, Feb 13, 2019 at 12:15:02PM -0700, Kyle Brenneman wrote:

On 02/12/2019 01:58 AM, Michel Dänzer wrote:

On 2019-02-11 5:18 p.m., Andy Ritger wrote:

On Mon, Feb 11, 2019 at 12:09:26PM +0100, Michel Dänzer wrote:

On 2019-02-08 11:43 p.m., Kyle Brenneman wrote:

Also, is Mesa the only client-side vendor library that works with the
Xorg GLX module? I vaguely remember that there was at least one other
driver that did, but I don't remember the details anymore.

AFAIK, the amdgpu-pro OpenGL driver can work with the Xorg GLX module
(or its own forked version of it).

Maybe the amdgpu-pro OpenGL driver uses a fork of the Xorg GLX module
(or sets the "GlxVendorLibrary" X configuration option?), but it doesn't
look to me like the in-tree Xorg GLX module could report anything other
than "mesa" for GLX_VENDOR_NAMES_EXT, without custom user configuration.

GLX_VENDOR_NAMES_EXT, which client-side glvnd uses to pick the
libGLX_${vendor}.so to load, is implemented in the Xorg GLX module
with this:

xserver/glx/glxcmds.c:__glXDisp_QueryServerString():

  case GLX_VENDOR_NAMES_EXT:
  if (pGlxScreen->glvnd) {
  ptr = pGlxScreen->glvnd;
  break;
  }

pGlxScreen->glvnd appears to be assigned here, defaulting to "mesa",
though allowing an xorg.conf override via the "GlxVendorLibrary" option:

xserver/glx/glxdri2.c:__glXDRIscreenProbe():

  xf86ProcessOptions(pScrn->scrnIndex, pScrn->options, options);
  glvnd = xf86GetOptValString(options, GLXOPT_VENDOR_LIBRARY);
  if (glvnd)
  screen->base.glvnd = xnfstrdup(glvnd);
  free(options);

  if (!screen->base.glvnd)
  screen->base.glvnd = strdup("mesa");

And swrast unconditionally sets pGlxScreen->glvnd to "mesa":

xserver/glx/glxdriswrast.c:__glXDRIscreenProbe():

  screen->base.glvnd = strdup("mesa");

Is there more to this that I'm missing?

I don't think so, I suspect we were just assuming slightly different
definitions of "works". :)



That should get fixed, but since that applies to libglvnd's normal
vendor selection, I'd say it's orthogonal to GPU offloading. Off the top of
my head, the "GlxVendorLibrary" option ought to work regardless of which
__GLXprovider it finds. I think it would be possible to add a function to
let a driver override the GLX_VENDOR_NAMES_EXT string, too.

I think the point, though, is that thus far, libGLX_mesa.so is the only
glvnd client-side GLX implementation that will be loaded for use with
Xorg's GLX.  Thus, it doesn't seem to refute ajax's comment from earlier
in the thread:

I don't see that those are related. The GLX_VENDOR_NAMES_EXT string
tells libglvnd which vendor to use by default. GPU offloading, more or 
less by definition, means using something other than the default.



At the other extreme, the server could do nearly all the work of
generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
practical downside of each server-side GLX vendor needing to enumerate
the GPUs it can drive, in order to generate the hardware-specific
identifiers).

I don't think this downside is much of a burden? If you're registering
a provider other than Xorg's you're already doing it from the DDX
driver



-Kyle




Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-13 Thread Kyle Brenneman via xorg-devel

On 02/12/2019 01:58 AM, Michel Dänzer wrote:

On 2019-02-11 5:18 p.m., Andy Ritger wrote:

On Mon, Feb 11, 2019 at 12:09:26PM +0100, Michel Dänzer wrote:

On 2019-02-08 11:43 p.m., Kyle Brenneman wrote:

Also, is Mesa the only client-side vendor library that works with the
Xorg GLX module? I vaguely remember that there was at least one other
driver that did, but I don't remember the details anymore.

AFAIK, the amdgpu-pro OpenGL driver can work with the Xorg GLX module
(or its own forked version of it).

Maybe the amdgpu-pro OpenGL driver uses a fork of the Xorg GLX module
(or sets the "GlxVendorLibrary" X configuration option?), but it doesn't
look to me like the in-tree Xorg GLX module could report anything other
than "mesa" for GLX_VENDOR_NAMES_EXT, without custom user configuration.

GLX_VENDOR_NAMES_EXT, which client-side glvnd uses to pick the
libGLX_${vendor}.so to load, is implemented in the Xorg GLX module
with this:

   xserver/glx/glxcmds.c:__glXDisp_QueryServerString():

 case GLX_VENDOR_NAMES_EXT:
 if (pGlxScreen->glvnd) {
 ptr = pGlxScreen->glvnd;
 break;
 }

pGlxScreen->glvnd appears to be assigned here, defaulting to "mesa",
though allowing an xorg.conf override via the "GlxVendorLibrary" option:

   xserver/glx/glxdri2.c:__glXDRIscreenProbe():

 xf86ProcessOptions(pScrn->scrnIndex, pScrn->options, options);
 glvnd = xf86GetOptValString(options, GLXOPT_VENDOR_LIBRARY);
 if (glvnd)
 screen->base.glvnd = xnfstrdup(glvnd);
 free(options);

 if (!screen->base.glvnd)
 screen->base.glvnd = strdup("mesa");

And swrast unconditionally sets pGlxScreen->glvnd to "mesa":

   xserver/glx/glxdriswrast.c:__glXDRIscreenProbe():

 screen->base.glvnd = strdup("mesa");

Is there more to this that I'm missing?

I don't think so, I suspect we were just assuming slightly different
definitions of "works". :)


That should get fixed, but since that applies to libglvnd's normal
vendor selection, I'd say it's orthogonal to GPU offloading. Off the top 
of my head, the "GlxVendorLibrary" option ought to work regardless of 
which __GLXprovider it finds. I think it would be possible to add a 
function to let a driver override the GLX_VENDOR_NAMES_EXT string, too.


-Kyle


Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-13 Thread Kyle Brenneman via xorg-devel

On 02/11/2019 02:51 PM, Andy Ritger wrote:

On Fri, Feb 08, 2019 at 03:43:25PM -0700, Kyle Brenneman wrote:

On 2/8/19 2:33 PM, Andy Ritger wrote:

On Fri, Feb 08, 2019 at 03:01:33PM -0500, Adam Jackson wrote:

On Fri, 2019-02-08 at 10:19 -0800, Andy Ritger wrote:


(1) If configured for PRIME GPU offloading (environment variable or
  application profile), client-side libglvnd could load the possible
  libGLX_${vendor}.so libraries it finds, and call into each to
  find which vendor (and possibly which GPU) matches the specified
  string. Once a vendor is selected, the vendor library could optionally
  tell the X server which GLX vendor to use server-side for this
  client connection.

I'm not a huge fan of the "dlopen everything" approach, if it can be
avoided.

Yes, I agree.

I'm pretty sure libglvnd could avoid unnecessarily loading vendor libraries
without adding nearly so much complexity.

If libglvnd just has a list of additional vendor library names to try, then
you could just have a flag to tell libglvnd to check some server string for
that name before it loads the vendor. If a client-side vendor would need a
server-side counterpart to work, then libglvnd can check for that. The
server only needs to keep a list of names to send back, which would be a
trivial (and backward-compatible) addition to the GLXVND interface.

Also, even without that, I don't think the extra dlopen calls would be a
problem in practice. It would only ever happen in applications that are
configured for offloading, which are (more-or-less by definition)
heavy-weight programs, so an extra millisecond or so of startup time is
probably fine.

But why incur that loading if we don't need to?

As I noted, we can still avoid loading extra vendor libraries even with an
(almost) strictly client-based design. You don't need to do any sort of
server-based device enumeration; all you need is something in the server
to add a string to a list that the client can query.


But, there's no reason that query can't be optional, and there's no 
reason it has to be coupled with anything else.





I think I'd rather have a new enum for GLXQueryServerString
that elaborates on GLX_VENDOR_NAMES_EXT (perhaps GLX_VENDOR_MAP_EXT),
with the returned string a space-delimited list of :.
libGL could accept either a profile or a vendor name in the environment
variable, and the profile can be either semantic like
performance/battery, or a hardware selector, or whatever else.

This would probably be a layered extension, call it GLX_EXT_libglvnd2,
which you'd check for in the (already per-screen) server extension
string before trying to actually use.

That all sounds reasonable to me.


At the other extreme, the server could do nearly all the work of
generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
practical downside of each server-side GLX vendor needing to enumerate
the GPUs it can drive, in order to generate the hardware-specific
identifiers).

I don't think this downside is much of a burden? If you're registering
a provider other than Xorg's you're already doing it from the DDX
driver (I think? Are y'all doing that from your libglx instead?), and
when that initializes it already knows which device it's driving.

Right.  It will be easy enough for the NVIDIA X driver + NVIDIA server-side GLX.

Kyle and I were chatting about this, and we weren't sure whether people
would object to doing that for the Xorg GLX provider: to create the
hardware names, Xorg's GLX would need to enumerate all the DRM devices
and list them all as possible : pairs for the Xorg
GLX-driven screens.  But, now that I look at it more closely, it looks
like drmGetDevices2() would work well for that.

So, if you're not concerned with that burden, I'm not.  I'll try coding
up the Xorg GLX part of things and see how it falls into place.

That actually is one of my big concerns: I'd like to come up with something
that can give something equivalent to Mesa's existing DRI_PRIME setting, and
requiring that logic to be in the server seems like a very poor match. You'd
need to take all of the device selection and enumeration stuff from Mesa and
transplant it into the Xorg GLX module, and then you'd need to define some
sort of protocol to get that data back into Mesa where you actually need it.
Or else you need to duplicate it between the client and server, which seems
like the worst of both worlds.

Is this actually a lot of code?  I'll try to put together a prototype so
we can see how much it is, but if it is just calling drmGetDevices2() and
then building PCI BusID-based names, that doesn't seem unreasonable to me.
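
For reference, the enumeration itself would be roughly this, assuming
libdrm's drmGetDevices2() (a sketch only; the "pci-..." name format here
mirrors DRI_PRIME-style identifiers rather than being taken from Mesa):

    /* Sketch: enumerate DRM devices and print PCI-bus-ID style names. */
    #include <stdio.h>
    #include <xf86drm.h>

    static void
    ListPciDeviceNames(void)
    {
        drmDevicePtr devices[16];
        int count = drmGetDevices2(0, devices, 16);
        if (count < 0)
            return;

        for (int i = 0; i < count; i++) {
            if (devices[i]->bustype == DRM_BUS_PCI) {
                drmPciBusInfoPtr pci = devices[i]->businfo.pci;
                printf("pci-%04x_%02x_%02x_%u\n",
                       pci->domain, pci->bus, pci->dev, (unsigned) pci->func);
            }
        }
        drmFreeDevices(devices, count);
    }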
The fact that it's required *at all* tells you that a server-based 
design doesn't match the reality of existing drivers. I've also seen 
ideas for GLX implementations based on EGL or Vulkan, which probably 
wouldn't be able to work with server-side device enumeration.


And like I pointed out, adding that requirement doesn't give you

Re: RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-13 Thread Kyle Brenneman via xorg-devel

On 02/08/2019 11:19 AM, Andy Ritger wrote:

(I'll omit EGL and Vulkan for the moment, for the sake of focus, and those
APIs have programmatic ways to enumerate and select GPUs.  Though, some
of what we decide here for GLX we may want to leverage for other APIs.)


Today, GLX implementations loaded into the X server register themselves
on a per-screen basis, GLXVND in the server dispatches GLX requests to
the registered vendor per screen, and libglvnd determines the client-side
vendor library to use by querying the per-screen GLX_VENDOR_NAMES_EXT
string from the X server (e.g., "mesa" or "nvidia").

The GLX_VENDOR_NAMES_EXT string can be overridden within libglvnd
through the __GLX_VENDOR_LIBRARY_NAME environment variable, though I
don't believe that is used much currently.
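
For reference, that per-screen string is read with glXQueryServerString();
a minimal client-side query looks like this (GLX_VENDOR_NAMES_EXT is
defined by the GLX_EXT_libglvnd extension):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    #ifndef GLX_VENDOR_NAMES_EXT
    #define GLX_VENDOR_NAMES_EXT 0x20F6
    #endif

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (dpy == NULL)
            return 1;

        /* The string libglvnd uses to pick libGLX_${vendor}.so for this screen. */
        const char *names =
            glXQueryServerString(dpy, DefaultScreen(dpy), GLX_VENDOR_NAMES_EXT);
        printf("GLX_VENDOR_NAMES_EXT: %s\n", names ? names : "(unset)");

        XCloseDisplay(dpy);
        return 0;
    }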

To enable GLX to be used in a multi-vendor PRIME GPU offload environment,
it seems there are several desirable user-visible behaviors:

* By default, users should get the same behavior we have today (i.e.,
   the GLX implementation used within the client and the server, for an X
   screen, is dictated by the X driver of the X screen).

* The user should be able to request a different GLX vendor for use on a
   per-process basis through either an environment variable (potentially
   reusing __GLX_VENDOR_LIBRARY_NAME) or possibly a future application
   profile mechanism in libglvnd.

* To make configuration optionally more "portable", the selection override
   mechanism should be able to refer to more generic names like
   "performance" or "battery", and those generic names should be mapped
   to specific GPUs/vendors on a per-system basis.

* To make configuration optionally more explicit, the selection override
   mechanism should be able to distinguish between individual GPUs by
   using hardware specific identifiers such as PCI BusID-based names like
   what DRI_PRIME currently honors (e.g., "pci-_03_00_0").

Do those behaviors seem reasonable?

If so, it seems like there are two general directions we could take to
implement that infrastructure in client-side libglvnd and GLXVND within
the X server, if the user or application profile requests a particular
vendor, either by vendor name (e.g., "mesa"/"nvidia"), functional
name (e.g., "battery"/"performance"), or hardware-based name (e.g.,
"pci-_03_00_0"/pci-_01_00_0"):

(1) If configured for PRIME GPU offloading (environment variable or
 application profile), client-side libglvnd could load the possible
 libGLX_${vendor}.so libraries it finds, and call into each to
 find which vendor (and possibly which GPU) matches the specified
 string. Once a vendor is selected, the vendor library could optionally
 tell the X server which GLX vendor to use server-side for this
 client connection.

(2) The GLX implementations within the X server could, when registering
 with GLXVND, tell GLXVND which screens they can support for PRIME
 GPU offloading.  That list could be queried by client-side libglvnd,
 and then used to interpret __GLX_VENDOR_LIBRARY_NAME and pick the
 corresponding vendor library to load.  Client-side would tell the X
 server which GLX vendor to use server-side for this client connection.

In either direction, if the user-requested string is a hardware-based
name ("pci-_03_00_0"), the GLX vendor library presumably needs to be
told that GPU, so that the vendor implementation can use the right GPU
(in the case that the vendor supports multiple GPUs in the system).

But, both (1) and (2) are really just points on a continuum.  I suppose
the more general question is: how much of the implementation should go
in the server and how much should go in the client?

At one extreme, the client could do nearly all the work (with the
practical downside of potentially loading multiple vendor libraries in
order to interpret __GLX_VENDOR_LIBRARY_NAME).

At the other extreme, the server could do nearly all the work of
generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
practical downside of each server-side GLX vendor needing to enumerate
the GPUs it can drive, in order to generate the hardware-specific
identifiers).

I'm not sure where on that spectrum it makes the most sense to land,
and I'm curious what others think.

Thanks,
- Andy



For a more concrete example, this is what I've been working on for a 
client-based interface:

https://github.com/kbrenneman/libglvnd/tree/libglx-gpu-offloading

For this design, I've tried to keep the interface as simple as possible 
and to impose as few requirements or assumptions as possible. The basic 
idea behind it is that the only thing that a GLX application has to care 
about is calling GLX functions, and the only thing that libglvnd has to 
care about is forwarding those functions to the correct vendor library.


The general design is this:
* Libglvnd gets a list of alternate vendor libraries from an app profile 
(config file, environment variable, whatever)
* For each vendor in 

Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-08 Thread Kyle Brenneman

On 2/8/19 2:33 PM, Andy Ritger wrote:

On Fri, Feb 08, 2019 at 03:01:33PM -0500, Adam Jackson wrote:

On Fri, 2019-02-08 at 10:19 -0800, Andy Ritger wrote:


(1) If configured for PRIME GPU offloading (environment variable or
 application profile), client-side libglvnd could load the possible
 libGLX_${vendor}.so libraries it finds, and call into each to
 find which vendor (and possibly which GPU) matches the specified
 string. Once a vendor is selected, the vendor library could optionally
 tell the X server which GLX vendor to use server-side for this
 client connection.

I'm not a huge fan of the "dlopen everything" approach, if it can be
avoided.

Yes, I agree.

I'm pretty sure libglvnd could avoid unnecessarily loading vendor
libraries without adding nearly so much complexity.


If libglvnd just has a list of additional vendor library names to try, 
then you could just have a flag to tell libglvnd to check some server 
string for that name before it loads the vendor. If a client-side vendor 
would need a server-side counterpart to work, then libglvnd can check 
for that. The server only needs to keep a list of names to send back, 
which would be a trivial (and backward-compatible) addition to the 
GLXVND interface.
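
A rough sketch of what that check could look like on the libglvnd side
(the server-provided list and how it is fetched are hypothetical, not an
existing libglvnd or GLXVND interface):

    /* Sketch: only dlopen a vendor library if the server reports a
     * matching server-side counterpart. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>

    static void *
    LoadVendorIfServerSupports(const char *vendorName, const char *serverList)
    {
        /* serverList: hypothetical space-separated list of vendor names that
         * registered a server-side driver.  Real code would want exact token
         * matching; strstr() is enough for a sketch. */
        if (serverList == NULL || strstr(serverList, vendorName) == NULL)
            return NULL;   /* no server-side counterpart, skip the dlopen */

        char path[256];
        snprintf(path, sizeof(path), "libGLX_%s.so.0", vendorName);
        return dlopen(path, RTLD_LAZY | RTLD_LOCAL);
    }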


Also, even without that, I don't think the extra dlopen calls would be a 
problem in practice. It would only ever happen in applications that are 
configured for offloading, which are (more-or-less by definition) 
heavy-weight programs, so an extra millisecond or so of startup time is 
probably fine.






I think I'd rather have a new enum for GLXQueryServerString
that elaborates on GLX_VENDOR_NAMES_EXT (perhaps GLX_VENDOR_MAP_EXT),
with the returned string a space-delimited list of :.
libGL could accept either a profile or a vendor name in the environment
variable, and the profile can be either semantic like
performance/battery, or a hardware selector, or whatever else.

This would probably be a layered extension, call it GLX_EXT_libglvnd2,
which you'd check for in the (already per-screen) server extension
string before trying to actually use.

That all sounds reasonable to me.


At the other extreme, the server could do nearly all the work of
generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
practical downside of each server-side GLX vendor needing to enumerate
the GPUs it can drive, in order to generate the hardware-specific
identifiers).

I don't think this downside is much of a burden? If you're registering
a provider other than Xorg's you're already doing it from the DDX
driver (I think? Are y'all doing that from your libglx instead?), and
when that initializes it already knows which device it's driving.

Right.  It will be easy enough for the NVIDIA X driver + NVIDIA server-side GLX.

Kyle and I were chatting about this, and we weren't sure whether people
would object to doing that for the Xorg GLX provider: to create the
hardware names, Xorg's GLX would need to enumerate all the DRM devices
and list them all as possible : pairs for the Xorg
GLX-driven screens.  But, now that I look at it more closely, it looks
like drmGetDevices2() would work well for that.

So, if you're not concerned with that burden, I'm not.  I'll try coding
up the Xorg GLX part of things and see how it falls into place.

That actually is one of my big concerns: I'd like to come up with
something that can give something equivalent to Mesa's existing 
DRI_PRIME setting, and requiring that logic to be in the server seems 
like a very poor match. You'd need to take all of the device selection 
and enumeration stuff from Mesa and transplant it into the Xorg GLX 
module, and then you'd need to define some sort of protocol to get that 
data back into Mesa where you actually need it. Or else you need to 
duplicate it between the client and server, which seems like the worst 
of both worlds.


By comparison, if libglvnd just hands the problem off to the vendor 
libraries, then you could do either. A vendor library could do its 
device enumeration in the client like Mesa does, or it could send a 
request to query something from the server, using whatever protocol you 
want -- whatever makes the most sense for that particular driver.


More generally, I worry that defining a (vendor+device+descriptor) list 
as an interface between libglvnd and the server means baking in a lot of 
unnecessary assumptions and requirements for drivers that we could 
otherwise avoid without losing any functionality.


Also, is Mesa the only client-side vendor library that works with the 
Xorg GLX module? I vaguely remember that there was at least one other 
driver that did, but I don't remember the details anymore.





Two follow-up questions:

(1) Even when direct-rendering, NVIDIA's OpenGL/GLX implementation sends
 GLX protocol (MakeCurrent, etc).  So, we'd like something client-side
 to be able to request that server-side GLXVND route GLX protocol for the
 calling client connection 

[PATCH xserver] GLX: Fix a use after free error with the GLVND vendor handle.

2018-04-06 Thread Kyle Brenneman
The GLVND layer will destroy all of the vendor handles at the end of each
server generation, but the GLX module then tries to re-use the same (now-freed)
handle in xorgGlxServerInit at the start of the next generation.

In xorgGlxCloseExtension, explicitly destroy the vendor handle and set it to
NULL so that the next call to xorgGlxServerInit will recreate it.
---
 glx/glxext.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/glx/glxext.c b/glx/glxext.c
index f1355ce..46ff192 100644
--- a/glx/glxext.c
+++ b/glx/glxext.c
@@ -56,6 +56,7 @@ RESTYPE __glXContextRes;
 RESTYPE __glXDrawableRes;
 
 static DevPrivateKeyRec glxClientPrivateKeyRec;
+static GlxServerVendor *glvnd_vendor = NULL;
 
 #define glxClientPrivateKey (&glxClientPrivateKeyRec)
 
@@ -317,6 +318,10 @@ GetGLXDrawableBytes(void *value, XID id, ResourceSizePtr size)
 static void
 xorgGlxCloseExtension(const ExtensionEntry *extEntry)
 {
+if (glvnd_vendor != NULL) {
+glxServer.destroyVendor(glvnd_vendor);
+glvnd_vendor = NULL;
+}
 lastGLContext = NULL;
 }
 
@@ -497,11 +502,9 @@ xorgGlxServerPreInit(const ExtensionEntry *extEntry)
 return glxGeneration == serverGeneration;
 }
 
-static GlxServerVendor *
+static void
 xorgGlxInitGLVNDVendor(void)
 {
-static GlxServerVendor *glvnd_vendor = NULL;
-
 if (glvnd_vendor == NULL) {
 GlxServerImports *imports = NULL;
 imports = glxServer.allocateServerImports();
@@ -515,13 +518,11 @@ xorgGlxInitGLVNDVendor(void)
 glxServer.freeServerImports(imports);
 }
 }
-return glvnd_vendor;
 }
 
 static void
 xorgGlxServerInit(CallbackListPtr *pcbl, void *param, void *ext)
 {
-GlxServerVendor *glvnd_vendor;
 const ExtensionEntry *extEntry = ext;
 int i;
 
@@ -529,7 +530,7 @@ xorgGlxServerInit(CallbackListPtr *pcbl, void *param, void *ext)
 return;
 }
 
-glvnd_vendor = xorgGlxInitGLVNDVendor();
+xorgGlxInitGLVNDVendor();
 if (!glvnd_vendor) {
 return;
 }
-- 
2.7.4


[PATCH RFC xserver] Don't delete GLX's extensionInitCallback list during a reset

2018-03-02 Thread Kyle Brenneman
Using a CallbackListPtr to handle InitExtensions for GLX vendor libraries works
on the first server generation, but at the end of the generation, the server
will clear the callback list. On the second generation and beyond, every screen
ends up with NULL for the vendor handle assigned to it.

This patch avoids that problem by setting the callback list to point to a
static CallbackListRec. Since it's not allocated through CreateCallbackList, it
doesn't get added to the listsToDelete array.

Using a static CallbackListRec like this feels kind of hacky to me, but I
haven't come up with anything better.

The best alternative I've thought of would be to modify CreateCallbackList so
that it only optionally adds the list to listsToDelete, and then call that
to initialize the callback list. But, we'd need to make sure that the callback
list gets initialized before anything else has a chance to call AddExtension,
which seems likely to be error-prone.

I could also add functions to GlxServerExports to add and remove callbacks
instead of exposing a CallbackListPtr directly. I'm not sure if that would be
better or worse.

If anyone has a better idea or would otherwise prefer setting this up
differently, then let me know and I'll be happy to send out an updated patch.

-Kyle




[PATCH xserver] Don't delete GLX's extensionInitCallback list during a reset.

2018-03-02 Thread Kyle Brenneman
When a callback list is initialized using CreateCallbackList via AddCallback,
the list gets added to the listsToCleanup array, and as a result the list gets
deleted at the end of the server generation.

But, vendor libraries add themselves to that callback list only once, not once
per generation, so if you delete the list, then no vendor will register itself
on the next generation, and GLX breaks.

Instead, use a static CallbackListRec for the extensionInitCallback list. That
way, it doesn't get added to listsToCleanup, and doesn't get deleted during a
reset.
---
 glx/vndext.c | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/glx/vndext.c b/glx/vndext.c
index c8d7532..cef306a 100644
--- a/glx/vndext.c
+++ b/glx/vndext.c
@@ -40,7 +40,8 @@
 #include "vndservervendor.h"
 
 int GlxErrorBase = 0;
-static CallbackListPtr vndInitCallbackList;
+static CallbackListRec vndInitCallbackList;
+static CallbackListPtr vndInitCallbackListPtr = &vndInitCallbackList;
 static DevPrivateKeyRec glvXGLVScreenPrivKey;
 static DevPrivateKeyRec glvXGLVClientPrivKey;
 
@@ -187,6 +188,14 @@ GLXReset(ExtensionEntry *extEntry)
 GlxVendorExtensionReset(extEntry);
 GlxDispatchReset();
 GlxMappingReset();
+
+if ((dispatchException & DE_TERMINATE) == DE_TERMINATE) {
+while (vndInitCallbackList.list != NULL) {
+CallbackPtr next = vndInitCallbackList.list->next;
+free(vndInitCallbackList.list);
+vndInitCallbackList.list = next;
+}
+}
 }
 
 void
@@ -220,7 +229,7 @@ GlxExtensionInit(void)
 }
 
 GlxErrorBase = extEntry->errorBase;
-CallCallbacks(&vndInitCallbackList, extEntry);
+CallCallbacks(&vndInitCallbackListPtr, extEntry);
 }
 
 static int
@@ -280,7 +289,7 @@ _X_EXPORT const GlxServerExports glxServer = {
 .majorVersion = 0,
 .minorVersion = 0,
 
-.extensionInitCallback = &vndInitCallbackList,
+.extensionInitCallback = &vndInitCallbackListPtr,
 
 .allocateServerImports = GlxAllocateServerImports,
 .freeServerImports = GlxFreeServerImports,
-- 
2.7.4


Re: [PATCH xserver 2/4] glx: Use vnd layer for dispatch (v3)

2018-02-05 Thread Kyle Brenneman

On 02/02/2018 12:15 PM, Adam Jackson wrote:

The big change here is MakeCurrent and context tag tracking. We now
delegate context tags entirely to the vnd layer, and simply store a
pointer to the context state as the tag data. If a context is deleted
while it's current, we allocate a fake ID for the context and move the
context state there, so the tag data still points to a real context. As
a result we can stop trying so hard to detach the client from contexts
at disconnect time and just let resource destruction handle it.

Since vnd handles all the MakeCurrent protocol now, our request handlers
for it can just be return BadImplementation. We also remove a bunch of
LEGAL_NEW_RESOURCE, because now by the time we're called vnd has already
allocated its tracking resource on that XID.

v2: Update to match v2 of the vnd import, and remove more redundant work
like request length checks.

v3: Add/remove the XID map from the vendor private thunk, not the
backend. (Kyle Brenneman)

Signed-off-by: Adam Jackson <a...@redhat.com>
---
  configure.ac   |   2 +-
  glx/createcontext.c|   2 -
  glx/glxcmds.c  | 212 +
  glx/glxcmdsswap.c  |  98 +---
  glx/glxext.c   | 348 +
  glx/glxext.h   |   4 +
  glx/glxscreens.h   |   1 +
  glx/glxserver.h|   5 -
  glx/xfont.c|   2 -
  hw/kdrive/ephyr/ephyr.c|   2 +-
  hw/kdrive/ephyr/meson.build|   1 +
  hw/kdrive/src/kdrive.c |   3 +
  hw/vfb/InitOutput.c|   2 +
  hw/vfb/meson.build |   3 +-
  hw/xfree86/Makefile.am |   5 +
  hw/xfree86/common/xf86Init.c   |   2 +-
  hw/xfree86/dixmods/glxmodule.c |   1 +
  hw/xfree86/meson.build |   1 +
  hw/xquartz/darwin.c|   4 +-
  hw/xwayland/Makefile.am|   1 +
  hw/xwayland/meson.build|   1 +
  hw/xwayland/xwayland.c |   2 +
  include/glx_extinit.h  |   5 +-
  23 files changed, 328 insertions(+), 379 deletions(-)



In __glXDisp_DestroyContext, doesn't it need to record the fake XID that 
it generates? If I'm reading it right, if the client deletes a current 
context and later unbinds it, then xorgGlxMakeCurrent will call 
FreeResourceByType with the original (already freed) XID, not with the 
fake one.


In xorgGlxThunkRequest, it needs to call glxServer.removeXIDMap to 
remove the XID for a GLXDestroyGLXPbufferSGIX request.


The handling for created XID's in xorgGlxThunkRequest looks correct. As 
a minor note, you could set the "resource" variable in the switch 
statement, rather than in a separate if/else block.


Also, I think the "if (!vendor)" block after the switch is dead code. 
Every branch in the switch assigns something to vendor and returns an 
error if it gets NULL.


-Kyle


Re: [PATCH xserver 4/7] glx: Use vnd layer for dispatch (v2)

2018-02-01 Thread Kyle Brenneman

On 02/01/2018 02:31 PM, Adam Jackson wrote:

On Wed, 2018-01-10 at 13:57 -0700, Kyle Brenneman wrote:


xorgGlxThunkRequest should be calling addXIDMap and removeXIDMap, not
the internal handlers.

I'm not sure that can be right. If it did, and adding the map failed,
there's no way to get the backend to clean up; at least, not the way
GlxAddXIDMap is currently written. Perhaps that should be changed to
call FreeResource on the XID if AddResource fails? (Like the one place
in the whole server where that'd be correct...)

It works if the stub calls addXIDMap first, before calling into the
vendor. Then, if addXIDMap fails, the stub can return BadAlloc
before the vendor has a chance to do anything that would require cleanup.


If the vendor returns an error code, then the stub would call 
removeXIDMap to clean up before returning.


That's also why the LEGAL_NEW_RESOURCE call has to be moved to the 
dispatch stub: The addXIDMap call happens before calling into the 
vendor, so by the time the vendor gets called, there's a resource 
defined for that XID and LEGAL_NEW_RESOURCE would fail.
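
Putting that ordering together, the stub would look roughly like this (a
sketch; forwardToVendor is just a placeholder for calling the vendor's
request handler, not a real function):

    /* Sketch of the ordering described above for a dispatch stub that
     * creates a resource.  glxServer.addXIDMap/removeXIDMap are the
     * GLXVND interfaces under discussion. */
    static int
    StubCreateSomething(ClientPtr client, GlxServerVendor *vendor, XID xid)
    {
        LEGAL_NEW_RESOURCE(xid, client);       /* checked before any XID map exists */

        if (!glxServer.addXIDMap(xid, vendor))
            return BadAlloc;                   /* vendor never ran, nothing to clean up */

        int err = forwardToVendor(vendor, client);   /* placeholder */
        if (err != Success)
            glxServer.removeXIDMap(xid);       /* undo the mapping on error */

        return err;
    }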





The default branch there may need some additional
attention if it would handle any requests that create or destroy resources.

It would. The only other vendorpriv extension I know of that does that
is the underspecified and probably unimplementable
GLX_SGIX_video_source though, so it's not really a big deal.

- ajax



Re: [PATCH xserver 4/7] glx: Use vnd layer for dispatch (v2)

2018-01-10 Thread Kyle Brenneman


On 01/10/2018 01:57 PM, Kyle Brenneman wrote:

On 01/10/2018 11:05 AM, Adam Jackson wrote:

The big change here is MakeCurrent and context tag tracking. We now
delegate context tags entirely to the vnd layer, and simply store a
pointer to the context state as the tag data. If a context is deleted
while it's current, we allocate a fake ID for the context and move the
context state there, so the tag data still points to a real context. As
a result we can stop trying so hard to detach the client from contexts
at disconnect time and just let resource destruction handle it.

Since vnd handles all the MakeCurrent protocol now, our request handlers
for it can just be return BadImplementation.

We also remove a bunch of LEGAL_NEW_RESOURCE, because now by the time
we're called vnd has already allocated its tracking resource on that
XID. Note that we only do this for core GLX requests, for vendor private
requests we still need to call LEGAL_NEW_RESOURCE and in addition need
to call up to addXIDMap and friends.

v2: Update to match v2 of the vnd import, and remove more redundant work
like request length checks.

Signed-off-by: Adam Jackson <a...@redhat.com>
---
  configure.ac   |   2 +-
  glx/createcontext.c|   2 -
  glx/glxcmds.c  | 275 --
  glx/glxcmdsswap.c  |  98 +---
  glx/glxext.c   | 329 -
  glx/glxext.h   |   4 +
  glx/glxscreens.h   |   1 +
  glx/glxserver.h|   5 -
  glx/xfont.c|   2 -
  hw/kdrive/ephyr/ephyr.c|   2 +-
  hw/kdrive/ephyr/meson.build|   1 +
  hw/kdrive/src/kdrive.c |   3 +
  hw/vfb/InitOutput.c|   2 +
  hw/vfb/meson.build |   3 +-
  hw/xfree86/Makefile.am |   5 +
  hw/xfree86/common/xf86Init.c   |   2 +-
  hw/xfree86/dixmods/glxmodule.c |   1 +
  hw/xfree86/meson.build |   1 +
  hw/xquartz/darwin.c|   4 +-
  hw/xwayland/Makefile.am|   1 +
  hw/xwayland/meson.build|   1 +
  hw/xwayland/xwayland.c |   2 +
  include/glx_extinit.h  |   5 +-
  23 files changed, 359 insertions(+), 392 deletions(-)




xorgGlxThunkRequest should be calling addXIDMap and removeXIDMap, not 
the internal handlers. The default branch there may need some 
additional attention if it would handle any requests that create or 
destroy resources.


And, in answer to the question next to xorgGlxThunkRequest, those 
could indeed be generated. The generate_dispatch_stubs.py script in 
the libglvnd repo only generates the core GLX requests, but it should
be able to handle those GLXVendorPrivate requests as well.


-Kyle



You've still got the xorgVendorInitClosure struct left over in glxext.c, 
too. I don't think anything uses it now.


Re: [PATCH xserver 4/7] glx: Use vnd layer for dispatch (v2)

2018-01-10 Thread Kyle Brenneman

On 01/10/2018 11:05 AM, Adam Jackson wrote:

The big change here is MakeCurrent and context tag tracking. We now
delegate context tags entirely to the vnd layer, and simply store a
pointer to the context state as the tag data. If a context is deleted
while it's current, we allocate a fake ID for the context and move the
context state there, so the tag data still points to a real context. As
a result we can stop trying so hard to detach the client from contexts
at disconnect time and just let resource destruction handle it.

Since vnd handles all the MakeCurrent protocol now, our request handlers
for it can just be return BadImplementation.

We also remove a bunch of LEGAL_NEW_RESOURCE, because now by the time
we're called vnd has already allocated its tracking resource on that
XID. Note that we only do this for core GLX requests, for vendor private
requests we still need to call LEGAL_NEW_RESOURCE and in addition need
to call up to addXIDMap and friends.

v2: Update to match v2 of the vnd import, and remove more redundant work
like request length checks.

Signed-off-by: Adam Jackson 
---
  configure.ac   |   2 +-
  glx/createcontext.c|   2 -
  glx/glxcmds.c  | 275 --
  glx/glxcmdsswap.c  |  98 +---
  glx/glxext.c   | 329 -
  glx/glxext.h   |   4 +
  glx/glxscreens.h   |   1 +
  glx/glxserver.h|   5 -
  glx/xfont.c|   2 -
  hw/kdrive/ephyr/ephyr.c|   2 +-
  hw/kdrive/ephyr/meson.build|   1 +
  hw/kdrive/src/kdrive.c |   3 +
  hw/vfb/InitOutput.c|   2 +
  hw/vfb/meson.build |   3 +-
  hw/xfree86/Makefile.am |   5 +
  hw/xfree86/common/xf86Init.c   |   2 +-
  hw/xfree86/dixmods/glxmodule.c |   1 +
  hw/xfree86/meson.build |   1 +
  hw/xquartz/darwin.c|   4 +-
  hw/xwayland/Makefile.am|   1 +
  hw/xwayland/meson.build|   1 +
  hw/xwayland/xwayland.c |   2 +
  include/glx_extinit.h  |   5 +-
  23 files changed, 359 insertions(+), 392 deletions(-)




xorgGlxThunkRequest should be calling addXIDMap and removeXIDMap, not 
the internal handlers. The default branch there may need some additional 
attention if it would handle any requests that create or destroy resources.


And, in answer to the question next to xorgGlxThunkRequest, those could 
indeed be generated. The generate_dispatch_stubs.py script in the 
libglvnd repo only generates the core GLX requests, but it should be
able to handle those GLXVendorPrivate requests as well.


-Kyle


[RFC PATCH] glxvnd: Various fixes

2017-10-20 Thread Kyle Brenneman
This is a follow-on to the GLXVND patches by Adam Jackson, sent out on
8/30/2017. This still mostly proof-of-concept, but it's enough to at least
build and run.

Fix various compiler warnings.

Fix GlxGetXIDMap so that it calls dixLookupResourceByClass instead of
dixLookupResourceByType.

Update the autotools build system to include the glxvnd files.

In glxvnd, don't skip initializing GLX if the vendor list is empty. The init
callbacks themselves might allocate vendor handles.

Reworked the GLVND vendor initialization functions to work with the full
xfree86 server.

xorgGlxCreateVendor now just registers the xorgGlxServerInit callback with
GLVND. All of the actual initialization is in xorgGlxServerInit.

xorgGlxServerInit now initializes all screens, not just one. It'll go through
the provider stack for each of them.

Added __glXProviderStack and GlxPushProvider back in.
---
 glx/Makefile.am|   6 +-
 glx/glxext.c   | 115 +-
 glx/vnd_dispatch_stubs.c   | 139 -
 glx/vndcmds.c  |   7 ++-
 glx/vndext.c   |  15 +++--
 glx/vndserver.h|   2 +
 glx/vndservermapping.c |   2 +-
 hw/kdrive/ephyr/ephyr.c|   2 +-
 hw/vfb/InitOutput.c|   2 +-
 hw/xfree86/dixmods/glxmodule.c |   1 +
 hw/xquartz/darwin.c|   2 +-
 hw/xwayland/xwayland.c |   2 +-
 hw/xwin/winscrinit.c   |   2 +-
 include/glx_extinit.h  |   2 +-
 14 files changed, 181 insertions(+), 118 deletions(-)

diff --git a/glx/Makefile.am b/glx/Makefile.am
index 699de63..6af56c1 100644
--- a/glx/Makefile.am
+++ b/glx/Makefile.am
@@ -80,6 +80,10 @@ libglx_la_SOURCES = \
 singlesize.h \
 swap_interval.c \
 unpack.h \
-xfont.c
+xfont.c \
+   vndcmds.c \
+   vndext.c \
+   vndservermapping.c \
+   vndservervendor.c
 
 libglx_la_LIBADD = $(DLOPEN_LIBS)
diff --git a/glx/glxext.c b/glx/glxext.c
index 19d83f4..f699ce7 100644
--- a/glx/glxext.c
+++ b/glx/glxext.c
@@ -297,6 +297,15 @@ glxClientCallback(CallbackListPtr *list, void *closure, void *data)
 
 //
 
+static __GLXprovider *__glXProviderStack = &__glXDRISWRastProvider;
+
+void
+GlxPushProvider(__GLXprovider * provider)
+{
+provider->next = __glXProviderStack;
+__glXProviderStack = provider;
+}
+
 static Bool
 checkScreenVisuals(void)
 {
@@ -493,62 +502,84 @@ xorgGlxServerPreInit(const ExtensionEntry *extEntry)
 return glxGeneration == serverGeneration;
 }
 
+static GlxServerVendor *glvnd_vendor = NULL;
+
+static GlxServerVendor *
+xorgGlxInitGLVNDVendor(void)
+{
+if (glvnd_vendor == NULL) {
+GlxServerImports *imports = NULL;
+imports = glxServer.allocateServerImports();
+
+if (imports != NULL) {
+imports->extensionCloseDown = xorgGlxCloseExtension;
+imports->handleRequest = xorgGlxHandleRequest;
+imports->getDispatchAddress = xorgGlxGetDispatchAddress;
+imports->makeCurrent = xorgGlxMakeCurrent;
+glvnd_vendor = glxServer.createVendor(imports);
+glxServer.freeServerImports(imports);
+}
+}
+return glvnd_vendor;
+}
+
 static void
 xorgGlxServerInit(CallbackListPtr *pcbl, void *param, void *ext)
 {
 const ExtensionEntry *extEntry = ext;
-xorgVendorInitClosure *closure = param;
-ScreenPtr screen = closure->screen;
-__GLXprovider *p = closure->provider;
-__GLXscreen *s = NULL;
+int i;
 
-if (!xorgGlxServerPreInit(extEntry))
-goto out;
+if (!xorgGlxServerPreInit(extEntry)) {
+return;
+}
+
+if (!xorgGlxInitGLVNDVendor()) {
+return;
+}
 
-if (!(s = p->screenProbe(screen)))
-goto out;
+for (i = 0; i < screenInfo.numScreens; i++) {
+ScreenPtr pScreen = screenInfo.screens[i];
+__GLXprovider *p;
 
-if (!glxServer.setScreenVendor(screen, closure->vendor))
-goto out;
+if (glxServer.getVendorForScreen(NULL, pScreen) != NULL) {
+// There's already a vendor registered.
+LogMessage(X_INFO, "GLX: Another vendor is already registered for screen %d\n", i);
+continue;
+}
+
+for (p = __glXProviderStack; p != NULL; p = p->next) {
+__GLXscreen *glxScreen = p->screenProbe(pScreen);
+if (glxScreen != NULL) {
+LogMessage(X_INFO,
+   "GLX: Initialized %s GL provider for screen %d\n",
+   p->name, i);
+break;
+}
 
-out:
-/* XXX chirp on error */
-free(param);
-return;
+}
+
+if (p) {
+glxServer.setScreenVendor(pScreen, glvnd_vendor);
+} else {
+LogMessage(X_INFO,
+   "GLX: no usable GL providers found for screen %d\n", 

Re: [RFC PATCH xserver 0/5] Server-side vendor neutral dispatch for GLX

2017-10-20 Thread Kyle Brenneman

On 08/30/2017 12:58 PM, Adam Jackson wrote:

The idea here is that the DDX creates a GLX provider during AddScreen,
and then GlxExtensionInit walks the list of created providers and calls
their setup functions to initialize GLX for a screen. If you have
heterogeneous GPUs in a Zaphod setup this would let you have GLX on
both. If you often change between drivers with different GLX stacks,
this lets the driver ask for the right thing instead of requiring
xorg.conf changes.

That's a lie, of course, because in this series the xfree86 DDX doesn't
implicitly register a provider for you. I'm not sure what the best way
to handle this is. I'd like not to have to touch every driver, and I'd
like it if the DRI2 provider was only probed if the screen called
DRI2ScreenInit, and I'd like it if that didn't rely knowing which order
CallCallbacks was going to walk its list; I may not get everything I
want. It might be worth just teaching the vnd layer about the swrast
provider and letting it claim any otherwise-unclaimed screens, even
though that feels like a layering violation.

Other things that aren't quite handled yet:

- autotools build system
- windows and osx builds
- dmx not ported
- libglx should be loadable for more than just xfree86

Still, feedback much appreciated.

- ajax

Based on some experiments I've been working on with this interface, I've 
got a few ideas and suggestions that I can offer.


Using swrast as a last-resort fallback for an unclaimed screen makes 
sense, because as far as I can tell, it can work no matter what hardware 
or driver (or lack thereof) is there. As such, special-casing it from 
the VND layer seems reasonable, and avoids relying on the callback order.


But, both swrast and the DRI2 provider are currently implemented 
internally under what would effectively be a single VND vendor handle. 
If each provider had its own vendor handle, then that would be a cleaner 
match to the VND interface, but that seems like it would be a more 
invasive change.


However, I think just using the same linked list of providers that it's 
got now would still work, so we could implement the VND interface with 
swrast and DRI2 still lumped together, and separate them out later, 
without having to break compatibility.


I've got some follow-on patches to get the xfree86 server working with 
this interface, plus a few other random fixes. I wouldn't call them 
ready for production yet, but I'll send them out so that anyone who's 
interested can try it out.



In the meantime, though, I've been looking into making GPU offloading 
work between drivers, and I was hoping to get some feedback as well.


The basic idea is that we'd extend the VND interface so that each screen 
has a primary vendor (from whichever driver is actually driving the 
desktop), but you can register any number of secondary vendors as well.


Each client would then have its own (screen -> vendor) map. By default, 
it would use the primary vendor for each screen, but the client could 
send some sort of extension request to tell the server to use an 
alternative. After that, the dispatching code is the same -- each 
request gets forwarded to the appropriate vendor based on a (client, 
screen) pair.


Creating contexts, allocating rendering surfaces, and so on would be an 
internal detail between the client and server vendor libraries, so the 
VND interface doesn't need to care about it. I'm also assuming that 
whatever communication you need between drivers to get the resulting 
image onto the screen will be separate from the VND interface.


I think X visuals would be the hardest part, because the secondary (off 
screen) vendor might have a different set of visuals that it can render 
to, but a GLX client still needs to be able to pass one of those visuals 
to XCreateWindow. That's easy enough to describe at the protocol level, 
but defining the resulting driver behavior gets tricky.


-Kyle


Re: [PATCH xserver 3/5] glx: Import glxvnd server module

2017-08-30 Thread Kyle Brenneman



On 08/30/2017 12:58 PM, Adam Jackson wrote:

From: Kyle Brenneman <kbrenne...@nvidia.com>

This is based on an out-of-tree module written by Kyle:

https://github.com/kbrenneman/libglvnd/tree/server-libglx

I (ajax) did a bunch of cosmetic fixes, ported it off xfree86 API,
added request length checks, and fixed a minor bug or two.

Signed-off-by: Adam Jackson <a...@redhat.com>
---
  glx/meson.build  |  21 ++
  glx/vnd_dispatch_stubs.c | 510 +++
  glx/vndcmds.c| 479 
  glx/vndext.c | 265 
  glx/vndserver.h  | 117 +++
  glx/vndservermapping.c   | 196 ++
  glx/vndservervendor.c|  91 +
  glx/vndservervendor.h|  68 +++
  include/Makefile.am  |   1 +
  include/glxvndabi.h  | 279 ++
  include/meson.build  |   1 +
  11 files changed, 2028 insertions(+)
  create mode 100644 glx/vnd_dispatch_stubs.c
  create mode 100644 glx/vndcmds.c
  create mode 100644 glx/vndext.c
  create mode 100644 glx/vndserver.h
  create mode 100644 glx/vndservermapping.c
  create mode 100644 glx/vndservervendor.c
  create mode 100644 glx/vndservervendor.h
  create mode 100644 include/glxvndabi.h


Note that there's a few parts of that interface that I'm still working 
on, so I wouldn't mind some feedback on what people think about it. Off 
the top of my head:


The allocateServerImports and freeServerImports functions are there to 
make it easier to extend the GlxServerImports struct later without 
breaking backward compatibility. Would it be better/easier to just give 
the expected size of the GlxServerImports struct and let the caller 
allocate it?
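
For comparison, here is a rough sketch of the two allocation strategies 
(GlxServerImports below is a stand-in struct, used only to illustrate 
the design question, not the real definition):

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        unsigned int majorVersion;
        unsigned int minorVersion;
        /* function pointers follow; new members would only be appended */
    } GlxServerImports;

    /* Option 1 (current): GLVND allocates, so the struct is always as
     * large as the running server expects, even if the vendor was built
     * against older headers. */
    GlxServerImports *allocateServerImports(void)
    {
        return calloc(1, sizeof(GlxServerImports));
    }

    void freeServerImports(GlxServerImports *imports)
    {
        free(imports);
    }

    /* Option 2 (alternative): the caller allocates and passes the size
     * it was built with, and the server only touches members that fit
     * within that size. */
    void initServerImports(GlxServerImports *imports, size_t vendorSize)
    {
        memset(imports, 0, vendorSize);
    }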


The getVendorForScreen function takes a ClientPtr parameter. I included 
that so that at some point in the future, it might be able to handle 
different (screen -> vendor) mappings for different clients. But, that 
means it doesn't match up with the setScreenVendor function. I'm not 
sure what the best solution is here.


Passing the vendor-private pointer in and out of 
GlxServerImports::makeCurrent as a pointer and pointer-to-pointer feels 
awkward to me, but I haven't come up with anything better yet. It also 
doesn't have any way to change the pointer before the next makeCurrent 
call, though I don't know if that's likely to be needed.


Also worth noting is that in the libglvnd tree, I've got a script to 
generate the dispatch stubs in vnd_dispatch_stubs.c.


-Kyle


Re: [RFC] Server side glvnd

2017-07-18 Thread Kyle Brenneman
I was actually just getting ready to send this out. This is what I've 
come up with for a server-side GLVND-like interface:

https://github.com/kbrenneman/libglvnd/tree/server-libglx

What I've got there is a proof-of-concept GLX server module that can 
dispatch to multiple vendor libraries based on screen. The interface is 
defined in include/glvnd/glxserverabi.h, and the implementation is in 
src/glxserver.


There's still more to do, but I've been able to get the NVIDIA driver 
and GLX module running on one screen, and the Xorg dummy video driver 
and GLX module running on a second, and I can create and use direct and 
indirect contexts on both screens from a single client.



Dispatching works similarly to the client libraries. Each GLX request 
has a dispatch stub which looks up a vendor based on something in the 
request (tag, screen number, or XID), and it then forwards the request 
to that vendor. The vast majority of dispatch stubs can be generated by 
a script.


As with the client libraries, I set it up so that each vendor can 
provide dispatch stubs to handle any extension requests, including 
VendorPrivate requests. In addition, since requests that create or 
destroy resources include the XID in the request, the dispatch stubs can 
handle updating the (XID -> vendor) mapping. That way, you don't have to 
modify a vendor's existing request handlers.
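
A generated stub for a create-style request might look roughly like this 
(the helper names are made up for illustration; only the protocol struct 
and the server's ClientPtr are real, and this assumes the server's dix 
headers plus <GL/glxproto.h>):

    static int dispatch_CreateContext(ClientPtr client)
    {
        xGLXCreateContextReq *stuff =
            (xGLXCreateContextReq *) client->requestBuffer;
        GlxServerVendor *vendor;
        int ret;

        /* Pick a vendor from something in the request -- the screen here. */
        vendor = LookupVendorForScreen(client, stuff->screen);
        if (vendor == NULL)
            return BadValue;

        /* Forward the request to that vendor's own handler. */
        ret = ForwardRequest(vendor, client);

        /* Remember which vendor owns the new XID, so that later requests
         * that only carry the XID can be dispatched without touching the
         * vendor's existing request handlers. */
        if (ret == Success)
            AddXIDMapping(stuff->context, vendor);

        return ret;
    }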



MakeCurrent requests require a fair amount of special handling, so I 
defined a separate callback function for them. That way, if it's 
switching between different vendors, it can call each vendor's 
MakeCurrent callback.


Context tags are, as you note, pretty ugly. In order to avoid conflicts 
between multiple vendors, the GLVND layer has to create and assign tags. 
To make that easier to deal with, the MakeCurrent callback lets the 
vendor library provide a (void *) pointer for each tag that can be used 
for arbitrary private data. Xorg's GLX module doesn't especially depend 
on how the tag value is assigned, it just needs to be able to look up a 
context from a tag, so changing it to go through that GLVND pointer 
isn't too hard (unless I missed something in my testing, which is 
entirely possible).


Alternately, a vendor library could use the pointer that GLVND maintains 
to store whatever context tag value it would have used.


As a side note, the implementation that I've got uses array indexes for 
context tags instead of XID's.
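
Something along these lines, as a sketch (the names and layout are 
hypothetical, not necessarily what the proof-of-concept does):

    /* GLVND owns the tag values (array index + 1 here) and keeps the
     * vendor's opaque pointer next to each one. */
    #include <stdint.h>

    typedef struct GlxServerVendor GlxServerVendor;

    typedef struct {
        GlxServerVendor *vendor;  /* which vendor the tag belongs to */
        void *vendorData;         /* opaque data supplied at MakeCurrent */
        int inUse;
    } GlxContextTagEntry;

    static GlxContextTagEntry tagTable[256];  /* example fixed size */

    static uint32_t AllocContextTag(GlxServerVendor *vendor, void *data)
    {
        for (uint32_t i = 0; i < 256; i++) {
            if (!tagTable[i].inUse) {
                tagTable[i].vendor = vendor;
                tagTable[i].vendorData = data;
                tagTable[i].inUse = 1;
                return i + 1;  /* 0 is reserved for "no tag" */
            }
        }
        return 0;  /* full; a real implementation would grow the table */
    }

    static void *GetContextTagData(uint32_t tag)
    {
        if (tag == 0 || tag > 256 || !tagTable[tag - 1].inUse)
            return NULL;
        return tagTable[tag - 1].vendorData;
    }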


For SwapBuffers or any other request that can take an optional context 
tag, the dispatch stub dispatches by the context tag if the request has 
one, and the drawable if it doesn't. Dispatching based only on the 
drawable could cause problems if they map to different vendors, because 
then a vendor might try to look up (and dereference) the private data 
from another vendor.
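
In stub form, the rule is just this (hypothetical helpers again, with 
the request struct from <GL/glxproto.h>):

    static GlxServerVendor *
    VendorForSwapBuffers(ClientPtr client, const xGLXSwapBuffersReq *stuff)
    {
        if (stuff->contextTag != 0) {
            /* The tag identifies exactly which vendor issued it. */
            return LookupVendorForContextTag(client, stuff->contextTag);
        }
        /* No current context: fall back to whoever owns the drawable. */
        return LookupVendorForXID(client, stuff->drawable);
    }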



The part that's the most different from the client library is how I set 
up vendor selection. Instead of GLVND trying to query or otherwise 
figure out which vendor to use for each screen, the display driver 
simply assigns a vendor library to use:


- In the PreInit callback, the driver calls into GLVND to create an 
opaque vendor handle. At that point, the driver provides an 
initialization callback function.
- When the GLX extension is initialized, it calls the initialization 
callback for each vendor handle. In that callback, the vendor fills in a 
function table. This part is analogous to the __glx_Main and __egl_Main 
functions exported from the EGL and GLX client vendor libraries.
- Then, in the CreateScreenResources callback, the driver assigns a 
vendor handle to each screen.


Since a display driver can reasonably be expected to know what GLX 
implementation will work, that lets it piggyback on the server's 
existing driver selection capability.
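
From the driver's point of view, the flow would look roughly like this 
(the glvnd*() calls, handle, and imports types below are placeholders 
for illustration, not the actual interface in glxserverabi.h):

    static GlxVendorHandle myVendor;

    /* Called when the GLX extension initializes: fill in this vendor's
     * dispatch and MakeCurrent entry points, much like __glx_Main on the
     * client side. */
    static Bool MyVendorInit(GlxVendorHandle vendor, GlxVendorImports *imports)
    {
        imports->makeCurrent = MyMakeCurrent;
        imports->getDispatchProc = MyGetDispatchProc;
        return TRUE;
    }

    /* PreInit: create an opaque vendor handle and register the init
     * callback; no GLX work happens yet. */
    static Bool MyPreInit(ScrnInfoPtr scrn, int flags)
    {
        myVendor = glvndCreateVendor("mydriver", MyVendorInit);
        return myVendor != NULL;
    }

    /* CreateScreenResources: bind the vendor handle to this screen, so
     * GLVND knows where to send the screen's GLX requests. */
    static Bool MyCreateScreenResources(ScreenPtr screen)
    {
        glvndSetScreenVendor(screen, myVendor);
        return TRUE;
    }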



I haven't addressed PRIME yet, because I'm still trying to figure out 
how it works. But we could probably set it up to use a separate (screen 
-> vendor) mapping for each client. Maybe a new GLX extension could let 
the client send some additional data to the server at the beginning, 
which the server could then use to decide what mapping to set up.


As for Xinerama, I think it would largely be orthogonal to the GLVND 
layer. But, if you've got two screens that both use the same vendor 
library, then it shouldn't look any different from the vendor library's 
perspective than the non-GLVND case would. Xinerama between different 
vendor libraries would be a lot harder, though.


Anyway, I'd like to hear what people think about it. Comments and 
questions are all welcome.


-Kyle

On 07/18/2017 10:20 AM, Aaron Plattner wrote:

Adding Kyle to To -- he's been working on something similar.

On 07/18/2017 08:43 AM, Adam Jackson wrote:

I've been thinking about how to get multiple GL stacks to coexist
within the server, along the lines of libglvnd on the client side. This
is a bit of a brain dump; the intrepid can find some work in progress
along these lines here:


Re: Xorg glx module: GLVND, EGL, or ... ?

2016-12-27 Thread Kyle Brenneman
The driver order is configurable, and typically set when the drivers are 
installed.


Each driver adds a JSON file to a directory to specify the name of the 
library, similar to the way the Vulkan loader finds ICD's. libEGL sorts 
those by filename, which defines the order that it'll try each driver. 
Then, when the app calls eglGetPlatformDisplay, it uses whatever driver 
succeeds first.


You can override that order by setting the environment variable 
__EGL_VENDOR_LIBRARY_FILENAMES to a colon-separated list of JSON files. 
Note that libEGL only reads those files once, though, when it first 
loads the vendor libraries. Setting the environment variable right 
before calling eglGetPlatformDisplay may not work if you've called any 
other EGL functions beforehand.


-Kyle

On 12/27/2016 11:43 PM, Yu, Qiang wrote:

For EGL, when the app calls eglGetPlatformDisplay or
eglGetDisplay(EGL_DEFAULT_DISPLAY), then libglvnd will just try each
driver in the order they're listed until it finds one that works. You
could select between two drivers based on an environment variable like
DRI_PRIME just by having one driver or the other succeed.

[yuq] You mean that if both vendor drivers work for a call, the first one to
succeed will be chosen, and an environment variable in the first vendor's
driver can be used to make it fail on purpose when it isn't the one we want?
If it works like that, can we specify the order of the driver list? A vendor
can only change its own driver to add such an environment variable; otherwise
we would need to add that variable to Mesa.

Regards,
Qiang

On 12/27/2016 08:26 PM, Yu, Qiang wrote:

So it would be used like this, per application?
DRI_PRIME=1 __GLX_VENDOR_LIBRARY_NAME=xxx glxgears
DRI_PRIME=1 EGL_PLATFORM=xxx es2gears

Another problem: if two EGL vendors can both be used, how do I
select which one to use within one application? For example, in the X server,
two DDXes are loaded for two GPUs: the modesetting DDX uses Mesa EGL,
and the amdgpu DDX uses the amdgpu-pro EGL (it can use Mesa too).
The interface is the same (both are initialized from a GBM fd).
Which one is the default?

Would this work? In the amdgpu DDX code, temporarily set EGL_PLATFORM=amdgpu-pro
during init, and unset it when init is done.

Regards,
Qiang

From: Kyle Brenneman <kbrenne...@nvidia.com>
Sent: Wednesday, December 28, 2016 10:18:13 AM
To: Yu, Qiang; Adam Jackson; Emil Velikov; Michel Dänzer
Cc: ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

GLVND doesn't respond to DRI_PRIME (and probably shouldn't, since that's
very driver-specific), but it has an environment variable that you can
use to override which vendor library it selects.

That's entirely on the client side, so whatever driver you tell it to use
still needs to be able to talk to the server.

-Kyle

On 12/27/2016 07:06 PM, Yu, Qiang wrote:

Yes, Mesa can handle DRI_PRIME on its own. But my use case is:
1. PRIME GPU (iGPU) uses the Mesa libGL
2. Secondary GPU (dGPU) uses a closed-source libGL

If this can be done, we can do dynamic GPU offload on hybrid GPU platforms;
currently we have to switch between GPUs statically (by changing xorg.conf).

With DRI2, the secondary GPU has a GPUScreen on the X server side which could
be used to obtain vendor info (although that isn't implemented). With DRI3,
the client just does the offload when DRI_PRIME=1 is set, without informing
the X server.

The only method I can think of is a config file for GLVND which records the
secondary GPU's vendor to use when DRI_PRIME is set, something like:
 

What's your opinion?

Regards,
Qiang

From: Kyle Brenneman <kbrenne...@nvidia.com>
Sent: Wednesday, December 28, 2016 1:05:50 AM
To: Yu, Qiang; Adam Jackson; Emil Velikov; Michel Dänzer
Cc: ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

Is DRI_PRIME handled within Mesa?

If so, then no support from GLVND is needed. The GLVND libraries would
simply dispatch any function calls to Mesa, which in turn would handle
those calls the same way it would in a non-GLVND system.

-Kyle

On 12/23/2016 07:31 PM, Yu, Qiang wrote:

Hi guys,

Does GLVND support DRI_PRIME=1? If the secondary GPU uses a different
libGL than the primary GPU, how does GLVND get the vendor to use?

Regards,
Qiang

From: Adam Jackson <a...@redhat.com>
Sent: Saturday, December 17, 2016 6:02:18 AM
To: Emil Velikov; Michel Dänzer
Cc: Kyle Brenneman; Yu, Qiang; ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

On Thu, 2016-12-15 at 16:08 +, Emil Velikov wrote:


Example:
What would happen if one calls glXMakeCurrent, which internally goes down
to eglMakeCurrent? Are we going to clash, since (IIRC) one is not
allowed to do both on the same GL context?

No, for the same reason this already isn't a problem. If you
glXMakeCurrent an indirect context, the X server does not itself call
glXMakeCurrent. All it does is record the client's binding. Only when
we go to do actual indirect rendering (or mutate context state) does
libglx actually make that context "current".

Re: Xorg glx module: GLVND, EGL, or ... ?

2016-12-27 Thread Kyle Brenneman
The __GLX_VENDOR_LIBRARY_NAME variable is enough to force libGLX.so to 
use a specific driver instead of whatever name the X server sends back. 
Whether the DRI_PRIME variable would be needed depends on the driver 
that you give it.


For EGL, when the app calls eglGetPlatformDisplay or 
eglGetDisplay(EGL_DEFAULT_DISPLAY), then libglvnd will just try each 
driver in the order they're listed until it finds one that works. You 
could select between two drivers based on an environment variable like 
DRI_PRIME just by having one driver or the other succeed.


Setting EGL_PLATFORM only affects a call to eglGetDisplay with a 
non-NULL native display, where it has to guess which platform to use. In 
a case where you know you're using GBM, it's better to just use 
eglGetPlatformDisplay.
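
For example, a minimal sketch of the GBM case (assuming EGL 1.5 and the 
EGL_KHR_platform_gbm enum; the device path is just an example):

    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        /* Open a DRM render node and wrap it in a GBM device. */
        int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;
        struct gbm_device *gbm = gbm_create_device(fd);
        if (gbm == NULL)
            return 1;

        /* The platform is explicit, so libEGL doesn't have to guess what
         * kind of native display pointer this is. */
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);

        EGLint major, minor;
        if (dpy != EGL_NO_DISPLAY && eglInitialize(dpy, &major, &minor)) {
            /* ... use the display ... */
            eglTerminate(dpy);
        }

        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }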


-Kyle

On 12/27/2016 08:26 PM, Yu, Qiang wrote:

So it would be used like this, per application?
DRI_PRIME=1 __GLX_VENDOR_LIBRARY_NAME=xxx glxgears
DRI_PRIME=1 EGL_PLATFORM=xxx es2gears

Another problem: if two EGL vendors can both be used, how do I
select which one to use within one application? For example, in the X server,
two DDXes are loaded for two GPUs: the modesetting DDX uses Mesa EGL,
and the amdgpu DDX uses the amdgpu-pro EGL (it can use Mesa too).
The interface is the same (both are initialized from a GBM fd).
Which one is the default?

Would this work? In the amdgpu DDX code, temporarily set EGL_PLATFORM=amdgpu-pro
during init, and unset it when init is done.

Regards,
Qiang

From: Kyle Brenneman <kbrenne...@nvidia.com>
Sent: Wednesday, December 28, 2016 10:18:13 AM
To: Yu, Qiang; Adam Jackson; Emil Velikov; Michel Dänzer
Cc: ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

GLVND doesn't respond to DRI_PRIME (and probably shouldn't, since that's
very driver-specific), but it has an environment variable that you can
use to override which vendor library it selects.

That's entirely on the client side, so whatever driver you tell it to use
still needs to be able to talk to the server.

-Kyle

On 12/27/2016 07:06 PM, Yu, Qiang wrote:

Yes, Mesa can handle DRI_PRIME on its own. But my use case is:
1. PRIME GPU (iGPU) uses the Mesa libGL
2. Secondary GPU (dGPU) uses a closed-source libGL

If this can be done, we can do dynamic GPU offload on hybrid GPU platforms;
currently we have to switch between GPUs statically (by changing xorg.conf).

With DRI2, the secondary GPU has a GPUScreen on the X server side which could
be used to obtain vendor info (although that isn't implemented). With DRI3,
the client just does the offload when DRI_PRIME=1 is set, without informing
the X server.

The only method I can think of is a config file for GLVND which records the
secondary GPU's vendor to use when DRI_PRIME is set, something like:
 

What's your opinion?

Regards,
Qiang

From: Kyle Brenneman <kbrenne...@nvidia.com>
Sent: Wednesday, December 28, 2016 1:05:50 AM
To: Yu, Qiang; Adam Jackson; Emil Velikov; Michel Dänzer
Cc: ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

Is DRI_PRIME handled within Mesa?

If so, then no support from GLVND is needed. The GLVND libraries would
simply dispatch any function calls to Mesa, which in turn would handle
those calls the same way it would in a non-GLVND system.

-Kyle

On 12/23/2016 07:31 PM, Yu, Qiang wrote:

Hi guys,

Does GLVND support DRI_PRIME=1? If the secondary GPU uses a different
libGL than the primary GPU, how does GLVND get the vendor to use?

Regards,
Qiang

From: Adam Jackson <a...@redhat.com>
Sent: Saturday, December 17, 2016 6:02:18 AM
To: Emil Velikov; Michel Dänzer
Cc: Kyle Brenneman; Yu, Qiang; ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

On Thu, 2016-12-15 at 16:08 +, Emil Velikov wrote:


Example:
What would happen if one calls glXMakeCurrent, which internally goes down
to eglMakeCurrent? Are we going to clash, since (IIRC) one is not
allowed to do both on the same GL context?

No, for the same reason this already isn't a problem. If you
glXMakeCurrent an indirect context, the X server does not itself call
glXMakeCurrent. All it does is record the client's binding. Only when
we go to do actual indirect rendering (or mutate context state) does
libglx actually make that context "current". That context is a tuple of
the protocol state and a DRI driver context; it could just as easily be
an EGL context instead of DRI.

- ajax



Re: Xorg glx module: GLVND, EGL, or ... ?

2016-12-27 Thread Kyle Brenneman
GLVND doesn't respond to DRI_PRIME (and probably shouldn't, since that's 
very driver-specific), but it has an environment variable that you can 
use to override which vendor library it selects.


That's entirely on the client side, so whatever driver you tell it to use 
still needs to be able to talk to the server.


-Kyle

On 12/27/2016 07:06 PM, Yu, Qiang wrote:

Yes, Mesa can handle DRI_PRIME on its own. But my use case is:
1. PRIME GPU (iGPU) uses the Mesa libGL
2. Secondary GPU (dGPU) uses a closed-source libGL

If this can be done, we can do dynamic GPU offload on hybrid GPU platforms;
currently we have to switch between GPUs statically (by changing xorg.conf).

With DRI2, the secondary GPU has a GPUScreen on the X server side which could
be used to obtain vendor info (although that isn't implemented). With DRI3,
the client just does the offload when DRI_PRIME=1 is set, without informing
the X server.

The only method I can think of is a config file for GLVND which records the
secondary GPU's vendor to use when DRI_PRIME is set, something like:
 

What's your opinion?

Regards,
Qiang

From: Kyle Brenneman <kbrenne...@nvidia.com>
Sent: Wednesday, December 28, 2016 1:05:50 AM
To: Yu, Qiang; Adam Jackson; Emil Velikov; Michel Dänzer
Cc: ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

Is DRI_PRIME handled within Mesa?

If so, then no support from GLVND is needed. The GLVND libraries would
simply dispatch any function calls to Mesa, which in turn would handle
those calls the same way it would in a non-GLVND system.

-Kyle

On 12/23/2016 07:31 PM, Yu, Qiang wrote:

Hi guys,

Does GLVND support DRI_PRIME=1? If the secondary GPU uses a different
libGL than the primary GPU, how does GLVND get the vendor to use?

Regards,
Qiang

From: Adam Jackson <a...@redhat.com>
Sent: Saturday, December 17, 2016 6:02:18 AM
To: Emil Velikov; Michel Dänzer
Cc: Kyle Brenneman; Yu, Qiang; ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

On Thu, 2016-12-15 at 16:08 +, Emil Velikov wrote:


Example:
What would happen if one calls glXMakeCurrent, which internally goes down
to eglMakeCurrent? Are we going to clash, since (IIRC) one is not
allowed to do both on the same GL context?

No, for the same reason this already isn't a problem. If you
glXMakeCurrent an indirect context, the X server does not itself call
glXMakeCurrent. All it does is record the client's binding. Only when
we go to do actual indirect rendering (or mutate context state) does
libglx actually make that context "current". That context is a tuple of
the protocol state and a DRI driver context; it could just as easily be
an EGL context instead of DRI.

- ajax



Re: Xorg glx module: GLVND, EGL, or ... ?

2016-12-27 Thread Kyle Brenneman

Is DRI_PRIME handled within Mesa?

If so, then no support from GLVND is needed. The GLVND libraries would 
simply dispatch any function calls to Mesa, which in turn would handle 
those calls the same way it would in a non-GLVND system.


-Kyle

On 12/23/2016 07:31 PM, Yu, Qiang wrote:

Hi guys,

Does GLVND support DRI_PRIME=1? If the secondary GPU uses a different
libGL than the primary GPU, how does GLVND get the vendor to use?

Regards,
Qiang

From: Adam Jackson <a...@redhat.com>
Sent: Saturday, December 17, 2016 6:02:18 AM
To: Emil Velikov; Michel Dänzer
Cc: Kyle Brenneman; Yu, Qiang; ML xorg-devel
Subject: Re: Xorg glx module: GLVND, EGL, or ... ?

On Thu, 2016-12-15 at 16:08 +, Emil Velikov wrote:


Example:
What would happen if one calls glXMakeCurrent, which internally goes down
to eglMakeCurrent? Are we going to clash, since (IIRC) one is not
allowed to do both on the same GL context?

No, for the same reason this already isn't a problem. If you
glXMakeCurrent an indirect context, the X server does not itself call
glXMakeCurrent. All it does is record the client's binding. Only when
we go to do actual indirect rendering (or mutate context state) does
libglx actually make that context "current". That context is a tuple of
the protocol state and a DRI driver context; it could just as easily be
an EGL context instead of DRI.

- ajax



Re: Xorg glx module: GLVND, EGL, or ... ?

2016-12-15 Thread Kyle Brenneman

On 12/15/2016 10:53 AM, Hans de Goede wrote:

Hi,

On 15-12-16 17:08, Emil Velikov wrote:

On 15 December 2016 at 08:15, Michel Dänzer  wrote:


Hi Adam, Andy, Kyle,


even with GLVND in place and used by Mesa and other GL implementations,
one remaining issue preventing peaceful coexistence of Mesa based and
other GLX implementations is that other GLX implementations tend to ship
their own, mutually incompatible versions of the Xorg glx module. I'm
not sure about all the reasons for this, but an important one is that
the glx module in the xserver tree has been using the DRI driver
interface directly, which can only work with Mesa.

The "xfree86: Extend OutputClass config sections" series from Hans 
just landed.

With it one can correctly attribute/select the correct libglx.so,
which should tackle the issue ;-)


Not if you want to run some apps on one GPU and other apps on the other
GPU ...

Regards,

Hans


More specifically, to allow different drivers on different screens, 
we'll need to define some interface that would dispatch each GLX request 
to the appropriate driver. Basically, a server-side counterpart to 
libglvnd's libGLX.so.


A server interface should be a lot simpler than the client interface, 
though. Everything runs on one thread, we only have to care about 
mapping based on XID's and context tags, we don't have to worry about 
multiple servers with different sets of vendors, we can ignore basically 
everything that libGLdispatch.so is needed for in the client, and the 
opcodes nicely define a dispatch table for us.
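
To make that concrete, a rough sketch of what an opcode-indexed table 
could look like (the table size, stub names, and registration function 
below are made up for illustration):

    #include <stddef.h>
    #include <GL/glxproto.h>   /* X_GLXCreateContext, X_GLXSwapBuffers, ... */

    typedef struct _Client *ClientPtr;          /* from the server's dix */
    typedef int (*GlxDispatchProc)(ClientPtr client);

    extern int dispatch_CreateContext(ClientPtr client);  /* generated stubs */
    extern int dispatch_SwapBuffers(ClientPtr client);

    #define GLX_DISPATCH_TABLE_SIZE 64  /* covers the core minor opcodes */
    static GlxDispatchProc glxDispatchTable[GLX_DISPATCH_TABLE_SIZE];

    /* The extension's main handler just indexes by minor opcode. */
    static int GlxDispatchRequest(ClientPtr client, unsigned int minorOpcode)
    {
        if (minorOpcode < GLX_DISPATCH_TABLE_SIZE &&
            glxDispatchTable[minorOpcode] != NULL)
            return glxDispatchTable[minorOpcode](client);
        return 1;  /* BadRequest in the real server */
    }

    static void GlxRegisterCoreStubs(void)
    {
        glxDispatchTable[X_GLXCreateContext] = dispatch_CreateContext;
        glxDispatchTable[X_GLXSwapBuffers] = dispatch_SwapBuffers;
        /* ... one entry per generated stub; vendor extensions would hang
         * off the VendorPrivate opcodes instead ... */
    }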


Using EGL might work, although I don't know my way around the server 
well enough to comment on what that would take. The biggest problem that 
I see is that if you've got a single, common library that translates GLX 
requests to EGL calls, then how would a vendor library define a new GLX 
extension?


Also note that they may or may not be mutually exclusive. If you can use 
libEGL.so to draw to a surface within the server, then I'd expect you 
could implement a vendor library that works by calling into EGL. Mind 
you, if you had two vendors trying to do that, then things would get hairy.


-Kyle
