Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-13 Thread Andy Ritger via xorg-devel
On Wed, Feb 13, 2019 at 12:15:02PM -0700, Kyle Brenneman wrote:
> On 02/12/2019 01:58 AM, Michel Dänzer wrote:
> > On 2019-02-11 5:18 p.m., Andy Ritger wrote:
> > > On Mon, Feb 11, 2019 at 12:09:26PM +0100, Michel Dänzer wrote:
> > > > On 2019-02-08 11:43 p.m., Kyle Brenneman wrote:
> > > > > Also, is Mesa the only client-side vendor library that works with the
> > > > > Xorg GLX module? I vaguely remember that there was at least one other
> > > > > driver that did, but I don't remember the details anymore.
> > > > AFAIK, the amdgpu-pro OpenGL driver can work with the Xorg GLX module
> > > > (or its own forked version of it).
> > > Maybe the amdgpu-pro OpenGL driver uses a fork of the Xorg GLX module
> > > (or sets the "GlxVendorLibrary" X configuration option?), but it doesn't
> > > look to me like the in-tree Xorg GLX module could report anything other
> > > than "mesa" for GLX_VENDOR_NAMES_EXT, without custom user configuration.
> > > 
> > > GLX_VENDOR_NAMES_EXT, which client-side glvnd uses to pick the
> > > libGLX_${vendor}.so to load, is implemented in the Xorg GLX module
> > > with this:
> > > 
> > >    xserver/glx/glxcmds.c:__glXDisp_QueryServerString():
> > > 
> > >     case GLX_VENDOR_NAMES_EXT:
> > >         if (pGlxScreen->glvnd) {
> > >             ptr = pGlxScreen->glvnd;
> > >             break;
> > >         }
> > > 
> > > pGlxScreen->glvnd appears to be assigned here, defaulting to "mesa",
> > > though allowing an xorg.conf override via the "GlxVendorLibrary" option:
> > > 
> > >    xserver/glx/glxdri2.c:__glXDRIscreenProbe():
> > > 
> > >     xf86ProcessOptions(pScrn->scrnIndex, pScrn->options, options);
> > >     glvnd = xf86GetOptValString(options, GLXOPT_VENDOR_LIBRARY);
> > >     if (glvnd)
> > >         screen->base.glvnd = xnfstrdup(glvnd);
> > >     free(options);
> > > 
> > >     if (!screen->base.glvnd)
> > >         screen->base.glvnd = strdup("mesa");
> > > 
> > > And swrast unconditionally sets pGlxScreen->glvnd to "mesa":
> > > 
> > >    xserver/glx/glxdriswrast.c:__glXDRIscreenProbe():
> > > 
> > >     screen->base.glvnd = strdup("mesa");
> > > 
> > > Is there more to this that I'm missing?
> > I don't think so, I suspect we were just assuming slightly different
> > definitions of "works". :)
> > 
> > 
> That should get fixed, but since that applies to libglvnd's normal
> vendor selection, I'd say it's orthogonal to GPU offloading. Off the top of
> my head, the "GlxVendorLibrary" option ought to work regardless of which
> __GLXprovider it finds. I think it would be possible to add a function to
> let a driver override the GLX_VENDOR_NAMES_EXT string, too.
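
A hook like that seems doable; for concreteness, it might look something
like this (hypothetical -- nothing like it is exported from the tree
today, and the function name is made up):

    void
    xf86GlxSetVendorLibrary(__GLXscreen *pGlxScreen, const char *vendor)
    {
        free(pGlxScreen->glvnd);
        pGlxScreen->glvnd = xnfstrdup(vendor);  /* e.g., "amdgpu-pro" */
    }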

I think the point, though, is that thus far, libGLX_mesa.so is the only
glvnd client-side GLX implementation that will be loaded for use with
Xorg's GLX.  Thus, it doesn't seem to refute ajax's comment from earlier
in the thread:

>>> At the other extreme, the server could do nearly all the work of
>>> generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
>>> practical downside of each server-side GLX vendor needing to enumerate
>>> the GPUs it can drive, in order to generate the hardware-specific
>>> identifiers).
>> I don't think this downside is much of a burden? If you're registering
>> a provider other than Xorg's you're already doing it from the DDX
>> driver 
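
For reference, "registering a provider" from a DDX today looks roughly
like the sketch below (names are illustrative; __GLXprovider and
GlxPushProvider() are the existing server-side interfaces):

    #include "glx_extinit.h"    /* __GLXprovider, GlxPushProvider() */

    static __GLXscreen *
    myScreenProbe(ScreenPtr pScreen)
    {
        /* Return a vendor __GLXscreen for screens this driver owns,
         * or NULL to let the next provider in the chain try. */
        return NULL;            /* stubbed out in this sketch */
    }

    static __GLXprovider myProvider = {
        .screenProbe = myScreenProbe,
        .name = "myvendor",     /* hypothetical vendor name */
    };

    void
    myDDXSetup(void)            /* called from the driver's init path */
    {
        GlxPushProvider(&myProvider);
    }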


> -Kyle
> 
_______________________________________________
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

[PATCH] xfree86/modes: Add "NoOutputInitialSize" option

2019-02-13 Thread Andy Ritger via xorg-devel
Normally, the X server infers the initial screen size based on any
connected outputs.  However, if no outputs are connected, the X server
picks a default screen size of 1024 x 768.  This option overrides the
default screen size to use when no outputs are connected.  In contrast
to the "Virtual" Display SubSection entry, which applies unconditionally,
"NoOutputInitialSize" is only used if no outputs are detected when the
X server starts.

Parse this option in the new exported helper function
xf86AssignNoOutputInitialSize(), so that other XFree86 loadable drivers
can use it, even if they don't use xf86InitialConfiguration().
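
For example, with a configuration like the following (driver name
illustrative), an X server that starts with no connected outputs will
create a 1920x1080 initial framebuffer:

    Section "Device"
        Identifier "Device0"
        Driver     "modesetting"
        Option     "NoOutputInitialSize" "1920 1080"
    EndSection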

Signed-off-by: Andy Ritger 
---
 hw/xfree86/man/xorg.conf.man |  9 +++++++++
 hw/xfree86/modes/xf86Crtc.c  | 42 +++++++++++++++++++++++++++++++++++++-----
 hw/xfree86/modes/xf86Crtc.h  |  4 ++++
 3 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/hw/xfree86/man/xorg.conf.man b/hw/xfree86/man/xorg.conf.man
index 2c18252b72d9..35eec9558bbf 100644
--- a/hw/xfree86/man/xorg.conf.man
+++ b/hw/xfree86/man/xorg.conf.man
@@ -1494,6 +1494,15 @@ option.
 Enable printing of additional debugging information about modesetting to
 the server log.
 .TP 7
+.BI "Option \*qNoOutputInitialSize\*q \*q" width " " height \*q
+Normally, the X server infers the initial screen size based on any
+connected outputs.  However, if no outputs are connected, the X server
+picks a default screen size of 1024 x 768.  This option overrides the
+default screen size to use when no outputs are connected.  In contrast to
+the \*qVirtual\*q Display SubSection entry, which applies unconditionally,
+\*qNoOutputInitialSize\*q is only used if no outputs are detected when the X
+server starts.
+.TP 7
 .BI "Option \*qPreferCloneMode\*q \*q" boolean \*q
 If enabled, bring up monitors of a screen in clone mode instead of horizontal
 extended layout by default. (Defaults to off; the video driver can change the
diff --git a/hw/xfree86/modes/xf86Crtc.c b/hw/xfree86/modes/xf86Crtc.c
index 37a45bb3aff9..b3b84cc13a77 100644
--- a/hw/xfree86/modes/xf86Crtc.c
+++ b/hw/xfree86/modes/xf86Crtc.c
@@ -500,11 +500,13 @@ static OptionInfoRec xf86OutputOptions[] = {
 enum {
     OPTION_MODEDEBUG,
     OPTION_PREFER_CLONEMODE,
+    OPTION_NO_OUTPUT_INITIAL_SIZE,
 };
 
 static OptionInfoRec xf86DeviceOptions[] = {
     {OPTION_MODEDEBUG, "ModeDebug", OPTV_BOOLEAN, {0}, FALSE},
     {OPTION_PREFER_CLONEMODE, "PreferCloneMode", OPTV_BOOLEAN, {0}, FALSE},
+    {OPTION_NO_OUTPUT_INITIAL_SIZE, "NoOutputInitialSize", OPTV_STRING, {0}, FALSE},
     {-1, NULL, OPTV_NONE, {0}, FALSE},
 };
 
@@ -2484,6 +2486,32 @@ xf86TargetUserpref(ScrnInfoPtr scrn, xf86CrtcConfigPtr config,
     return FALSE;
 }
 
+void
+xf86AssignNoOutputInitialSize(ScrnInfoPtr scrn, const OptionInfoRec *options,
+                              int *no_output_width, int *no_output_height)
+{
+    int width = 0, height = 0;
+    const char *no_output_size =
+        xf86GetOptValString(options, OPTION_NO_OUTPUT_INITIAL_SIZE);
+
+    *no_output_width = NO_OUTPUT_DEFAULT_WIDTH;
+    *no_output_height = NO_OUTPUT_DEFAULT_HEIGHT;
+
+    if (no_output_size == NULL) {
+        return;
+    }
+
+    if (sscanf(no_output_size, "%d %d", &width, &height) != 2) {
+        xf86DrvMsg(scrn->scrnIndex, X_ERROR,
+                   "\"NoOutputInitialSize\" string \"%s\" not of form "
+                   "\"width height\"\n", no_output_size);
+        return;
+    }
+
+    *no_output_width = width;
+    *no_output_height = height;
+}
+
 /**
  * Construct default screen configuration
  *
@@ -2507,6 +2535,7 @@ xf86InitialConfiguration(ScrnInfoPtr scrn, Bool canGrow)
     DisplayModePtr *modes;
     Bool *enabled;
     int width, height;
+    int no_output_width, no_output_height;
     int i = scrn->scrnIndex;
     Bool have_outputs = TRUE;
     Bool ret;
@@ -2528,6 +2557,9 @@ xf86InitialConfiguration(ScrnInfoPtr scrn, Bool canGrow)
     else
         height = config->maxHeight;
 
+    xf86AssignNoOutputInitialSize(scrn, config->options,
+                                  &no_output_width, &no_output_height);
+
     xf86ProbeOutputModes(scrn, width, height);
 
     crtcs = xnfcalloc(config->num_output, sizeof(xf86CrtcPtr));
@@ -2540,7 +2572,7 @@ xf86InitialConfiguration(ScrnInfoPtr scrn, Bool canGrow)
         xf86DrvMsg(i, X_WARNING,
                    "Unable to find connected outputs - setting %dx%d "
                    "initial framebuffer\n",
-                   NO_OUTPUT_DEFAULT_WIDTH, NO_OUTPUT_DEFAULT_HEIGHT);
+                   no_output_width, no_output_height);
         have_outputs = FALSE;
     }
     else {
@@ -2641,10 +2673,10 @@ xf86InitialConfiguration(ScrnInfoPtr scrn, Bool canGrow)
     xf86DefaultScreenLimits(scrn, &width, &height, canGrow);
 
     if (have_outputs == FALSE) {
-        if (width < NO_OUTPUT_DEFAULT_WIDTH &&
-            height < NO_OUTPUT_DEFAULT_HEIGHT) {
-            width = NO_OUTPUT_DEFAULT_WIDTH;
-            height = NO_OUTPUT_DEFAULT_HEIGHT;

Re: [Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

2019-02-11 Thread Andy Ritger via xorg-devel
On Fri, Feb 08, 2019 at 03:43:25PM -0700, Kyle Brenneman wrote:
> On 2/8/19 2:33 PM, Andy Ritger wrote:
> > On Fri, Feb 08, 2019 at 03:01:33PM -0500, Adam Jackson wrote:
> > > On Fri, 2019-02-08 at 10:19 -0800, Andy Ritger wrote:
> > > 
> > > > (1) If configured for PRIME GPU offloading (environment variable or
> > > >     application profile), client-side libglvnd could load the possible
> > > >     libGLX_${vendor}.so libraries it finds, and call into each to
> > > >     find which vendor (and possibly which GPU) matches the specified
> > > >     string. Once a vendor is selected, the vendor library could
> > > >     optionally tell the X server which GLX vendor to use server-side
> > > >     for this client connection.
> > > I'm not a huge fan of the "dlopen everything" approach, if it can be
> > > avoided.
> > Yes, I agree.
> I'm pretty sure libglvnd could avoid unnecessarily loading vendor libraries
> without adding nearly so much complexity.
> 
> If libglvnd just has a list of additional vendor library names to try, then
> you could have a flag to tell libglvnd to check a server string for
> that name before it loads the vendor. If a client-side vendor would need a
> server-side counterpart to work, then libglvnd can check for that. The
> server only needs to keep a list of names to send back, which would be a
> trivial (and backward-compatible) addition to the GLXVND interface.
> 
> Also, even without that, I don't think the extra dlopen calls would be a
> problem in practice. It would only ever happen in applications that are
> configured for offloading, which are (more-or-less by definition)
> heavy-weight programs, so an extra millisecond or so of startup time is
> probably fine.

But why incur that loading if we don't need to?
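
(FWIW, the client-side check you describe could be as small as the
sketch below -- illustrative only, not actual libglvnd code; I'm using
the public glXQueryServerString() entry point and the existing
GLX_VENDOR_NAMES_EXT token to stand in for whatever GLXVND string we
would add:)

    #include <string.h>
    #include <GL/glx.h>

    #ifndef GLX_VENDOR_NAMES_EXT
    #define GLX_VENDOR_NAMES_EXT 0x20F6
    #endif

    static Bool
    VendorHasServerCounterpart(Display *dpy, int screen, const char *vendor)
    {
        const char *names =
            glXQueryServerString(dpy, screen, GLX_VENDOR_NAMES_EXT);
        char buf[256], *tok, *save = NULL;

        if (names == NULL)
            return False;
        strncpy(buf, names, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        /* The string is a space-delimited list of vendor names. */
        for (tok = strtok_r(buf, " ", &save); tok != NULL;
             tok = strtok_r(NULL, " ", &save)) {
            if (strcmp(tok, vendor) == 0)
                return True;    /* OK to dlopen libGLX_<vendor>.so */
        }
        return False;
    }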

> > > I think I'd rather have a new enum for GLXQueryServerString
> > > that elaborates on GLX_VENDOR_NAMES_EXT (perhaps GLX_VENDOR_MAP_EXT),
> > > with the returned string a space-delimited list of <profile>:<vendor>.
> > > libGL could accept either a profile or a vendor name in the environment
> > > variable, and the profile can be either semantic like
> > > performance/battery, or a hardware selector, or whatever else.
> > > 
> > > This would probably be a layered extension, call it GLX_EXT_libglvnd2,
> > > which you'd check for in the (already per-screen) server extension
> > > string before trying to actually use.
> > That all sounds reasonable to me.
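
To make sure we're all picturing the same thing, such a string might
look like this (vendor and device names purely illustrative):

    performance:nvidia battery:mesa pci-0000_01_00_0:nvidia pci-0000_00_02_0:mesa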
> > 
> > > > At the other extreme, the server could do nearly all the work of
> > > > generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
> > > > practical downside of each server-side GLX vendor needing to enumerate
> > > > the GPUs it can drive, in order to generate the hardware-specific
> > > > identifiers).
> > > I don't think this downside is much of a burden? If you're registering
> > > a provider other than Xorg's you're already doing it from the DDX
> > > driver (I think? Are y'all doing that from your libglx instead?), and
> > > when that initializes it already knows which device it's driving.
> > Right.  It will be easy enough for the NVIDIA X driver + NVIDIA
> > server-side GLX.
> > 
> > Kyle and I were chatting about this, and we weren't sure whether people
> > would object to doing that for the Xorg GLX provider: to create the
> > hardware names, Xorg's GLX would need to enumerate all the DRM devices
> > and list them all as possible <profile>:<vendor> pairs for the Xorg
> > GLX-driven screens.  But, now that I look at it more closely, it looks
> > like drmGetDevices2() would work well for that.
> > 
> > So, if you're not concerned with that burden, I'm not.  I'll try coding
> > up the Xorg GLX part of things and see how it falls into place.
> That actually is one of my big concerns: I'd like to come up with something
> that can give something equivalent to Mesa's existing DRI_PRIME setting, and
> requiring that logic to be in the server seems like a very poor match. You'd
> need to take all of the device selection and enumeration stuff from Mesa and
> transplant it into the Xorg GLX module, and then you'd need to define some
> sort of protocol to get that data back into Mesa where you actually need it.
> Or else you need to duplicate it between the client and server, which seems
> like the worst of both worlds.

Is this actually a lot of code?  I'll try to put together a prototype so
we can see how much it is, but if it is just calling drmGetDevices2() and
then building PCI BusID-based names, that doesn't seem unreasonable to me.
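
Roughly like this, I'd think (a sketch; error handling trimmed, and the
exact identifier format is still up for debate):

    #include <stdio.h>
    #include <xf86drm.h>

    static void
    print_pci_device_names(void)
    {
        drmDevicePtr devices[16];
        int i, n = drmGetDevices2(0, devices, 16);

        if (n < 0)
            return;
        for (i = 0; i < n; i++) {
            if (devices[i]->bustype != DRM_BUS_PCI)
                continue;
            /* Build a PCI BusID-based name, DRI_PRIME-style. */
            printf("pci-%04x_%02x_%02x_%d\n",
                   devices[i]->businfo.pci->domain,
                   devices[i]->businfo.pci->bus,
                   devices[i]->businfo.pci->dev,
                   devices[i]->businfo.pci->func);
        }
        drmFreeDevices(devices, n);
    }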

> By comparison, if libglvnd just hands the problem off to the vendor
> libraries, then you could do either. A vendor library could do its device
> enumeration in the client like Mesa does, or it could send a request to
> query something from the server, using whatever protocol you want --
> whatever makes the most sense for that particular driver.
> 
> More generally, I worry that defining a (vendor+device+descriptor) list as
> an interface between libglvnd and the server