On Thursday 30 December 2004 21:44, Jon Smirl wrote:
> I've been out of the DRI loop for a while since all I do is help feed
> and change two crying babies, but we also need to think about the GL
> standalone case. libdrm is linked into the dri.so's so that they can
> be used standalone. Wouldn't another solution be to just fix the
> DRI.so's to not export the libdrm symbols?
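For concreteness, "not export the libdrm symbols" would mean building
the driver so the statically linked drm code never lands in the .so's
dynamic symbol table (e.g. compiling with -fvisibility=hidden and
re-exporting only the driver entry point). A rough sketch, with the
entry point name kept but its signature heavily simplified:

/* foo_dri.c -- illustrative only
 * build: gcc -shared -fPIC -fvisibility=hidden -o foo_dri.so foo_dri.c */

#include <stdio.h>

/* stand-in for a libdrm function linked statically into the driver;
 * deliberately non-static, so it is -fvisibility=hidden (not "static")
 * that keeps it out of the .so's exported symbols and away from the
 * server's copy of the same code */
int drm_open_stub(void)
{
    return 42;
}

/* the entry point the loader actually needs from a DRI driver;
 * marking it "default" re-exports it despite -fvisibility=hidden */
__attribute__((visibility("default")))
void *__driCreateNewScreen(void)
{
    printf("stub fd: %d\n", drm_open_stub());
    return NULL;
}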
Maybe. Depends on whether the linker would choose to resolve the drm
symbols from the server's libdrm module or from the copies linked
statically into the driver. But then you still have two copies of the
same code in the same process, both trying to use the same symbols,
and to me that sounds like a design issue.

> If we proceed down the X on GL path this problem would go away since
> X would stop using libdrm.

No, it won't. For X on GL, the server still loads a DRI driver; it's
just loaded early, during eglCreateContext or whatever, rather than
explicitly by libglx. The server still needs to be a window system for
the 5000 GLX applications that already exist, and we'll still want to
be able to do accelerated indirect GLX.

> Since I haven't looked at the code in a month, can someone remind me
> why X needs to directly access libdrm and not use a DRI interface?
> For example, the whole drmOpen sequence where X searches for and
> loads DRM drivers would just go away if we let the OS automatically
> load the drivers based on PCI IDs. The current Linux DRM driver
> supports this.

Several of the DDXes and libdri call into libdrm directly; I don't
have a good enumeration of why they do that, though. Even in an
X-on-GL world you need a server process, and that server will want to
act as a GL renderer for indirect clients. Fixing libglx to use a DRI
driver as the renderer (instead of GLcore) should in fact make an Xgl
server simpler.
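For reference, "automatically load the drivers based on PCI IDs" is
the kernel's modalias mechanism: the DRM kernel module declares which
PCI IDs it drives, and hotplug/modprobe loads it when a matching
device shows up, so nothing in userspace has to go hunting for the
right module. A rough sketch of the kernel-side part, with an
illustrative ID rather than a real device list:

/* fragment of a hypothetical DRM kernel module: the device table plus
 * MODULE_DEVICE_TABLE() generates the modalias info the OS uses to
 * autoload the module when a matching PCI device is found */

#include <linux/module.h>
#include <linux/pci.h>

static struct pci_device_id exampledrm_pciidlist[] = {
    { 0x1002, 0x4150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
    { 0, 0, 0, 0, 0, 0, 0 }
};
MODULE_DEVICE_TABLE(pci, exampledrm_pciidlist);

MODULE_LICENSE("GPL and additional rights");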
- ajax