Re: Distributed Multihead X

2011-02-04 Thread tom fogal
Enrico Weigelt  writes:
> * tom fogal  schrieb:
> 
> > We're pretty sure the issue started coming up when we began
> > dlopen()ing the OpenGL library.  The problem appears to be getting
> > NULL function pointers when glXGetProcAddressARB'ing some or all
> > OpenGL functions (though I'll note that I do not have access to the
> > failing system -- however we've seen that crash many times before
> > and the crash report is consistent).
>
> What's the exact reason for using dlopen() ?

Our application uses multiple OpenGL implementations.  We don't know
which one to use until we've parsed command line arguments.
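
For illustration, the pattern is roughly the following (the library
path and the probed entry point are placeholders, not our real ones;
the function-pointer return type of glXGetProcAddressARB is simplified
to void* for brevity):

  #include <dlfcn.h>
  #include <stdio.h>

  /* Load a GL implementation chosen at runtime, and sanity-check that
   * we can resolve entry points through glXGetProcAddressARB. */
  void *load_gl(const char *path /* e.g. "libGL.so.1" */)
  {
    void *lib = dlopen(path, RTLD_LAZY | RTLD_GLOBAL);
    if(!lib) {
      fprintf(stderr, "dlopen failed: %s\n", dlerror());
      return NULL;
    }
    void *(*gpa)(const unsigned char *) =
      (void *(*)(const unsigned char *))
      dlsym(lib, "glXGetProcAddressARB");
    if(!gpa || !gpa((const unsigned char *)"glXCreateContext")) {
      /* the failure mode described above: NULL function pointers. */
      fprintf(stderr, "could not resolve GL entry points via %s\n",
              path);
    }
    return lib;
  }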

-tom


Distributed Multihead X

2011-02-01 Thread tom fogal
We're having some issues getting our software to work on a display
wall that is configured to use DMX.

We're pretty sure the issue started coming up when we began dlopen()ing
the OpenGL library.  The problem appears to be getting NULL function
pointers when glXGetProcAddressARB'ing some or all OpenGL functions
(though I'll note that I do not have access to the failing system --
however we've seen that crash many times before and the crash report is
consistent).

Looking at the docs that are available, it seems like there should
be some sort of GLX proxy library -- perhaps a full replacement for
libGL, since GLX must live in the GL library?  My hypothesis is that
we're loading the 'backend' GLX library, i.e. what the GLX proxy should
be forwarding to, instead of the proxy itself, and therefore Badness
ensues.
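
One way I can think of to test that hypothesis, assuming glibc's
dladdr() extension is available, is to ask the dynamic linker which
object actually provided a resolved symbol:

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdio.h>

  /* Report which shared object 'sym' came from, to see whether we
   * dlopen()ed the GLX proxy or a backend GL implementation. */
  void report_provider(void *sym, const char *name)
  {
    Dl_info info;
    if(sym && dladdr(sym, &info) && info.dli_fname)
      fprintf(stderr, "%s resolved from %s\n", name, info.dli_fname);
    else
      fprintf(stderr, "%s: provider unknown\n", name);
  }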

Side note: is DMX still alive?  I mean, it appears to be shipping and
all, but the main source of documentation is dmx.sf.net -- which hasn't
seen an update since 2004.

I haven't found much documentation as to the architecture.  If someone
could enlighten me, it would be much appreciated.

Thanks for any help,

-tom


Re: EDID autodetection fail, whom to blame?

2011-01-18 Thread tom fogal
Atilla Filiz  writes:
> Some of the newer Sony Vaio laptops with Nvidia GPUs show a totally
> black screen when X starts with the proprietary drivers.
[snip]
> What would be the proper solution to this problem? Is it a fault
> correctable by Xorg or are Nvidia people to be contacted to resolve
> this?

Sounds like an nvidia bug, since nouveau works.

They provide an `nvidia-bug-report.sh' script, or something like that,
to prepare a bug report && send it off.

-tom


Re: X startup and authorization

2010-05-26 Thread tom fogal
Hi Alan, thanks for your reply.  I hit a snag + worked around it, and
so I wanted to report back to the list for posterity.

Comments/suggestions of course still welcome.

Alan Coopersmith  writes:
> tom fogal wrote:
> > The first issue is startup: we fork and then exec `xinit' in the
> > child; the parent sets the appropriate DISPLAY and renders into
> > it.  The issue we have is that the parent does not know when the
> > X server is "ready" for rendering.  Our current "solution" is to
> > sleep a bit, which obviously isn't great.  How can I tell when a
> > server has initialized?
>
> The xserver notifies its parent - xinit/xdm/etc. - via a signal, at
> which point they start clients - presumably you'd put something in
> the .xinitrc to in turn signal your parent process that called xinit.
>
> You could of course take xinit out of the middle and exec the X
> server from your software so it gets the signal or error return
> directly - you can see from the source it's not that complicated.

FWIW, I gave this a try, and spent a while banging my head against the
desk trying to figure out what was wrong.  It seems like my distro
inserts an extra layer -- Xwrapper.config(5) -- which comes between
/usr/bin/X and actually getting an X server.  I imagine this wrapper
was the one getting the signal, because my code would always lock up at
the sigsuspend and just wait forever.

*sigh*.  I imagine other distros do something similar, and I of course
don't want it to be "broken by default" from a user's point of view.
So I went back to an xinit approach, specifically:

  xinit sleep 28800 -- display ...

The `sleep' client is there for a couple of reasons: first, if my
process dies (segfaults or something), there's an as-root X server
left around which in some cases can be impossible to kill; with this
approach it at least dies *eventually*.  Second, I really don't *want*
~/.xinitrc to get sourced -- it's a source of unanticipated behavior.

To detect errors, I've got a loop checking XOpenDisplay.  That alone
will never let you know that the X server failed startup, though, so I
save the PID and do a waitpid(... WNOHANG) in the loop and make sure
the X server hasn't died yet.
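
In code, the loop is roughly the following (the display name, the
sleep duration, and the polling interval are placeholders):

  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <X11/Xlib.h>

  /* Start xinit with a long-lived dummy client, then poll until the
   * server accepts connections -- or until the child exits, which
   * means startup failed. */
  Display *wait_for_server(pid_t *xinit)
  {
    *xinit = fork();
    if(*xinit == 0) {
      execlp("xinit", "xinit", "/bin/sleep", "28800", "--", ":1",
             (char*)NULL);
      _exit(127); /* exec failed */
    }
    for(;;) {
      int status;
      if(waitpid(*xinit, &status, WNOHANG) == *xinit) {
        fprintf(stderr, "X server died during startup.\n");
        return NULL;
      }
      Display *dpy = XOpenDisplay(":1");
      if(dpy) { return dpy; }
      usleep(250*1000); /* quarter second between attempts */
    }
  }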

Haven't gotten to the auth stuff yet, but your directions/pointers
seemed straightforward -- thanks.

-tom


X startup and authorization

2010-05-24 Thread tom fogal
We've got a parallel application which can start up some X servers and
use them as a `render farm' in some sense.  I'm looking for some advice
on how I can improve the existing setup.

The first issue is startup: we fork and then exec `xinit' in the child;
the parent sets the appropriate DISPLAY and renders into it.  The
issue we have is that the parent does not know when the X server is
"ready" for rendering.  Our current "solution" is to sleep a bit, which
obviously isn't great.  How can I tell when a server has initialized?
Ideally I would like to know if/when it failed to initialize, and pull
out some sort of error message so that I could report it back to the
user.

The second is authorization.  I'll admit to not understanding this
thoroughly.  It sounds like I want to create an MIT-MAGIC-COOKIE-1,
tell the X server about it when I start (how?), and then shove it in a
file which I point to using the XAUTHORITY environment var.  Does that
sound like the best approach in terms of portability?  Ideally whatever
I implement would work with some pretty old X servers (5 years or so).

Also on the topic of authorization -- where is the API for this?
`xhost' and `xauth' must be using Xlib functions to do their magic,
right?  I'd really like to avoid having to system() or fork/exec to do
proper authorization.
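
From poking around, libXau looks like it might be the relevant API; a
rough sketch of writing a cookie file with it follows (the hostname,
display number, and cookie bytes are stand-ins, and I haven't verified
this against old servers):

  #include <stdio.h>
  #include <string.h>
  #include <X11/Xauth.h> /* defines FamilyLocal; link with -lXau */

  /* Write an MIT-MAGIC-COOKIE-1 entry to 'path'; the idea would be to
   * start the server with '-auth path' and point clients' XAUTHORITY
   * at the same file. */
  int write_cookie(const char *path, char *cookie, unsigned short len)
  {
    Xauth auth;
    memset(&auth, 0, sizeof(auth));
    auth.family = FamilyLocal;
    auth.address = (char*)"localhost";       auth.address_length = 9;
    auth.number = (char*)"1";                auth.number_length = 1;
    auth.name = (char*)"MIT-MAGIC-COOKIE-1"; auth.name_length = 18;
    auth.data = cookie;                      auth.data_length = len;

    FILE *f = fopen(path, "wb");
    if(!f) { return -1; }
    int ok = XauWriteAuth(f, &auth);
    fclose(f);
    return ok ? 0 : -1;
  }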

-tom


Re: dga does't work correctly

2009-01-09 Thread tom fogal
Steven Newbury  writes:
[snip]
> (including using GLSL shaders for overlay effects, though sadly the shader 
> fails to compile with the intel driver for some reason [1])
[snip]
> [1]
> src/osd/sdl/gl_shader_tool.c:375: GL Error: object 0x3 compilation failed
> src/osd/sdl/gl_shader_tool.c:375 glInfoLog: Error:
> Error: failed to preprocess the source.
> 
> failed to process shader: <
> #pragma optimize (on)
> #pragma debug (off)

Try removing the pragmas; it doesn't look like #pragma is supported
yet in Mesa.

t...@tomasu dev grep -i pragma mesa/src/mesa/shader/slang/*.c
mesa/src/mesa/shader/slang/slang_preprocess.c:#define TOKEN_PRAGMA 8

vs. `if', which seems to have an implementation:

t...@tomasu dev grep -i token_if mesa/src/mesa/shader/slang/*.c
mesa/src/mesa/shader/slang/slang_preprocess.c:#define TOKEN_IF 3
mesa/src/mesa/shader/slang/slang_preprocess.c: case TOKEN_IF:

Just a guess.
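
If it comes down to it, stripping the pragmas before handing the
source to glShaderSource would be easy enough; a hypothetical sketch:

  #include <string.h>

  /* Remove lines beginning with "#pragma" from a NUL-terminated GLSL
   * source buffer, in place. */
  void strip_pragmas(char *src)
  {
    char *line = src;
    while(line && *line) {
      char *next = strchr(line, '\n');
      if(next) { ++next; }
      if(strncmp(line, "#pragma", 7) == 0) {
        if(next) {
          memmove(line, next, strlen(next)+1);
          next = line; /* re-examine from the same offset */
        } else {
          *line = '\0'; /* pragma was the final line */
        }
      }
      line = next;
    }
  }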

-tom


Re: compiling 32bit libraries on x86-64 system

2009-01-01 Thread tom fogal
Dirk Hohndel  writes:
> I must be missing some 'configure' magic here...
> 
> For some modules (like the X server) it's rather straightforward to
> build 32bit on a 64bit system. Something like
> 
> LDFLAGS=-L/opt/X11R7-32/lib ./configure --prefix=/opt/X11R7-32
> --enable-32-bit --build=x86-linux

It's not usually relevant, but these days the recommended practice is
to pass such variables as arguments *to* ./configure rather than in
its environment, e.g.:

./configure LDFLAGS=-L/opt/... \
--prefix=... --enable-32-bit --build=...

> But for some other modules this fails (for example, libX11 doesn't know
> the --enable-32-bit flag). Worse, there doesn't seem to be a clean way
> to separate 32 and 64bit libraries and have apps pick up the right libs
> when running a custom build...
>
> Any suggestions?

At least w/ gcc, can you just add -m32 to your compiler command line?

./configure CFLAGS="-m32" LDFLAGS=-L/opt/... \
--prefix=... --enable-32-bit --build=...

-tom


Re: glXGetCurrentDisplay returning NULL even with appropriate visual

2008-11-21 Thread tom fogal
Brian Paul  writes:
> tom fogal wrote:
[snip]
> > The display (`dpy' variable) passed is NULL, which seems quite
> > suspect.  Assuming that is indeed the problem, the root issue
> > appears to be a:
> > 
> > glXQueryVersion(glXGetCurrentDisplay(), ...);
> > 
> > line, where glXGetCurrentDisplay returns NULL.
> 
> I suspect you haven't yet called glXMakeCurrent().  From the man page:

Oh, oops, sorry, I forgot to mention that I did indeed check this.

This post made me think to check the return value of glXMakeCurrent,
though it does appear to report success.

It also made me think to try calling glXGetCurrentContext in other
places.  It appears that, right before I call glewInit, my context is
valid / acceptable.  Then, of course, it crashes in a sub-call of
glewInit ... so I wonder if glew is doing things to invalidate my
(pre-existing) context.
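
The check itself is trivial; something along these lines, immediately
before the glewInit call (a sketch, not verbatim from our code):

  #include <stdio.h>
  #include <GL/glx.h>

  /* Confirm a context and display are current right before handing
   * control to glewInit. */
  static int context_is_current(void)
  {
    GLXContext ctx = glXGetCurrentContext();
    Display *dpy = glXGetCurrentDisplay();
    fprintf(stderr, "current ctx=%p, dpy=%p\n", (void*)ctx, (void*)dpy);
    return ctx != NULL && dpy != NULL;
  }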

Ahh, I've got something to go on now!  Thanks for the help.  I'm sure
I'll be back if that doesn't turn out to be the root cause.

Best,

-tom


glXGetCurrentDisplay returning NULL even with appropriate visual

2008-11-21 Thread tom fogal
Hi, I've run into a snag in what seems like glX initialization.

Through debug versions of client libraries [1], I think I've
established why my application is segfaulting in XQueryExtension.  The
display (`dpy' variable) passed is NULL, which seems quite suspect.
Assuming that is indeed the problem, the root issue appears to be a:

glXQueryVersion(glXGetCurrentDisplay(), ...);

line, where glXGetCurrentDisplay returns NULL.  Quite strangely, I
can call glXGetCurrentDisplay manually in the debugger, and it gives
me what seems to be a valid display; certainly not NULL.  My DISPLAY
seems set correctly; an XOpenDisplay call succeeds; glXChooseVisual
gives me a pointer instead of NULL.  I've thrown a `system("glxinfo");'
call in various places, and I get the appropriate output (direct
rendering is enabled, GL extensions are consistent, list of visuals
seems reasonable).
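
For concreteness, those checks amount to roughly the following (the
attribute list is illustrative, not our exact one):

  #include <stdio.h>
  #include <GL/glx.h>

  /* The sanity checks described above: connection and visual. */
  int glx_sanity(void)
  {
    Display *dpy = XOpenDisplay(NULL);
    if(!dpy) { fprintf(stderr, "XOpenDisplay failed\n"); return 0; }
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if(!vis) { fprintf(stderr, "no matching visual\n"); return 0; }
    return 1;
  }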

Since I'm using GLEW, I wonder if it is related to:

http://www.nabble.com/using-chromium-and-glew--td15953506.html

As a summary, the user is trying to use Chromium with GLEW, and
dynamically loading glXGetCurrentDisplay via glXGetProcAddress appears
not to work when Chromium is thrown into the mix.  I'm not using
Chromium, however.

I have tried GLEW 1.3.4 and GLEW 1.5.1, with little discernible
difference.  It seems that, in the 1.5.1 case, the Display I get back
from glXGetCurrentDisplay (when called manually in the debugger) has a
much larger set of values in the `event_vec' and `wire_vec' fields.

Attached are the relevant portions of a gdb session.  Any ideas what I
am (or my dependent libraries are...) doing wrong?

Thanks,

-tom

[1] Wow.  I haven't built X manually in quite some time, and it turned
out to be wonderfully easy, and to work as expected, to compile
debug client libraries and then use LD_LIBRARY_PATH to debug my
application.  Splitting to a modular build w/ autoconf was an
excellent idea -- great job.

Program received signal SIGSEGV, Segmentation fault.
0x2b9cdb0f9074 in XQueryExtension (dpy=0x0, name=0x2b9cdaf78a2a "GLX",
    major_opcode=0x7fffda65d344, first_event=0x7fffda65d348,
    first_error=0x7fffda65d34c) at QuExt.c:46
46  LockDisplay(dpy);
Current language:  auto; currently c
(gdb) up
#1  0x2b9cdb0e6f8b in XInitExtension (dpy=0x0,
    name=0x2b9cdaf78a2a "GLX") at InitExt.c:49
49  if (!XQueryExtension(dpy, name,
(gdb)
#2  0x2b9cdd7deeb1 in XextAddDisplay (extinfo=0x8e0390, dpy=0x0,
    ext_name=0x2b9cdaf78a2a "GLX", hooks=0x2b9cdb088180, nevents=17,
    data=0x0) at extutil.c:112
112 dpyinfo->codes = XInitExtension (dpy, ext_name);
(gdb)
#3  0x2b9cdaf3e45c in ?? () from /usr/lib64/libGL.so.1
(gdb)
#4  0x2b9cdaf30b99 in glXQueryVersion () from /usr/lib64/libGL.so.1
(gdb)
#5  0x2b9cd65476c8 in glxewContextInit () at glew/src/glew.c:6987
6987  glXQueryVersion(glXGetCurrentDisplay(), &major, &minor);
(gdb) p glXGetCurrentDisplay()
$1 = 9301840
(gdb) p (Display*) glXGetCurrentDisplay()
$2 = (struct _XDisplay *) 0x8def50
(gdb) p *(Display*) glXGetCurrentDisplay()
$3 = {ext_data = 0x8de7b0, free_funcs = 0x8e0260, fd = 17,
  conn_checker = 0, proto_major_version = 11, proto_minor_version = 0,
  vendor = 0x8dd8f0 "The X.Org Foundation", resource_base = 25165824,
  resource_mask = 2097151, resource_id = 0, resource_shift = 0,
  resource_alloc = 0x2b9cdb11328a <_XAllocID>, byte_order = 0,
  bitmap_unit = 32, bitmap_pad = 32, bitmap_bit_order = 0,
  nformats = 7, pixmap_format = 0x8e02b0, vnumber = 11, release = 7020,
  head = 0xb12fc0, tail = 0x16e95270, qlen = 9, last_request_read = 81,
  request = 81, last_req = 0x2b9cdb3faba4 "",
  buffer = 0x8ef2b0 "\003\023\002", bufptr = 0x8ef2b0 "\003\023\002",
  bufmax = 0x9ef2b0 "", max_request_size = 65535, db = 0x0,
  synchandler = 0x2b9cdb107b6e <_XPrivSyncFunction>,
  display_name = 0x8dd9f0 ":0.0", default_screen = 0, nscreens = 1,
  screens = 0x8de620, motion_buffer = 256, flags = 8, min_keycode = 8,
  max_keycode = 255, keysyms = 0x0, modifiermap = 0x0,
  keysyms_per_keycode = 0,
  xdefaults = 0x8de980 "XTerm*background:\tblack\nXTerm*cursorBlink:\ttrue\nXTerm*cusorColor:\tyellow\nXTerm*font:\t10x20\nXTerm*foreground:\twhite\nXTerm*loginShell:\ttrue\nXTerm*scrollBar:\tfalse\n",
  scratch_buffer = 0x0, scratch_length = 0, ext_number = 4,
  ext_procs = 0x8deda0, event_vec = {
    0x2b9cdb10892a <_XUnknownWireEvent>,
    0x2b9cdb10892a <_XUnknownWireEvent>,
    0x2b9cdb108940 <_XWireToEvent>, 0x2b9cdd7dea7c <_xgeWireToEvent>,
    0x2b9cdb10892a <_XUnknownWireEvent>, 0x2b9cdb183ab3,
    0x2b9cdb10892a <_XUnknownWireEvent>}, wire_vec = {
    0x2b9cdb108935 <_XUnknownNativeEvent>,
    0x2b9cdb108935 <_XUnknownNativeEvent>, 0,
    0x2b9cdd7deb3a <_xgeEventToWire>,
    0x2b9cdb108935 <_XUnknownNativeEvent>}, lock_meaning = 0,
  lock = 0x0, async_handlers = 0x0, bigreq_size = 4194303,
  lock_fns = 0x0, idlist_alloc = 0x2b9cdb113322 <_XAllocIDs>,
  key_bindings = 0x0, cursor_font = 0, atoms = 0x0, mode_switch = 0,
  num_lock = 0, context_d

Debugging glX apps

2008-11-11 Thread tom fogal
I seem to be hitting an issue properly establishing a glX context, and
I'm looking for a good way to debug it.

Quick overview in case someone happens to recognize the issue: I'm
getting a segfault in X functions when trying to initialize GLEW.
XOpenDisplay has succeeded, and I'm not sure yet if glXCreateContext
succeeded, but it seems like the library I'm using would have aborted
if it had failed.  Call stack:

#0 XQueryExtension () from /usr/lib64/libX11.so.6
#1 XInitExtension () from /usr/lib64/libX11.so.6
#2 XextAddDisplay () from /usr/lib64/libXext.so.6
#3 ?? () from /usr/lib64/libGL.so.1
#4 glXQueryVersion () from /usr/lib64/libGL.so.1
#5 glxewContextInit () at glew/src/glew.c:6987
#6 glewInit () at glew/src/glew.c:7186

(addresses removed)

Is there a list of environment variables which affect X / GLX
behavior?  For example, it would be nice if I could set an environment
variable which would force XSynchronize(3) to `on', or cause library
functions to abort() rather than return error codes.
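
Failing that, I suppose much the same effect is achievable from within
the program (a sketch of the idea, not something our code does yet):

  #include <stdio.h>
  #include <stdlib.h>
  #include <X11/Xlib.h>

  /* Abort at the first X protocol error, so the offending request is
   * still on the stack in the debugger. */
  static int die_on_x_error(Display *dpy, XErrorEvent *err)
  {
    char msg[256];
    XGetErrorText(dpy, err->error_code, msg, sizeof(msg));
    fprintf(stderr, "fatal X error: %s (request %d)\n", msg,
            err->request_code);
    abort();
    return 0; /* not reached */
  }

  /* after XOpenDisplay(): */
  static void make_x_errors_loud(Display *dpy)
  {
    XSetErrorHandler(die_on_x_error);
    XSynchronize(dpy, True); /* report errors where they happen */
  }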

Any other tips for debugging?  My next step[s] are to build private
debug versions of some X libraries and force loading them via
LD_LIBRARY_PATH or similar.

Thanks,

-tom