On 2011-04-09 23:13, Michael Walle wrote:
> There is a bug in nvidia's binary GPU driver, which causes a segmentation
> fault if linked to libGL.
>
> Signed-off-by: Michael Walle <mich...@walle.cc>
> ---
>  configure |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/configure b/configure
> index 2bb3faa..be40a31 100755
> --- a/configure
> +++ b/configure
> @@ -177,6 +177,7 @@ spice=""
>  rbd=""
>  smartcard=""
>  smartcard_nss=""
> +opengl="no"
>
>  # parse CC options first
>  for opt do
I stumbled over this issue as well, but was unhappy about simply disabling
OpenGL without understanding what goes wrong. Still, I can't provide a
better solution yet, just some maybe helpful insights:

...
00908000-0134f000 rw-p 00000000 00:00 0          [heap]
41944000-43944000 rwxp 00000000 00:00 0
...

That's an extract of /proc/$QEMU_PID/maps with -lGL, and below without it:

00908000-010bd000 rw-p 00000000 00:00 0
010bd000-010be000 rwxp 00000000 00:00 0
010be000-01335000 rw-p 00000000 00:00 0          [heap]
40514000-42514000 rwxp 00000000 00:00 0

IOW, we are lacking the executable code_gen_prologue page.

This problem behaves like a heisenbug, i.e. it disappears here when I run
qemu in gdb or under strace. But ftrace confirms that the qemu process
issues mprotect to make the whole heap non-executable after setting up
that TCG buffer - it just doesn't tell me which part of qemu is
responsible for this.

Note that I'm still forced to use this wonderful binary nvidia stuff.

Anyone any ideas?

Jan