Xglx and hardware renderers

2005-02-22 Thread Alexander E. Patrakov
I played a bit with Xglx as of Sunday, Feb 20, 2005, and various OpenGL
implementations available for Xorg 6.8.1 and an r200 graphics card (PCI ID
1002:5961, rev 01). Some remarks follow.

1) On Mesa and drm CVS as of Sunday, Feb 20, 2005, there are several
artifacts. For reference, I have put a screenshot of Kedit window with
Plastic theme at the following URL:

http://ums.usu.ru/~patrakov/screenshot4.png

As you can see on that screenshot, the vertical scroll bar contains many
black horizontal stripes where it should be solid. The tops of some letters
in the menu are cut off. There is also a strange black horizontal line at
the top of the window decoration and at the top of the horizontal scrollbar.

Strangely, running xcompmgr inside Xglx fixes most of those problems.

2) The same artifacts are present when using fglrx instead of the open-source
driver. Is my video card OK, or is it broken? Could anyone please compare
with non-ATI hardware renderers and post screenshots?

If such buggy hardware renderers exist in the wild and one can't declare
such video cards as broken (i.e. can't return them to the shop under
warranty), is the whole idea of hardware-accelerated Xglx viable?

3) I couldn't start Xglx at 1024x768 with Mesa as of Sunday, Feb 20, 2005
with LIBGL_ALWAYS_INDIRECT=1 in the environment. The error is:

X Error of failed request:  BadLength (poly request too large or internal
Xlib length error)
  Major opcode of failed request:  145 (GLX)
  Minor opcode of failed request:  1 (X_GLXRender)
  Serial number of failed request:  85
  Current serial number in output stream:  86

The largest possible screen size without this error is 64x64.

4) With libGL.so from fglrx, but LIBGL_ALWAYS_INDIRECT=1, there are no
artifacts.

-- 
Alexander E. Patrakov





Re: Xglx and hardware renderers

2005-02-22 Thread Ian Romanick
Alexander E. Patrakov wrote:
3) I couldn't start Xglx at 1024x768 with Mesa as of Sunday, Feb 20, 2005
with LIBGL_ALWAYS_INDIRECT=1 in the environment. The error is:
X Error of failed request:  BadLength (poly request too large or internal
Xlib length error)
  Major opcode of failed request:  145 (GLX)
  Minor opcode of failed request:  1 (X_GLXRender)
  Serial number of failed request:  85
  Current serial number in output stream:  86
It sounds like there may be a bug in the new GLX protocol code.  Can you 
generate a debug version of indirect.c and figure out which command 
generates the error?  To do this, you'll need to cd to src/mesa/glapi 
and run the following command.  After that, you'll have to rebuild Mesa 
with 'linux-dri' or 'linux-dri-x86' or some such.

python glX_proto_send.py -d -m proto > ../../glx/x11/indirect.c
You'll need to use LD_PRELOAD to force the use of that libGL.  With this 
debug libGL, it will log a message before every GL command and do a 
glFinish after.  The last command logged is likely the one with the bug.



Re: Solo Xgl..

2005-02-22 Thread Brian Paul
Adam Jackson wrote:
On Sunday 20 February 2005 13:20, Brian Paul wrote:
Adam Jackson wrote:
I'm working on this, actually.  Right now I'm doing it as an EGL-GLX
translation layer so we can get glitz retargeted at the EGL API.  Turning
that into a dispatch layer wouldn't be too tough, particularly since a
good bit of the engine is already written in miniglx.  I've nearly got it
to the point of being able to run eglinfo, but it seems to have uncovered
a bug or two in the fbconfig handling.
I actually started writing some EGL interface code a few months ago,
but haven't touched it since.  Give me a day or two to clean it up.
Then let's exchange code and see what we've got.

I pounded out most of the rest of the API compat today.  This is good enough 
to run eglinfo and return mostly correct answers (caveat is always slow for 
some reason), and of the 25ish egl* entrypoints only around three are still 
stubs.

Apply patch to a newish Mesa checkout, add egl.c to sources in 
src/glx/x11/Makefile, build libGL.
While you were working on a translation layer I was working on a 
general-purpose implementation of the EGL API.

I've put my sources at http://www.mesa3d.org/beta/egl/ for anyone who 
wants to check it out.

Just remember this is largely untested prototype code, there's lots of 
loose ends, everything is subject to change, etc, etc.

-Brian


Re: [Mesa3d-users] DIVPS ( XMM0, XMM1 ) SIGFPE

2005-02-22 Thread Brian Paul
Mathieu Malaterre wrote:
Hello,
I am using Mesa 6.2.1 and I am getting a SIGFPE:
[Thread debugging using libthread_db enabled]
[New Thread 1100658336 (LWP 13415)]
Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 1100658336 (LWP 13415)]
_mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193
193 DIVPS   ( XMM0, XMM1 )
Current language:  auto; currently asm
(gdb) up
#1  0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192
192   _mesa_test_os_sse_exception_support();
After some googling it seems this has already been reported:
http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg08493.html
Hmmm, I must have missed that posting to the dri-devel list.  I'll try 
out the patch and check it in if it seems OK.

-Brian


Re: [Mesa3d-users] DIVPS ( XMM0, XMM1 ) SIGFPE

2005-02-22 Thread Brian Paul
Brian Paul wrote:
Mathieu Malaterre wrote:
Hello,
I am using Mesa 6.2.1 and I am getting a SIGFPE:
[Thread debugging using libthread_db enabled]
[New Thread 1100658336 (LWP 13415)]
Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 1100658336 (LWP 13415)]
_mesa_test_os_sse_exception_support () at x86/common_x86_asm.S:193
193 DIVPS   ( XMM0, XMM1 )
Current language:  auto; currently asm
(gdb) up
#1  0x4017f7e4 in check_os_sse_support () at x86/common_x86.c:192
192   _mesa_test_os_sse_exception_support();
After some googling it seems this has already been reported:
http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg08493.html

Hmmm, I must have missed that posting to the dri-devel list.  I'll try 
out the patch and check it in if it seems OK.

I misread the date on that posting.  That was two years ago.  The code 
in common_x86_asm.S has changed since then.  So I don't think the 
patch is relevant anymore.

However, the problem you report isn't really an issue.  When gdb stops 
upon the exception, just type 'continue'.

There's a big comment about this in the current file.
-Brian


[Bug 2593] New: gl-117 causes gpu lockups

2005-02-22 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=2593
   Summary: gl-117 causes gpu lockups
   Product: Mesa
   Version: CVS
  Platform: PC
OS/Version: Linux
Status: NEW
  Severity: major
  Priority: P2
 Component: Drivers/DRI/r200
AssignedTo: dri-devel@lists.sourceforge.net
ReportedBy: [EMAIL PROTECTED]


gl-117 (1.3.1) causes gpu lockups on my rv250 with Mesa CVS head dri driver.
Happens both with hw and sw tcl. The game starts, and the intro plays fine.
Moving the cursor around causes an instant gpu lockup as soon as the cursor hits
a menu entry (i.e. the menu entry color would change from blue to yellow,
sometimes the menu entry changes the color before the lockup, sometimes not).   
   
 
 


Re: 32/64-bit ioctl compatibility

2005-02-22 Thread Shrijeet Mukherjee

  Changing the drm and Mesa at once incompatibly isn't going to get past me,
  and I haven't proven that Egbert's patch isn't backwards compat, but nobody
  has proven to me that it doesn't break anything, and as I have no access
  to any 64-bit hardware it is up to other people to convince me ...
 
 The hardest bit that I have seen so far is dealing with the offset and
 handle fields in the drm_map_t.  I'll push on it a bit further then,
 if no-one else is hacking on this.

This is a topic that we could use some clarification on: is there a
suggested use for offset and handle? As you can imagine, we (SGI) have to
deal with this a fair bit on our Itanium systems.

My interpretation is that handle is a unique opaque identifier for a drm
resource on the board (which needs to match the native word size of the
platform you are running on, or uniqueness is hard to guarantee).

OTOH, offset is a value that the board manager (normally the 2D driver in
the current X setup) uses to set up and access said resources. This needs
to match the bus width of the card.

If this is correct, it brings up another interesting issue: how do we pass
64-bit OS offsets to the mmap calls when the actual offset (for all current
gfx cards I know of) is a 32-bit entity? Which brings up the reverse option
to what was suggested: should we just pass the handle instead of the offset,
since that already is the kernel virtual address that Linux wants? (This
should work on Linux platforms at least.)
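
As an aside, a minimal compile-and-run sketch of why offset and handle are
exactly the painful fields for 32-bit clients on a 64-bit kernel; the struct
names are stand-ins and the layout only loosely mirrors the drm_map_t of
that era:

#include <stdint.h>
#include <stdio.h>

struct drm_map_native {      /* as a 64-bit (LP64) kernel sees the ioctl arg */
    unsigned long offset;    /* bus/card address of the resource - 8 bytes   */
    unsigned long size;
    int           type;
    int           flags;
    void         *handle;    /* opaque id / kernel-virtual address - 8 bytes */
    int           mtrr;
};

struct drm_map_compat32 {    /* what an ILP32 client actually hands in       */
    uint32_t offset;         /* unsigned long is 4 bytes there               */
    uint32_t size;
    int32_t  type;
    int32_t  flags;
    uint32_t handle;         /* void * is 4 bytes there                      */
    int32_t  mtrr;
};

int main(void)
{
    /* Different payload sizes mean the same map-add request decodes to
     * different ioctl numbers for 32-bit and 64-bit callers. */
    printf("64-bit kernel view: %zu bytes, 32-bit client view: %zu bytes\n",
           sizeof(struct drm_map_native), sizeof(struct drm_map_compat32));
    return 0;
}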







Re: Solo Xgl..

2005-02-22 Thread Adam Jackson
On Tuesday 22 February 2005 11:48, Brian Paul wrote:
 Adam Jackson wrote:
  I pounded out most of the rest of the API compat today.  This is good
  enough to run eglinfo and return mostly correct answers (caveat is always
  slow for some reason), and of the 25ish egl* entrypoints only around
  three are still stubs.
 
  Apply patch to a newish Mesa checkout, add egl.c to sources in
  src/glx/x11/Makefile, build libGL.

 While you were working on a translation layer I was working on a
 general-purpose implementation of the EGL API.

Excellent!  I was hoping our work wouldn't overlap.

I should probably describe where I see this going.  All the egl* entrypoints 
would call through a dispatch table (think glapi.c) that determines whether 
to use the GLX translation or the native engine.  The native engine would 
fill the role that miniglx currently holds.
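
A minimal sketch of that dispatch idea (the types are local stand-ins for the
real EGL typedefs, and the _EGLDriver/_eglLookupDriver names are invented for
illustration, not taken from any patch):

typedef void *EGLDisplay;
typedef void *EGLSurface;
typedef int   EGLBoolean;
typedef int   EGLint;

typedef struct _egl_driver {
    EGLBoolean (*Initialize)(EGLDisplay dpy, EGLint *major, EGLint *minor);
    EGLBoolean (*SwapBuffers)(EGLDisplay dpy, EGLSurface surf);
    /* ...one slot per egl* entry point... */
} _EGLDriver;

/* Filled in when the display is opened: either the GLX-translation backend
 * or the native (miniglx-replacement) backend provides the table. */
extern _EGLDriver *_eglLookupDriver(EGLDisplay dpy);

EGLBoolean eglSwapBuffers(EGLDisplay dpy, EGLSurface surf)
{
    return _eglLookupDriver(dpy)->SwapBuffers(dpy, surf);
}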

In practical terms, what this means is:

$ Xegl -drm /dev/dri/card0 :0   # starts a server on the first video card
$ DISPLAY=:0 Xegl :1# runs a nested Xgl server under :0

would work the way you expect.  (Obviously I'm handwaving away the fact that 
the Xgl server doesn't support the GLX extension yet, and that there's no EGL 
backend for glitz yet.  The latter was actually my motivation for doing the 
GLX translation, so we could have glitz ported before attempting to bring it 
up native.)

So.  Naive EGL applications would Just Work, whether or not there's a display 
server already running.  The EGL dispatch layer would be responsible for 
checking some magic bit of per-card state that says whether there's currently 
a live display server on the device, and route the EGL API accordingly.

This magic bit of per-card state would be exposed by some new EGL extension, 
call it EGL_display_server.  Non-naive applications like EGL, in the presence 
of this extension, will register themselves as display servers for the given 
device(s?) when they start up.  This bit of state then gets handed down to 
the DRM layer (or its moral equivalent for non-DRI drivers).  (Plenty of 
other magic can happen here, for example releasing this display server lock 
on VT switch.) [1]

After which, the only hard part (sigh) is setting video modes.  This may want 
to be an EGL extension as well, and would have some precedent (eg 
GLX_MESA_set_3dfx_mode).  Of course we can implement this any which way we 
like, it's just that exposing the API through EGL makes it easier for apps to 
do this both across vendors and across platforms.
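
To make that concrete, some purely hypothetical signatures for such a
mode-setting extension - none of these names exist in any EGL spec, they just
illustrate the sort of API being suggested:

typedef void *EGLDisplay;
typedef int   EGLint;
typedef int   EGLBoolean;

typedef struct {
    EGLint width;
    EGLint height;
    EGLint refresh;              /* Hz */
} EGLModeInfoHYPO;               /* hypothetical */

/* hypothetical entry points, in the spirit of GLX_MESA_set_3dfx_mode */
EGLBoolean eglGetModesHYPO(EGLDisplay dpy, EGLint screen,
                           EGLModeInfoHYPO *modes, EGLint max, EGLint *count);
EGLBoolean eglSetModeHYPO(EGLDisplay dpy, EGLint screen,
                          const EGLModeInfoHYPO *mode);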

Hopefully this doesn't sound completely insane.  Comments?

- ajax

1 - One question at this point would be why not make the first EGL app to 
start on a device always take the lock?  I could envision (probably embedded) 
environments that want, essentially, cooperative windowing, where (for 
example) each window maps to a hardware quad, textured through a pbuffer or 
fbo, and the Z buffer is used to implement stacking order, with some message 
passing between display apps so they don't fight.  This is certainly not a 
use case I care about, but other people might...




Re: Solo Xgl..

2005-02-22 Thread Brian Paul
Adam Jackson wrote:
On Tuesday 22 February 2005 11:48, Brian Paul wrote:
Adam Jackson wrote:
I pounded out most of the rest of the API compat today.  This is good
enough to run eglinfo and return mostly correct answers (caveat is always
slow for some reason), and of the 25ish egl* entrypoints only around
three are still stubs.
Apply patch to a newish Mesa checkout, add egl.c to sources in
src/glx/x11/Makefile, build libGL.
While you were working on a translation layer I was working on a
general-purpose implementation of the EGL API.

Excellent!  I was hoping our work wouldn't overlap.
I should probably describe where I see this going.  All the egl* entrypoints 
would call through a dispatch table (think glapi.c) that determines whether 
to use the GLX translation or the native engine.  The native engine would 
fill the role that miniglx currently holds.
My code already does that.  The EGL-miniglx translation would just be 
another driver.  I always thought it would be nice if the indirect 
rendering code for GLX were just another loadable driver.  The EGL 
code would support that idea.


In practical terms, what this means is:
$ Xegl -drm /dev/dri/card0 :0   # starts a server on the first video card
$ DISPLAY=:0 Xegl :1# runs a nested Xgl server under :0
would work the way you expect.  (Obviously I'm handwaving away the fact that 
the Xgl server doesn't support the GLX extension yet, and that there's no EGL 
backend for glitz yet.  The latter was actually my motivation for doing the 
GLX translation, so we could have glitz ported before attempting to bring it 
up native.)

So.  Naive EGL applications would Just Work, whether or not there's a display 
server already running.  The EGL dispatch layer would be responsible for 
checking some magic bit of per-card state that says whether there's currently 
a live display server on the device, and route the EGL API accordingly.
Right.
My code right now does something clunky: the parameter passed to
eglGetDisplay() is interpreted as a string, rather than a Display *.
The value of the string determines which driver to load, either by
name or by a screen number like ":0".  If the code determines that the
value isn't a string, it is treated as a real X Display *.  Thereafter,
each EGLDisplay handle is associated with a particular driver
instance.  This is experimental.
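
A rough illustration of that clunky heuristic (the helper functions are
invented; the real prototype code certainly differs in detail):

#include <ctype.h>

typedef void *EGLDisplay;

extern EGLDisplay _eglOpenDriverByName(const char *name); /* hypothetical */
extern EGLDisplay _eglWrapX11Display(void *xdpy);          /* hypothetical */

EGLDisplay eglGetDisplay(void *native)
{
    const char *s = (const char *) native;

    /* Treat the argument as a string first: a screen spec like ":0" or a
     * printable driver name selects a loadable driver... */
    if (s && (s[0] == ':' || isalpha((unsigned char) s[0])))
        return _eglOpenDriverByName(s);

    /* ...anything that doesn't look like a string is assumed to be a real
     * X Display pointer.  (Peeking at the pointee is exactly why this is
     * clunky and experimental.) */
    return _eglWrapX11Display(native);
}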


This magic bit of per-card state would be exposed by some new EGL extension, 
call it EGL_display_server.  Non-naive applications like EGL, in the presence 
of this extension, will register themselves as display servers for the given 
device(s?) when they start up.  This bit of state then gets handed down to 
the DRM layer (or its moral equivalent for non-DRI drivers).  (Plenty of 
other magic can happen here, for example releasing this display server lock 
on VT switch.) [1]

After which, the only hard part (sigh) is setting video modes.  This may want 
to be an EGL extension as well, and would have some precedent (eg 
GLX_MESA_set_3dfx_mode).  Of course we can implement this any which way we 
like, it's just that exposing the API through EGL makes it easier for apps to 
do this both across vendors and across platforms.
The eglscreen.c file has some ideas for a few functions for setting 
screen size/refresh/etc.  This is totally experimental too.


Hopefully this doesn't sound completely insane.  Comments?
- ajax
1 - One question at this point would be why not make the first EGL app to 
start on a device always take the lock?  I could envision (probably embedded) 
environments that want, essentially, cooperative windowing, where (for 
example) each window maps to a hardware quad, textured through a pbuffer or 
fbo, and the Z buffer is used to implement stacking order, with some message 
passing between display apps so they don't fight.  This is certainly not a 
use case I care about, but other people might...
Yeah, if you think about things for a while you eventually find that 
the EGL API/interface might be used at two different levels: below the 
X server and as a user-accessible API.

-Brian


Re: Xglx and hardware renderers

2005-02-22 Thread Alexander E. Patrakov
Ian Romanick wrote:

 Alexander E. Patrakov wrote:
 X Error of failed request:  BadLength (poly request too large or internal
 Xlib length error)
snip
 run the following command.  After that, you'll have to rebuild Mesa
 with 'linux-dri' or 'linux-dri-x86' or some such.
 
 python glX_proto_send.py -d -m proto > ../../glx/x11/indirect.c
 
Will do that tomorrow.

-- 
Alexander E. Patrakov





Re: [Linux-fbdev-devel] Resource management.

2005-02-22 Thread James Simmons

 As far as I know none of the significant contributors on either fbdev
 or DRM are being paid to work on the project.

So I have noticed. There is much to do but no real manpower. We are
talking about this merging, but at our rate it will take 5 years to happen.
We don't have the manpower to do this. So I'm not going to bother
merging. It's all pipe dreams here.





Re: [Linux-fbdev-devel] Resource management.

2005-02-22 Thread Alex Deucher
On Tue, 22 Feb 2005 17:23:03 + (GMT), James Simmons
[EMAIL PROTECTED] wrote:
 
  As far as I know none of the significant contributors on either fbdev
  or DRM are being paid to work on the project.
 
 So I have noticed. There is much to do but no real manpower. We are
 talking about this merging, but at our rate it will take 5 years to happen.
 We don't have the manpower to do this. So I'm not going to bother
 merging. It's all pipe dreams here.
 
 

with that attitude it's never gonna happen.  I work almost exclusively
on X, but once we get at least one sample driver done (probably
radeon), I would be more than happy to devote my limited development
resources to the new drm/fb super driver.  Right now the kernel FB
drivers have no benefit for me, so I don't use/develop them.  The drm
just works, and I'm more interested in the crtc/modes/outputs handling
than the command processor control stuff.  I think a lot of X
developers (and probably IHVs) will get on board when this happens.
X is undermanned as well, but we've managed to do a pretty good job of
supporting a lot of features on a fair number of cards.

Alex




Re: POSTing of video cards (WAS: Solo Xgl..)

2005-02-22 Thread Linus Torvalds


On Mon, 21 Feb 2005, Jon Smirl wrote:

 I was working on the assumption that all PCI based, VGA class hardware
 that is not the boot device needs to be posted.

I don't think that's true. We certainly don't _want_ it to be true in the 
long run - and even now there are cards that we can initialize fully 
without using the BIOS at all.

 And that the posting should occur before the drivers are
 loaded.

Personally, I'd much rather let the driver be involved in the decision.

That may mean that the probe routine knows how to initialize the card, but
it may mean that it does an exec_usermodehelper() kind of thing.  
Actually, I'd prefer it if this was largely up to udev: if the driver
notices that it can't initialize the card, why not just enumerate it
enough that udev knows about it (that's pretty much automatic), and let
the driver just ignore the card until some (possibly much later) date when
the user level scripts have found it and initialized it.

That would imply that the driver has some re-attach entrypoint (which
might be an ioctl, but might also be just a /sysfs file access), which is
user-land's way of saying "try again - I've now initialized the
hardware".
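
The user-space half of that could be as small as this (purely illustrative;
the sysfs path and attribute name are invented - no such file actually
exists):

#include <stdio.h>

int main(void)
{
    /* hypothetical per-device attribute the driver would export */
    FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/reattach", "w");
    if (!f) {
        perror("open reattach attribute");
        return 1;
    }
    fputs("1\n", f);     /* "try again - I've now initialized the hardware" */
    fclose(f);
    return 0;
}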

The advantage of that kind of disconnected initialization is that you
don't _need_ to have the card initialization in initramfs or other very
early boot sequence. It gets _detected_ early on, but you can then delay
initializing it arbitrarily long, and it obviously won't be usable until 
that point (but who cares? The ones that do care can put the things in 
their initramfs, others may decide to do it only once the system is 
up-and-running and /usr has been NFS-mounted).

Linus




Re: Solo Xgl..

2005-02-22 Thread Jon Smirl
On Tue, 22 Feb 2005 11:16:47 -0700, Brian Paul
[EMAIL PROTECTED] wrote:

Are you aware of this?

http://sourceforge.net/projects/dogless
Can also wrap on the standard OpenGL impl with ES simulation.

-- 
Jon Smirl
[EMAIL PROTECTED]




Re: Solo Xgl..

2005-02-22 Thread Jon Smirl
Is this useful?

http://studierstube.org/klimt/index.php

-- 
Jon Smirl
[EMAIL PROTECTED]




Re: Solo Xgl..

2005-02-22 Thread Adam Jackson
On Tuesday 22 February 2005 15:08, Jon Smirl wrote:
 On Tue, 22 Feb 2005 11:16:47 -0700, Brian Paul
 [EMAIL PROTECTED] wrote:

 Are you aware of this?

 http://sourceforge.net/projects/dogless
 Can also wrap on the standard OpenGL impl with ES simulation.

Both dogless and klimt are GPL-licensed so they're not really suitable.

- ajax




Re: ports/76257: nvidia_driver breaks xorg-clients build

2005-02-22 Thread Felix Kühling
Am Dienstag, den 22.02.2005, 08:15 -0800 schrieb Eric Anholt:
 On Tue, 2005-02-22 at 00:08 -0500, Mikhail Teterin wrote:
   Could you try if the attached patch against xdriinfo.c works with
   NVidia's GLX? If it does, then I'll commit this to Xorg CVS.
  
  Using glXGetProcAddressARB instead of glXGetProcAddress allows the
  utility to build, but it still does not work -- the calls return NULL at
  run-time.
  
  If your program only works with Xorg's GLX, then, indeed, the error
  message may need to become more informative.
  
  If, on the other hand, using the glXGetProcAddressARB works when linked
  with Xorg's GLX -- why bother with the #ifdefs at all? Just use the old
  call for the time being. :-)
 
 As mentioned earlier in the PR, changing to glXGetProcAddressARB results
 in xdriinfo segfaulting, at least for me on amd64.

Eric, could you find out where it's segfaulting?


-- 
| Felix Kühling [EMAIL PROTECTED] http://fxk.de.vu |
| PGP Fingerprint: 6A3C 9566 5B30 DDED 73C3  B152 151C 5CC1 D888 E595 |
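
For what it's worth, a minimal stand-alone test along the lines of what
xdriinfo does at that point may help narrow it down (a sketch, not xdriinfo's
actual code; build with roughly: cc test.c -lGL -lX11):

#define GLX_GLXEXT_PROTOTYPES 1
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

typedef const char *(*GetScreenDriverProc)(Display *dpy, int scrNum);

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* Look up the DRI-specific glXGetScreenDriver via glXGetProcAddressARB. */
    GetScreenDriverProc getDriver = (GetScreenDriverProc)
        glXGetProcAddressARB((const GLubyte *) "glXGetScreenDriver");

    if (!getDriver) {
        /* NVIDIA's libGL reportedly returns NULL here. */
        fprintf(stderr, "glXGetScreenDriver not supported\n");
        return 1;
    }
    printf("screen 0 DRI driver: %s\n", getDriver(dpy, 0));
    return 0;
}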





Re: Solo Xgl..

2005-02-22 Thread Jon Smirl
On Tue, 22 Feb 2005 15:15:37 -0500, Adam Jackson [EMAIL PROTECTED] wrote:
 On Tuesday 22 February 2005 15:08, Jon Smirl wrote:
  On Tue, 22 Feb 2005 11:16:47 -0700, Brian Paul
  [EMAIL PROTECTED] wrote:
 
  Are you aware of this?
 
  http://sourceforge.net/projects/dogless
  Can also wrap on the standard OpenGL impl with ES simulation.
 
 Both dogless and klimt are GPL-licensed so they're not really suitable.

Are they useful enough to ask for a license change?

GPL doesn't stop you from using them as a reference, you just can't
copy from them directly.

 
 - ajax
 
 
 


-- 
Jon Smirl
[EMAIL PROTECTED]




Re: Solo Xgl..

2005-02-22 Thread Jon Smirl
klimt says they are down to 150K now. I can try and get a license
change if we are interested.

-- 
Jon Smirl
[EMAIL PROTECTED]




Re: Solo Xgl..

2005-02-22 Thread Adam Jackson
On Tuesday 22 February 2005 15:20, Jon Smirl wrote:
 On Tue, 22 Feb 2005 15:15:37 -0500, Adam Jackson [EMAIL PROTECTED] wrote:
  On Tuesday 22 February 2005 15:08, Jon Smirl wrote:
   On Tue, 22 Feb 2005 11:16:47 -0700, Brian Paul
   [EMAIL PROTECTED] wrote:
  
   Are you aware of this?
  
   http://sourceforge.net/projects/dogless
   Can also wrap on the standard OpenGL impl with ES simulation.
 
  Both dogless and klimt are GPL-licensed so they're not really suitable.

 Are they useful enough to ask for a license change?

I don't think so.  Really all we're aiming for here, now, is the EGL API, and 
Brian and I have both pretty much done that already (though in different 
directions).  We already have a world-class GL engine in Mesa.

Klimt simply doesn't aim as high.  There are several features missing from 
Klimt that would be total showstoppers for what we're trying to use GL for - 
alpha buffer, glReadPixels, multitexturing, stencil tests, clip planes...  So 
while it provides something resembling the EGL API it lacks features we want.

Dogless is win32 only, it seems.  And it's actually an inversion of the model 
we're thinking about.  Dogless appears to translate WGL to EGL, so you can 
have some tiny EGL stack and then run Quake on it (where presumably this 
stack mirrors the stack you're going to put on your embedded device).  So it 
doesn't even provide the EGL API to begin with.

There's in my mind three pieces of the OpenGL|ES stack here:

- EGL, the API that binds you to your native windowing system (or in our case,
  provides it)
- The GL engine
- The various OES_* extensions

#1 turns out to be pretty easy to add.  #2 we already have.  #3 isn't really 
interesting for desktop hardware but is also pretty easy to add should 
someone want to invest the effort (most of them are just adding support for 
various small data types, which we can upconvert transparently before handing 
to the existing Mesa engine).
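
As a sketch of that upconversion idea (GLfixed here is a local stand-in for
the OpenGL ES 16.16 fixed-point type; the helper names are invented):

#include <stddef.h>

typedef int   GLfixed;   /* signed 16.16 fixed point, as in OpenGL ES */
typedef float GLfloat;

static GLfloat fixed_to_float(GLfixed x)
{
    return (GLfloat) x / 65536.0f;
}

/* Widen e.g. a GL_FIXED vertex array to floats before handing the data to
 * the existing float-based Mesa paths. */
void upconvert_fixed_array(const GLfixed *in, GLfloat *out, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = fixed_to_float(in[i]);
}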

So, since we pretty much have the API, and since the API is the new piece 
we're trying to leverage, I don't really see the point in chasing after 
license changes for software that isn't capable of what we want to use GL 
for.

- ajax




Re: POSTing of video cards (WAS: Solo Xgl..)

2005-02-22 Thread Linus Torvalds


On Tue, 22 Feb 2005, Dmitry Torokhov wrote:
 
 This sounds awfully like firmware loader that seems to be working just
 fine for a range of network cards and other devices.

Yes. HOWEVER - and note how firmware loading for this case is not validly
done at device discovery, but at ifconfig time.

I.e. device discovery (probing) is a _separate_ phase entirely, and happens
much earlier. We should initialize the hardware only when it actually gets
actively used in some way by user space.

Linus




Re: R300 lockups...

2005-02-22 Thread Adam K Kirchhoff
Vladimir Dergachev wrote:

On Mon, 21 Feb 2005, Adam K Kirchhoff wrote:
FYI, I've now tried neverputt in a window, instead of fullscreen, and
I'm getting the same lockups as I was previously getting (full
lockups, including the mouse, requiring me to ssh in and reboot). It
finally occurred to me to check dmesg:

[drm:radeon_cp_dispatch_swap] *ERROR* Engine timed out before swap 
buffer blit

Hmmm - can you try reducing the window size of neverputt ? And, 
perhaps, the refresh rate and resolution of your screen ?

  thank you !
 Vladimir Dergachev

No luck.  I set up my xorg.conf file to limit X to 640x480, and used
xrandr to drop the refresh rate to 60...  Launched neverputt at 640x480, 
fullscreen.  Lockup was nearly instantaneous...  The music continues, at 
least till neverputt dies, and the mouse moves around.  Rebooted and 
tried again...  Exact same result.  At least when I was running it at 
1024x768 on a mergedfb desktop of 2560x1024, I was able to play a hole 
or two of golf...

Two times now, I've tried running it at 640x480 on my large mergedfb 
desktop.  I get further than I did when the screen resolution was 
640x480, but not much.

I just tried two times now running it at 1280x1024 on my large mergedfb 
desktop, and it plays fine for a number of holes.  Usually locks up 
between holes.

My conclusion is that these lockups occur when the framerate is at its
highest (i.e. low resolution, low texture, low activity), which I believe
is a situation someone else described on here not too long ago.

Adam


Re: Solo Xgl..

2005-02-22 Thread Jon Smirl
On Tue, 22 Feb 2005 15:45:16 -0500, Adam Jackson [EMAIL PROTECTED] wrote:
 Dogless is win32 only, it seems.  And it's actually an inversion of the model
 we're thinking about.  Dogless appears to translate WGL to EGL, so you can
 have some tiny EGL stack and then run Quake on it (where presumably this
 stack mirrors the stack you're going to put on your embedded device).  So it
 doesn't even provide the EGL API to begin with.

The doc says it works both ways, EGL->WGL and WGL->EGL. The Khronos site
says to use it for developing EGL apps on the desktop.

 There's in my mind three pieces of the OpenGL|ES stack here:
 
 - EGL, the API that binds you to your native windowing system (or in our case,
   provides it)
 - The GL engine
 - The various OES_* extensions
 
 #1 turns out to be pretty easy to add.  #2 we already have.  #3 isn't really
 interesting for desktop hardware but is also pretty easy to add should
 someone want to invest the effort (most of them are just adding support for
 various small data types, which we can upconvert transparently before handing
 to the existing Mesa engine).
 
 So, since we pretty much have the API, and since the API is the new piece
 we're trying to leverage, I don't really see the point in chasing after
 license changes for software that isn't capable of what we want to use GL
 for.
 
 - ajax
 
 
 


-- 
Jon Smirl
[EMAIL PROTECTED]




Re: [r300] Radeon 9600se mostly working..

2005-02-22 Thread Nicolai Haehnle
On Monday 21 February 2005 17:40, John Clemens wrote:
  On Mon, 21 Feb 2005, John Clemens wrote:
 
  give it a go on my fanless 9600se (RV350 AP).
 
  How much memory do you have ? What kind of CPU and motherboard ?
 
 Duron 1.8G, 256MB ddr, old(ish) via km266 motherboard in a shuttle sk41g. 
 Gentoo.  The card has 128Mb ram.
 
  - glxinfo states r300 DRI is enabled. (AGP4x, NO-TCL)
  - glxgears gives me about 250fps with drm debug=1, ~625fps without debug on.
 
 should I be concerned that these fps are too low?  others seem to be 
 reporting around 1000..

Well, I'm not sure about the value with debug off, it does seem rather low, 
but perhaps reasonable if you are using immediate mode (which is still the 
default in CVS, I believe - check r300_run_render in r300_render.c).
Your debug FPS is rather high, actually - I only get around 50fps in
glxgears with DRM debugging enabled (even less if I also enable debug
messages from the userspace driver).

  - tuxracer runs ok at 640x480 fullscreen
   - ice textures look psychedelically blue
   - at 1280x1024 (and somewhat at 800x600 windowed), I get these errors:
  [drm:radeon_cp_dispatch_swap] *ERROR* Engine timed out before swap buffer blit
 
 ...
 
  The swap buffer blit is just a copy - for example a copy from back buffer
  to front buffer. Since the engine timed out before swap buffer blit it
  means that the commands before it were at fault. Which is puzzling as you
  point out that everything works in 640x480.
 
 Just to elaborate: 640x480 runs fine.  At 800x600 windowed, it plays
 fine, but if a scene gets more complicated I see some jerkiness, i.e.
 the scene freezes for a second or two and then jumps ahead, and I get a
 few messages in the log.  At 1280x1024, this happens all the time, so it
 appears the game is locked, and I get a stream of those messages in the
 log file.  Alt-F switching to the console works, and switching back I get
 about 2 seconds more of movement, and then a soft-lock again (presumably
 because the card re-inits on VC switch).  I can switch to the VC and kill
 it and all's fine.  Judging from what you're saying, the card isn't
 locked, it just isn't able to draw a full scene before it times out.

Well, this is certainly interesting, and it does sound like userspace is
generating so many drawing commands that the card is simply too slow to
process them all. My guess is that the one-two second freezes are caused by
the X server when it, too, thinks that the engine has timed out and
initiates a reset sequence.

This is actually an interesting problem. Here are some issues to think
about:
1) The SWAP ioctl should really report an error to userspace when the engine
has timed out.
2) I agree that it would make sense to monitor the ring buffer somehow.
Perhaps a wait_for_ringbuffer that is called at the top of wait_for_fifo?
In the fast path, this costs an additional I/O read operation; otherwise
it should essentially be no different performance-wise. (A rough sketch of
this idea follows the list.)
3) Come to think of it, couldn't the card just issue an IRQ when it's done?
4) If a drawing command takes very long, can we identify the userspace
process that is responsible for sending the command buffer that caused the
delay, and can we deal with this process somehow? Perhaps we could insert
an age marker before and after the processing in the command buffer ioctl.
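
A rough sketch of the wait_for_ringbuffer idea from point 2 (all names are
hypothetical stand-ins for the DRM internals, not existing radeon code):

#include <errno.h>

struct ring_state {
    unsigned int head;    /* CP read pointer, advanced by the card        */
    unsigned int tail;    /* driver write pointer                         */
    unsigned int mask;    /* ring size minus one (size is a power of two) */
};

extern unsigned int read_cp_head(void);  /* would be an MMIO register read    */
extern void delay_one_usec(void);        /* would be DRM_UDELAY(1) or similar */

int wait_for_ring_space(struct ring_state *ring, unsigned int needed,
                        unsigned int usec_timeout)
{
    unsigned int i;

    for (i = 0; i < usec_timeout; i++) {
        ring->head = read_cp_head();
        /* free entries between the write pointer and the read pointer */
        unsigned int space = (ring->head - ring->tail - 1) & ring->mask;
        if (space >= needed)
            return 0;             /* enough room for the next packet */
        delay_one_usec();
    }
    return -EBUSY;                /* engine probably hung: report, don't spin */
}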

The last point actually touches on a bigger subject: scheduling access to 
the graphics card. To get an idea of what I'm talking about, launch a 
terminal emulator and glxgears side by side. Then run yes in the terminal 
emulator. glxgears will essentially lock up.

cu,
Nicolai




Re: R300 lockups...

2005-02-22 Thread Nicolai Haehnle
On Tuesday 22 February 2005 21:57, Adam K Kirchhoff wrote:
 No luck.  I setup my xorg.conf file to limit X to 640x480, and used 
 xrandr to drop the refresh rate to 60...  Launched neverputt at 640x480, 
 fullscreen.  Lockup was nearly instantaneous...  The music continues, at 
 least till neverputt dies, and the mouse moves around.  Rebooted and 
 tried again...  Exact same result.  At least when I was running it at 
 1024x768 on a mergedfb desktop of 2560x1024, I was able to play a hole 
 or two of golf...
 
 Two times now, I've tried running it at 640x480 on my large mergedfb 
 desktop.  I get further than I did when the screen resolution was 
 640x480, but not much.
 
 I just tried two times now running it at 1280x1024 on my large mergedfb 
 desktop, and it plays fine for a number of holes.  Usually locks up 
 between holes.
 
 My conclusion is that these lockups occur when the framerate is at its
 highest (i.e. low resolution, low texture, low activity), which I believe
 is a situation someone else described on here not too long ago.

That was me, so I can confirm that, and it *is* different from the problem
reported by John Clemens in the other thread (the one called "[r300] Radeon
9600se mostly working").

Unfortunately, I won't have access to my test setup for the next few weeks,
so I don't have anything new.

cu,
Nicolai

 Adam




[Bug 2596] New: Disabling DRI. [drm] failed to load kernel module i915 (EE) I810(0): [dri] DRIScreenInit failed.

2005-02-22 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=2596
   Summary: Disabling DRI. [drm] failed to load kernel module i915
(EE) I810(0): [dri] DRIScreenInit failed.
   Product: DRI
   Version: XOrg CVS
  Platform: PC
OS/Version: FreeBSD
Status: NEW
  Severity: major
  Priority: P2
 Component: DRM modules
AssignedTo: dri-devel@lists.sourceforge.net
ReportedBy: [EMAIL PROTECTED]


Hello,
yesterday I saw that xorg 6.8.2 is out. I was happy because it is supposed to
fix a bug with i810 cards (see the release information), but it doesn't, and I
don't know why.
I got an unofficial patch for FreeBSD from the freebsd-x11 mailing list (see
http://lists.freebsd.org/pipermail/freebsd-x11/2005-February/001647.html) and
applied it, then ran portupgrade -aF xorg and portupgrade -aR xorg. The build
completed without errors, so the FreeBSD patch itself probably works.
BUT after compiling, I try to run it and still get the same errors as with
6.8.1:

(II) I810(0): VESA BIOS detected
(II) I810(0): VESA VBE Version 3.0
(II) I810(0): VESA VBE Total Mem: 16192 kB
(II) I810(0): VESA VBE OEM: Intel(r)852MG/852MGE/855MG/855MGE Graphics Chip
Accelerated VGA BIOS
(II) I810(0): VESA VBE OEM Software Rev: 1.0
(II) I810(0): VESA VBE OEM Vendor: Intel Corporation
(II) I810(0): VESA VBE OEM Product: Intel(r)852MG/852MGE/855MG/855MGE Graphics
Controller
(II) I810(0): VESA VBE OEM Product Rev: Hardware Version 0.0
(==) I810(0): Default visual is TrueColor
(II) I810(0): Allocated 128 kB for the ring buffer at 0x0
(II) I810(0): Allocating at least 768 scanlines for pixmap cache
(II) I810(0): Initial framebuffer allocation size: 3072 kByte
(II) I810(0): Allocated 4 kB for HW cursor at 0x7fff000 (0x03796000)
(WW) I810(0): xf86AllocateGARTMemory: allocation of 4 pages failed
(Cannot allocate memory)
(EE) I810(0): Failed to allocate HW (ARGB) cursor space.
(II) I810(0): Allocated 4 kB for Overlay registers at 0x7ffe000 (0x01697000).
(II) I810(0): Allocated 64 kB for the scratch buffer at 0x7fee000
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is -1, (No such file or directory)
drmOpenDevice: open result is -1, (No such file or directory)
drmOpenDevice: Open failed
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is -1, (No such file or directory)
drmOpenDevice: open result is -1, (No such file or directory)
drmOpenDevice: Open failed
[drm] failed to load kernel module i915
(II) I810(0): [drm] drmOpen failed
(EE) I810(0): [dri] DRIScreenInit failed. Disabling DRI.

I have a 5.3-RELEASE kernel, cvsup'ed and freshly portupgraded (2 or 3 days
ago). xorg-6.8.1 was broken in the same way before :-( In xorg.conf I load the
dri and glx modules plus a DRI section with mode 0666, and use the i810 driver
with VideoRam 32768...
I'm completely stuck with this problem. It is probably an xorg problem, but the
release notes said i810 should be OK in 6.8.2. Can somebody please help? I have
reported this problem to the freebsd-x11 mailing list too.
 
 