Re: xscreensaver freezes, reboot required

2005-09-29 Thread Roland Scheidegger

Jason Cook wrote:
I do apologize if I'm cross-posting here; I'm not sure which forum
would be best to address this. I have encountered an error when
running xscreensaver-demo as a normal user. In Slackware 10.2 the
daemon is not running by default and there is a popup window that
prompts to turn it on. If I click 'okay' then the screen freezes and I
must reboot. I can't SysRq and I don't have another box to try
logging into it and shutting it down. If I select 'cancel' and then
start the daemon from the menu, everything seems to work fine. No
lockups. I encounter a similar issue with a dialog box in the game
S.C.O.U.R.G.E.: Heroes of Lesser Renown. Upon starting the actual
game (once characters are created) I am presented with a dialog box.
The game runs behind this box and I can move the box around the
screen, but as soon as I close this box the game continues
running for maybe 1-2 seconds before the system locks up again. Time
to reboot.

Not sure, but it could be something which is fixed in xorg cvs (or mesa
cvs). For instance the rasterization fallback was known to cause lockups
(though I'm still not quite sure why - it should only have been a segfault -
maybe the lock never got released afterwards or something like that).



I've tried to debug the game with gdb but I get no output before the
 screen freezes.

Debugging gpu lockups without a 2nd box is a lost cause :-(.


P.S. By the way is it normal to see visual corruption in glxgears
when page flipping is turned on? Again I'll post some screenshots if
it will help.
No, that's not normal (unless you switch to another VT and back; there's
a bug with that). If the corruption looks like parts are missing (i.e.
black areas) then that would probably mean stuff is drawn to the wrong
(i.e. front) buffer. Not sure why that would happen; again, it could be
something which is already fixed, but I've been using pageflip for ages and
can't remember it ever just not working at all in a released version
(that said, I rarely use released versions...).


Roland




DRM problem on r300/ppc

2005-09-29 Thread Dario Laera

Hi all,
for a few weeks now I've no longer been able to load DRI; this is the error:

(II) RADEON(0): [agp] 8192 kB allocated with handle 0x0001
(EE) RADEON(0): [agp] Could not add ring mapping
(EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
(II) RADEON(0): [agp] You may want to make sure the agpgart kernel module
is loaded before the radeon kernel module.

I get this error with linux-2.6.13 and 2.6.9, both with the agpgart and
uninorth drivers built in. The drm module is from cvs, and I've tried
with xorg-6.8.99.15 and cvs. I'm running on a powerbook5,2 with an r300
card. You can find some logs and xorg.conf on 
http://laera.web.cs.unibo.it/drm/ .

I'm sure that's my fault, but I can't figure out the error.

Thanx in advance,
Dario.

--
Laera Dario
Undergraduate student at Computer Science
University of Bologna
ICQ# 203250303 /==/ http://laera.web.cs.unibo.it
Mail to: laera_at_cs.unibo.it  dario_at_astec.ms




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Alan Cox
On Iau, 2005-09-29 at 09:49 +0200, Christoph Hellwig wrote:
 On Wed, Sep 28, 2005 at 04:07:56PM -0700, Andy Ritger wrote:
  Some of the topics raised include:
  
  - minimum OpenGL version required by libGL
  - SONAME change to libGL
  - libGL installation path
 
 I think the single most important point is to explicitly disallow
 vendor-supplied libGL binaries in the LSB.  Every other LSB component
 relies on a single backing implementation for a reason, and in practice

That is not actually true. It defines a set of API and ABI behaviours
which are generally based on a single existing common implementation.

 the Nvidia libGL just causes endless pain where people accidentally
 link against it.  The DRI libGL should be declared the one and official
 one, and people who need extended features over it that aren't in the
 driver-specific backend will need to contribute them back.

If the LSB standard deals with libGL API/ABI interfaces then any
application using other interfaces/feature set items would not be LSB
compliant. Educating users to link with the base libGL is an education
problem not directly inside the LSB remit beyond the LSB test tools.

In addition, the way GL extensions work means it's fairly sane for an
application to ask for extensions and continue using different
approaches if they are not available. In fact this is done anyway for
hardware reasons. There is a lack of an "is XYZ accelerated" query as an
API, but that is an upstream flaw.
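
To make that concrete, the usual pattern is roughly the following C sketch
(illustrative only, not from this mail: the extension name and helper are
just examples, and a current GL context is assumed):

    #include <GL/gl.h>
    #include <string.h>

    /* Return non-zero if 'name' appears as a whole token in the
     * GL_EXTENSIONS string.  Requires a current GL context. */
    static int has_extension(const char *name)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        const char *p;
        size_t len = strlen(name);

        if (ext == NULL)
            return 0;
        for (p = ext; (p = strstr(p, name)) != NULL; p += len) {
            if ((p == ext || p[-1] == ' ') &&
                (p[len] == ' ' || p[len] == '\0'))
                return 1;
        }
        return 0;
    }

    void choose_reflection_path(void)
    {
        if (has_extension("GL_ARB_texture_cube_map")) {
            /* use the cube-map code path */
        } else {
            /* fall back to sphere mapping, or skip the effect */
        }
    }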

Alan





Re: Linux OpenGL ABI discussion

2005-09-29 Thread Nicolai Haehnle
On Thursday 29 September 2005 18:30, Alan Cox wrote:
 On Iau, 2005-09-29 at 09:49 +0200, Christoph Hellwig wrote:
  On Wed, Sep 28, 2005 at 04:07:56PM -0700, Andy Ritger wrote:
   Some of the topics raised include:
   
   - minimum OpenGL version required by libGL
   - SONAME change to libGL
   - libGL installation path
  
  I think the single most important point is to explicitly disallow
  vendor-supplied libGL binaries in the LSB.  Every other LSB component
  relies on a single backing implementation for a reason, and in practice
 
 That is not actually true. It defines a set of API and ABI behaviours
 which are generally based on a single existing common implementation.
 
  the Nvidia libGL just causes endless pain where people accidentally
  link against it.  The DRI libGL should be declared the one and official
  one, and people who need extended features over it that aren't in the
  driver-specific backend will need to contribute them back.
 
 If the LSB standard deals with libGL API/ABI interfaces then any
 application using other interfaces/feature set items would not be LSB
 compliant. Educating users to link with the base libGL is an education
 problem not directly inside the LSB remit beyond the LSB test tools.
 
 In addition, the way GL extensions work means it's fairly sane for an
 application to ask for extensions and continue using different
 approaches if they are not available. In fact this is done anyway for
 hardware reasons. There is a lack of an "is XYZ accelerated" query as an
 API, but that is an upstream flaw.

The real issue with an IHV-supplied libGL.so is mixing vendors' graphics 
cards. As an OpenGL user (i.e. a developer of applications that link 
against libGL), I regularly switch graphics cards around to make sure 
things work with all the relevant major vendors. Having a vendor-supplied 
libGL.so makes this unnecessarily difficult on the software side (add to 
that the custom-installed header files that have ever so slightly different 
semantics, and there is a whole lot of fun to be had).

Not to mention the use case with two graphics cards installed at the same 
time, from different vendors. While the above problem is annoying but 
acceptable, there's simply no reasonable way to use two graphics cards from 
vendors that insist on their custom libGL.so. Having to hack around with 
LD_LIBRARY_PATH and the likes is ridiculous.

I'm not too familiar with the exact details of the DRI client-server
protocol, so it may be necessary to turn the libGL.so into even more
of a skeleton, and reduce the basic DRI protocol to a simple "tell me the
client-side driver name", so that IHVs can combine (for example) custom GLX
extensions with direct rendering.

cu,
Nicolai




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Adam Jackson
On Thursday 29 September 2005 04:35, Dave Airlie wrote:
 I have to agree with Christoph, the libGL should be a
 one-size-fits-all and capable of loading drivers from any vendor. I'm
 not sure what is so hard about this, apart from the fact that neither
 vendor has seemed willing to help out with infrastructure, on the basis of
 some belief that they shouldn't have to (maybe because they don't on
 Windows) or maybe because they don't want to be seen to collaborate on
 things. There are hardly any major secrets in the libGL interface
 that should stop it...

There is exactly one secret: how to go from a GL entrypoint to the driver
dispatch table as fast as possible while still being thread-correct, etc.
However, this can be read right out of the compiled object with any reasonable
disassembler, so it's not much of a secret.
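
For illustration only (this is not the actual libGL source; the struct and
key names are invented), the entrypoint-to-dispatch step boils down to a
per-thread indirection like:

    #include <pthread.h>

    /* One slot per GL function; the loaded driver fills these in. */
    struct gl_dispatch {
        void (*Begin)(unsigned mode);
        void (*End)(void);
        /* ... */
    };

    /* Key creation and per-thread table setup omitted for brevity. */
    static pthread_key_t dispatch_key;

    /* Each public entrypoint is a stub: look up the calling thread's
     * dispatch table and jump through the fixed slot for that function.
     * Real implementations typically use assembly stubs and TLS tricks
     * to keep the per-call cost down, which is the "secret" in question. */
    void glEnd(void)
    {
        struct gl_dispatch *d = pthread_getspecific(dispatch_key);
        d->End();
    }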

 As far as I know idr did a lot of work recently on libGL so we can
 expose GL extensions for vendors like ATI without them having to ship
 their own driver (I'm not sure if ATI contributed anything more than a
 list of things needed).. I think he mentioned this was a bit more
 difficult for GLX.. but I'm sure it should be possible...

We already had this thread:

http://lists.freedesktop.org/archives/dri-egl/2005-July/000565.html

In particular, Andy's response about why they're uninterested in a common 
libGL is basically The Last Word on the subject.  It would require that 
nvidia expend time, effort, and money to get to the same level of 
functionality they already have.  This applies equally to any other IHV, and 
to ISVs like XiG and SciTech too for that matter.  You can have whatever 
opinion you like about that stance, but it's simply an economic reality.

It's also irrelevant.  libGL simply needs to provide ABI guarantees.  
Specifying driver compatibility is outside the scope of the LSB.

I would make the case that the soname version number for a libGL that supports OpenGL 2.0
should start with 1.  DSO version numbers are for ABI changes, and OpenGL 2.0 
is simply not backwards-incompatible with OpenGL 1.5 for the set of 
entrypoints they share.  It's not like 2.0 changes the prototype for glEnd() 
or anything.  So, 1.6.  Or 1.10 or whatever, if we really think that people 
want to do more GL 1.x versions.

I would also make the case that the LSB should in no case require an 
implementation to have features unavailable in open source.  In particular, 
requiring GL 2.0 would be broken.  Remember what the L stands for here.

The deeper issue here is whether it's actually useful to require some minimum 
level of functionality even when large swaths of it will be software.  If I 
don't have cube map support in hardware, do I really want to try it in 
software?  Is that a useful experience for developers or for users?

Perhaps what I would like is a new set of glGetString tokens that describe 
what version and extensions the hardware is actually capable of accelerating, 
rather than what the software supports.  Because in some sense, advertising 
GL 2.0 on a Riva is so inaccurate as to be worse than lying.

 This is, as far as I know, how MS's OpenGL ICD system works: there is
 one frontend and your driver can expose extra things via it...

It's not.  MS's MCD (mini client driver) system was something like our current
system, where you have one GL dispatch layer and the vendor provides a driver
that gets loaded by the system.  In the ICD scheme, opengl32.dll (or whatever
it is) is provided per-vendor.

- ajax




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Allen Akin
On Thu, Sep 29, 2005 at 01:54:00PM -0400, Adam Jackson wrote:
| The deeper issue here is whether it's actually useful to require some minimum 
| level of functionality even when large swaths of it will be software.  If I 
| don't have cube map support in hardware, do I really want to try it in 
| software?  Is that a useful experience for developers or for users?

For OpenGL at least, history suggests the answer is usually yes.  The
argument goes back to the pre-1.0 days, when texture mapping was only
available on fairly exotic hardware.  The decision was made to require
it in the standard, and it turned out to be valuable on pure software
implementations because (1) it was fast enough to be usable for a
surprisingly large range of apps; (2) people with older hardware still
had the option to use it, rather than having that option closed off
up-front by the people defining the standard, and they found uses that
were worthwhile; (3) development could occur on older hardware for
deployment on newer hardware; (4) it served as a reference for hardware
implementations and a debugging tool for apps.

This experience was repeated with a number of other features as OpenGL
evolved.

If there's no consensus in the ARB about the desirability of a given
piece of functionality, it tends to be standardized as an extension (or
very rarely as a subset, like the Imaging Operations).  Extensions are
optional, so they provide middle ground.  But eventually, if a piece of
functionality proves valuable enough to achieve consensus, it moves into
the OpenGL core and software implementations become mandatory.

OpenGL ES has taken a slightly different route (with API profiles).  I
don't have firsthand knowledge of how well that's worked out.

| Perhaps what I would like is a new set of glGetString tokens that describe 
| what version and extensions the hardware is actually capable of accelerating, 
| rather than what the software supports.

This question also goes back to the very earliest days of OpenGL.

The fundamental technical problem is that there is no tractable way to
define an operation so that you can make a simple query to learn
whether it's accelerated.  So much depends on the current graphics state
(how many TMUs are enabled, the size of image or texture operands vs.
the size of available video memory, whether colors are specified by
floats or unsigned chars, whether vertices lie in DMAable or
CPU-accessible address space, etc., etc., ad infinitum) that most of the
time you can't even express a simple question like "Is triangle drawing
accelerated?"  A number of other APIs have gone down this road in the
past, and none of them found a viable solution to the problem.

In practice, two approaches are used with OpenGL.  One is simply to
benchmark the operations you want to perform and determine whether a
given OpenGL implementation is fast enough.  (This is used by
isfast-style libraries and by game developers, always during
development but occasionally during installation or initialization.) The
other is to assume that if an extension is advertised (via glGetString),
then it's accelerated; if an extension is present but not advertised,
then it's probably not accelerated.
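
A rough sketch of the first, benchmark-style approach (assuming a current GL
context; draw_test_scene() is a placeholder for whatever operations you
actually care about):

    #include <GL/gl.h>
    #include <sys/time.h>

    extern void draw_test_scene(void);   /* placeholder: issues the GL calls under test */

    double frames_per_second(int frames)
    {
        struct timeval t0, t1;
        int i;

        glFinish();                       /* drain pending work before timing */
        gettimeofday(&t0, NULL);
        for (i = 0; i < frames; i++)
            draw_test_scene();
        glFinish();                       /* make sure everything has completed */
        gettimeofday(&t1, NULL);

        return frames / ((t1.tv_sec - t0.tv_sec) +
                         (t1.tv_usec - t0.tv_usec) * 1e-6);
    }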

There was interest a couple of years ago in implementing a more
sophisticated mechanism.  One option was a query of the form "If I try
to execute a given drawing operation right now, with all the graphics
state that currently holds, will it be accelerated?" (D3D has a pipeline
validation mechanism that's something like this.)  Another was a query
of the form "Have any software fallbacks occurred since the last time I
asked?" that you could make after you'd actually run one or more
operations.  There were unanswered questions about whether either of
these could be made worthwhile.  I haven't tracked the ARB since late
last year so I don't know if any progress has been made on this front.

Sorry for the long reply.  These questions come up from time to time,
and I wanted to make sure everyone had the background information.

Allen




Re: Refactor server-side __glXImageSize / __glXImage3DSize

2005-09-29 Thread Ian Romanick

Brian Paul wrote:

 It's been a long time since I've looked at this stuff, but I'm not sure
 that __glXImageSize() is correct.  Specifically, the last part of the
 function:
 
 [...]
     if (imageHeight > 0) {
         imageSize = (imageHeight + skipRows) * rowSize;
     } else {
         imageSize = (h + skipRows) * rowSize;
     }
     return ((d + skipImages) * imageSize);
   }
 }
 
 
 Why do skipRows and skipImages factor into the image size?  I believe
 the dimensions of the image going over the wire are W * H * D.  The
 skipRows and skipImages (and skipPixels) values just describe where to
 find the W*H*D image inside of a larger image.

This will require some investigation.  That particular block of code has
been in the server-side GLX code since day 1.

 See figure 3.8 on page 131 of the OpenGL 2.0 specification for a diagram.

I looked that information up in the glPixelStore man page in order to
write the comment before __glXImageSize. :)

 Consider a scenario in which you're replacing a single texel in a 3D
 texture map.  You've got a W*H*D 3D texture in malloc'd memory
 which you previously uploaded with glTexImage3D.  To upload a single
 changed texel in that volume at (x,y,z) you'd set
 GL_UNPACK_SKIP_PIXELS=x, GL_UNPACK_SKIP_ROWS=y, and
 GL_UNPACK_SKIP_IMAGES=z then call glTexSubImage3D(target, level, x, y,
 z, 1, 1, 1, type, volume).
 
 Over the wire, we should send a single texel so the result of
 __glXImageSize should be pretty small.  The __glFillImage() command on
 the client side doesn't seem to use SKIP_ROWS or SKIP_IMAGES in the way
 that __glXImageSize does.

This is one of the tricky / annoying parts of sending pixel data between
the client and the server.  The pixel pack / unpack settings are not
stored as persistent state.  They are included with each pixel transfer
command.  So, the protocols for glTexImage2D and glReadPixels embed all
of the needed pixel storage information.  What this means is that the
sender of the data, be it the client or the server, can do whatever
voodoo it wants on the data so long as it includes the pixel storage
settings to correctly describe it.

As it turns out, both sides in our implementation do all the packing /
unpacking locally and send zeros for all three of these settings.
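
In code, the single-texel scenario above looks roughly like this (a sketch
only; it assumes GL 1.2-capable headers, a bound 3D texture, and that
'volume' still points at the full W*H*D client-side image):

    #include <GL/gl.h>

    void update_one_texel(GLint x, GLint y, GLint z,
                          GLsizei W, GLsizei H, const GLubyte *volume)
    {
        /* The skip/row-length/image-height values locate texel (x,y,z)
         * inside the full W*H*D volume; only a 1x1x1 region is uploaded. */
        glPixelStorei(GL_UNPACK_ROW_LENGTH,   W);
        glPixelStorei(GL_UNPACK_IMAGE_HEIGHT, H);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS,  x);
        glPixelStorei(GL_UNPACK_SKIP_ROWS,    y);
        glPixelStorei(GL_UNPACK_SKIP_IMAGES,  z);

        glTexSubImage3D(GL_TEXTURE_3D, 0, x, y, z, 1, 1, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, volume);
    }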

 Ian, you should test this with the drawpix demo:  decrease the image
 height to about half by pressing 'h'.  Then increase the skipRows value
 by pressing 'R'.  Things should blow up on the server side if the
 __glXImageSize computation is wrong.

I'll give that a try before I commit anything.




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Dave Airlie

 I think the single most important point is to explicitly disallow
 vendor-supplied libGL binaries in the LSB.  Every other LSB component
 relies on a single backing implementation for a reason, and in practice
 the Nvidia libGL just causes endless pain where people accidentally
 link against it.  The DRI libGL should be declared the one and official
 one, and people who need extended features over it that aren't in the
 driver-specific backend will need to contribute them back.

I have to agree with Christoph, the libGL should be a
one-size-fits-all and capable of loading drivers from any vendor. I'm
not sure what is so hard about this, apart from the fact that neither
vendor has seemed willing to help out with infrastructure, on the basis of
some belief that they shouldn't have to (maybe because they don't on
Windows) or maybe because they don't want to be seen to collaborate on
things. There are hardly any major secrets in the libGL interface
that should stop it...

As far as I know idr did a lot of work recently on libGL so we can
expose GL extensions for vendors like ATI without them having to ship
their own driver (I'm not sure if ATI contributed anything more than a
list of things needed).. I think he mentioned this was a bit more
difficult for GLX.. but I'm sure it should be possible...

This is, as far as I know, how MS's OpenGL ICD system works: there is
one frontend and your driver can expose extra things via it...

Dave.




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Christoph Hellwig
On Wed, Sep 28, 2005 at 04:07:56PM -0700, Andy Ritger wrote:
 Some of the topics raised include:
 
 - minimum OpenGL version required by libGL
 - SONAME change to libGL
 - libGL installation path

I think the single most important point is to explicitly disallow
vendor-supplied libGL binaries in the LSB.  Every other LSB component
relies on a single backing implementation for a reason, and in practice
the Nvidia libGL just causes endless pain where people accidentally
link against it.  The DRI libGL should be declared the one and official
one, and people who need extended features over it that aren't in the
driver-specific backend will need to contribute them back.





Re: Linux OpenGL ABI discussion

2005-09-29 Thread Ian Romanick

(I corrected the CC address for the lsb-desktop list.  It was
incorrectly listed as being at lists.freedesktop.org, so none of this
thread has made it to the list where the discussion should be.)

Allen Akin wrote:
 On Thu, Sep 29, 2005 at 01:54:00PM -0400, Adam Jackson wrote:
 | The deeper issue here is whether it's actually useful to require some 
 minimum 
 | level of functionality even when large swaths of it will be software.  If I 
 | don't have cube map support in hardware, do I really want to try it in 
 | software?  Is that a useful experience for developers or for users?
 
 For OpenGL at least, history suggests the answer is usually yes.  The
 argument goes back to the pre-1.0 days, when texture mapping was only
 available on fairly exotic hardware.  The decision was made to require
 it in the standard, and it turned out to be valuable on pure software
 implementations because (1) it was fast enough to be usable for a
 surprisingly large range of apps; (2) people with older hardware still
 had the option to use it, rather than having that option closed off
 up-front by the people defining the standard, and they found uses that
 were worthwhile; (3) development could occur on older hardware for
 deployment on newer hardware; (4) it served as a reference for hardware
 implementations and a debugging tool for apps.
 
 This experience was repeated with a number of other features as OpenGL
 evolved.
 
 If there's no consensus in the ARB about the desirability of a given
 piece of functionality, it tends to be standardized as an extension (or
 very rarely as a subset, like the Imaging Operations).  Extensions are
 optional, so they provide middle ground.  But eventually, if a piece of
 functionality proves valuable enough to achieve consensus, it moves into
 the OpenGL core and software implementations become mandatory.

This represents a goal of OpenGL to lead the hardware.  The idea is that
the most current version of OpenGL defines the features that the next
generation of hardware will have as standard.  In terms of making
functionality available and leading developers, this is a really good
strategy to take.

However, that's not (or at least shouldn't be) our goal.  Our goal is to
define the minimum that is required to be available on our platform.  As
such, that should reflect what actually exists on our platform.  From
talking to people at the various distros, the most common piece of
graphics hardware is the Intel i830 chipset (and derived chips like
i845G, i855GM, etc.).  That hardware is only capable of OpenGL 1.3.

If all applications were well behaved (i.e., allowed users to enable or
disable the use of individual hardware features like DOT3 texture
environment or shadow maps), this wouldn't be a problem.  That is sadly
not the case.

I think there is an alternative middle ground here that will satisfy
most people's concerns.  I propose that we require 1.2 as the minimum
supported version.  I also propose that we provide a standard mechanism
to demand that the driver advertise a user-specified version up to
1.5.  For example, a user might run an app like:

LIBGL_FORCE_VERSION=1.5 ./a.out

When 'a.out' queries the version string, it will get 1.5 even if the
driver has to do software fallbacks for the new 1.5 functionality.
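
A minimal sketch of how libGL might honour such a request (purely
hypothetical: LIBGL_FORCE_VERSION is only a proposal in this mail, and
driver_native_version() is an invented placeholder):

    #include <stdlib.h>
    #include <string.h>

    /* Invented helper: the version the driver can fully accelerate, e.g. "1.2". */
    extern const char *driver_native_version(void);

    /* Version string to report from glGetString(GL_VERSION).  Plain string
     * comparison is adequate here because all candidates are single-digit
     * "1.x" versions in the 1.2 .. 1.5 range. */
    const char *advertised_version(void)
    {
        const char *forced = getenv("LIBGL_FORCE_VERSION");

        if (forced != NULL &&
            strcmp(forced, driver_native_version()) > 0 &&
            strcmp(forced, "1.5") <= 0)
            return forced;    /* software fallbacks cover the extra features */

        return driver_native_version();
    }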

This will prevent the unexpected performance cliff I mentioned in
another e-mail, and it will still provide more modern functionality to
users that need / want it.




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Alan Cox
On Iau, 2005-09-29 at 22:02 +0200, Christoph Hellwig wrote:
 And replacing system libraries is not something we can allow anyone.
 It's totally reasonable to have different 3cards in the same systems
 and they're supposed to work. 

Agreed - but the LSB's job is still that of defining an ABI. Obviously
users who replace system libraries with ones they got from another
source get burned, whether it's a perl upgrade required by a vendor or
libc.

Alan





Re: Refactor server-side __glXImageSize / __glXImage3DSize

2005-09-29 Thread Brian Paul

Ian Romanick wrote:


Brian Paul wrote:



It's been a long time since I've looked at this stuff, but I'm not sure
that __glXImageSize() is correct.  Specifically, the last part of the
function:

   [...]
       if (imageHeight > 0) {
           imageSize = (imageHeight + skipRows) * rowSize;
       } else {
           imageSize = (h + skipRows) * rowSize;
       }
       return ((d + skipImages) * imageSize);
    }
}


Why do skipRows and skipImages factor into the image size?  I believe
the dimensions of the image going over the wire are W * H * D.  The
skipRows and skipImages (and skipPixels) values just describe where to
find the W*H*D image inside of a larger image.



This will require some investigation.  That particular block of code has
been in the server-side GLX code since day 1.


I guess it's been working then since the skip values have been zero, 
as you point out below.




See figure 3.8 on page 131 of the OpenGL 2.0 specification for a diagram.



I looked that information up in the glPixelStore man page in order to
write the comment before __glXImageSize. :)



Consider a scenario in which you're replacing a single texel in a 3D
texture map.  You've got a W*H*D 3D texture in malloc'd memory
which you previously uploaded with glTexImage3D.  To upload a single
changed texel in that volume at (x,y,z) you'd set
GL_UNPACK_SKIP_PIXELS=x, GL_UNPACK_SKIP_ROWS=y, and
GL_UNPACK_SKIP_IMAGES=z then call glTexSubImage3D(target, level, x, y,
z, 1, 1, 1, type, volume).

Over the wire, we should send a single texel so the result of
__glXImageSize should be pretty small.  The __glFillImage() command on
the client side doesn't seem to use SKIP_ROWS or SKIP_IMAGES in the way
that __glXImageSize does.



This is one of the tricky / annoying parts of sending pixel data between
the client and the server.  The pixel pack / unpack settings are not
stored as persistent state.  They are included with each pixel transfer
command.  So, the protocols for glTexImage2D and glReadPixels embed all
of the needed pixel storage information.  What this means is that the
sender of the data, be it the client or the server, can do whatever
voodoo it wants on the data so long as it includes the pixel storage
settings to correctly describe it.

As it turns out, both sides in our implementation do all the packing /
unpacking locally and send zeros for all three of these settings.


I wonder why the GLX protocol was originally spec'd to pass all the 
packing parameters with the command?  I can understand the alignment 
and byte-swapping parameters getting shipped along, but not the 
skip/stride parameters.


The new twist to all this is pixel buffer objects.  When a PBO is 
bound, glRead/DrawPixels becomes a server-side-only operation; no 
pixel data would get transferred over the wire.  Did you start to look 
into the GLX protocol for this at one point?  Anyway, the pixel store 
parameters become relevant on the server side in this scenario.
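
That case looks roughly like the following sketch (assuming
ARB_pixel_buffer_object-capable headers; in real code of this era the entry
point would typically be obtained via glXGetProcAddress, which is omitted
here):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* With a pixel-pack buffer bound, the last glReadPixels argument is a
     * byte offset into the buffer object rather than a client pointer, so
     * no image data has to travel back to the client over the wire. */
    void readback_into_pbo(GLuint pbo, GLsizei w, GLsizei h)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbo);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *) 0);
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);
    }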





Ian, you should test this with the drawpix demo:  decrease the image
height to about half by pressing 'h'.  Then increase the skipRows value
by pressing 'R'.  Things should blow up on the server side if the
__glXImageSize computation is wrong.



I'll give that a try before I commit anything.


The test should work if the client-side code is always setting the 
skip/stride values to zeros/defaults.


-Brian




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Allen Akin
On Thu, Sep 29, 2005 at 01:05:55PM -0700, Ian Romanick wrote:
|  ...  Our goal is to
| define the minimum that is required to be available on our platform.  ...

If by "our goal" you mean the goal of the Linux OpenGL ABI effort, then
I agree.  I intended my previous note to address the more general
questions about performance queries and subsetting that Adam and Alan
raised.

Haven't had time to check the LSB mailing list yet, but I'll try to do
so in a day or two.

Allen




Re: Linux OpenGL ABI discussion

2005-09-29 Thread Christoph Hellwig
On Thu, Sep 29, 2005 at 01:54:00PM -0400, Adam Jackson wrote:
 http://lists.freedesktop.org/archives/dri-egl/2005-July/000565.html
 
 In particular, Andy's response about why they're uninterested in a common 
 libGL is basically The Last Word on the subject.  It would require that 
 nvidia expend time, effort, and money to get to the same level of 
 functionality they already have.  This applies equally to any other IHV, and 
 to ISVs like XiG and SciTech too for that matter.  You can have whatever 
 opinion you like about that stance, but it's simply an economic reality.

And it's a case where we shouldn't care about their economic issues.  Giving them
a branding only if they play nice with the open source world is one of
the few powers we have.

And replacing system libraries is not something we can allow anyone to do.
It's totally reasonable to have different 3D cards in the same system
and they're supposed to work.  Where would we get if every SCSI card
came with its own SCSI stack and you could only use one brand at a
time?  Sure, we can't forbid SCSI vendors to do that, but we do
everything in our power to avoid it - quite successfully so far.




Re: DRM problem on r300/ppc

2005-09-29 Thread Benjamin Herrenschmidt
On Thu, 2005-09-29 at 13:56 +0200, Dario Laera wrote:
 Hi all,
 for a few weeks now I've no longer been able to load DRI; this is the error:
 
 (II) RADEON(0): [agp] 8192 kB allocated with handle 0x0001
 (EE) RADEON(0): [agp] Could not add ring mapping
 (EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
 (II) RADEON(0): [agp] You may want to make sure the agpgart kernel module
 is loaded before the radeon kernel module.
 
 I get this error with linux-2.6.13 and 2.6.9, both with the agpgart and
 uninorth drivers built in. The drm module is from cvs, and I've tried
 with xorg-6.8.99.15 and cvs. I'm running on a powerbook5,2 with an r300
 card. You can find some logs and xorg.conf on 
 http://laera.web.cs.unibo.it/drm/ .
 I'm sure that's my fault, but I can't figure out the error.

It looks like a problem I fixed recently. David, did you update CVS with
my fix?

Ben.






[Bug 4150] r300 cairo with glitz backend locks X

2005-09-29 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=4150
 




--- Additional Comments From [EMAIL PROTECTED]  2005-09-29 19:45 ---
This is reproducible, although I'm no longer sure it is directly related to cairo
(or anything OpenGL).  If I load the drm.ko and radeon.ko modules, after a
while, I get the freeze. Enabling page flipping didn't seem to solve the issue.

I'm not sure if the following backtrace is useful but here it is anyway:
(gdb) bt
#0  0xe410 in ?? ()
#1  0xbfe964f8 in ?? ()
#2  0x in ?? ()
#3  0x6444 in ?? ()
#4  0xb7d69539 in ioctl () from /lib/libc.so.6
#5  0x080dbbb6 in xf86ioctl ()
#6  0xb7f76828 in drmCommandNone () from /usr/lib/xorg/modules/linux/libdrm.so
#7  0xb7977943 in RADEONWaitForIdleCP ()
   from /usr/lib/xorg/modules/drivers/radeon_drv.so
#8  0xb7799afc in XAAInit () from /usr/lib/xorg/modules/libxaa.so
#9  0x080f6edf in miInitializeBackingStore ()
#10 0x0810d446 in miSpriteInitialize ()
#11 0x080825e6 in DoGetImage ()
#12 0x08082896 in ProcGetImage ()
#13 0x0808594b in Dispatch ()
#14 0x0806dddc in main ()

I'll try to provide further information if necessary.

(Sorry for the late response as this is the first time I've had access since
Hurricane Katrina)  
 
 


Re: DRM problem on r300/ppc

2005-09-29 Thread Dave Airlie
  is loaded before the radeon kernel module.
 
  I get this error with linux-2.6.13 and 2.6.9 both with agpgart and
  uninorth driver built in. The drm module is from cvs, and I've tried
  with xorg-6.8.99.15 and cvs. I'm running on a powerbook5,2 with an r300
  card. You can find some logs and xorg.conf on
  http://laera.web.cs.unibo.it/drm/ .
  I'm sure that's my fault, but I can't figure out the error.

 It looks like a problem I fixed recently, David, did you update CVS with
 my fix ?

Hmm.. thought I had.. hadn't.. I've just picked it up from git and put it
in CVS.. hopefully it works .. I've got to reboot to test it properly ..

Dave.

-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
Linux kernel - DRI, VAX / pam_smb / ILUG





[Bug 314] Radeon IGP 345M Graphics Controller Driver

2005-09-29 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to   
the URL shown below and enter your comments there.  
 
http://bugs.xfree86.org/show_bug.cgi?id=314   
   

[EMAIL PROTECTED] changed:

           What     |Removed                     |Added
           ------------------------------------------------------------------
           Summary  |3D support for Radeon IGP   |Radeon IGP 345M Graphics
                    |chips                       |Controller Driver


   
   