Re: [Dri-devel] Restoring DRM Library Device Independence

2002-03-27 Thread Jens Owen

Alan Hourihane wrote:
 
 On Tue, Mar 26, 2002 at 10:36:41PM -0700, Jens Owen wrote:
  I've made some headway on this today, and could use some feedback based
  on your BSD experience.  I've attempted to move the packing of
  drmRadeonInitCP into the 2D ddx driver, and the main concern I'm seeing
  is the actual IOCTL request number.  I can easily use the Linux number,
  but I thought it might be better to have some OS independent offset.
  However, generating all the combinations of _IO, _IOR, _IOW and _IOWR
  semantics in an OS independent way is going to be challenging.  See
  xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/drm.h:line
  373
 
  Here is an incomplete patch in case you are interested in the general
  direction I was currently prototyping...
 
  Should I just use the Linux _IO* semantics and let other OS ports
  twizzle this to get it working, or do you have any ideas on how we can
  generate the proper semantics in a more general way?  I think we will
  need to generate these semantics at run time, not compile time.
 
 Jens,
 
 The idea of offsets with an OS-dependent MAGIC_NUMBER sounds like
 the right idea. I also think we should go ahead and use the Linux _IO*
 semantics, as the *BSDs just twizzle around these anyway at the moment.
 And until more OSes support the drm (or at least show some signs),
 that's probably the best we can hope for.

Okay, I'll use the Linux DRM semantics, which are:

( (direction) << 30 | (size) << 16 | (type) << 8 | (request) << 0 )

where

  direction is: 0 for none, 1 for write, 2 for read and 3 for both
  size is: size of record to be transferred
  type is: 'd' for DRM drivers
  request is: our OS independent offset
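
For illustration, here is roughly what that packing looks like as C
macros.  This is only a sketch: the DRM_IOC* names and the example
request number are made up here, and the field widths (2-bit direction,
14-bit size, 8-bit type, 8-bit request) are my reading of the layout
above, not a verified copy of any kernel header.

  /* Pack direction, payload size, type and request number into a
   * single request word, per the layout above. */
  #define DRM_IOC_NONE   0UL
  #define DRM_IOC_WRITE  1UL
  #define DRM_IOC_READ   2UL
  #define DRM_IOC_RDWR   3UL

  #define DRM_IOC(dir, type, nr, size)        \
      (((unsigned long)(dir)  << 30) |        \
       ((unsigned long)(size) << 16) |        \
       ((unsigned long)(type) <<  8) |        \
       ((unsigned long)(nr)   <<  0))

  /* e.g. a write-only request of type 'd' carrying a drmRadeonInitCP
   * record, using a hypothetical OS independent offset of 0x07: */
  /* DRM_IOC(DRM_IOC_WRITE, 'd', 0x07, sizeof(drmRadeonInitCP)) */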

-- 
Jens Owen
[EMAIL PROTECTED]   Steamboat Springs, Colorado




[Dri-devel] Max texture size

2002-03-27 Thread Leif Delgass

In the code to set MaxTextureLevels in the Rage128 and Radeon drivers, 
4 bytes/texel is assumed when calculating for the max texture size.  If we
always convert to 2 bytes/texel for a 16bpp screen when choosing texture
formats, shouldn't that be factored into the calculation?  So we'd use
mach64Screen->cpp for the calculation instead of a fixed 4 bytes/texel?
Then the comparison would be:

if mach64Screen->texSize[0] >= 
   2 * mach64Screen->cpp * 1024 * 1024, then MaxTextureLevels = 11
else if mach64Screen->texSize[0] >= 
   2 * mach64Screen->cpp * 512  * 512 , then MaxTextureLevels = 10
else MaxTextureLevels = 9 (256x256)

This should apply to Rage128 and Radeon as well.  Am I missing something 
here?
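
In driver C this would look something like the following sketch.  (The
ctx->Const.MaxTextureLevels destination is my assumption about where the
value ends up; the screen fields are the ones named above.)

  /* Sketch only: choose MaxTextureLevels from available texture
   * memory, scaled by the screen's bytes-per-texel instead of a
   * fixed 4. */
  if (mach64Screen->texSize[0] >= 2 * mach64Screen->cpp * 1024 * 1024)
      ctx->Const.MaxTextureLevels = 11;           /* 1024x1024 */
  else if (mach64Screen->texSize[0] >= 2 * mach64Screen->cpp * 512 * 512)
      ctx->Const.MaxTextureLevels = 10;           /* 512x512 */
  else
      ctx->Const.MaxTextureLevels = 9;            /* 256x256 */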

-- 
Leif Delgass 
http://www.retinalburn.net





Re: [Dri-devel] Max texture size

2002-03-27 Thread Ian Romanick

On Wed, Mar 27, 2002 at 03:17:56PM -0500, Leif Delgass wrote:

 In the code to set MaxTextureLevels in the Rage128 and Radeon drivers, 
 4 bytes/texel is assumed when calculating for the max texture size.  If we
 always convert to 2 bytes/texel for a 16bpp screen when choosing texture
 formats, shouldn't that be factored into the calculation?  So we'd use
 mach64Screen->cpp for the calculation instead of a fixed 4 bytes/texel?
 Then the comparison would be:
 
 if mach64Screen->texSize[0] >= 
    2 * mach64Screen->cpp * 1024 * 1024, then MaxTextureLevels = 11
 else if mach64Screen->texSize[0] >= 
    2 * mach64Screen->cpp * 512  * 512 , then MaxTextureLevels = 10
 else MaxTextureLevels = 9 (256x256)
 
 This should apply to Rage128 and Radeon as well.  Am I missing something 
 here?

It occurs to me that, for cards that support mipmapping (i.e., not mach64),
even this test is wrong.  If there is 1 texture unit, 16 bits/texel, and 2MB
(i.e., 2*1024*1024 bytes) of texture memory, then a 1024x1024 texture and all
of its mipmaps will most certainly not fit into texture memory.  It would
require (0x55555555 >> (32 - (2 * 11))) = 1398101 available texels.
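
As a sanity check, a few lines of standalone C reproduce that figure
(this is just arithmetic, not driver code):

  #include <stdio.h>

  int main(void)
  {
      unsigned total = 0;
      /* Texels in a full mipmap chain: 1024^2 + 512^2 + ... + 1^2 */
      for (unsigned w = 1024; w >= 1; w >>= 1)
          total += w * w;
      printf("%u\n", total);   /* prints 1398101 */
      return 0;
  }

At 2 bytes/texel that is roughly 2.7MB, which indeed overflows a 2MB
texture heap.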

-- 
Tell that to the Marines!




Re: [Dri-devel] Max texture size

2002-03-27 Thread Leif Delgass

On Wed, 27 Mar 2002, Daryll Strauss wrote:

 On Wed, Mar 27, 2002 at 04:00:55PM -0500, Leif Delgass wrote:
  On Wed, 27 Mar 2002, Alexander Stohr wrote:
So we'd use
mach64Screen->cpp for the calculation instead of a fixed 4 
bytes/texel?
Then the comparison would be:

if mach64Screen->texSize[0] >= 
   2 * mach64Screen->cpp * 1024 * 1024, then MaxTextureLevels = 11
else if mach64Screen->texSize[0] >= 
   2 * mach64Screen->cpp * 512  * 512 , then MaxTextureLevels = 10
else MaxTextureLevels = 9 (256x256)

This should apply to Rage128 and Radeon as well.  Am I 
missing something here?
   
   Yes, if you have a Radeon with e.g. 128 MB then you might want to 
   use even bigger textures or higher max levels, as long as the 
   renderer can handle them.
   
   Some sort of iteration or loop design might turn out to be the best.
   At least you can specify the lower and upper limits much more easily then.
   
   Regards, AlexS.
  
  Yes, for Radeon you can go with a larger texture (2048x2048 looks like the
  max from the code, I don't have Radeon specs), I was thinking in terms of
  mach64 which has a max. size of 1024x1024. But do you see any problem with
  the basic idea in terms of using the screen bit depth?  Also, the first
  '2' should probably be MaxTextureUnits for cards that have more than two
  texture units implemented, right?
 
 This is all really just a heuristic. It works around a bigger problem:
 there's no good way to tell how many textures can fit on the
 board. So, these rules favor something like quake, where you want two
 textures (in order to multitexture) on the board at the same time, and
 therefore lie to the application in defining what the maximum texture
 size is.
 
 Unfortunately, this then breaks apps that require the use of bigger
 textures, like you saw in the planet screensaver. You can argue that the
 planet screensaver should have made smaller textures, since the app can
 query the maximum texture size, but that only works if shrinking the
 texture map is acceptable for the app.
 
 If you use the correct hardware maximum texture size, then the problem
 for the app is to determine if it can fit big textures in
 memory. There's no good way to query, but an app can test it by trying
 the maximum and then stepping down the texture size until it
 reaches one that's fast enough. That's the correct general approach in
 any case.
 
 So, making it 3 for a Radeon because it has 3 texture units isn't more
 correct. In fact, I'd argue that breaking your driver so that it is not
 capable of doing what the hardware can do is really the wrong solution
 overall. One of my apps needs 2k textures, and this heuristic makes the
 board non-functional for my app. Luckily it's all open source so I can
 change it! 
 
 I have no objection to making changes that make specific apps (like
 Quake) run faster as long as they don't impact other programs. This is a
 case where setting an environment variable like LIBGL_MAX_TEXTURE_SIZE
 might make sense or just having LIBGL_I_AM_RUNNING_QUAKE mode. Then it
 could throw correctness out the window and go as fast as possible. As
 long as it's only on when I ask for it, that's no problem. If I haven't
 asked the driver to be broken, I want correct behavior that allows me to
 use all of the hardware.
 
  - |Daryll

I see what you mean.  I noticed that one of the new screensavers
(flipscreen3d) wanted to create a 1024x512 mipmap from a screen grab and
failed, even though it should work with a single texture.  If I set
MaxTextureLevels to 11, it works.  Perhaps it's better, as you say, to
just use the maximum number of levels supported by the card and provide an
env var to step it down for apps that try to use the max. size for
multitexturing.  I'm not really a big fan of having a proliferation of env
vars, but I guess it's ok for now.  It might be nice to have some sort of
configuration file or interface like 3D workstation drivers where you
could create application profiles with different settings.  I suppose you 
could accomplish this by creating shell script wrappers for your GL apps 
that export the appropriate env vars.

-- 
Leif Delgass 
http://www.retinalburn.net





Re: [Dri-devel] Mach64 notex,tiny vertex formats

2002-03-27 Thread Sergey V. Udaltsov

 I've commited code to enable notex and tiny vertex formats for mach64.  
Cool! I've got the first playable version of Counter-strike(WINE, GL
mode, 640x480, screen res 800x600). Great kudos!
Though I have to admit there are some problems.
1. After closing glxgears, there were some pieces of the gl window
remaining on the screen. These pieces could be easily erased by moving
other windows. Also there are artefacts after tunnel and fog.
2. gltron really flies. But at the end of the game I encountered some
bad effects. In most cases, it is a video subsystem hang. Once, I was able
just to kill the server (Alt-Bksp); another time the whole system
hung :((
3. I am not 100% sure but it seems some textures in armagetron are
either lost or rendered incorrectly.

I took the latest snapshot (today, around 20:00)

Cheers,

Sergey




FW: [Dri-devel] Max texture size

2002-03-27 Thread Alexander Stohr

Oops, I should take more care to send
things to the mailing list address.

-Original Message-
From: Alexander Stohr 
Sent: Wednesday, March 27, 2002 22:14
To: 'Leif Delgass'
Subject: RE: [Dri-devel] Max texture size


No, I don't see problems with that.

When the resolution changes, and therefore
the memory consumption, all the applications
should terminate their screen access, 
including the OpenGL ones, and then query 
the maximum values again. 

bpp and max avail offscreen memory that might apply 
to your calculations should not change unless modes
are switched, so the results are consistent.

Let me summarize: the maximum texture level in a
multilevel texture mapping is determined by the
double square object size of the biggest texture.
On linear texture heaps, the plain memory amount
makes up the capabilities.  (Visually, on rectangular 
heaps the biggest texture sits on the left and the smaller
levels nest on the right half of the buffer.)

The max level exactly matches the power of 2 that
the biggest texture has in dimensions. 
(Assuming that all levels are powers of 2 and the smallest is of size 1x1.)



 -Original Message-
 From: Leif Delgass [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 27, 2002 21:55
 To: Alexander Stohr
 Subject: RE: [Dri-devel] Max texture size
 
 
 On Wed, 27 Mar 2002, Alexander Stohr wrote:
 
   So we'd use
   mach64Screen->cpp for the calculation instead of a fixed 4 
   bytes/texel?
   Then the comparison would be:
   
   if mach64Screen->texSize[0] >= 
      2 * mach64Screen->cpp * 1024 * 1024, then MaxTextureLevels = 11
   else if mach64Screen->texSize[0] >= 
      2 * mach64Screen->cpp * 512  * 512 , then MaxTextureLevels = 10
   else MaxTextureLevels = 9 (256x256)
   
   This should apply to Rage128 and Radeon as well.  Am I 
   missing something here?
  
  Yes, if you have a Radeon with e.g. 128 MB then you might want to 
  use even bigger textures or higher max levels, as long as the 
  renderer can handle them.
  
  Some sort of iteration or loop design might turn out to be the best.
  At least you can specify the lower and upper limits much more easily
  then.
  
  Regards, AlexS.
 
 Yes, for Radeon you can go with a larger texture (2048x2048 looks like
 the max from the code, I don't have Radeon specs), I was thinking in
 terms of mach64 which has a max. size of 1024x1024. But do you see any
 problem with the basic idea in terms of using the screen bit depth?
 Also, the first '2' should probably be MaxTextureUnits for cards that
 have more than two texture units implemented, right?
 
 -- 
 Leif Delgass 
 http://www.retinalburn.net
 
 





Re: [Dri-devel] Max texture size

2002-03-27 Thread Keith Whitwell

Ian Romanick wrote:
 
 On Wed, Mar 27, 2002 at 03:17:56PM -0500, Leif Delgass wrote:
 
  In the code to set MaxTextureLevels in the Rage128 and Radeon drivers,
  4 bytes/texel is assumed when calculating for the max texture size.  If we
  always convert to 2 bytes/texel for a 16bpp screen when choosing texture
  formats, shouldn't that be factored into the calculation?  So we'd use
  mach64Screen->cpp for the calculation instead of a fixed 4 bytes/texel?
  Then the comparison would be:
 
  if mach64Screen->texSize[0] >=
     2 * mach64Screen->cpp * 1024 * 1024, then MaxTextureLevels = 11
  else if mach64Screen->texSize[0] >=
     2 * mach64Screen->cpp * 512  * 512 , then MaxTextureLevels = 10
  else MaxTextureLevels = 9 (256x256)
 
  This should apply to Rage128 and Radeon as well.  Am I missing something
  here?
 
 It occurs to me that, for cards that support mipmapping (i.e., not mach64),
 even this test is wrong.  If there is 1 texture unit, 16 bits/texel, and 2MB
 (i.e., 2*1024*1024 bytes) of texture memory, then a 1024x1024 texture and
 all of its mipmaps will most certainly not fit into texture memory.  It
 would require (0x55555555 >> (32 - (2 * 11))) = 1398101 available texels.

Actually I think the test is correct, and that I was thinking of 16 bit
textures plus a full set of mipmaps at the time.  Thus the numbers should be
doubled in the 32 bit case, rather than halved for 16 as Leif was suggesting. 
(This is based on the idea that a full set of mipmaps packs perfectly to take
up two times the size of the base texture).  That's also not true for all
architectures...

Keith




Re: [Dri-devel] Max texture size

2002-03-27 Thread Ian Romanick

On Wed, Mar 27, 2002 at 10:53:48PM +, Keith Whitwell wrote:

 Actually I think the test is correct, and that I was thinking of 16 bit
 textures plus a full set of mipmaps at the time.  Thus the numbers should be
 doubled in the 32 bit case, rather than halved for 16 as Leif was suggesting. 
 (This is based on the idea that a full set of mipmaps packs perfectly to take
 up two times the size of the base texture).  That's also not true for all
 architectures...

Ok, that explains a bit.  However, in some circumstances we may lose a
level.  The mipmaps don't double the size, they only increase it by 1/3.
Then there are architectures like MGA that can't use all 11 mipmaps.

-- 
Tell that to the Marines!




Re: [Dri-devel] Mach64 notex,tiny vertex formats

2002-03-27 Thread Will Newton

On Wednesday 27 Mar 2002 10:13 pm, Leif Delgass wrote:

 The gears problem seems to happen if you close the window with the X or
 whatever in the window title bar.  Using ESC doesn't have this problem.  I
 guess there's a problem in the context teardown somewhere.

This problem also happens with the tdfx driver so I think it may be a more 
general issue. I've been meaning to look into it but my other machine is out 
on loan.




Re: [Dri-devel] Max texture size

2002-03-27 Thread Andreas Karrenbauer

Alexander Stohr wrote:
 
   (This is based on the idea that a full set of mipmaps packs perfectly
   to take up two times the size of the base texture).  That's also not
   true for all architectures...
 
  Ok, that explains a bit.  However, in some circumstances we may lose
  a level.  The mipmaps don't double the size, they only increase it by
  1/3.  Then there are architectures like MGA that can't use all 11
  mipmaps.
 
 Okay, I was wrong.
 (This always happens when I am thinking with the stomach...)
 
 But now I will try to go into depth and get to the final formula of
   (4/3 * Lvl_size)
 for the maximum of expected pixels in the total amount of data
 stored in a squared level stack.  Further, I will point out the
 goods and bads of the mentioned bitshift formula and the
 alternative method of looping.  In the end I will point out
 reverse calculation methods and specific implementations
 that might finally break, or at least complicate, the overall
 systematics of pyramid stack calculations.
 
 graphically:
 
 n=2    n=1   n=0
 ####   ##    #
 ####   ##
 ####
 ####
 
 mathematically:
   level width = 2^n
   level size = (2^n)*(2^n) = 2^(2*n)
   size increase ratio per level increment = 4
     = (2^(n+1))*(2^(n+1)) / ((2^n)*(2^n))
     = (2^(n+1-n))*(2^(n+1-n))
     = (2^1)*(2^1)
     = 2*2 = 4
 
 summarized size of all levels = sum(x=0, x=n, 2^(2*x) )
 (sorry, I don't have the math book handy for solving that here)

just FYI the formula is 
$$
\sum_{k=0}^n q^k = \frac{q^{n+1}-1}{q-1}
$$
in this case
$$
\sum_{x=0}^n 2^{2x} = \sum_{x=0}^n 4^x = \frac{4^{n+1}-1}{3}
$$
and hence the ratio 
$$
r = \frac{4}{3} - \frac{1}{3 \cdot 4^n}
$$

I hope this LaTeX code is self-explanatory for those who aren't used
to it ;-)
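
As a quick numeric check, a few throwaway lines of C (assuming 32-bit
unsigned ints; purely illustrative) confirm that the direct sum and the
closed form agree on the 1398101 texel count quoted earlier for n = 10:

  #include <stdio.h>

  int main(void)
  {
      unsigned n = 10, sum = 0;           /* n = 10: 1024x1024 base level */
      for (unsigned x = 0; x <= n; x++)
          sum += 1u << (2 * x);           /* 4^x texels in level x */
      /* closed form: (4^(n+1) - 1) / 3 */
      unsigned closed = ((1u << (2 * (n + 1))) - 1) / 3;
      printf("%u %u\n", sum, closed);     /* both print 1398101 */
      return 0;
  }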

Regards,
Andreas Karrenbauer




[Dri-devel] Mach64 texture fix

2002-03-27 Thread Leif Delgass

I found that calls to TexParameter were not setting the texture wrapping 
and texture filter in some cases (where the driver private texture object 
struct had not already been allocated).  This is now fixed and solves the 
following bugs:

- artifacts with plasma gun in q3
- blocky wall textures in q3
- texture clamping problem in armagetron.
- probably other things I haven't noticed yet ;)

This should make things look much better, since you should always be
getting bilinear/trilinear filtering when it's requested.  The blocky
textures in quake were the result of nearest filtering, I think. The
downside is that this fix will take away some of the speed increase you may
have seen, but it was cheating anyway. :)

I'm also planning on bumping the max texture size to 1024x1024
(MaxTextureLevels=11) and adding an environment variable.  So you would
use DRI_MACH64_MAX_TEXTURE_SIZE=512 to drop down to a 512x512 max texture
size to speed up quake, for example, but the default will allow the max 
texture size supported by the card to be used with a single texture unit.
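
For example, the override could be read with something like the sketch
below (the variable name is the one above; the function name and the
mapping to levels are illustrative, not committed code):

  #include <stdlib.h>

  /* Map DRI_MACH64_MAX_TEXTURE_SIZE onto MaxTextureLevels, falling
   * back to the hardware max of 1024x1024 when unset. */
  static int mach64MaxTextureLevels(void)
  {
      int levels = 11;                        /* default: 1024x1024 */
      const char *s = getenv("DRI_MACH64_MAX_TEXTURE_SIZE");
      if (s) {
          int size = atoi(s);
          if (size >= 1024)      levels = 11;
          else if (size >= 512)  levels = 10;
          else                   levels = 9;  /* 256x256 floor */
      }
      return levels;
  }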

-- 
Leif Delgass 
http://www.retinalburn.net




