Re: [Dri-devel] A question of bugs

2004-05-14 Thread Brian Paul
Felix Kühling wrote:
On Thu, 13 May 2004 15:31:50 -0500
Adam Jackson [EMAIL PROTECTED] wrote:

On Monday 10 May 2004 16:17, Adam Jackson wrote:

Anyone opposed to moving DRI's bug tracker to freedesktop's bugzilla?  I
ask because I _strongly_ dislike sourceforge's bug system, particularly
compared to bugzilla.  I'll even volunteer to import all the open bugs.
Opinions in any direction?
Just a followup on this.  I'm going to interpret silence as lazy consensus in 
a day or two...


AFAIR it was decided to disable the SF bugtracker in favour of the
XFree86 bugzilla. It just turned out that there was no way to really
disable it. Project admins, correct me if I'm wrong.
Now that DRI seems to be becoming interesting to more projects than
XFree86, though, it may be a good idea to have its own bug tracker
somewhere. Maybe you can bring this up again at the IRC meeting next
Monday. It is important to have a (non-silent) consensus among the
developers who are supposed to fix those bugs in the end.
Another possibility would be to track DRI bugs in the Mesa bug tracker
as DRI drivers become more integrated into the Mesa build system. Do you
think you could talk Mesa developers into using a Bugzilla? AFAICT Brian
takes care of most Mesa bugs, so his vote would count most.
I guess I don't feel too strongly about the Mesa bug database.  The 
SourceForge tracker has been good enough for me but others much prefer 
Bugzilla.  I could move to Bugzilla if that's the consensus.

Keith?

-Brian





Re: [Dri-devel] A question of bugs

2004-05-14 Thread Keith Whitwell
Brian Paul wrote:

I guess I don't feel too strongly about the Mesa bug database.  The 
SourceForge tracker has been good enough for me but others much prefer 
Bugzilla.  I could move to Bugzilla if that's the consensus.

Keith?
Bugzilla's certainly the more usable system, but my feelings one way or 
another aren't strong either.

Keith





Re: [Mesa3d-dev] Re: [Linux-fbdev-devel] Redesign of kernel graphics interface

2004-05-14 Thread Jon Smirl
Just look at this picture and you can see the trend of 2D vs 3D (coprocessor
based) graphics.
http://www.de.tomshardware.com/graphic/20040504/images/architecture.gif
Within one or two generations the 2D box is going to be gone.

If Linux wants to stay current with technology, we have to start using the
coprocessor features of the GPU. Most of the benchmarks I have seen show a
100:1 speed differential between the coprocessor and programmed I/O. This is
also a competitive problem: Microsoft and Apple have both decided to go with
the GPU coprocessor this year.

Lack of free drivers is no reason to ignore the GPU coprocessor. It just means
more effort needs to be put into Mesa and into prying the docs out of the
graphics chip vendors. If the current open drivers don't work on a non-x86
platform, just go fix them. All of the necessary data is available. Progress
is being made with ATI on getting the R300 specs now that the R400 series has
shipped.

--- Sven Luther [EMAIL PROTECTED] wrote:
 As long as this doesn't change, stating that we have an accelerated API
 for OpenGL in linux is not only dead wrong, but is leading us in a
 dangerous direction, where we will depend on a non-free component in the
 kernel and where we are going to forget about graphics support on anything
 non-x86.

I said OpenGL is the only accelerated API available on Linux. Can you name
another? There is a little acceleration in the framebuffer, but I don't know
of any others. Also, software Mesa works just fine to provide OpenGL on dumb
2D cards.

You have to choose. Either we stay with programmed I/O and low-speed 2D
graphics forever, or we have to embrace the GPU coprocessor. No one is going
to make you buy a high-end ATI or Nvidia card. But if you want to use one,
you have to deal with the realities of the situation. ATI and Nvidia own
their technology, and if they don't want to open it, all we can do is
complain to them or use their proprietary drivers.

If being free is critical to you, then you won't be using any high-end
graphics cards. Stick with an R200-class or older card. Your system will
work fine with the OpenGL-based xserver. But graphics cards are evolving and
Linux can't ignore them. We just have to hope that Nvidia and ATI can see
the light someday and open their specs.

=
Jon Smirl
[EMAIL PROTECTED]








Re: [Mesa3d-dev] Re: [Linux-fbdev-devel] Redesign of kernel graphics interface

2004-05-14 Thread Ville Syrjälä
On Fri, May 14, 2004 at 10:51:35AM -0700, Jon Smirl wrote:
 Just look at this picture and you can see the trend of 2D vs 3D (coprocessor
 based) graphics.
 http://www.de.tomshardware.com/graphic/20040504/images/architecture.gif
 Within one or two generations the 2D box is going to be gone.
 
 If Linux wants to stay current with technology, we have to start using the
 coprocessor features of the GPU. Most of the benchmarks I have seen show a
 100:1 speed differential between the coprocessor and programmed I/O. This is
 also a competitive problem: Microsoft and Apple have both decided to go with
 the GPU coprocessor this year.

I don't understand your GPU vs. PIO comparisons. You can use the 2D engine 
with DMA as well. And at least with older cards the 2D engine is clearly 
faster than the 3D engine (~100% faster for blits on my G400), so trying to 
bypass it is just stupid.

 I said OpenGL is the only accelerated API available on Linux. Can you name
 another?

DirectFB.

 There is a little acceleration in the framebuffer, but I don't know of any
 others. Also, software Mesa works just fine to provide OpenGL on dumb 2D cards.

Using unaccelerated OpenGL for 2D rendering doesn't sound exactly useful. 

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/




Re: [Mesa3d-dev] Re: [Linux-fbdev-devel] Redesign of kernel graphics interface

2004-05-14 Thread Alex Deucher

--- Sven Luther [EMAIL PROTECTED] wrote:
 On Thu, May 06, 2004 at 05:50:40PM -0700, Jon Smirl wrote:
  --- James Simmons [EMAIL PROTECTED] wrote:
   2) Ben's suggestion that we mount userland inside the kernel during
   early boot and use a userland library. If we would use a library then
   it MUST be OpenGL. This would be the forced standard on all platforms.
   This would mean Mesa would be needed to build the kernel. We could
   move Mesa into the kernel like zlib is in the tree right now.
  
  It is not true that it must be OpenGL. The suggestion is for an
  independent library that would support mode setting and cursor control.
  Actually OpenGL does not specify an API for these things; we would need
  to develop one.
  
  But broader issues are at work. Microsoft has decided to recode all
  graphics in Longhorn to use Direct3D. This was done to get at the
  performance gains provided by D3D and hardware accelerated graphics.
  For example, Cairo on OpenGL was benchmarked at being 100:1 faster than
  a Cairo implementation that uses X rendering.
  
  A proposal has been made that OpenGL be promoted as the primary base
  graphics API on Linux, and that things like Cairo and the xserver then
  be implemented on top of OpenGL.
  
  1) OpenGL is the only fully accelerated API that Linux has. We don't
  have D3D or anything else like it. Fully accelerated interfaces are a
  pain to build and it would be stupid to do another one.
 
 Notice that this is not really true, as there is no free OpenGL
 acceleration for any of the newer graphic cards coming out right now.
 The fastest graphics card with full free acceleration is the Radeon
 9000, which is now two generations old. This means that there is no
 acceleration outside of the x86 world, since neither ATI nor Nvidia are
 ready to build their proprietary drivers on anything other than x86.
 

There is the possibility that graphics vendors may provide open source
DRM and mode setting code and then closed source 3D libraries. This
would at least allow you to get something on the screen. 

 As long as this doesn't change, stating that we have an accelerated
 API for OpenGL in linux is not only dead wrong, but is leading us in
 a dangerous direction, where we will depend on a non-free component
 in the kernel and where we are going to forget about graphics support
 on anything non-x86.

Well, what should we do then?  Ignore graphics on Linux since most
future graphics chips' drivers are closed source?  Keep the same
kludgey XFree86 solution?  We can still provide a solution for non-x86
or chips without 3D; it will just have to be software based (or
marginally accelerated using 2D).  Why re-invent OpenGL when we
already have it?  If we can provide a good system for graphics on
Linux, perhaps more vendors will use it.

Alex

 
 Friendly,
 
 Sven Luther
 
 









separate blend equation / function on r200

2004-05-14 Thread Roland Scheidegger
Ok, here's a patch to enable separate blend function / equation on r200, 
as well as fix glBlendColor. It needs drm changes; I've tried to make it 
backward/forward compatible in all ways, except that the driver will not 
build with old drm sources (so this needs to be applied first; could 
someone do this for me if the patch looks ok? I don't have drm/dri write 
access, nor do I usually need it.)

A couple of comments though:
- not sure if the radeon_common.h patch for dri is needed. The packets 
defined there are exactly the same as in radeon_drm.h, which I consider 
not a good idea; but if it's duplicated, at least keep it in sync...
- drm changes are just a bunch of #defines and a version number bump. I 
think the driver date always stays the same?

Compatibility with old drm is achieved with a bunch of ifs. If you think 
this is too ugly, I'm open to other suggestions ;-). But it didn't seem 
like enough of a headache to warrant ditching compatibility with old drm 
either.
Also, if old drm is used, glBlendColor will silently fail (that's 
already the case up till now), as will the glBlendSeparate functions if 
you use different RGB and alpha factors/equations (the extensions are 
not announced, but you could still use them; no error will be generated).
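
A rough sketch of the shape of those compatibility ifs (illustrative 
only; the drmSupportsBlendColor flag is from the patch below, the emit 
helpers are hypothetical):

   /* Illustrative fallback, not the actual patch code. */
   if (rmesa->r200Screen->drmSupportsBlendColor) {
      /* new drm: emit RB3D_BLENDCOLOR plus the separate
       * ABLENDCNTL/CBLENDCNTL pair */
      emit_blend_color_state( rmesa );
   } else {
      /* old drm: only the single CBLENDCNTL word is available, so
       * glBlendColor and separate factors silently do nothing */
      emit_legacy_cblendcntl( rmesa );
   }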

Also, in the case of color blending, separate blend functions/equations 
are always used for the chip. I think it might be possible to only set 
the CBLENDCNTL register if the factors/equations are the same and not 
enable seperate_alpha, but this looked like more complex control code 
without any benefit. Unless, of course, separate blending has a 
performance hit; I assumed not, but haven't measured it. Is there a good 
demo/benchmark somewhere which would show such a hit (i.e. lots of 
blending, but not much else done)?
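
The trade-off in code form (a sketch; write_reg and the variables are 
hypothetical, only the register names come from r200_reg.h):

   /* What the patch does: with blending enabled, always program the
    * separate pair. */
   write_reg( R200_RB3D_CBLENDCNTL, rgb_blend );
   write_reg( R200_RB3D_ABLENDCNTL, alpha_blend );

   /* Rejected alternative: if rgb_blend == alpha_blend, program
    * CBLENDCNTL alone and leave separate alpha disabled.  This only
    * pays off if separate blending itself costs performance. */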

Testing seemed to show the new extensions work, though I couldn't find a 
good test. Mesa lacks a test for these extensions (as well as for 
blendColor).

Roland
Index: r200_context.c
===
RCS file: /cvs/mesa/Mesa/src/mesa/drivers/dri/r200/r200_context.c,v
retrieving revision 1.23
diff -u -r1.23 r200_context.c
--- r200_context.c  29 Apr 2004 12:23:41 -  1.23
+++ r200_context.c  14 May 2004 17:04:08 -
@@ -403,5 +403,9 @@
if (rmesa->r200Screen->drmSupportsCubeMaps)
   _mesa_enable_extension( ctx, GL_ARB_texture_cube_map );
+   if (rmesa->r200Screen->drmSupportsBlendColor) {
+  _mesa_enable_extension( ctx, GL_EXT_blend_equation_separate );
+  _mesa_enable_extension( ctx, GL_EXT_blend_func_separate );
+   }

 #if 0
r200InitDriverFuncs( ctx );
Index: r200_context.h
===
RCS file: /cvs/mesa/Mesa/src/mesa/drivers/dri/r200/r200_context.h,v
retrieving revision 1.15
diff -u -r1.15 r200_context.h
--- r200_context.h  5 May 2004 20:16:17 -   1.15
+++ r200_context.h  14 May 2004 17:04:11 -
@@ -213,7 +213,12 @@
 #define CTX_RB3D_COLOROFFSET  11
 #define CTX_CMD_2 12 /* why */
 #define CTX_RB3D_COLORPITCH   13 /* why */
-#define CTX_STATE_SIZE14
+#define CTX_STATE_SIZE_OLDDRM 14
+#define CTX_CMD_3 14
+#define CTX_RB3D_BLENDCOLOR   15
+#define CTX_RB3D_ABLENDCNTL   16
+#define CTX_RB3D_CBLENDCNTL   17
+#define CTX_STATE_SIZE_NEWDRM 18
 
 #define SET_CMD_0   0
 #define SET_SE_CNTL 1
Index: r200_reg.h
===
RCS file: /cvs/mesa/Mesa/src/mesa/drivers/dri/r200/r200_reg.h,v
retrieving revision 1.4
diff -u -r1.4 r200_reg.h
--- r200_reg.h  14 May 2004 13:01:08 -  1.4
+++ r200_reg.h  14 May 2004 17:04:20 -
@@ -1307,6 +1307,7 @@
 #define R200_PP_TXABLEND_7    0x2f78
 #define R200_PP_TXABLEND2_7   0x2f7c
 /* gap */
+#define R200_RB3D_BLENDCOLOR   0x3218 /* ARGB  */
 #define R200_RB3D_ABLENDCNTL   0x321C /* see BLENDCTL */
 #define R200_RB3D_CBLENDCNTL   0x3220 /* see BLENDCTL */
 
Index: r200_screen.c
===
RCS file: /cvs/mesa/Mesa/src/mesa/drivers/dri/r200/r200_screen.c,v
retrieving revision 1.18
diff -u -r1.18 r200_screen.c
--- r200_screen.c   29 Apr 2004 12:23:41 -  1.18
+++ r200_screen.c   14 May 2004 17:04:23 -
@@ -351,6 +352,10 @@
 
 /* Check if kernel module is new enough to support cube maps */
 screen->drmSupportsCubeMaps = (sPriv->drmMinor >= 7);
+/* Check if kernel module is new enough to support blend color and
+   separate blend functions/equations */
+    screen->drmSupportsBlendColor = (sPriv->drmMinor >= 11);
+
   }
}
 
Index: r200_screen.h
===
RCS file: /cvs/mesa/Mesa/src/mesa/drivers/dri/r200/r200_screen.h,v
retrieving revision 1.7
diff -u -r1.7 r200_screen.h
--- r200_screen.h   17 Mar 2004 

Re: [Dri-devel] Memory management of AGP and VRAM

2004-05-14 Thread David Bronaugh
Mike Mestnik wrote:

This is very good.
 - To accommodate mergedfb the number of FBs should be allowed to be 0.
 

How does mergedfb work internally? I don't know.

Alternatively to this, maybe the best way to do this would be to specify 
a double-width mode (e.g. 2048x768) and an extra feature parameter of 
MERGEDFB or some such -- that might work. However, I can't claim to 
understand mergedfb (as in, how it's implemented) yet, so this is 
probably a naive solution.

 - Sharing of FBs should be allowed, for heads on the same card.
 

Same deal, except instead of a feature of MERGEDFB, the feature 
should be CLONE

 - There is no way to ?change?(read as specify) the size of a FB.
 

If you can specify the resolution, you can specify the size of the 
framebuffer. What else did you have in mind?

 - Allocating the second/... FB may be difficult,
  - Have mem free as well as mem total.
 - Returning hardware capabilities (like in a termcap type way), not just
mem sizes, i.e. zbuffer type (how to know its size).
 

Hmm... I'd love for you to elaborate here, though I -think-  I know what 
you're getting at.

The more I think about this, the more sense it makes to have the apps 
talking to the kernel and requesting things via ioctl(s), the kernel 
communicating with the userspace to do mode management, and the kernel 
communicating back to the app. Having the X server be the thing 
communicated with is getting to be a huge mess.

That being said, having the kernel call userspace via whatever method can 
be ugly. Here's the suggestion I got from hpa (which is roughly what I 
was thinking of; but he filled in some important bits). A rough code 
sketch of the mode-manager side follows the list:
- At startup, a pipe is opened which the mode manager can read from 
and the kernel can write to
- When the kernel needs to set a mode, it locks the dev, feeds the 
pipe, and waits a predefined period of time (0.5s?)
   - Once the kernel's sure it can set the mode, it feeds the pipe to 
the mode manager a serialized version of the mode params
   - The mode manager sets the mode, then goes back to waiting on the pipe
   - The kernel returns from the ioctl 0.5s (or however much time) 
after it called the mode manager
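
A minimal sketch of that mode-manager read loop (the struct layout, the 
origin of the pipe fd, and the driver call are assumptions for 
illustration, not an existing API):

#include <stdint.h>
#include <unistd.h>

struct serialized_mode {
    uint32_t xres, yres, bpp, refresh;
};

/* Card-specific timing programming, provided elsewhere (hypothetical). */
void program_timing_registers(const struct serialized_mode *m);

void mode_manager_loop(int pipe_fd)
{
    struct serialized_mode m;

    for (;;) {
        /* Block until the kernel feeds the pipe a mode request. */
        if (read(pipe_fd, &m, sizeof m) != (ssize_t)sizeof m)
            continue;   /* short read or EOF: keep waiting */
        /* Program timing registers only; base addresses, tiling and
         * the like stay out of the mode manager, as argued below. */
        program_timing_registers(&m);
    }
}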

Bad things about this approach:
- Can't tell if setting the mode succeeded (see below about fail-over)
- There is an assumption made about how long it will take to set modes 
--  would probably have to run with realtime priority to ensure setting 
the mode happened quickly enough (it already runs as root; why not)

Dubious things:
- Have to have mode knowledge gleaned from DDC or whatnot in kernel
   - Alternative: Have  also do
- There might be problems if the entire device needed to be locked 
while the mode was being set
   - My question: Is this necessary?

Good things:
- Don't have to have the mode manager know about MergedFB, clone mode, etc 
because it's *simply* setting mode timings -- nothing more
- Still moves that important chunk of code out of the kernel
- Keeps locking entirely in kernel (if it is needed)

Here's the call chain for the mode manager (when it starts up):
- Sets up pipe to kernel
- Tries each DRM device, and finds out what's at each one
- Opens a config file, loads in user-specified modelines (necessary?)
- Queries each of them for DDC data or whatnot (using the specific 
driver) and stores it associated with that device
- Calls an ioctl on each live DRM device and informs it of the 
available modes (simply xres, yres, refresh; nothing more)
   - Could fire up a thread to watch for i2c data or whatever
- Waits on the pipe

Here's my current call chain for setting a mode:
- App requests mode from Extended DRM via ioctl (same sort of format, 
except packed into a struct instead of a string)
- Extended DRM checks if the requested mode is available
- Extended DRM locks the device
- Extended DRM checks if there is enough memory to set the specified mode
   - If not enough memory, Extended DRM returns -ENOMEM or something
   - Otherwise, continue
- Extended DRM frees previous framebuffer (if applicable), allocates 
new framebuffer(s) (including Z buffer)
   - This could be where the device could be unlocked, all depending
  - If the Extended DRM ioctl could have a safe way to set 
registers, it would be true.
- Extended DRM cooks up a serialized version of the mode string
- Extended DRM feeds pipe to userspace  application the serialized 
mode string
- Extended DRM waits 0.5s or whatever timeout is decided upon
  - When this is done (if we don't have a safe way to unlock registers 
and setting modes while other stuff is going on is not safe) the 
Extended DRM unlocks the device
-  Mode manager receives serialized mode, parses it (or whatever; could 
simply shove it into a struct; it's trusted code)
-  Mode manager gets best-match mode (you want something usable at 
least if bad things happen; screen corruption's ugly)
-  Mode manager sets appropriate registers
   - Sets timing registers
   - Does _NOT_ set base address registers or anything to do with 
tiling modes, or 

Re: [Dri-devel] Memory management of AGP and VRAM

2004-05-14 Thread Mike Mestnik
Let me start off by saying I think you are on the right track and all of
your ideas look good.

--- David Bronaugh [EMAIL PROTECTED] wrote:
 Mike Mestnik wrote:
 
  This is very good.
    - To accommodate mergedfb the number of FBs should be allowed to be 0.
   
 
 How does mergedfb work internally? I don't know.
 
However we need it to.  I think if we cripple X, the current mergedfb will
also have to be crippled.

 Alternatively to this, maybe the best way to do this would be to specify 
 a double-width mode (e.g. 2048x768) and an extra feature parameter of 
 MERGEDFB or some such -- that might work. However, I can't claim to 
 understand mergedfb (as in, how it's implemented) yet, so this is 
 probably a naive solution.
 
 
I see it more as just a way of pointing a viewport to a framebuffer, like
a screen (FB) swap.  What I see is that a FB gets allocated and then modes
get set, with their viewports looking into this FB.  This can all be part
of the modesetting code; then the FB and the viewport should be returned. 
That way the old FB can be deallocated after a successful FB change.  There
will be rare cases where the card can't handle both FBs; then the FB
allocate code might need to handle this NEEDED deallocate/change in order
to allocate the new/replacing FB.

Hopefully modes can be set without FBs; this cuts down on the FB
{a,de}lloc code.  However, in order to cut down on card-specific code, it
may be best for all cards to deal with worst-case FB alloc, if this is to
be a feature.

   - Sharing of FBs should be allowed, for heads on the same card.
   
 
 Same deal, except instead of a feature of MERGEDFB, the feature 
 should be CLONE
 
I don't like the idea of having things so static.  Attaching and
detaching modes (viewports) from FBs should be done via a full API.  If
this is at all possible?

   - There is no way to ?change?(read as specify) the size of a FB.
   
 
 If you can specify the resolution, you can specify the size of the 
 framebuffer. What else did you have in mind?
 
Size of viewport != FB size, though I think you got that by the end of my
msg.

   - Allocating the second/... FB may be difficult,
My comments above and below, as two different cases.

- Have mem free as well as mem total.
This helps with multi-tasking, i.e. two apps sharing the same VT (context).
For multi-headed cards they will have to share FB resources.

   - Returning hardware capabilities (like in a termcap type way), not
 just mem sizes, i.e. zbuffer type (how to know its size).
Allocating a FB on some cards may not be as simple as L*H*D.  As I'm not an
expert on hardware, I don't know what snags you might hit that are not
version- but card-dependent.
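
To put that in code (the alignment constraint is a made-up example; no 
real card's numbers are claimed here):

#include <stdint.h>

/* Hypothetical: FB size with a pitch-alignment snag, one example of a
 * card-dependent constraint.  pitch_align must be a power of two. */
uint32_t fb_size_bytes(uint32_t xres, uint32_t yres, uint32_t bpp,
                       uint32_t pitch_align /* bytes, e.g. 64 */)
{
    uint32_t pitch = (xres * (bpp / 8) + pitch_align - 1)
                     & ~(pitch_align - 1);   /* round up to alignment */
    return pitch * yres;   /* can exceed xres * yres * (bpp / 8) */
}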

   
 
 Hmm... I'd love for you to elaborate here, though I -think-  I know what
 
 you're getting at.
 
I wish I could, but I really don't know; it's just something I think the
design might need.  I used the source and saw into the future.

 
 This is sorta what I had in mind for modes. The first part is a blatant 
 rip of linux/fb.h:
 
 struct mode {
 __u32 xres; /* Actual FB width */
 __u32 yres; /* Actual FB height */
 __u32 xres_virtual; /* Virtual fb width */
 __u32 yres_virtual;/* Virtual fb height */
 __u32 xoffset; /* Offset of actual from top-left corner of virtual
 */
 __u32 yoffset;
 
 __u32 bpp; /* Bits per pixel */
 
 __u32 refresh; /* Preferred refresh rate (0 for no preference) */
 
 __u32 fb_mode; /* Example: various tiled modes versus linear; 
 defined as integers (LINEAR, TILED4K, etc) */
 __u32 feature; /* Numeric feature code (eg MERGEDFB, CLONE) */
 };
 
Virtual fb vs Actual FB.  IMHO Actual FB is the monitor's mode and not the
allocated size of the FB (Virtual fb).

 This is what the mode manager receives:
 
 struct ms_mode {
 __u32 xres;
 __u32 yres;
 __u32 bpp;
 __u32 refresh;
 };
 
No FB?  This may be positive.









savage texture compression - REALLY CLOSE

2004-05-14 Thread Mark Cass



guys,

As I have mentioned before, I am working on 
adding s3tc texture compression support to the savage driver. I have added 
code to the savage driver based upon the radeon driver (and patches). The code I 
have added only supports uploading pre-compressed textures. As also previously 
mentioned, I have tested the code and texture on a different computer that uses 
nvidia's driver and everything works.

As it stands now, I have the compressed texture 
showing up but it does not look right. The colors are all messed up. The 
proportions and layout of the texture are correct.

I set the size of the texture from the 
compressedSize variable in the texture struct. I set the width and height from 
the formulas in the radeon driver: width = (ImageWidthUncompressed / 4) x 16; 
height = ImageHeightUncompressed / 4. The "16" in the width formula is used for 
DXT3 and DXT5; "8" is used for DXT1. This width and height are used as input to 
the tile upload code, as well as inputs to the texture registers (width and 
height in powers of 2). 
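
In code form, that upload-dimension math (a hypothetical helper for 
illustration, not actual savage driver code):

#include <stdint.h>

/* DXT formats encode 4x4 texel blocks: 8 bytes per block for DXT1,
 * 16 bytes per block for DXT3/DXT5. */
void dxt_upload_dims(uint32_t w, uint32_t h, int is_dxt1,
                     uint32_t *upload_w, uint32_t *upload_h)
{
    /* one byte-sized texel per byte of a block row, per the formulas */
    *upload_w = (w / 4) * (is_dxt1 ? 8 : 16);
    *upload_h = h / 4;   /* one upload row per 4x4 block row */
}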

I have tried using all possible internal formats 
for s3tc, as enumerated in savage_bci.h, but none make the picture look any 
better. In fact, most make it look worse, so I think I have these 
right.

I have also tried using a different bytes 
per texel, which affects how the tiles are uploaded. Currently I am using 1 
byte per texel.

It seems that the right amount of data is 
getting uploaded. I have played with the formulas (i.e. width and height just 
divided by 4 instead of the current disproportional values) and the texture 
appears distorted in proportion and composition.

Does anyone have an idea?
mark


[ dri-Bugs-954295 ] Heavy DRI use causes lockup

2004-05-14 Thread SourceForge.net
Bugs item #954295, was opened at 2004-05-14 21:50
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=100387&aid=954295&group_id=387

Category: MGA OpenGL
Group: System Hang/Kernel Oops/Panic
Status: Open
Priority: 5
Submitted By: Chris Metzler (funkapus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Heavy DRI use causes lockup

Initial Comment:

This bug was originally filed as Debian bug #247756 and
tagged as an upstream issue; hence, I'm here.

System info:  kernel 2.4.23, XF86 v4.3.0-7, MGA G550
video card, using the driver that came with X, *not*
using the proprietary mga_hal driver from Matrox.

I am getting lockups when running OpenGL programs. 
They're not exactly reproducible, in the sense that I
cannot guarantee the lockup will occur if I do any
particular sequence of steps.  However, I have noticed
that it only happens when a large amount of 3D info is
being updated -- for example, if I'm flying near the
ground of a city in FlightGear.  When it occurs, the
mouse still moves the mouse cursor around; but screen
updates (like a second counter on a clock, or the
OpenGL app itself) cease, no keys have any effect
(including attempts to switch virtual consoles), and
sound ends.  I don't have another machine handy, so I
don't know whether I could still ssh in; and I don't
know whether doing so and killing X would have any
effect.  I suspect I wouldn't be able to ssh in, and
that the lockup is complete, because it appears that
regular notes to the system logs stop being written
(looking at the logs after reboot).  I'm not sure of
that, though; and am reluctant to reproduce it and
check unless you need me to because I want to avoid
forcing crashes if possible.

At any rate, my only option is the reset button. 
Inspecting the XFree86.0.log.old file after rebooting,
I find hundreds of messages like this:

(EE) MGA(0): [dri] Idle timed out, resetting engine...
(EE) MGA(0): [dri] Idle timed out, resetting engine...
(EE) MGA(0): [dri] Idle timed out, resetting engine...

. . . up until the end of the file.

I was experiencing a very similar problem under XF86
4.2 (exactly the same symptoms, but no such messages
above in the log), but waited until now to file the bug
out of advice that the Matrox drivers/dri support for X
4.2 were dated and that things might improve under 4.3.

This looks like the same bug as someone else filed
against X.org:

http://pdx.freedesktop.org/cgi-bin/bugzilla/show_bug.cgi?id=473

except I don't get it immediately at the start.  The
program
needs to be doing a lot of rendering for it to occur.   And
unlike that poster, I dunno whether I can ssh in.

It may also be the same bug as #546281 here:

http://sourceforge.net/tracker/index.php?func=detail&aid=546281&group_id=387&atid=100387

I don't have any way to tell.

I would like to do anything I can to get this solved. 
If there's more information you need from me, please
let me know and I'll be happy to send it your way.

Thanks for any help.

-c



--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=100387&aid=954295&group_id=387

