Re: Radeon 7200 problems

2004-12-07 Thread Manuel Bilderbeek
Hi,
Is there any update on the situation I reported? (Full thread on 
http://sourceforge.net/mailarchive/forum.php?thread_id=4872625&forum_id=7177 
)

Could someone please answer the questions I posted? :)
Thanks in advance...
(Just a user checking if there's a fix for his problems in the meantime...)
--
Grtjs, Manuel
PS: MSX FOR EVER! (Questions? http://faq.msxnet.org/ )
PPS: Visit my homepage at http://manuel.msxnet.org/


Re: Radeon 7200 problems

2004-06-05 Thread Patrick McFarland
On 04-Jun-2004, Mike Mestnik wrote:
 Right, but they must not EVER step on each other.  From the time one
 uploads a texture and then later unloads it, that space can't be used. 
 After an unload the memory can be mapped back to system use.
 
 I think the kernel's memory subsystem should be the one to manage and
 set up AGP memory.  This way it can allocate unused AGP memory for ANY
 other application.  This is a long way away from where we are now, but it
 definitely is where we should be.  On IGP systems it's even more important
 that video RAM is also managed by this subsystem.

Though I disagree with some of your conclusions, and with why you think some
things should be that way, I agree with you here. The kernel should manage
video memory; X shouldn't.

In many cases it would be advantageous to have one outside system (the
kernel) in charge: fbcon buffer usage, X buffer usage, and DRI texture
space/other buffer usage in and out of X all map memory, but they may
easily trip over each other if there are driver bugs. This readily shows
itself when you try to run multiple X servers at once; DRI won't let you
have multiple contexts.

Which brings me to mention something else: I fully believe that the
kernel should be completely managing all aspects of memory and state
management of both 2D and 3D hardware. The kernel's portion of DRI
should be providing methods to allow multiple DRI-using apps (such as
multiple xservers running at once) and multiple OpenGL apps within a
single DRI context to work flawlessly with each other.

Currently, projects such as DirectFB suffer because there is really no
unified method to do this. Neither DirectFB nor xservers should ever be
managing memory on their own, nor managing parts of the DRI context on
their own. It becomes very easy to get different pieces of software to
break each other, or simply prevent each other from working at the same time.

This, however, requires a more unified driver system (on platforms that
support it) between DRI and fbcon. (Does BSD have an equivalent to this?)
This new hybrid system would do all memory management, do the actual
resolution and depth changes, expose 2D and 3D hardware acceleration
functions, allow applications (DirectFB, xservers) to query the
available acceleration methods, provide DRI contexts and help manage
them so multiple GL apps would work on all drivers (which, afaik, few if
any correctly support), and probably increase the overall quality of
all software.

keithp, if you're around, I would love to hear your thoughts on this,
and maybe you would be interested in implementing something like this?
(It is way beyond my ability to even think of developing.)

-- 
Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED]
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989




Re: Radeon 7200 problems

2004-06-05 Thread Ville Syrjälä
On Sat, Jun 05, 2004 at 03:09:54AM -0400, Patrick McFarland wrote:
 
 Which brings me to mention something else: I fully believe that the
 kernel should be completely managing all aspects of memory and state
 management of both 2D and 3D hardware. The kernel's portion of DRI
 should be providing methods to allow multiple DRI-using apps (such as
 multiple xservers running at once) and multiple OpenGL apps within a
 single DRI context to work flawlessly with each other.
 
 Currently, projects such as DirectFB suffer because there is really no
 unified method to do this. Neither DirectFB nor xservers should ever be
 managing memory on their own, nor managing parts of the DRI context on
 their own. It becomes very easy to get different pieces of software to
 break each other, or simply prevent each other from working at the same time.
 
 This, however, requires a more unified driver system (on platforms that
 support it) between DRI and fbcon. (Does BSD have an equivalent to this?)
 This new hybrid system would do all memory management,

I agree, the kernel should manage video and AGP memory.

 do the actual
 resolution and depth changes,

I agree with this one too. But the interface should be more flexible than 
what fbdev provides. And it should handle overlays as well.

 expose 2D and 3D hardware acceleration
 functions, allow applications (DirectFB, xservers) to query the
 available acceleration methods,

I disagree.

This part of the kernel should be as dumb as possible. I think the best 
interface would be simply one accepting almost complete DMA buffers. The 
only thing missing from these buffers would be real memory addresses. The 
client should just use a surface id (handed out by the memory allocator) 
instead of a real address. The kernel would then check if the client is 
allowed to use those surfaces and replace the ids with real addresses. The 
kernel should also check the buffers for other dangerous stuff.
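
A minimal sketch of such a fixup pass; struct drm_client, surface_owned_by()
and surface_address() are hypothetical stand-ins, not the real DRM interface:

#include <linux/errno.h>

struct drm_client;	/* hypothetical client handle */
extern int surface_owned_by(struct drm_client *c, unsigned int id);
extern unsigned int surface_address(unsigned int id);

/* One slot of a client-submitted buffer: either a literal register
 * value or a surface id that still needs translation. */
struct cmd_slot {
	unsigned int is_surface_id;
	unsigned int value;
};

static int fixup_buffer(struct drm_client *client,
			struct cmd_slot *slots, int count)
{
	int i;

	for (i = 0; i < count; i++) {
		if (!slots[i].is_surface_id)
			continue;	/* literal data passes through */
		/* Reject ids the allocator never handed to this client. */
		if (!surface_owned_by(client, slots[i].value))
			return -EPERM;
		/* Swap the id for the surface's real card/AGP address. */
		slots[i].value = surface_address(slots[i].value);
		slots[i].is_surface_id = 0;
	}
	return 0;	/* safe to hand to the DMA engine */
}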

For what it's worth, Microsoft seems to have a quite similar system in 
mind. 
http://download.microsoft.com/download/1/8/f/18f8cee2-0b64-41f2-893d-a6f2295b40c8/DW04018_WINHEC2004.ppt

One clever thing they are doing is using the GART dynamically for swapping 
out video memory.

  provide DRI contexts and help manage
 them so multiple GL apps would work on all drivers (which, afaik, few if
 any correctly support), and probably increase the overall quality of
 all software.

? You can run multiple GL apps just fine. If it doesn't work, it's a driver 
bug.

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/




Re: Radeon 7200 problems

2004-06-05 Thread Michel Dänzer
On Sat, 2004-06-05 at 12:21 +0300, Ville Syrjälä wrote:
 On Sat, Jun 05, 2004 at 03:09:54AM -0400, Patrick McFarland wrote:
  
  expose 2D and 3D hardware acceleration
  functions, allow applications (DirectFB, xservers) to query the
  available acceleration methods,
 
 I disagree.
 
 This part of the kernel should be as dumb as possible. I think the best 
 interface would be simply one accepting almost complete DMA buffers. The 
 only thing missing from these buffers would be real memory addresses. 

I'm not sure about that; pseudo-command buffers that the DRM parses and
generates the actual DMA buffers from on the fly might be better for
security and/or performance reasons.

 The client should just use a surface id (handed out by the memory allocator) 
 instead of a real address. The kernel would then check if the client is 
 allowed to use those surfaces and replace the ids with real addresses. The 
 kernel should also check the buffers for other dangerous stuff.

Seconded.

I wonder if we can reasonably get there in a backwards compatible way...


-- 
Earthling Michel Dänzer  | Debian (powerpc), X and DRI developer
Libre software enthusiast|   http://svcs.affero.net/rm.php?r=daenzer





Re: Radeon 7200 problems

2004-06-05 Thread Ville Syrjälä
On Sat, Jun 05, 2004 at 12:41:33PM +0200, Michel Dänzer wrote:
 On Sat, 2004-06-05 at 12:21 +0300, Ville Syrjälä wrote:
  On Sat, Jun 05, 2004 at 03:09:54AM -0400, Patrick McFarland wrote:
   
   expose 2D and 3D hardware acceleration
   functions, allow applications (DirectFB, xservers) to query the
   available acceleration methods,
  
  I disagree.
  
  This part of the kernel should be as dumb as possible. I think the best 
  interface would be simply one accepting almost complete DMA buffers. The 
  only thing missing from these buffers would be real memory addresses. 
 
 I'm not sure about that; pseudo-command buffers that the DRM parses and
 generates the actual DMA buffers from on the fly might be better for
 security and/or performance reasons.

Quite possible. Though I'm not totally convinced of the performance 
argument, since the kernel would then have to build all of the buffers from 
scratch. With real buffers we could just check which register the value is 
for, and if there are no problems with that register the value could be 
passed as is.
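
A sketch of that check; reg_is_safe() stands in for a hypothetical per-chip
whitelist, and the struct layout is illustrative only:

#include <linux/errno.h>

extern int reg_is_safe(unsigned int reg);

struct reg_write {
	unsigned int reg;	/* register offset */
	unsigned int val;	/* value, passed through untouched if safe */
};

static int check_writes(const struct reg_write *w, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (!reg_is_safe(w[i].reg))	/* e.g. DMA base, PCI regs */
			return -EINVAL;
	return 0;	/* every value passes through as is */
}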

And one major requirement for pseudo buffers is that they must not impose 
any nasty limits on what we can do. So the design would have to be good 
from the start.

  The client should just use a surface id (handed out by the memory allocator) 
  instead of a real address. The kernel would then check if the client is 
  allowed to use those surfaces and replace the ids with real addresses. The 
  kernel should also check the buffers for other dangerous stuff.
 
 Seconded.
 
 I wonder if we can reasonably get there in a backwards compatible way...

I think the current DRM interface could be moved on top of the new one. 
Maybe as a separate compatibility module...

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/




Re: Radeon 7200 problems

2004-06-05 Thread Michel Dänzer
On Sat, 2004-06-05 at 14:09 +0300, Ville Syrjälä wrote:
 On Sat, Jun 05, 2004 at 12:41:33PM +0200, Michel Dänzer wrote:
  On Sat, 2004-06-05 at 12:21 +0300, Ville Syrjälä wrote:
   On Sat, Jun 05, 2004 at 03:09:54AM -0400, Patrick McFarland wrote:

expose 2D and 3D hardware acceleration
functions, allow applications (DirectFB, xservers) to query the
available acceleration methods,
   
   I disagree.
   
   This part of the kernel should be as dumb as possible. I think the best 
   interface would be simply one accepting almost complete DMA buffers. The 
   only thing missing from these buffers would be real memory addresses. 
  
  I'm not sure about that; pseudo-command buffers that the DRM parses and
  generates the actual DMA buffers from on the fly might be better for
  security and/or performance reasons.
 
 Quite possible. Though I'm not totally convinced of the performance 
 argument, since the kernel would then have to build all of the buffers from 
 scratch. With real buffers we could just check which register the value is 
 for, and if there are no problems with that register the value could be 
 passed as is.

True, but OTOH, to be secure, we might have to check against any
potentially harmful register value combinations, unmap the buffer from
user space before checking and submitting it to the hardware, ... It's
not obvious to me which variant would perform better. IIRC this has been
discussed extensively for mach64; has a conclusion been reached there
yet?

 And one major requirement for pseudo buffers is that they must not impose 
 any nasty limits on what we can do. So the design would have to be good 
 from the start.

I think the cmdbuf interface used by the Radeon drivers should be
extensible enough?
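
For reference, that interface is roughly the following, as declared in the
radeon_drm.h of the period (quoted from memory, so treat details as
approximate); each command in the opaque stream carries its own small
header, which is what leaves room for extension:

/* Client hands the DRM an opaque command stream plus cliprects. */
typedef struct drm_radeon_cmd_buffer {
	int bufsz;			/* size of the command stream */
	char *buf;			/* the stream itself */
	int nbox;			/* cliprects to replay it over */
	struct drm_clip_rect *boxes;
} drm_radeon_cmd_buffer_t;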


   The client should just use a surface id (handed out by the memory allocator) 
   instead of a real address. The kernel would then check if the client is 
   allowed to use those surfaces and replace the ids with real addresses. The 
   kernel should also check the buffers for other dangerous stuff.
  
  Seconded.
  
  I wonder if we can reasonably get there in a backwards compatible way...
 
 I think the current DRM interface could be moved on top of the new one. 
 Maybe as a separate compatibility module...

Interesting idea.


-- 
Earthling Michel Dänzer  | Debian (powerpc), X and DRI developer
Libre software enthusiast|   http://svcs.affero.net/rm.php?r=daenzer





Re: Radeon 7200 problems

2004-06-05 Thread Patrick McFarland
On 05-Jun-2004, Michel Dänzer wrote:
 On Sat, 2004-06-05 at 12:21 +0300, Ville Syrjälä wrote:
  This part of the kernel should be as dumb as possible. I think the best 
  interface would be simply one accepting almost complete DMA buffers. The 
  only thing missing from these buffers would be real memory addresses. 
 
 I'm not sure about that; pseudo-command buffers that the DRM parses and
 generates the actual DMA buffers from on the fly might be better for
 security and/or performance reasons.

Yeah, security is always an issue. Isn't there a way to abstract direct
access to the hardware out, so there isn't any way to fubar the system?

  The client should just use a surface id (handed out by the memory allocator) 
  instead of a real address. The kernel would then check if the client is 
  allowed to use those surfaces and replace the ids with real addresses. The 
  kernel should also check the buffers for other dangerous stuff.
 
 Seconded.
 
 I wonder if we can reasonably get there in a backwards compatible way...

Do we really have to? I mean, I wouldn't mind telling $MODERN_KERNEL
users to upgrade their X; it would be for their own good anyhow.

-- 
Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED]
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989




Re: Radeon 7200 problems

2004-06-05 Thread Patrick McFarland
On 05-Jun-2004, Ville Syrjälä wrote:
 On Sat, Jun 05, 2004 at 12:41:33PM +0200, Michel Dänzer wrote:
  I'm not sure about that; pseudo-command buffers that the DRM parses and
  generates the actual DMA buffers from on the fly might be better for
  security and/or performance reasons.
 
 Quite possible. Though I'm not totally convinced of the performance 
 argument, since the kernel would then have to build all of the buffers from 
 scratch. With real buffers we could just check which register the value is 
 for, and if there are no problems with that register the value could be 
 passed as is.
 
 And one major requirement for pseudo buffers is that they must not impose 
 any nasty limits on what we can do. So the design would have to be good 
 from the start.

Well, wouldn't it be easier to deal with state management if the kernel
is abstracting the entire buffer interface?

  I wonder if we can reasonably get there in a backwards compatible way...
 
 I think the current DRM interface could be moved on top of the new one. 
 Maybe as a separate compatibility module...

Yeah, that could be possible. Old DRI-using apps would never know the
difference...


-- 
Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED]
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989




Re: Radeon 7200 problems

2004-06-05 Thread Mike Mestnik

--- Patrick McFarland [EMAIL PROTECTED] wrote:
 On 05-Jun-2004, Michel Dänzer wrote:
  On Sat, 2004-06-05 at 12:21 +0300, Ville Syrjälä wrote:
 
   The client should just use a surface id (handed out by the memory
   allocator) instead of a real address. The kernel would then check if
   the client is allowed to use those surfaces and replace the ids with
   real addresses. The kernel should also check the buffers for other
   dangerous stuff.
  
  Seconded.
  
  I wonder if we can reasonably get there in a backwards compatible
  way...
 
 Do we really have to? I mean, I wouldn't mind telling $MODERN_KERNEL
 users to upgrade their X; it would be for their own good anyhow.
 
We really need to support ALL of XFree86 4.2 and up.  Years from now we
can drop that support, but there must be a replacement already in its
place, i.e. no one running 4.3 or 4.4, assuming the fix is in for 4.5.

 -- 
 Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED]
 "Computer games don't affect kids; I mean if Pac-Man affected us as kids,
 we'd all be running around in darkened rooms, munching magic pills and
 listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
 Inc, 1989
 



Re: Radeon 7200 problems

2004-06-05 Thread Mike Mestnik

--- Patrick McFarland [EMAIL PROTECTED] wrote:
 On 05-Jun-2004, Ville Syrjälä wrote:
  On Sat, Jun 05, 2004 at 12:41:33PM +0200, Michel Dänzer wrote:
   I'm not sure about that; pseudo-command buffers that the DRM parses
   and generates the actual DMA buffers from on the fly might be better
   for security and/or performance reasons.
  
  Quite possible. Though I'm not totally convinced of the performance
  argument, since the kernel would then have to build all of the buffers
  from scratch. With real buffers we could just check which register the
  value is for, and if there are no problems with that register the value
  could be passed as is.
  
  And one major requirement for pseudo buffers is that they must not
  impose any nasty limits on what we can do. So the design would have to
  be good from the start.
 
 Well, wouldn't it be easier to deal with state management if the kernel
 is abstracting the entire buffer interface?
 
Easier, maybe yes.  The problem is you're going to want to add more state
later.  If we keep state in the kernel it's bound to get ugly with
versioning and upgrading.

It's really a driver-dependent question, actually more of a
developer-dependent question.  It would be a lovely world indeed with
multiple drivers for each card, some ignoring security with (SVGA-type)
SUID root.

   I wonder if we can reasonably get there in a backwards compatible
   way...
  
  I think the current DRM interface could be moved on top of the new one. 
  Maybe as a separate compatibility module...
 
 Yeah, that could be possible. Old DRI-using apps would never know the
 difference...
 
That's basically what has been done for years; I think it's best to ignore
this problem till its time has come.  Develop first, then wonder how you're
going to bring in the old interface.

 
 -- 
 Patrick "Diablo-D3" McFarland || [EMAIL PROTECTED]
 "Computer games don't affect kids; I mean if Pac-Man affected us as kids,
 we'd all be running around in darkened rooms, munching magic pills and
 listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
 Inc, 1989
 



Re: Radeon 7200 problems

2004-06-05 Thread Ian Romanick
Ville Syrjälä wrote:
> This part of the kernel should be as dumb as possible. I think the best 
> interface would be simply one accepting almost complete DMA buffers. The 
> only thing missing from these buffers would be real memory addresses. The 
> client should just use a surface id (handed out by the memory allocator) 
> instead of a real address. The kernel would then check if the client is 
> allowed to use those surfaces and replace the ids with real addresses. The 
> kernel should also check the buffers for other dangerous stuff.
Yes and no.  For some classes of operations, this would have significant 
performance penalties.  I think a system like what is used in 
ARB_vertex_buffer_object would work nicely.  There you have an abstract 
handle to a buffer, but that buffer can be temporarily mapped to a real 
address.  That pointer can then be used to load vertex data, textures, 
whatever.
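
A minimal illustration of that pattern (assuming the ARB extension entry
points are resolved by the GL library; error handling trimmed):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static void upload_vertices(const GLfloat *verts, GLsizeiptrARB bytes)
{
	GLuint handle;
	void *ptr;

	/* The abstract handle: no address is visible to the app yet. */
	glGenBuffersARB(1, &handle);
	glBindBufferARB(GL_ARRAY_BUFFER_ARB, handle);
	glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, NULL, GL_STATIC_DRAW_ARB);

	/* Temporarily mapped to a real pointer... */
	ptr = glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
	if (ptr) {
		memcpy(ptr, verts, bytes);
		/* ...and unmapped again; only the handle stays valid. */
		glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);
	}
}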




Re: Radeon 7200 problems

2004-06-04 Thread Michel Dänzer
On Fri, 2004-06-04 at 04:16 +0200, Roland Scheidegger wrote:
 Ian Romanick wrote:
  Manuel Bilderbeek wrote:
  
  Option "GARTSize" "64M"
  
 This doesn't work for me, the driver ignores all values supplied to that
 parameter (dri tree). It does accept values supplied to the old,
 deprecated (?) AGPSize option, though.
 Quick fix attached. 

Oops, I thought I had tested this...

 A simpler option would be to axe support for AGPSize...

I don't think we should break working configurations as long as we can
support them so easily.

 It doesn't seem to change the reported maximum texture sizes though, for
 the r200 at least (could be different on radeon I guess); does it
 actually support using AGP memory for texturing? Some environment
 variable suggests so, though.

In contrast to radeon, r200 only uses the GART for some extension IIRC.


  Can we make the driver use a larger GART size by default when DRI is
   enabled?  5MB is just about worthless.  I would suggest 16MB as a
  bare minimum.

Seconded.

 Couldn't it just use the largest GART size possible (set by the bios),
 or would this have some negative consequences? 

It could waste a lot of RAM.

 Currently, if you set the gart size manually higher than what's possible 
 (set in bios), dri will just get disabled due to missing agp support, 
 which I consider bad behaviour, and that you get a useless error message 
 in that case doesn't help either.
 (II) RADEON(0): [agp] 262144 kB allocated with handle 0x0001
 (EE) RADEON(0): [agp] Could not bind
 (EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
 (II) RADEON(0): [agp] You may want to make sure the agpgart kernel 
 module is loaded before the radeon kernel module.

IMHO only the 'Could not bind' error could use some clarification,
otherwise I find this the only sane way to deal with an impossible
configuration.


-- 
Earthling Michel Dänzer  | Debian (powerpc), X and DRI developer
Libre software enthusiast|   http://svcs.affero.net/rm.php?r=daenzer





Re: Radeon 7200 problems

2004-06-04 Thread Nicolai Haehnle

On Friday 04 June 2004 12:22, Michel Dänzer wrote:
  Currently, if you set the gart size manually higher than what's possible 
  (set in bios), dri will just get disabled due to missing agp support, 
  which I consider bad behaviour, and that you get a useless error message 
  in that case doesn't help either.
  (II) RADEON(0): [agp] 262144 kB allocated with handle 0x0001
  (EE) RADEON(0): [agp] Could not bind
  (EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
  (II) RADEON(0): [agp] You may want to make sure the agpgart kernel 
  module is loaded before the radeon kernel module.
 
 IMHO only the 'Could not bind' error could use some clarification,
 otherwise I find this the only sane way to deal with an impossible
 configuration.

Would it be possible to do an automatic fallback to the largest allowed gart 
size, along with an appropriate warning/error message?
Tell me to shut up if it's not possible to query the maximum size ;)

cu,
Nicolai




Re: Radeon 7200 problems

2004-06-04 Thread Roland Scheidegger
Michel Dänzer wrote:
>> A simpler option would be to axe support for AGPSize...
> 
> I don't think we should break working configurations as long as we
> can support them so easily.
Ok.

>> Couldn't it just use the largest GART size possible (set by the
>> bios), or would this have some negative consequences?
> 
> It could waste a lot of RAM.
But is this a problem? It surely eats away some of the 3GB user address
space I believe (afaik the low-mem kernel address space agpgart takes is 
gone anyway), but unless the driver is really stupid and just fills it 
up even if it doesn't need to, then I can't see a problem.

>> Currently, if you set the gart size manually higher than what's
>> possible (set in bios), dri will just get disabled due to missing
>> agp support, which I consider bad behaviour, and that you get a
>> useless error message in that case doesn't help either.
>> (II) RADEON(0): [agp] 262144 kB allocated with handle 0x0001
>> (EE) RADEON(0): [agp] Could not bind
>> (EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
>> (II) RADEON(0): [agp] You may want to make sure the agpgart kernel
>> module is loaded before the radeon kernel module.
> 
> IMHO only the 'Could not bind' error could use some clarification,
That would be a good start...
> otherwise I find this the only sane way to deal with an impossible 
> configuration.
I like what's suggested by Nicolai.
Roland


Re: Radeon 7200 problems

2004-06-04 Thread Michel Dänzer
On Fri, 2004-06-04 at 16:48 +0200, Roland Scheidegger wrote:
 Michel Dänzer wrote:
 
  Couldn't it just use the largest GART size possible (set by the
  bios), or would this have some negative consequences?
  
  
  It could waste a lot of RAM.
 But is this a problem? It surely eats away some of the 3GB user address
 space I believe (afaik the low-mem kernel address space agpgart takes is 
 gone anyway), but unless the driver is really stupid and just fills it 
 up even if it doesn't need to, then I can't see a problem.

I understand that's the case, that's why I wrote 'waste a lot'. I'd love
to be corrected though.


  otherwise I find this the only sane way to deal with an impossible 
  configuration.
 I like what's suggested by Nicolai.

I tend to consider that a violation of the principle of least surprise,
but I'll defer to the majority of developers.


-- 
Earthling Michel Dänzer  | Debian (powerpc), X and DRI developer
Libre software enthusiast|   http://svcs.affero.net/rm.php?r=daenzer





Re: Radeon 7200 problems

2004-06-04 Thread Felix Kühling
On Fri, 04 Jun 2004 16:48:09 +0200
Roland Scheidegger [EMAIL PROTECTED] wrote:

 Michel Dänzer wrote:
  A simpler option would be to axe support for AGPSize...
  
  
  I don't think we should break working configurations as long as we
  can support them so easily.
 Ok.
 
  Couldn't it just use the largest GART size possible (set by the
  bios), or would this have some negative consequences?
  
  
  It could waste a lot of RAM.
 But is this a problem? It surely eats away some of the 3GB user address
 space I believe (afaik the low-mem kernel address space agpgart takes is 
 gone anyway), but unless the driver is really stupid and just fills it 
  up even if it doesn't need to, then I can't see a problem.

Apparently drivers tend to be stupid. ;-) Just this morning I happened
to play around with the AGPSize parameter on my notebook (Savage). After
reducing it from 32MB to 4MB I have 28MB more memory available (as seen
by free). This is quite significant with 256MB of RAM and a shared
memory graphics chip.

I'm not sure about the details of how AGP space is handled now. But
AFAIK once the Xserver has started, any 3D client can put textures into
AGP space and the graphics card must be able to address that data. So
pages have to be allocated statically to the AGP aperture on Xserver
startup, otherwise this won't work. If this is correct, then it's even
worse than wasting virtual memory: it wastes physical memory, since
pages in the AGP aperture must not be swapped to disk. Someone please
correct me if I'm wrong.
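
What this describes is roughly the following user-space sequence against
/dev/agpgart (ioctl and struct names as in <linux/agpgart.h> of the era,
from memory; error handling trimmed):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/agpgart.h>

int bind_gart_pages(size_t pg_count)
{
	int fd = open("/dev/agpgart", O_RDWR);
	agp_allocate alloc = { 0 };
	agp_bind bind = { 0 };

	if (fd < 0)
		return -1;
	ioctl(fd, AGPIOC_ACQUIRE);

	alloc.pg_count = pg_count;	/* number of pages to reserve */
	alloc.type = 0;			/* plain memory */
	ioctl(fd, AGPIOC_ALLOCATE, &alloc);

	bind.key = alloc.key;		/* handle returned by ALLOCATE */
	bind.pg_start = 0;		/* page offset into the aperture */
	/* After BIND the pages are card-visible and pinned: they stay
	 * resident and unswappable for as long as they remain bound. */
	ioctl(fd, AGPIOC_BIND, &bind);
	return alloc.key;
}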

 
[snip]

| Felix Kühling [EMAIL PROTECTED] http://fxk.de.vu |
| PGP Fingerprint: 6A3C 9566 5B30 DDED 73C3  B152 151C 5CC1 D888 E595 |




Re: Radeon 7200 problems

2004-06-04 Thread Roland Scheidegger
Felix Kühling wrote:
>>> It could waste a lot of RAM.
>> But is this a problem? It surely eats away some of the 3GB user address
>> space I believe (afaik the low-mem kernel address space agpgart takes is 
>> gone anyway), but unless the driver is really stupid and just fills it 
>> up even if it doesn't need to, then I can't see a problem.
> 
> Apparently drivers tend to be stupid. ;-) Just this morning I happened
> to play around with the AGPSize parameter on my notebook (Savage). After
> reducing it from 32MB to 4MB I have 28MB more memory available (as seen
> by free). This is quite significant with 256MB of RAM and a shared
> memory graphics chip.
> I'm not sure about the details of how AGP space is handled now. But
> AFAIK once the Xserver has started, any 3D client can put textures into
> AGP space and the graphics card must be able to address that data. So
> pages have to be allocated statically to the AGP aperture on Xserver
> startup, otherwise this won't work. If this is correct, then it's even
> worse than wasting virtual memory: it wastes physical memory, since
> pages in the AGP aperture must not be swapped to disk. Someone please
> correct me if I'm wrong.
Oops, I guess this IS a problem then. Looks to me like the real solution 
would be to dynamically allocate AGP space? The whole memory management 
currently is really stupid, everything is completely static: buffers 
(front, back, Z, offscreen pixmaps, and so on), AGP pages... It's a 
complete waste of memory (video, and it looks like system memory too).

Roland


Re: Radeon 7200 problems

2004-06-04 Thread Ian Romanick
Michel Dänzer wrote:
>> It doesn't seem to change the reported maximum texture sizes though, for
>> the r200 at least (could be different on radeon I guess); does it
>> actually support using AGP memory for texturing? Some environment
>> variable suggests so, though.
> In contrast to radeon, r200 only uses the GART for some extension IIRC.
I'd like to see this behavior become optional.  The vast majority of 
users want normal AGP texturing, not a funky version of 
NV_vertex_array_range (that's really used for textures).

We'd need to disable the GART heap (see RADEONDRIGartHeapInit in 
xc/programs/Xserver/hw/xfree86/drivers/ati/radeon_dri.c) and communicate 
that to the client-side driver somehow.  To maintain backwards 
compatibility, we'd probably want to initialize the GART heap to be 0 
bytes.  If the client-side driver detects a DRM version less than 1.6 or 
detects that the GART heap is less than the size of the aperture, it 
could use whatever was left over as normal AGP texturing.

Sounds like a good project for someone. ;)
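
A hypothetical rendering of that client-side decision; all names below are
invented, only the shape of the check comes from the paragraph above:

static int use_leftover_gart_for_textures(int drm_major, int drm_minor,
					  unsigned long gart_heap_size,
					  unsigned long aperture_size)
{
	/* DRMs older than 1.6 know nothing of a configurable GART heap. */
	if (drm_major == 1 && drm_minor < 6)
		return 1;
	/* A heap smaller than the aperture leaves the remainder free to
	 * be treated as plain AGP texture memory. */
	return gart_heap_size < aperture_size;
}
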
>> Couldn't it just use the largest GART size possible (set by the bios),
>> or would this have some negative consequences? 
> It could waste a lot of RAM.

Yup.  This is one of the bad parts of the AGP implementation on Linux. 
Once the AGP aperture is set up, it always has non-swappable memory 
backing it.  If you set a 256MB aperture, it is as if you took 256MB out 
of your computer.  We really need a mechanism to say "This range of the 
aperture doesn't need any backing."  This could have security 
implications.  What if one process removes the backing from a region of 
the AGP aperture at the same instant another process tries to texture 
from that range?  Random memory reads?  System crash?  Dunno...




Re: Radeon 7200 problems

2004-06-04 Thread Mike Mestnik

--- Ian Romanick [EMAIL PROTECTED] wrote:
 Michel Dänzer wrote:
 Couldn't it just use the largest GART size possible (set by the bios),
 or would this have some negative consequences? 
I think the idea here is to fall back after a failed bind!!!  E.g. AGPSize
== 124MB where only a 64MB GART is possible.  Set AGPSize = the largest
GART size possible before trying to bind, dumping a warning as well.

  
  It could waste a lot of RAM.
 
 Yup.  This is one of the bad parts of the AGP implementation on Linux. 
 Once the AGP aperture is set up, it always has non-swappable memory 
 backing it.  If you set a 256MB aperture, it is as if you took 256MB out 
 of your computer.  We really need a mechanism to say "This range of the 
 aperture doesn't need any backing."  This could have security 
The 'owner' of mapped AGP space would be the only one who could unmap.  As
I see it, contexts can't share AGP space, so there is no sharing issue.

Can we just bind all of the *requested* AGP space and then store other
(non-AGP) data there and swap that when needed?  Could we also extend this
to being able to use video RAM for programs as well?  I know back in the
days of DOS it was quite possible to use not only video RAM but IO regs,
like the speaker's power bit, for storing data.

 implications.  What if one process removes the backing from a region of 
 the AGP aperture at the same instant another process tries to texture 
 from that range?  Random memory reads?  System crash?  Dunno...
 
This just can't happen, no sharing of AGP space, right?







Re: Radeon 7200 problems

2004-06-04 Thread Ian Romanick
Mike Mestnik wrote:
> --- Ian Romanick [EMAIL PROTECTED] wrote:
>> Michel Dänzer wrote:
>>> It could waste a lot of RAM.
>>
>> Yup.  This is one of the bad parts of the AGP implementation on Linux. 
>> Once the AGP aperture is set up, it always has non-swappable memory 
>> backing it.  If you set a 256MB aperture, it is as if you took 256MB out 
>> of your computer.  We really need a mechanism to say "This range of the 
>> aperture doesn't need any backing."  This could have security 
>
> The 'owner' of mapped AGP space would be the only one who could unmap.  As
> I see it, contexts can't share AGP space, so there is no sharing issue.

In our case, I guess the owner is the X-server?  Or is it the DRM?  Hmm...

>> implications.  What if one process removes the backing from a region of 
>> the AGP aperture at the same instant another process tries to texture 
>> from that range?  Random memory reads?  System crash?  Dunno...
>
> This just can't happen, no sharing of AGP space, right?

Video memory and AGP memory is accessed by all direct rendering 
processes and the X-server.




Re: Radeon 7200 problems

2004-06-04 Thread Roland Scheidegger
Ian Romanick wrote:
>>> implications.  What if one process removes the backing from a region 
>>> of the AGP aperture at the same instant another process tries to 
>>> texture from that range?  Random memory reads?  System crash?  Dunno...
>>
>> This just can't happen, no sharing of AGP space, right?
>
> Video memory and AGP memory is accessed by all direct rendering 
> processes and the X-server.
Hmm. But at least increasing the AGP size shouldn't be much of a 
problem, right?
I'm thinking something like this:
client runs out of texture size -> requests from a central instance 
(say, X Server) GART size to be increased -> X Server tells agpgart to 
increase mapped GART size -> if successful, agpgart tells drm/drms (can 
there be more than one) new valid GART range, and X Server tells all dri 
clients new valid range.

Reclaiming could be done similarly, though maybe must be triggered by 
some daemon from time to time?
- Someone requests GART size to be decreased -> X Server asks dri 
clients to cope with lower GART size (and if one client says no because 
it still has textures bound in that space, abort) -> asks agpgart to 
decrease GART size -> agpgart asks drm drivers to decrease GART size -> 
drm drivers need to make sure all commands they have already checked for 
valid offsets and sent to the graphic card have been processed (I think 
that's a problem, as some texture offset could still be used later), 
then give their ok to agpgart -> agpgart reduces mapped size.

Looks like a lot could go wrong though, and I'm not even sure it makes 
sense...
And some crazy dri client holding that 1x1 texture right at 256MB would 
still be able to make it impossible to decrease gart size again, though 
that shouldn't be much of a problem.
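
A purely hypothetical sketch of what grow/shrink entry points for this
protocol might look like at the DRM level; nothing like this existed at the
time, and every name below is invented:

#include <sys/ioctl.h>

struct gart_resize {
	unsigned long requested;	/* desired GART size, in bytes */
	unsigned long actual;		/* granted size, filled in by kernel */
};

#define DRM_IOCTL_GART_GROW	_IOWR('d', 0x40, struct gart_resize)
#define DRM_IOCTL_GART_SHRINK	_IOWR('d', 0x41, struct gart_resize)

/* A client that runs out of texture space would issue GROW and re-read
 * the valid range (e.g. from the SAREA) rather than asking the X server;
 * SHRINK could fail with EBUSY while checked offsets are still in flight. */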

Roland


Re: Radeon 7200 problems

2004-06-04 Thread Ian Romanick
Roland Scheidegger wrote:
> Ian Romanick wrote:
>>>> implications.  What if one process removes the backing from a region 
>>>> of the AGP aperture at the same instant another process tries to 
>>>> texture from that range?  Random memory reads?  System crash?  Dunno...
>>>
>>> This just can't happen, no sharing of AGP space, right?
>>
>> Video memory and AGP memory is accessed by all direct rendering 
>> processes and the X-server.
> Hmm. But at least increasing the AGP size shouldn't be much of a 
> problem, right?
> I'm thinking something like this:
> client runs out of texture size -> requests from a central instance 
> (say, X Server) GART size to be increased -> X Server tells agpgart to 
> increase mapped GART size -> if successful, agpgart tells drm/drms (can 
> there be more than one) new valid GART range, and X Server tells all dri 
> clients new valid range.
I don't think you want the extra communication with the X-server while 
you're trying to render.  Esp. since, by that point, you have the 
hardware lock.  Not only that, but you'd have to communicate the new 
size to all the other direct rendering clients.  I suppose they could 
find out about the increase the next time they ask for the size to be 
increased, or it could be stored in the SAREA.  Hmm...

I really need to get back to working on my new memory manager...
> Reclaiming could be done similarly, though maybe must be triggered by 
> some daemon from time to time?
> - Someone requests GART size to be decreased -> X Server asks dri 
> clients to cope with lower GART size (and if one client says no because 
> it still has textures bound in that space, abort) -> asks agpgart to 
> decrease GART size -> agpgart asks drm drivers to decrease GART size -> 
> drm drivers need to make sure all commands they have already checked for 
> valid offsets and sent to the graphic card have been processed (I think 
> that's a problem, as some texture offset could still be used later), 
> then give their ok to agpgart -> agpgart reduces mapped size.
>
> Looks like a lot could go wrong though, and I'm not even sure it makes 
> sense...
> And some crazy dri client holding that 1x1 texture right at 256MB would 
> still be able to make it impossible to decrease gart size again, though 
> that shouldn't be much of a problem.
You can't lock textures.  Any texture can be kicked out at any time (as 
long as the age on it has expired).
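
For reference, the aging works roughly like the shared-area texture LRU
(cf. drm_tex_region_t in drm.h); a simplified sketch, with field names
approximated:

struct tex_region {
	unsigned char next, prev;	/* LRU links between regions */
	unsigned char in_use;		/* currently referenced by a context */
	int age;			/* stamp from when it was last used */
};

/* A region may be stolen once the hardware has retired everything up to
 * its age stamp; no client can pin a texture in place indefinitely. */
static int can_kick(const struct tex_region *r, int retired_age)
{
	return !r->in_use || r->age <= retired_age;
}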




Re: Radeon 7200 problems

2004-06-04 Thread Manuel Bilderbeek
Ian Romanick wrote:
> Manuel Bilderbeek wrote:
>> (II) RADEON(0): [agp] GART texture map handle = 0xe0302000
>> (II) RADEON(0): Using 5 MB for GART textures
>> (II) RADEON(0): Will use 2752 kb for textures at offset 0x1d5
>> The card has 32MB RAM and I'm usually running at 1600x1200x24 - is 
>> this reasonable?
>> It seems the Radeon driver doesn't have the fallbacks you mention, then?
> Of that 32MB, almost 23MB are taken up by your framebuffer and 
> depthbuffer.  That doesn't leave a lot of room for textures.  You should 
> try increasing your GART memory to 32MB or 64MB.  Add a line like:
> 
>   Option "GARTSize" "64M"
> 
> to your device section.
It seems that the "M" shouldn't be there. Anyway, I did this, but it 
doesn't help at all. Look at these snippets:

(**) RADEON(0): Option "AGPMode" "4"
(**) RADEON(0): Option "GARTSize" "64"
(--) RADEON(0): Chipset: ATI Radeon QD (AGP) (ChipID = 0x5144)
(II) RADEON(0): AGP card detected
(**) RADEON(0): Using AGP 4x mode
(II) RADEON(0): AGP Fast Write disabled by default
(II) RADEON(0): Depth moves disabled by default
(II) RADEON(0): Page flipping disabled
(II) RADEON(0): Using 8 MB GART aperture
(II) RADEON(0): Using 1 MB for the ring buffer
(II) RADEON(0): Using 2 MB for vertex/indirect buffers
(II) RADEON(0): Using 5 MB for GART textures
(II) RADEON(0): Largest offscreen area available: 1600 x 4040
(II) RADEON(0): Will use back buffer at offset 0xeaa000
(II) RADEON(0): Will use depth buffer at offset 0x15fd000
(II) RADEON(0): Will use 2752 kb for textures at offset 0x1d5
So:
1) Although the option is recognized, it still only uses an 8 MB GART 
aperture (note that my BIOS is set to a 64 MB AGP aperture). Still only 
5MB for GART textures :(
And, some other questions:
2) AGP Fast Write is disabled. Is it smart to enable it? It is enabled 
in my BIOS, by the way.
3) Page flipping is disabled. Is that good?

> Can we make the driver use a larger GART size by default when DRI is 
> enabled?  5MB is just about worthless.  I would suggest 16MB as a bare 
> minimum.
What can I do now?
Kind regards,
Manuel


Re: Radeon 7200 problems

2004-06-04 Thread Mike Mestnik

--- Ian Romanick [EMAIL PROTECTED] wrote:
 Mike Mestnik wrote:
  --- Ian Romanick [EMAIL PROTECTED] wrote:
   Michel Dänzer wrote:
 
    It could waste a lot of RAM.
 
   Yup.  This is one of the bad parts of the AGP implementation on Linux.
   Once the AGP aperture is set up, it always has non-swappable memory
   backing it.  If you set a 256MB aperture, it is as if you took 256MB
   out of your computer.  We really need a mechanism to say "This range
   of the aperture doesn't need any backing."  This could have security
  
  The 'owner' of mapped AGP space would be the only one who could unmap.
  As I see it, contexts can't share AGP space, so there is no sharing issue.
 
 In our case, I guess the owner is the X-server?  Or is it the DRM? 
 Hmm...
 
I really don't think the X-server would use ANY memory except the background
and the mouse's icon.  All images and textures are uploaded by some program,
not by the Xserver.  Even window borders belong to the window-manager. 
The X-server may map the AGP memory, but it's not REALLY the owner.

   implications.  What if one process removes the backing from a region
   of the AGP aperture at the same instant another process tries to
   texture from that range?  Random memory reads?  System crash?  Dunno...
 
  This just can't happen, no sharing of AGP space, right?
 
 Video memory and AGP memory is accessed by all direct rendering 
 processes and the X-server.
 
Right, but they must not EVER step on each other.  From the time one
uploads a texture and then later unloads it, that space can't be used. 
After an unload the memory can be mapped back to system use.

I think the kernel's memory subsystem should be the one to manage and
set up AGP memory.  This way it can allocate unused AGP memory for ANY
other application.  This is a long way away from where we are now, but it
definitely is where we should be.  On IGP systems it's even more important
that video RAM is also managed by this subsystem.

THE END.








Re: Radeon 7200 problems

2004-06-03 Thread Manuel Bilderbeek
Roland Scheidegger wrote:
>> I have been using the latest Debian packages from Michel Daenzer
>> for a few years now, and that has solved most problems with GL apps.
>> However, there are still a few left: - in the XScreensaver hack
>> AntSpotlight, I don't get to see the desktop... Only the ant
>> walking on a black surface... A similar thing happens with
>> Flipscreen3d and GFlux (grab) (normal GFlux does work, but with the
>> grab version I only see a white plane).
> That's what I'm seeing too, at least with a just installed SuSE 9.1 Linux
> version (XFree86 4.3.902).
Concluding (based on the rest of your mail): we should file bug reports 
against Xscreensaver(-gl)?

> It is not directly limited to 512. Rather, the driver calculates what
> maximum texture size it actually can handle in the worst case. So, it 
> makes sure there is always enough video ram: for the radeon driver 
> (which only exposes 2 texture units) this means there must be enough 
> video memory to bind 2 textures with maximum resolution (and maximum 
> bytes / pixel, which is 4), including mip maps. So, for 512x512 you need 
> 512*512*4 (bytes per pixel) * 2 (texture units) * 1.4 (for mipmaps), 
> which is ~3MB. But for 1024x1024 you already need ~12MB, for 2048x2048 
> ~50MB. Other drivers typically just announce the maximum, and use a 
> fallback, more clever texture upload functions like not uploading 
> mipmaps which are not used, or maybe AGP texturing if not enough video 
> ram is present when these worst-case conditions are actually met. You 
> can see in your XFree86 log how much video ram the driver thinks is 
> available for textures.
(II) RADEON(0): [agp] GART texture map handle = 0xe0302000
(II) RADEON(0): Using 5 MB for GART textures
(II) RADEON(0): Will use 2752 kb for textures at offset 0x1d5
The card has 32MB RAM and I'm usually running at 1600x1200x24 - is 
this reasonable?
It seems the Radeon driver doesn't have the fallbacks you mention, then?

> But, in any case, if the app has severe rendering errors when large 
> textures are not possible, the app is broken. The minimum OpenGL 
> requirement for the maximum texture size an implementation must support 
> is 256x256.
OK, see also above. I'll report this to the authors :)
Thanks for your reply.
Kind regards,
Manuel


Re: Radeon 7200 problems

2004-06-03 Thread Ian Romanick
Manuel Bilderbeek wrote:
> (II) RADEON(0): [agp] GART texture map handle = 0xe0302000
> (II) RADEON(0): Using 5 MB for GART textures
> (II) RADEON(0): Will use 2752 kb for textures at offset 0x1d5
> The card has 32MB RAM and I'm usually running at 1600x1200x24 - is 
> this reasonable?
> It seems the Radeon driver doesn't have the fallbacks you mention, then?

Of that 32MB, almost 23MB are taken up by your framebuffer and 
depthbuffer.  That doesn't leave a lot of room for textures.  You should 
try increasing your GART memory to 32MB or 64MB.  Add a line like:

  Option "GARTSize" "64M"

to your device section.
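
For concreteness, a sketch of such a Device section (the Identifier is
illustrative; note Manuel's follow-up elsewhere in this thread that the
value is parsed as a plain integer, without the trailing "M"):

Section "Device"
    Identifier "ATI Radeon QD"     # illustrative name
    Driver     "radeon"
    Option     "AGPMode"  "4"
    Option     "GARTSize" "64"     # megabytes; no trailing "M"
EndSection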
Can we make the driver use a larger GART size by default when DRI is 
enabled?  5MB is just about worthless.  I would suggest 16MB as a bare 
minimum.




Re: Radeon 7200 problems

2004-06-03 Thread Roland Scheidegger
Ian Romanick wrote:
> Manuel Bilderbeek wrote:
>> (II) RADEON(0): [agp] GART texture map handle = 0xe0302000 (II)
>> RADEON(0): Using 5 MB for GART textures (II) RADEON(0): Will use
>> 2752 kb for textures at offset 0x1d5
>> The card has 32MB RAM and I'm usually running at 1600x1200x24
>> - is this reasonable? It seems the Radeon driver doesn't have the
>> fallbacks you mention, then?
> 
> Of that 32MB, almost 23MB are taken up by your framebuffer and 
> depthbuffer.  That doesn't leave a lot of room for textures.  You
> should try increasing your GART memory to 32MB or 64MB.  Add a line
> like:
> 
>   Option "GARTSize" "64M"
> 
> to your device section.
This doesn't work for me, the driver ignores all values supplied to that
parameter (dri tree). It does accept values supplied to the old,
deprecated (?) AGPSize option, though.
Quick fix attached. A simpler option would be to axe support for AGPSize...
It doesn't seem to change the reported maximum texture sizes though, for
the r200 at least (could be different on radeon I guess); does it
actually support using AGP memory for texturing? Some environment
variable suggests so, though.

> Can we make the driver use a larger GART size by default when DRI is
> enabled?  5MB is just about worthless.  I would suggest 16MB as a
> bare minimum.
Couldn't it just use the largest GART size possible (set by the bios),
or would this have some negative consequences? I vaguely remember there
was some discussion about gart size some time before, but can't remember
any details. Currently, if you set the gart size manually higher than
what's possible (set in bios), dri will just get disabled due to missing 
agp support, which I consider bad behaviour, and that you get a useless 
error message in that case doesn't help either.
(II) RADEON(0): [agp] 262144 kB allocated with handle 0x0001
(EE) RADEON(0): [agp] Could not bind
(EE) RADEON(0): [agp] AGP failed to initialize. Disabling the DRI.
(II) RADEON(0): [agp] You may want to make sure the agpgart kernel 
module is loaded before the radeon kernel module.

Roland

Index: radeon_driver.c
===================================================================
RCS file: /cvs/dri/xc/xc/programs/Xserver/hw/xfree86/drivers/ati/radeon_driver.c,v
retrieving revision 1.86
diff -u -r1.86 radeon_driver.c
--- a/radeon_driver.c   29 Mar 2004 14:55:11 -0000      1.86
+++ b/radeon_driver.c   4 Jun 2004 01:50:30 -0000
@@ -132,6 +132,7 @@
     OPTION_AGP_MODE,
     OPTION_AGP_FW,
     OPTION_GART_SIZE,
+    OPTION_GART_SIZE_OLD,
     OPTION_RING_SIZE,
     OPTION_BUFFER_SIZE,
     OPTION_DEPTH_MOVE,
@@ -167,7 +168,7 @@
     { OPTION_USEC_TIMEOUT,   "CPusecTimeout",  OPTV_INTEGER, {0}, FALSE },
     { OPTION_AGP_MODE,       "AGPMode",        OPTV_INTEGER, {0}, FALSE },
     { OPTION_AGP_FW,         "AGPFastWrite",   OPTV_BOOLEAN, {0}, FALSE },
-    { OPTION_GART_SIZE,      "AGPSize",        OPTV_INTEGER, {0}, FALSE },
+    { OPTION_GART_SIZE_OLD,  "AGPSize",        OPTV_INTEGER, {0}, FALSE },
     { OPTION_GART_SIZE,      "GARTSize",       OPTV_INTEGER, {0}, FALSE },
     { OPTION_RING_SIZE,      "RingSize",       OPTV_INTEGER, {0}, FALSE },
     { OPTION_BUFFER_SIZE,    "BufferSize",     OPTV_INTEGER, {0}, FALSE },
@@ -3790,8 +3791,10 @@
 	}
     }
 
-    if (xf86GetOptValInteger(info->Options,
-			     OPTION_GART_SIZE, (int *)&(info->gartSize))) {
+    if ((xf86GetOptValInteger(info->Options,
+			      OPTION_GART_SIZE, (int *)&(info->gartSize))) ||
+	(xf86GetOptValInteger(info->Options,
+			      OPTION_GART_SIZE_OLD, (int *)&(info->gartSize)))) {
 	switch (info->gartSize) {
 	case 4:
 	case 8:


Radeon 7200 problems

2004-05-31 Thread Manuel Bilderbeek
Hello all,
I have been using the latest Debian packages from Michel Daenzer
for a few years now, and that has solved most problems with GL apps.
However, there are still a few left:
- in the XScreensaver hack AntSpotlight, I don't get to see the 
desktop... Only the ant walking on a black surface... A similar thing 
happens with Flipscreen3d and GFlux (grab) (normal GFlux does work, but 
with the grab version I only see a white plane).
- a GL app reports that the maximum texture size is 512, while the 
Windows version of that app reports 2048 (when using the ATI drivers, of 
course). The texture size of 512 causes some problems in the GL app, 
while in Windows it runs fine. The problems are similar to the issue 
above: surfaces are simply white. Why is the maximum texture size 
limited to 512 in the DRI for the Radeon 1?

Thanks in advance for any reactions.
For any personal replies: please Cc: to my private mail account.
Kind regards,
Manuel Bilderbeek


Re: Radeon 7200 problems

2004-05-31 Thread Roland Scheidegger
Manuel Bilderbeek wrote:
> Hello all,
> I have been using the latest Debian packages from Michel Daenzer
> for a few years now, and that has solved most problems with GL apps.
> However, there are still a few left: - in the XScreensaver hack
> AntSpotlight, I don't get to see the desktop... Only the ant
> walking on a black surface... A similar thing happens with
> Flipscreen3d and GFlux (grab) (normal GFlux does work, but with the
> grab version I only see a white plane).
That's what I'm seeing too, at least with a just installed SuSE 9.1 Linux
version (XFree86 4.3.902).
> - a GL app reports that the maximum texture size is 512, while the
> Windows version of that app reports 2048 (when using the ATI drivers,
> of course). The texture size of 512 causes some problems in the GL
> app, while in Windows it runs fine. The problems are similar to the
> issue above: surfaces are simply white. Why is the maximum texture
> size limited to 512 in the DRI for the Radeon 1?
It is not directly limited to 512. Rather, the driver calculates what
maximum texture size it actually can handle in the worst case. So, it 
makes sure there is always enough video ram: for the radeon driver 
(which only exposes 2 texture units) this means there must be enough 
video memory to bind 2 textures with maximum resolution (and maximum 
bytes / pixel, which is 4), including mip maps. So, for 512x512 you need 
512*512*4 (bytes per pixel) * 2 (texture units) * 1.4 (for mipmaps), 
which is ~3MB. But for 1024x1024 you already need ~12MB, for 2048x2048 
~50MB. Other drivers typically just announce the maximum, and use a 
fallback, more clever texture upload functions like not uploading 
mipmaps which are not used, or maybe AGP texturing if not enough video 
ram is present when these worst-case conditions are actually met. You 
can see in your XFree86 log how much video ram the driver thinks is 
available for textures.
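
Spelled out, the worst-case bound above works out as follows (a standalone
C translation of the same arithmetic):

#include <stdio.h>

int main(void)
{
	int side;

	for (side = 256; side <= 2048; side *= 2) {
		double bytes = (double)side * side
			     * 4	/* max bytes per pixel */
			     * 2	/* texture units on radeon */
			     * 1.4;	/* mipmap overhead, as above */
		printf("%4d x %-4d -> %5.1f MB\n",
		       side, side, bytes / (1024 * 1024));
	}
	return 0;	/* 512 -> ~2.8MB, 1024 -> ~11.2MB, 2048 -> ~44.8MB */
}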
But, in any case, if the app has severe rendering errors when large 
textures are not possible, the app is broken. The minimum OpenGL 
requirement for the maximum texture size an implementation must support 
is 256x256.

Roland