Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Aapo Tahkola
On Tue, 26 Jul 2005 14:18:10 +0200
Bellido Nicolas [EMAIL PROTECTED] wrote:

 On Monday 25 July 2005 16:22, Aapo Tahkola wrote:
  On Mon, 25 Jul 2005 08:59:53 +0200
   [drm:drm_ioctl] pid=9733, cmd=0x40106450, nr=0x50, dev 0xe200, auth=1
   [drm:radeon_cp_cmdbuf] RADEON_CMD_SCALARS2
   [drm:radeon_cp_cmdbuf] *ERROR* bad cmd_type 0 at e08fa024
 
 r300_do_cp_cmdbuf doesn't get called...
 
 That's indeed strange. From radeon_cp_cmdbuf in  shared-core/radeon_state.c:
 
   if (dev_priv->microcode_version == UCODE_R300) {
       int temp;
       temp = r300_do_cp_cmdbuf(dev, filp, filp_priv, cmdbuf);
 
       if (orig_bufsz != 0)
           drm_free(kbuf, orig_bufsz, DRM_MEM_DRIVER);
 
       return temp;
   }
 
 Although dmesg says: 
 
 [drm] Loading R300 Microcode
 
 So in the function radeon_cp_load_microcode in shared-core/radeon_cp.c:
 
   if (dev_priv->microcode_version == UCODE_R200) {
   [snip]
   } else if (dev_priv->microcode_version == UCODE_R300) {
       DRM_INFO("Loading R300 Microcode\n");
       for (i = 0; i < 256; i++) {
           RADEON_WRITE(RADEON_CP_ME_RAM_DATAH,
                        R300_cp_microcode[i][1]);
           RADEON_WRITE(RADEON_CP_ME_RAM_DATAL,
                        R300_cp_microcode[i][0]);
       }
   } else {
   [snip]
 
 The test against the microcode_version succeeds...
 
 And, from the logs, I don't see the DRM_IOCTL_RADEON_CP_INIT ioctl called 
 twice...
 
 Ideas ?

You don't have two cards hooked up by any chance? :)
Does Xorg.0.log get the card right?
You probably want to check if microcode_version actually has any sane value at 
radeon_cp_cmdbuf.
Try something like:
printk("microcode_version %d\n", dev_priv->microcode_version);
return DRM_ERR(EINVAL);

-- 
Aapo Tahkola


---
SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
from IBM. Find simple to follow Roadmaps, straightforward articles,
informative Webcasts and more! Get everything you need to get up to
speed, fast. http://ads.osdn.com/?ad_id=7477&alloc_id=16492&op=click
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Bellido Nicolas
On Tuesday 26 July 2005 11:06, Jerome Glisse wrote:
   In fact, i use xfree86 (from debian testing) for fglrx,
 
  You mean xfree86 4.3.0 + ati.patch from r300 cvs ?

 No, the original xfree86 from debian with fglrx, and I
 have installed Xorg with r300 elsewhere (in fact
 my directory /usr/X11R6 is a link to either xfree86 or xorg, and
 I point it to whichever one I want to run). I guess you can
 have an older xorg with gentoo and use that one
 with fglrx, and use xorg cvs for r300 stuff.

OK, I have now xorg-6.8.2-r2 (from gentoo) + fglrx, and xorg-cvs + r300.

And guess what: X does not hang anymore when I start glxgears...
Neither does it when I let glxgears run for a minute or so (around 2400 fps).

Possible causes:
  . I did something terribly wrong in my previous install
  . Some incompatibilities caused by gentoo patches to xorg
  . I was previously running kde. Now, I have nothing other than X's default WM
(twm ??)
  . Solar wind
  . Bacteria attacking my PC (er... no, that's the wrong movie)
  . Something else

Anyway, let's see what hwscript has to tell.

BTW, do you have any scenario where you are sure X will hang ??

Nicolas.




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Jerome Glisse
On 7/27/05, Bellido Nicolas [EMAIL PROTECTED] wrote:
 On Tuesday 26 July 2005 11:06, Jerome Glisse wrote:
In fact, i use xfree86 (from debian testing) for fglrx,
  
   You mean xfree86 4.3.0 + ati.patch from r300 cvs ?
 
  No the original xfree86 from debian with fglrx, and i
  have installed elsewhere Xorg with r300 (in fact
  my rep /usr/X11R6 is link to xfree86 or to xorg and
  i point it to which one i want to run). I guess you can
  have an older xorg with gentoo and use this one
  with fglrx and use xorg cvs for r300 stuff.
 
 OK, I have now xorg-6.8.2-r2 (from gentoo) + fglrx, and xorg-cvs + r300.
 
 And guess what: X does not hang anymore when I start glxgears...
 Neither does it when I let glxgears run for a minute or so (around 2400 fps).
 
 Possible causes:
   . I did something terribly wrong in my previous install
   . Some incompatibilities caused by gentoo patches to xorg
   . I was previously running kde. Now, I have nothing else than X's default WM
 (twm ??)
   . Solar wind
   . Bacterias attacking my PC (heu. no, that's the wrong movie)
   . Something else
 
 Anyway, let's see what hwscript has to tell.
 
 BTW, do you have any scenario where you are sure X will hang ??

Launch the ut2003 or 2004 demos (I don't remember which one), then start a game
(quick launch) and it will lock up during level loading or a few seconds after
the intro begins... Other OpenGL apps do the same: a flight simulator whose
name I don't remember will lock up in the menu after a few seconds.

Btw, if I try to resize or move glxgears I get a lockup, less often but
sometimes...

Jerome Glisse




Re: ATI commercial driver and software suspend

2005-07-27 Thread Jerome Glisse
On 7/27/05, Aapo Tahkola [EMAIL PROTECTED] wrote:
 On Tue, 26 Jul 2005 13:37:02 -0700
 Nguyen The Toan [EMAIL PROTECTED] wrote:
 
  You need a program called vbetool. This allows one to save and restore the
  graphic card state before suspending and after resuming.
 
 I wonder if this could be used to hunt down the r300 problem.

It doesn't cost much to take a look; I will as soon as I get a little more
time, this weekend maybe.

Jerome Glisse




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Bellido Nicolas
On Wednesday 27 July 2005 08:44, Aapo Tahkola wrote:
 On Tue, 26 Jul 2005 14:18:10 +0200

 You dont have two cards hooked up by any chance? :)

No, no handmade mobo with 2 agp slots :)

 Does Xorg.0.log get the card right?

Apparently yes, it does.

 You probably want to check if microcode_version actually has any sane value
 at radeon_cp_cmdbuf. Try something like:
 printk("microcode_version %d\n", dev_priv->microcode_version);
 return DRM_ERR(EINVAL);

Yeah, I was planning to do something like that.

But, how do you explain:

 [drm:drm_ioctl] pid=9733, cmd=0x40106450, nr=0x50, dev 0xe200, auth=1
 [drm:radeon_cp_cmdbuf] RADEON_CMD_SCALARS2
 [drm:radeon_cp_cmdbuf] *ERROR* bad cmd_type 0 at e08fa024

Looking at the code, I think it is not possible, except if two ioctls were 
called concurrently...

Nicolas






DRI in CVS failed to compile on FC4 2.6.11

2005-07-27 Thread Barry Scott
I pulled the CVS version of DRI and found that it does not compile because
the kernel version checks against 2.6.11 all need to be 2.6.10. For example:

CC [M] /home/bscott/wc/cvs/dri/drm/linux-core/drm_agpsupport.o
/home/bscott/wc/cvs/dri/drm/linux-core/drm_agpsupport.c: In function 
‘drm_agp_acquire’:
/home/bscott/wc/cvs/dri/drm/linux-core/drm_agpsupport.c:114: error: too 
few arguments to function ‘agp_backend_acquire’


Having replaced all KERNEL_VERSION(2,6,11) with KERNEL_VERSION(2,6,10), it 
built and works.


Barry





Stable DRI with VIA sources

2005-07-27 Thread Barry Scott
What controlled version of the DRI sources with VIA support do you
recommend I use with kernel 2.6.11 on an FC4 distribution?

Barry





IGP + DRI + xfce4 = Hang.

2005-07-27 Thread Adam K Kirchhoff


I have an interesting problem with an HP Pavilion.  It's an IGP320M with 
a Radeon Mobility.  DRI works just fine when using WindowMaker or 
gnome.  However, when I try to use xfce4 instead, X locks up when the 
splash screen would normally be displayed.  I can move the mouse around, 
but I can't always control-alt-delete out of it (though sometimes I 
can...  I think the difference may have to do with whether it was the 
first time X started up since I rebooted).  I can ssh into the machine 
and reboot it.  If I disable the DRI, xfce4 has no problems.


This is with recent Mesa cvs and drm 1.16.0 (2.6.12.3, specifically, 
though I've noticed this with each of the 2.6.12 releases. Can't 
speak about anything earlier).


Any ideas?

Thanks,
Adam






Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Aapo Tahkola
On Wed, 27 Jul 2005 12:25:27 +0200
Bellido Nicolas [EMAIL PROTECTED] wrote:

 On Wednesday 27 July 2005 08:44, Aapo Tahkola wrote:
  On Tue, 26 Jul 2005 14:18:10 +0200
 
  You dont have two cards hooked up by any chance? :)
 
 No, no handmade mobo with 2 agp slots :)
 
  Does Xorg.0.log get the card right?
 
 Apparently yes, it does.
 
  You probably want to check if microcode_version actually has any sane value
  at radeon_cp_cmdbuf. Try something like:
   printk("microcode_version %d\n", dev_priv->microcode_version);
   return DRM_ERR(EINVAL);
 
 Yeah, I was planning to do smthg like that.
 
 But, how do you explain:
 
  [drm:drm_ioctl] pid=9733, cmd=0x40106450, nr=0x50, dev 0xe200, auth=1

DRM_COMMAND_BASE + DRM_RADEON_CMDBUF == 0x50

  [drm:radeon_cp_cmdbuf] RADEON_CMD_SCALARS2

cmd type 7 equals R300_CMD_WAIT (from r300DoEmitState)

  [drm:radeon_cp_cmdbuf] *ERROR* bad cmd_type 0 at e08fa024

These are random bits of memory already, as the cmd length of the previous
packet wasn't right.

-- 
Aapo Tahkola




Re: Stable DRI with VIA sources

2005-07-27 Thread Alex Deucher
On 7/27/05, Barry Scott [EMAIL PROTECTED] wrote:
 What controlled version of the DRI sources with VIA support do you
 recommend I use
 with kernel 2.6.11 on an FC4 distribution?
 
 Barry
 
 

I think you'll have to use xorg/mesa cvs for DRI support:
http://dri.freedesktop.org/wiki/Building
or:
http://dri.freedesktop.org/wiki/Download#head-55420c59a1c2e9a70f07a6fa02f0d228ffb87b76

Alex




Re: Stable DRI with VIA sources

2005-07-27 Thread Barry Scott

Alex Deucher wrote:

On 7/27/05, Barry Scott [EMAIL PROTECTED] wrote:

 What controlled version of the DRI sources with VIA support do you
 recommend I use
 with kernel 2.6.11 on an FC4 distribution?

 Barry

I think you'll have to use xorg/mesa cvs for DRI support:
http://dri.freedesktop.org/wiki/Building
or:
http://dri.freedesktop.org/wiki/Download#head-55420c59a1c2e9a70f07a6fa02f0d228ffb87b76

I have the CVS code checked out, but HEAD in CVS usually cannot be assumed 
to be tested and stable.

Is there a label in CVS that is the latest tested/stable set of code?

Barry





why no open source driver for NVIDIA/ATI

2005-07-27 Thread Juhana Sadeharju

Hello. I found a reason why neither ATI nor NVIDIA provides us hardware
details:
  http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf

Regarding ATI: This performance drop is almost entirely due to 8.2%
difference in the game test 4 result, which means that the test was
also detected and somehow altered by the ATI drivers.

Nvidia is worse: they have 8 cheats in their driver.

It is no wonder they don't want to release the hardware details.
They simply don't want a driver which does not contain the cheats.
Please continue developing reverse-engineered, open-sourced drivers.

Juhana
-- 
  http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
  for developers of open source graphics software




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Bellido Nicolas
On Wednesday 27 July 2005 19:04, Aapo Tahkola wrote:
 On Wed, 27 Jul 2005 12:25:27 +0200

 Bellido Nicolas [EMAIL PROTECTED] wrote:
  On Wednesday 27 July 2005 08:44, Aapo Tahkola wrote:
   On Tue, 26 Jul 2005 14:18:10 +0200
  
   You dont have two cards hooked up by any chance? :)
 
  No, no handmade mobo with 2 agp slots :)
 
   Does Xorg.0.log get the card right?
  Apparently yes, it does.
 
   You probably want to check if microcode_version actually has any sane
   value at radeon_cp_cmdbuf. Try something like:
    printk("microcode_version %d\n", dev_priv->microcode_version);
    return DRM_ERR(EINVAL);
 
  Yeah, I was planning to do smthg like that.
 
  But, how do you explain:
 
   [drm:drm_ioctl] pid=9733, cmd=0x40106450, nr=0x50, dev 0xe200, auth=1

 DRM_COMMAND_BASE + DRM_RADEON_CMDBUF == 0x50

   [drm:radeon_cp_cmdbuf] RADEON_CMD_SCALARS2

 cmd type 7 equals to R300_CMD_WAIT(from r300DoEmitState)

   [drm:radeon_cp_cmdbuf] *ERROR* bad cmd_type 0 at e08fa024

 This is random bits of memory already as cmd length of previous wasnt
 right.

I meant I don't understand why there is a RADEON_CMD_SCALARS2 followed by the 
*ERROR* message, without a drm_ioctl notice in between...

Possibly because the PID is different from that of the other calls?




[2.6 patch] drivers/char/drm/drm_pci.c: fix warnings

2005-07-27 Thread Adrian Bunk
This patch fixes the following warnings:

--  snip  --

...
  CC  drivers/char/drm/drm_pci.o
drivers/char/drm/drm_pci.c:53:5: warning: DRM_DEBUG_MEMORY is not defined
drivers/char/drm/drm_pci.c:84:5: warning: DRM_DEBUG_MEMORY is not defined
drivers/char/drm/drm_pci.c:119:5: warning: DRM_DEBUG_MEMORY is not defined
drivers/char/drm/drm_pci.c:126:5: warning: DRM_DEBUG_MEMORY is not defined
drivers/char/drm/drm_pci.c:134:5: warning: DRM_DEBUG_MEMORY is not defined
...

--  snip  --


Signed-off-by: Adrian Bunk [EMAIL PROTECTED]

---

This patch was already sent on:
- 22 Jul 2005

 drivers/char/drm/drm_pci.c |   10 +-
 1 files changed, 5 insertions(+), 5 deletions(-)

--- linux-2.6.13-rc3-mm1-full/drivers/char/drm/drm_pci.c.old2005-07-22 
18:16:02.0 +0200
+++ linux-2.6.13-rc3-mm1-full/drivers/char/drm/drm_pci.c2005-07-22 
18:16:24.0 +0200
@@ -50,7 +50,7 @@
 					dma_addr_t maxaddr)
 {
 	drm_dma_handle_t *dmah;
-#if DRM_DEBUG_MEMORY
+#ifdef DRM_DEBUG_MEMORY
 	int area = DRM_MEM_DMA;
 
 	spin_lock(&drm_mem_lock);
@@ -81,7 +81,7 @@
 	dmah->size = size;
 	dmah->vaddr = pci_alloc_consistent(dev->pdev, size, &dmah->busaddr);
 
-#if DRM_DEBUG_MEMORY
+#ifdef DRM_DEBUG_MEMORY
 	if (dmah->vaddr == NULL) {
 		spin_lock(&drm_mem_lock);
 		++drm_mem_stats[area].fail_count;
@@ -116,14 +116,14 @@
 void
 __drm_pci_free(drm_device_t * dev, drm_dma_handle_t *dmah)
 {
-#if DRM_DEBUG_MEMORY
+#ifdef DRM_DEBUG_MEMORY
 	int area = DRM_MEM_DMA;
 	int alloc_count;
 	int free_count;
 #endif
 
 	if (!dmah->vaddr) {
-#if DRM_DEBUG_MEMORY
+#ifdef DRM_DEBUG_MEMORY
 		DRM_MEM_ERROR(area, "Attempt to free address 0\n");
 #endif
 	} else {
@@ -131,7 +131,7 @@
 				    dmah->busaddr);
 	}
 
-#if DRM_DEBUG_MEMORY
+#ifdef DRM_DEBUG_MEMORY
 	spin_lock(&drm_mem_lock);
 	free_count = ++drm_mem_stats[area].free_count;
 	alloc_count = drm_mem_stats[area].succeed_count;





Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Roland Scheidegger

Juhana Sadeharju wrote:

Hello. I found a reason why ATI nor NVIDIA provides us hardware
details:
  http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf

Regarding ATI: This performance drop is almost entirely due to 8.2%
difference in the game test 4 result, which means that the test was
also detected and somehow altered by the ATI drivers.

Nvidia is worse: they have 8 cheats in their driver.

Ugh, this is OLD news. You're 2 years late...


It is no wonder why they don't want release the hardware details.
They simply don't want a driver which does not contain the cheats.

Why would that make any difference for them? After all, the open-source 
driver would be slower without the cheats, so they could provide an 
additional reason why you should use the closed-source driver from them 
(not that it likely wouldn't be faster anyway, even without cheating...).
Btw, both Nvidia and ATI use tricks which aren't really cheats, for 
instance the brilinear filtering and aniso filtering optimizations 
(and afaik aniso filtering isn't fully specified, so you can't even 
really cheat there even if you wanted, though there is some general 
expectation what it should do). You can control at least some of these 
optimizations in the driver control panels (though every now and then, 
usually when new cards are launched, disabling some of the optimizations 
won't work mysteriously, a bug which is usually fixed when the initial 
reviews of the cards are over...). Brilinear is supported for r200 
even in the open-source driver, though it's manually controlled and 
certainly not used by default. At one point I even experimented to 
autocompress textures (what ati's driver does) though I gave that up as 
its usefulness seemed limited (and it's not OpenGL conformant).
And, nowadays usually even app-detection cheats are not necessarily 
considered evil, as long as the same output is guaranteed (and if it's 
not just optimizing for a benchmark run, e.g. the static clip planes 
nvidia did for 3dmark03). Though it's probably something you'd want to 
stay far away from in a driver developed by the community, as it 
certainly increases driver complexity - you want good general case 
performance, not lots of app-specific optimized paths just to increase 
performance in those particular apps by 3%.



Please continue developing reverse-engineered, open sourced drivers.

As time permits...

Roland



---
SF.Net email is Sponsored by the Better Software Conference & EXPO September
19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA
Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: I need the R200 to work!

2005-07-27 Thread Roland Scheidegger

Alan Grimes wrote:


Software rendering: WORKS  (about 1.2 fps, if that..)
ATI Mach 64 (PCI):  WORKS  -- minor image degradation as would be
expected of a card of that vintage... (around 5 fps...)

ATI Rage 128: Works, slows down dramatically in some areas, has a number
of visual artifacts.

MSI 9250: Some demos work: ones that don't contain portals to other
fields. Ones that do have portals segfault IN THE DRIVER CODE.

A gdb backtrace might be useful.

 I really don't understand the information flows within the Mesa
 driver.. How do I tell which parts of the code are in use on my
 platform and which are only for software rendering?
Typically, stuff in src/mesa/main is shared by all drivers, opengl state 
handling and such. src/mesa/swrast is used only for software rendering, 
BUT you can hit that code too with hw acceleration, if you hit a 
fallback somewhere (the most common case is probably if you read/write 
directly to the framebuffer, most if not all drivers implement this as a 
fallback). All hw driver specific code is in src/mesa/drivers/dri, in 
your case obviously in the r200 directory.



Roland




Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Roland Scheidegger

Patrick McFarland wrote:

On Wednesday 27 July 2005 02:43 pm, Roland Scheidegger wrote:


Juhana Sadeharju wrote:


Please continue developing reverse-engineered, open sourced drivers.


As time permits...



Heh, the only thing I want is GL ARB fragment shaders accelerated as much as 
possible by R200 hardware. I don't see that happening with ATI's binary 
drivers, they only support the old ATI pre-ARB fragment shader interface.


I'm not sure if it would be useful to even try something like that. Not 
only would you violate the spec (hardware doesn't support required 
precision/range of values), but if you'd try to compile a shader you'd 
likely figure out it won't fit into these 8 instruction slots anyway (ok 
if we'd figure out how to pass the values from stage 1 to stage 2 we 
would get 16 slots, and we could do even 1 level of dependant texture 
reads). But there are probably a ton of things you couldn't directly fit 
to the hardware and you'd need to expand to multiple instructions.


Roland




Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Patrick McFarland
On Wednesday 27 July 2005 03:18 pm, Roland Scheidegger wrote:
 Patrick McFarland wrote:
  Heh, the only thing I want is GL ARB fragment shaders accelerated as much
  as possible by R200 hardware. I don't see that happening with ATI's
  binary drivers, they only support the old ATI pre-ARB fragment shader
  interface.

 I'm not sure if it would be useful to even try something like that. Not
 only would you violate the spec (hardware doesn't support required
 precision/range of values), but if you'd try to compile a shader you'd
 likely figure out it won't fit into these 8 instruction slots anyway (ok
 if we'd figure out how to pass the values from stage 1 to stage 2 we
 would get 16 slots, and we could do even 1 level of dependant texture
 reads). But there are probably a ton of things you couldn't directly fit
 to the hardware and you'd need to expand to multiple instructions.

Even if we violate precision/range stuff, being able to accelerate simplistic 
shaders would be quite useful. It's better than being left with only a 
software implementation of the shader pipeline.

Also, what stops you from splitting up a shader, and running the pieces back 
to back over multiple passes? Can't you emulate longer shaders that way?

-- 
Patrick Diablo-D3 McFarland || [EMAIL PROTECTED]
Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music. -- Kristian Wilson, Nintendo, Inc, 1989




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Bellido Nicolas
On Wednesday 27 July 2005 10:13, Jerome Glisse wrote:
 On 7/27/05, Bellido Nicolas [EMAIL PROTECTED] wrote:
  BTW, do you have any scenario where you are sure X will hang ??

 Launch ut2003 or 2004 (don't remember which one) demos then start a game
 (quick launch) and it will lockup during level loading or few second after
 the intro begin...Other opengl apps do the same a flight simulator which
 i didn't remember the name will lockup in the menu after few secs..

 Btw if i try to resize or move glxgears i have a lockup, less often but
 some times...

ut2k4 did the trick...

I get lockups during the mission briefing or shortly after the beginning of the intro.
Modprobing drm with debug=1, shows two cases:

. ut2004-bin does a DRM_IOCTL_DMA ioctl, but radeon_freelist_get, called by
  radeon_cp_buffers, returns NULL;
. ut2004-bin does a radeon_cp_getparam ioctl, with RADEON_PARAM_LAST_FRAME
  as parameter, but the value returned does not change anymore.

The latter is the most common, though.

The result is that ut2004-bin keeps on issuing the same ioctl, hence the lock 
up.

Do you experience the same? (Just to be sure that we talk about the same 
thing.)

As the getparam lockup has already been discussed here [1], I'll post the logs 
I have as replies to this mail, in case people are interested.
Note that they are quite large (compressed ~50k, uncompressed ~10M).

-- Hopefully they won't be rejected by any mail size limit on the list --

Nicolas.

[1] http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg23051.html




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Jerome Glisse
On 7/27/05, Bellido Nicolas [EMAIL PROTECTED] wrote:
 On Wednesday 27 July 2005 10:13, Jerome Glisse wrote:
  On 7/27/05, Bellido Nicolas [EMAIL PROTECTED] wrote:
   BTW, do you have any scenario where you are sure X will hang ??
 
  Launch ut2003 or 2004 (don't remember which one) demos then start a game
  (quick launch) and it will lockup during level loading or few second after
  the intro begin...Other opengl apps do the same a flight simulator which
  i didn't remember the name will lockup in the menu after few secs..
 
  Btw if i try to resize or move glxgears i have a lockup, less often but
  some times...
 
 ut2k4 did the trick...
 
 I get lockups during the mission breafing or shortly after beginning of intro.
 Modprobing drm with debug=1, shows two cases:
 
 . ut2004-bin does an DRM_IOCTL_DMA ioctl, but radeon_freelist_get, called by
   radeon_cp_buffers returns NULL;
 . ut2004-bin does an radeon_cp_getparam ioctl, with RADEON_PARAM_LAST_FRAME
   as parameter, but the value returned does not change anymore.
 
 The later is the most common, though.
 
 The result is that ut2004-bin keeps on issuing the same ioctl, thus the lock
 up.
 
 Do you experience the same ? (Just to be sure that we talk about the same
 thing)

It's been a long time since I last looked at my logs :) But as far as my
memory goes, I was experiencing these two kinds too. I will check that once
again.

 As the getparam lockup has already been discussed here [1], I'll post the logs
 I have as replies to this mail, in case people are interested.
 Note that they are quite large (compressed ~50k, uncompressed ~10M).
 [1] http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg23051.html

Hmm, the thread on that ended with no conclusion; what is the status
of that for radeon?

Btw, can you check whether vbetool vbestate save works on your computer
(do this from a text console with no X running)? It seems that launching
fglrx, then doing a repost of the card (with vbetool repost), then using
r300 makes the lockup appear again (no lockup without the repost).

Thus maybe fglrx initializes the card better than the BIOS does, and I fear
that looking at a register dump won't show me the magic...

Jerome Glisse




Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Aapo Tahkola
On Wed, 27 Jul 2005 21:53:14 +0200
Bellido Nicolas [EMAIL PROTECTED] wrote:

 On Wednesday 27 July 2005 19:04, Aapo Tahkola wrote:
  On Wed, 27 Jul 2005 12:25:27 +0200
 
  Bellido Nicolas [EMAIL PROTECTED] wrote:
   On Wednesday 27 July 2005 08:44, Aapo Tahkola wrote:
On Tue, 26 Jul 2005 14:18:10 +0200
   
You don't have two cards hooked up by any chance? :)
  
   No, no handmade mobo with 2 agp slots :)
  
Does Xorg.0.log get the card right?
   Apparently yes, it does.
  
You probably want to check if microcode_version actually has any sane
value at radeon_cp_cmdbuf. Try something like:
printk("microcode_version %d\n", dev_priv->microcode_version);
return DRM_ERR(EINVAL);
  
Yeah, I was planning to do something like that.
  
   But, how do you explain:
  
[drm:drm_ioctl] pid=9733, cmd=0x40106450, nr=0x50, dev 0xe200, auth=1
 
  DRM_COMMAND_BASE + DRM_RADEON_CMDBUF == 0x50
 
[drm:radeon_cp_cmdbuf] RADEON_CMD_SCALARS2
 
cmd type 7 equals R300_CMD_WAIT (from r300DoEmitState)
 
[drm:radeon_cp_cmdbuf] *ERROR* bad cmd_type 0 at e08fa024
 
This is already random bits of memory, as the cmd length of the previous
packet wasn't right.
 
 I meant I don't understand why there is a RADEON_CMD_SCALARS2 followed by the
 *ERROR* message, without a drm_ioctl notice in between...
 
 Possibly because the PID is different than for the other calls ?

Because it processes multiple packets:
while (cmdbuf.bufsz >= sizeof(header)) {
        header.i = *(int *)cmdbuf.buf;
        cmdbuf.buf += sizeof(header);
        cmdbuf.bufsz -= sizeof(header);

        switch (header.header.cmd_type) {


-- 
Aapo Tahkola


---
SF.Net email is Sponsored by the Better Software Conference  EXPO September
19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile  Plan-Driven Development * Managing Projects  Teams * Testing  QA
Security * Process Improvement  Measurement * http://www.sqe.com/bsce5sf
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: [Dri-users] X hangs when starting glxgears on r350

2005-07-27 Thread Aapo Tahkola
On Wed, 27 Jul 2005 23:38:57 +0200
Bellido Nicolas [EMAIL PROTECTED] wrote:

 On Wednesday 27 July 2005 10:13, Jerome Glisse wrote:
  On 7/27/05, Bellido Nicolas [EMAIL PROTECTED] wrote:
   BTW, do you have any scenario where you are sure X will hang ??
 
  Launch the ut2003 or 2004 (don't remember which one) demo, then start a game
  (quick launch), and it will lock up during level loading or a few seconds after
  the intro begins... Other opengl apps do the same: a flight simulator whose
  name I can't remember will lock up in the menu after a few secs..
 
  Btw, if I try to resize or move glxgears I get a lockup, less often but
  sometimes...
 
 ut2k4 did the trick...
 
 I get lockups during the mission briefing, or shortly after the beginning of the intro.
 Modprobing drm with debug=1 shows two cases:
 
 . ut2004-bin does a DRM_IOCTL_DMA ioctl, but radeon_freelist_get, called by
   radeon_cp_buffers, returns NULL;

last_dispatch gets a wrong value if the buffer has more than one dma discard
(could someone check this change in?):
Index: r300_cmdbuf.c
===
RCS file: /cvs/dri/drm/shared-core/r300_cmdbuf.c,v
retrieving revision 1.1
diff -u -b -B -u -r1.1 r300_cmdbuf.c
--- r300_cmdbuf.c   20 Jul 2005 21:17:47 -  1.1
+++ r300_cmdbuf.c   27 Jul 2005 20:43:50 -
@@ -623,7 +623,7 @@
 	drm_radeon_private_t *dev_priv = dev->dev_private;
 	drm_radeon_buf_priv_t *buf_priv = buf->dev_private;
 
-	buf_priv->age = dev_priv->sarea_priv->last_dispatch+1;
+	buf_priv->age = ++dev_priv->sarea_priv->last_dispatch;
 	buf->pending = 1;
 	buf->used = 0;
 }
@@ -788,8 +788,6 @@
 	if (emit_dispatch_age) {
 		RING_LOCALS;
 
-		dev_priv->sarea_priv->last_dispatch++;
-
 		/* Emit the vertex buffer age */
 		BEGIN_RING(2);
 		RADEON_DISPATCH_AGE(dev_priv->sarea_priv->last_dispatch);

-- 
Aapo Tahkola


---
SF.Net email is Sponsored by the Better Software Conference  EXPO September
19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile  Plan-Driven Development * Managing Projects  Teams * Testing  QA
Security * Process Improvement  Measurement * http://www.sqe.com/bsce5sf
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Ian Romanick

Patrick McFarland wrote:

 Even if we violate precision/range stuff, being able to accelerate simplistic 
 shaders would be quite useful. It's better than not having a software 
 implementation of the shader pipeline.

The problem is that most shaders that use ARB_fp or NV_fp aren't
simplistic enough.  It would be a *lot* of work to benefit 1% of
real-world shaders.

 Also, what stops you from splitting up a shader and running the pieces back 
 to back over multiple passes? Can't you emulate longer shaders doing that?

So, I looked into this really deeply in the past for other things.  The
problem is it gets *very* hard to deal with framebuffer blend modes.  If
you have an arbitrary triangle list, triangles in the list may overlap.
 If you have a framebuffer blend mode other than dst=src, you can't
multipass it (generally) without breaking up the triangle list and
sending one triangle at a time.  It would not surprise me at all if the
performance there was close to that of a good software implementation.

This, BTW, is what ATI's fbuffer is all about.


---
SF.Net email is Sponsored by the Better Software Conference  EXPO September
19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile  Plan-Driven Development * Managing Projects  Teams * Testing  QA
Security * Process Improvement  Measurement * http://www.sqe.com/bsce5sf
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Patrick McFarland
On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:
 Patrick McFarland wrote:
  Even if we violate precision/range stuff, being able to accelerate
  simplistic shaders would be quite useful. It's better than not having a
  software implementation of the shader pipeline.

 The problem is that most shaders that use ARB_fp or NV_fp aren't
 simplistic enough.  It would be a *lot* of work to benefit 1% of
 real-world shaders.

I think ATI really screwed R200 owners then. The shader pipeline ultimately is 
useless.

  Also, what stops you from splitting up a shader, and running the pieces
  back to back over multiple passes? Can't you emulate longer shaders doing
  that?

 So, I looked into this really deeply in the past for other things.  The
 problem is it gets *very* hard to deal with framebuffer blend modes.  If
 you have an arbitrary triangle list, triangles in the list may overlap.
  If you have a framebuffer blend mode other than dst=src, you can't
 multipass it (generally) without breaking up the triangle list and
 sending one triangle at a time.  It would not surprise me at all if the
 performance there was close to that of a good software implementation.

So, how many games use blend modes other than dst=src? Also, even if it isn't 
faster than a good software implementation, it's still less work done by the 
CPU. I own a pretty outdated P3 550, and I'd rather have any sort of boost I 
can get.

 This, BTW, is what ATI's fbuffer in all about.

I'm trying to find more information about this fbuffer, but Google isn't 
being too friendly.

-- 
Patrick Diablo-D3 McFarland || [EMAIL PROTECTED]
Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music. -- Kristian Wilson, Nintendo, Inc, 1989




Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Roland Scheidegger

Patrick McFarland wrote:

On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:


Patrick McFarland wrote:


Even if we violate precision/range stuff, being able to accelerate
simplistic shaders would be quite useful. It's better than not having a
software implementation of the shader pipeline.


The problem is that most shaders that use ARB_fp or NV_fp aren't
simplistic enough.  It would be a *lot* of work to benefit 1% of
real-world shaders.



I think ATI really screwed R200 owners then. The shader pipeline ultimately is 
useless.
Erm, I think you're asking a bit too much for a card of this generation. 
It's not like you could do it on a GF3, for example...
The only way a card of that generation could have supported such 
future, unknown functionality is if the fragment pipeline had been 
a lot more generic - but then again it probably would have been too slow 
to be really useful, not to mention that the longer your program, the more 
you'll generally be hurt by the lack of precision.
And I wouldn't say it's really useless. There ARE some apps out there 
which can make use of it quite well (doom3 for example, and there are 
quite a few directx applications out there which use the equivalent in 
directx (ps 1.4) too).


Roland




Re: r300 testing..

2005-07-27 Thread Aapo Tahkola
On Mon, 27 Jun 2005 01:57:56 +0200
Roland Scheidegger [EMAIL PROTECTED] wrote:

 Ben Skeggs wrote:
  S3TC does seem to be the killer for UT2004.  I started porting over the
  S3TC stuff from the r200 driver a while
  back, but haven't had a lot of time recently to fix a couple of issues
  with it.  Overall fps doesn't seem to take a
  huge gain, but the sudden drops to 1-2fps in certain levels
  (CTF-Faceclassic) disappear when S3TC's enabled.
 That's true, but to avoid the huge drops you could also just decrease 
 texture detail. Or implement the second texture heap in main memory and 
 use gart texturing (though you'd also need to manually increase the gart 
 size). There are some problems with that for r200, and the strategy for 
 what textures to put where may not be optimal currently, but the drops 
 should be gone.
 That said, the performance in ut2k4 is probably really slow (apart from 
 that problem) due to deficiencies in drawArrays handling, at least that 
 was the case for r200 last time I checked...

The first is a hack attempt to improve it.

The latter two patches work around the RADEON_BUFFER_SIZE limit.
While this actually appears to work, there's no speed boost in general.

-- 
Aapo Tahkola
Index: t_array_api.c
===
RCS file: /cvs/mesa/Mesa/src/mesa/tnl/t_array_api.c,v
retrieving revision 1.52
diff -u -b -B -u -r1.52 t_array_api.c
--- t_array_api.c   18 Jul 2005 12:31:30 -  1.52
+++ t_array_api.c   27 Jul 2005 20:28:16 -
@@ -78,21 +78,20 @@
 }
 
 
-/* Note this function no longer takes a 'start' value, the range is
- * assumed to start at zero.  The old trick of subtracting 'start'
- * from each index won't work if the indices are not in writeable
- * memory.
- */
 static void _tnl_draw_range_elements( GLcontext *ctx, GLenum mode,
+ GLuint min_index,
  GLuint max_index,
  GLsizei index_count, GLuint *indices )
 
 {
TNLcontext *tnl = TNL_CONTEXT(ctx);
struct tnl_prim prim;
+   int i;
+   static int size=0;
+   static GLuint *ind=NULL;
FLUSH_CURRENT( ctx, 0 );

-   _tnl_vb_bind_arrays( ctx, 0, max_index );
+   _tnl_vb_bind_arrays( ctx, min_index, max_index );
 
    tnl->vb.Primitive = &prim;
    tnl->vb.Primitive[0].mode = mode | PRIM_BEGIN | PRIM_END;
@@ -100,8 +99,15 @@
    tnl->vb.Primitive[0].count = index_count;
    tnl->vb.PrimitiveCount = 1;
 
-   tnl->vb.Elts = (GLuint *)indices;
+   if(index_count > size){
+      size = index_count;
+      free(ind);
+      ind = malloc(index_count * sizeof(GLuint));
+   }
+   for(i=0; i < index_count; i++)
+      ind[i] = indices[i] - min_index;
 
+   tnl->vb.Elts = ind;
    tnl->Driver.RunPipeline( ctx );
 }
 
@@ -297,20 +301,19 @@
* at the whole locked range.
*/
 
-      if (start == 0 && ctx->Array.LockFirst == 0 &&
-	  end < (ctx->Array.LockFirst + ctx->Array.LockCount))
-	 _tnl_draw_range_elements( ctx, mode,
+      if (end-start+1 < (ctx->Array.LockFirst + ctx->Array.LockCount)){
+	 _tnl_draw_range_elements( ctx, mode, start,
 				   ctx->Array.LockCount,
 				   count, ui_indices );
-      else {
+      } else {
 	 fallback_drawelements( ctx, mode, count, ui_indices );
       }
    }
-   else if (start == 0 && end < ctx->Const.MaxArrayLockSize) {
+   else if (end-start+1 < ctx->Const.MaxArrayLockSize) {
       /* The arrays aren't locked but we can still fit them inside a
        * single vertexbuffer.
        */
-      _tnl_draw_range_elements( ctx, mode, end + 1, count, ui_indices );
+      _tnl_draw_range_elements( ctx, mode, start, end + 1, count, ui_indices );
    }
    else {
       /* Range is too big to optimize:
@@ -352,7 +355,7 @@
 
    if (ctx->Array.LockCount) {
       if (ctx->Array.LockFirst == 0)
-	 _tnl_draw_range_elements( ctx, mode,
+	 _tnl_draw_range_elements( ctx, mode, 0,
 				   ctx->Array.LockCount,
 				   count, ui_indices );
       else
@@ -361,16 +364,18 @@
    else {
       /* Scan the index list and see if we can use the locked path anyway.
       */
-      GLuint max_elt = 0;
+      GLuint max_elt = 0, min_elt = ~0;
       GLint i;
 
-      for (i = 0 ; i < count ; i++)
+      for (i = 0 ; i < count ; i++){
 	 if (ui_indices[i] > max_elt)
 	    max_elt = ui_indices[i];
-
-      if (max_elt < ctx->Const.MaxArrayLockSize &&  /* can we use it? */
-	  max_elt < (GLuint) count)	 /* do we want to use it? */
-	 _tnl_draw_range_elements( ctx, mode, max_elt+1, count, ui_indices );
+	 if (ui_indices[i] < min_elt)
+	    min_elt = ui_indices[i];
+      }
+      if (max_elt-min_elt+1 < ctx->Const.MaxArrayLockSize &&  /* can we use it? */
+	  max_elt-min_elt+1 < (GLuint) count)	 /* do we want to use it? */
+

Re: r300 testing..

2005-07-27 Thread Roland Scheidegger

Aapo Tahkola wrote:
That's true, but to avoid the huge drops you could also just decrease 
texture detail. Or implement the second texture heap in main memory and 
use gart texturing (though you'd also need to manually increase the gart 
size). There are some problems with that for r200, and the strategy for 
what textures to put where may not be optimal currently, but the drops 
should be gone.
That said, the performance in ut2k4 is probably really slow (apart from 
that problem) due to deficiencies in drawArrays handling, at least that 
was the case for r200 last time I checked...



The first is a hack attempt to improve it.

The latter two patches work around the RADEON_BUFFER_SIZE limit.
While this actually appears to work, there's no speed boost in general.


That's what I found out some time ago as well; I did pretty similar 
changes (basically bringing back that start value). Once upon a time I also 
had a hack so mesa could handle GL_UNSIGNED_SHORT element lists 
natively (no more converting them with _ac_import_elements to 
UNSIGNED_INT and later back to shorts again in the driver), without any 
success either.

I never really found out why the old mesa code was 2 times faster :-(.

Roland




Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Dave Airlie


 So why can Doom 3 use R200 pixel shaders, and DRI can't?

And we currently don't implement the two extensions on r200 that Doom3
uses. I've still got a 90% finished ATI_fragment_shader, but I've had
little time to pick it back up, and the only test code I had was doom3;
unfortunately, when I enabled fragment shader it also needed one of the other
shaders, which we've implemented software-only, so it started to look like a
bigger job to accelerate (if the r200 could do it at all...)

Dave.


-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
Linux kernel - DRI, VAX / pam_smb / ILUG





Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Patrick McFarland
On Wednesday 27 July 2005 06:16 pm, Adam Jackson wrote:
 On Wednesday 27 July 2005 18:05, Patrick McFarland wrote:
  So why can Doom 3 use R200 pixel shaders, and DRI can't?

 Doom3's r200 shader pipeline gives different (read: worse) output than
 their arb shader pipeline.  They have the liberty of knowing what visual
 quality they can sacrifice.  The driver doesn't.

This is the sound of me hating ATI for making such useless pixel shaders.

-- 
Patrick Diablo-D3 McFarland || [EMAIL PROTECTED]
Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music. -- Kristian Wilson, Nintendo, Inc, 1989




AAAARRRGH!!!

2005-07-27 Thread Alan Grimes
I spent all day dl'ing and installing:

#
This is a pre-release version of the The X.Org Foundation X11.
X Window System Version 6.8.99.1
#

And my reward for spending $40 on a card that appeared to be supported
by Linux?

###
[EMAIL PROTECTED] ~/Croquet0.3 $ ppracer
PPRacer 0.3.1 --  http://racer.planetpenguin.de
(c) 2004-2005 The PPRacer team
(c) 1999-2001 Jasmin F. Patry[EMAIL PROTECTED]
PPRacer comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to redistribute it under certain conditions.
See http://www.gnu.org/copyleft/gpl.html for details.

libGL warning: 3D driver returned no fbconfigs.
libGL error: InitDriver failed
libGL error: reverting to (slow) indirect rendering
%%% ppracer warning: Warning: Couldn't set 22050 Hz 16-bit audio
  Reason: Could not open sound device

[EMAIL PROTECTED] ~/Croquet0.3 $
##

-- 
Friends don't let friends use GCC 3.4.4
GCC 3.3.6 produces code that's twice as fast on x86!




Re: AAAARRRGH!!!

2005-07-27 Thread Adam Jackson
On Thursday 28 July 2005 01:09, Alan Grimes wrote:
 I spent all day dl'ing and installing:

 #
 This is a pre-release version of the The X.Org Foundation X11.
 X Window System Version 6.8.99.1
 #

 And my reward for spending $40 on a card that appeared to be supported
 by Linux?

Why you installed a prerelease version of X, to get support for a card (9250, 
right?) that was definitely supported in the last release, is a bit of a 
mystery.

Why you also chose to install 6.8.99.1 when the snapshots are up to about 
6.8.99.15 is more of a mystery.

 libGL warning: 3D driver returned no fbconfigs.
 libGL error: InitDriver failed
 libGL error: reverting to (slow) indirect rendering

This, however, is no mystery.  The 2D driver in Xorg enables color tiling by 
default, but the bundled DRI driver doesn't understand color tiling yet.  
This will be fixed in 7.0, in the meantime:

Option "ColorTiling" "off"
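For reference, the option goes in the Device section of xorg.conf; a minimal sketch (the Identifier and Driver values here are just examples):

```
Section "Device"
    Identifier "ATI Radeon"
    Driver     "radeon"
    Option     "ColorTiling" "off"
EndSection
```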

- ajax




Re: why no open source driver for NVIDIA/ATI

2005-07-27 Thread Ian Romanick

Patrick McFarland wrote:
 On Wednesday 27 July 2005 06:16 pm, Adam Jackson wrote:
 
On Wednesday 27 July 2005 18:05, Patrick McFarland wrote:

So why can Doom 3 use R200 pixel shaders, and DRI can't?

Doom3's r200 shader pipeline gives different (read: worse) output than
their arb shader pipeline.  They have the liberty of knowing what visual
quality they can sacrifice.  The driver doesn't.
 
 
 This is the sound of me hating ATI for making such useless pixel shaders.

Gee... when they came out they were the *most functional* pixel shaders
available.  Why are you complaining that a 5 year old chip doesn't have the
latest features?  Uhduh?

