Since 2 people have asked for it, here are some quick numbers for r200 dri vs. fglrx.
r200 dri is using a 45MB local texture heap (I believe fglrx reserves pretty much all available memory for textures too, so that's only fair...). Btw, fglrx has certainly made some progress; at least 2d subjectively feels much faster (previously it felt about the same as the radeon driver with ACCEL_MMIO, but now it feels pretty much the same as the open source driver).
fglrx might be at an unfair disadvantage: I think it is not using pageflip. I don't know if it's using hyperz; last time I checked (with glxtest) it didn't seem to use it on my setup either (but that was with an older driver). I suspect it still doesn't, at least not always, since glxgears (which gets a HUGE boost from hyperz) is now over twice as fast with the r200 driver.
r200 dri uses xorg cvs head with the dri driver from Mesa cvs head, with color tiling, texture tiling, hyperz and whatever else I could find that boosts performance :-) (roughly the kind of configuration sketched below).
fglrx uses XFree86 4.3.99.902 (from suse 9.1) with the stock configuration, except that I needed to correct the bus id and switched it to external gart. I don't know of any options that would boost its performance.
Desktop resolution is 1280x1024 at 85Hz.
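
For reference, here is roughly what I mean by that configuration. The option names are from memory and depend on the driver version, so treat this as an illustration rather than my exact config (check the radeon man page for what your build actually supports; hyperz in particular is a DRI driver option you set via driconf, not in xorg.conf):

Section "Device"
    Identifier "Radeon r200"
    Driver     "radeon"
    Option     "AGPMode"        "4"
    # tiled color buffers (needs a new enough DDX/Mesa/DRM combination)
    Option     "ColorTiling"    "on"
    # page flipping for fullscreen GL
    Option     "EnablePageFlip" "on"
EndSection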


Q3 demo four fullscreen 1024x768:
r200 dri 1): 129 fps
r200 dri 2): 150 fps
fglrx:       118 fps

Q3 windowed 1024x768
r200 dri 1): 125 fps
r200 dri 2): 145 fps
fglrx 3):    108 fps

rtcw demo checkpoint fullscreen 1024x768
r200 dri 1): 85 fps
r200 dri 2): 95 fps
fglrx 4):    89 fps
fglrx 5):    78 fps

ut2k3 flyby-antalus, low/average/high
r200 dri: 15.750896 / 37.862827 / 281.284637 fps
fglrx:    30.838823 / 78.981781 / 688.162048 fps

Ok, now the interesting part:
Did I mention already that there is a massive performance problem with vertex arrays in ut2k3 with the r200 driver? It is really, really bad.
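
For anyone who wants to poke at it: as far as I can tell this is just the plain old OpenGL 1.1 client-side vertex array path, roughly like the minimal sketch below. Function and data names are made up for illustration, and whether ut2k3 additionally locks the arrays with EXT_compiled_vertex_array I haven't checked.

/* Minimal sketch of the client-side vertex array path that seems slow
 * in the r200 driver; names and data layout are purely illustrative. */
#include <GL/gl.h>

void draw_mesh(const float *verts,          /* xyz triples         */
               const float *texcoords,      /* st pairs            */
               const unsigned short *idx,   /* triangle index list */
               int num_indices)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);

    /* ut2k3 issues many draws like this per frame; on the r200 driver
     * each one appears to be far more expensive than it should be. */
    glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, idx);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}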


Remarks 4) and 5): 4) is the first benchmark run after the game is started, 5) is all subsequent runs. I don't know why fglrx is always faster on the first run with rtcw, but it already behaved like that two years ago.
Remark 3): With fglrx it is impossible on my card to run any 3d application correctly at a screen resolution of 1280x1024 at 85Hz; there is a lot of flicker all over the screen. AFAIK this is still the bug with insufficient bandwidth allocation for scanout, which was fixed in the open source radeon driver ages ago (by an ATI employee, no less!).


And now the really interesting thing:
The results marked with 1) were obtained BEFORE running fglrx, the results marked with 2) AFTER running fglrx, i.e. without rebooting between running the fglrx driver and the radeon driver (which in the past led to lockups, but driver switching now seems to work fine in both directions). This is a completely repeatable effect; I even figured out that merely starting the X server with fglrx is not enough, but a simple glxinfo while it is running triggers it.
Any ideas what's causing this? Maybe fglrx reconfigures the card's caches or something like that? It would be nice if we could get that additional 10-15% performance, especially if it is as simple as writing a single register...
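
If someone wants to hunt for that register: a crude way to compare the two states would be to dump the card's MMIO register aperture before and after running glxinfo under fglrx and diff the two dumps. A rough sketch follows; the sysfs path and the assumption that BAR2 is the register aperture are specific to my setup (adjust both for your card), the dump size is arbitrary, it needs root, and blindly reading some register ranges can upset the hardware, so take it as an idea rather than a tool.

/* Crude register-aperture dump for diffing card state before/after
 * fglrx has touched the hardware.  REG_APERTURE (PCI address and the
 * BAR2-is-registers assumption) is an example and must be adjusted.
 * Run as root; reading some register ranges may confuse the card. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_APERTURE "/sys/bus/pci/devices/0000:01:00.0/resource2"
#define DUMP_SIZE    0x10000   /* first 64KB of register space */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <outfile>\n", argv[0]);
        return 1;
    }

    int fd = open(REG_APERTURE, O_RDONLY);
    if (fd < 0) { perror("open aperture"); return 1; }

    volatile unsigned int *regs =
        mmap(NULL, DUMP_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    FILE *out = fopen(argv[1], "w");
    if (!out) { perror("fopen"); return 1; }

    /* one register per line so the before/after dumps can simply be diff'ed */
    for (unsigned off = 0; off < DUMP_SIZE; off += 4)
        fprintf(out, "0x%04x: 0x%08x\n", off, regs[off / 4]);

    fclose(out);
    munmap((void *)regs, DUMP_SIZE);
    close(fd);
    return 0;
}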


Roland

