I am interested in helping to get DRI working
with different ATI chipsets. I have the following
at my disposal:
Radeon 9000 PCI
Radeon 9200se AGP
Radeon 9600XT
If someone can tell me how I would go about
reverse engineering the Windows drivers, I could write small programs to
>
> Yes, leftover is always 0 in this case so we could simplify that. Care
> to send a patch? Otherwise, I'll get to it later.
>
There are quite a few cases that could be simplified further; I just used the
bgr888->rgba conversion as an example. I can go through and work it out
for each conversion
*helps if I send to the list*
> Maybe I don't understand what you're saying.
OK, at the moment the code for texsubimage2d_bgr888_to_rgba looks partially like
this:
#define DST_TEXELS_PER_DWORD 1
#define CONVERT_TEXEL( dst, src ) \
dst = PACK_COLOR__LE( src[0], src[1], src[2], 0xff )
#define C
> For texture RGB888, is that 24-bit packed or with 8 bits of empty?
Looking at the texsubimage source for the bgr888->rgba conversion, it
appears to be 24 bits packed, but I could be wrong; all the macros make it a
little difficult to follow.
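Just to make the conversion concrete, here is a minimal sketch of what packing a 24-bit packed source texel into a 32-bit RGBA dword might look like. The function name is made up and this is not Mesa's actual macro, just an illustration of little-endian packing with alpha forced to 0xff:

```c
#include <stdint.h>

/* Hypothetical sketch (not Mesa's real CONVERT_TEXEL macro): pack one
 * 24-bit packed source texel (three bytes, no padding byte -- which is
 * what "24 bits packed" would mean) into a 32-bit dword, little-endian,
 * with the alpha channel forced to fully opaque. */
static uint32_t pack_texel_to_rgba(const uint8_t *src)
{
    return  (uint32_t)src[0]          /* first source byte  -> bits 0..7   */
         | ((uint32_t)src[1] << 8)    /* second source byte -> bits 8..15  */
         | ((uint32_t)src[2] << 16)   /* third source byte  -> bits 16..23 */
         | ((uint32_t)0xffu  << 24);  /* alpha forced to 0xff              */
}
```

If the texel really is stored with an empty fourth byte, the source stride would be 4 instead of 3, but the packing itself would look the same.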
ADMIN: please ignore pending email
> I'm wondering though why the heck quakeworld actually uses so many
> texsubimage2d calls. Typically, games specify textures at level load
> time and leave them completely alone after that - at least those games I
> profiled with oprofile never showed texsubimag
> What do you mean by "hardware TexSubImage"?
>
Maybe I'm reading it wrong, but looking at these texsubimage functions, they
all copy out the part of the texture to use; what I'm wondering is whether
there is a way to tell the card to do that.
Also, at the moment Mesa falls back to software for vertex arrays, i
Although I can't find it in Mesa, does the r200
DRM/DRI have a way or method of doing hardware TexSubImage? And if not, do the
specs have information on doing hardware TexSubImage?
a particular conversion.
This would also make it much easier later on to add further conversion types
and methods.
Chris Ison
---
SF.Net is sponsored by: Speed Start Your Linux Apps Now.
Build and deploy apps & Web services for Linux
call or just called a lot (as oprofile doesn't
give you that information)
Thanks in advance
Chris Ison
> oprofile perhaps might help.
The problem with oprofile is that it works only on sample counts, which do not
reflect how many times a function was called, nor the time per
function call. It only shows that when oprofile sampled, execution was in that
function.
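To illustrate the limitation: a sampling profiler can't tell a long-running function from a frequently called one, but a hand-inserted counter can recover call counts directly. This is only a sketch with made-up names, not actual driver code:

```c
/* Sketch: a sampling profiler like oprofile records where the program
 * counter was at each sample tick, not how many times a function ran.
 * A hand-inserted per-call counter (hypothetical names throughout)
 * recovers the call count directly. */
static unsigned long upload_calls;   /* total calls, not time spent */

static void counted_upload(void)
{
    upload_calls++;                  /* one tick per call, however short */
    /* ... the real texture upload would happen here ... */
}

/* Simulate one frame issuing some number of uploads; returns the
 * running total so far. */
static unsigned long frame(int uploads)
{
    for (int i = 0; i < uploads; i++)
        counted_upload();
    return upload_calls;
}
```

Dumping the counter at exit (or per frame) would show whether a function is called rarely but is slow, or called constantly, which is exactly what the sample counts can't distinguish.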
Here is the initial oprofile done on
OK, let me get this in perspective:
       R9200      R8500
DRI    14.621037  16.860273
fglrx  39.461594  44.228085
I have been trying to hunt down the slowdown in DRI. I even if (0)'d all
occurrences of sched_yield(), which is slower in 2.6 than 2.4 due to 2.6
doing it properly.
Acco
> You know how to verify that...
I've actually forgotten how; I think it's an environment setting, but none of
the settings I saw when I grepped for getenv triggered my memory.
> I'm sure you've tried page flipping and ruled out things like usleeps in
> the client side drivers?
enabling page flipp
> You know how to verify that...
I've actually forgotten how; I think it's an environment setting, but none of
the settings I saw when I grepped for getenv triggered my memory.
> I'm sure you've tried page flipping and ruled out things like usleeps in
> the client side drivers?
usleeps in dri? there wo
ADMIN: ignore my foobar'd post
--
Note also from a Quakeforge developer,
WildCode: maybe you should mention my r100 gets ~44 fps :) (hang on
and I'll get a fresh run)
make that 42, with -nosound (39 with sound)
actually, I'm using a rather old dri
So the Gigabyte binary Windows drivers for this Radeon 9200SE (AGP) card are
faulty, and it really should perform the same as the PCI Radeon 9000?
> I would expect them to perform about the same, with the
> AGP bus providing a slight performance boost. To be honest, I'm not
> sure why your window
nfig
http://wildmidi.sf.net/XFree86.0.log
http://wildmidi.sf.net/glxinfo.out
http://wildmidi.sf.net/misc.out
(this is the full output of lspci -vvv)
Hope you can help
Chris Ison
f I'm
wrong.
Chris Ison
> Which hardware/driver are you using under Linux?
I use a Radeon 9000 PCI with r200 DRI CVS (from 2 weeks ago).
Others who have also tested and saw the same result were using a Radeon 8500
and a Radeon 9200, both AGP.
However, as stated in a previous email, in Windows using Catalyst the
multitexture is fas
multitexture is faster than multipass, as seen
with nVidia in Linux, and nVidia and ATI in Windows.
Thanks in advance
Chris Ison
Thanks, it turned out to be a permissions thing; I don't know how they changed.
Also note, it's a PCI Radeon 9000. This AMD Athlon 2000+ only seems to be
able to push about 450 fps out of glxgears with direct rendering; I can't help
wondering if something is set up wrong, but as discussed here several times,
th
I compiled and installed XFree86 and DRI from CVS, with Mesa CVS,
last night. This morning I confirmed it all worked by loading the radeon DRM
and running glxinfo, which showed direct rendering enabled.
However, when I ran glxgears, I got a libGL error saying it couldn't load
the DRM and it was r
> You can run without software interrupts or usleeps with
>
> fthrottle_mode=0 glxgears
>
OK, doing this for both glxgears and QuakeForge reveals interesting
results. It also gives a 0.13% increase in FPS with glxgears.
glxgears ...
samples  %  image name  app name  symbol name
173692
I don't know if these are intended, or a bug.
glxgears
without page flipping:
  252.8 fps uncovered
  1294.0 fps completely covered
with page flipping:
  278.2 fps uncovered
  83 fps completely covered
Also, while running, switching to the console and then back to X causes
tearing in glxgears that wasn't present before
I don't know if it's relevant, but here are a couple of observations to do
with glxgears that "may" help pinpoint what's going on.
glxgears normally does about 250 fps. If I enlarge its window to cover
the entire screen this drops to about 80 fps, but if I cover the
window it jumps up to about 1270 fp
On Sat, 2003-12-06 at 07:57, Mike Mestnik wrote:
> lspci -vvv
>
That just tells me the card is capable of using it; it doesn't actually
tell me if DRI is taking advantage of it.
With and without DRI installed it tells me the same thing:
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- Par
After a couple of kernel recompiles I got oprofile to work, and it was
reporting that most of its time was spent in default_idle for both
glxgears and qw-client-glx (from QuakeForge CVS).
After speaking to one of the oprofile people, they seem to think that this
is as much as I will get because it appe
> Good idea, but beware that the gprof output was rather confusing when I
> first saw this problem, the time would appear to be spent in random 3D
> driver functions when in fact it seemed to be waiting for the hardware
> most of the time. oprofile may do a better job there, but you've been
> warne
I realize replies will be restricted under NDA, but I am unable to refine
my questions until I can find the answers to the following questions in
relation to the ATI docs some developers have access to.
1. Do the ATI docs outline or detail the expected PCI access process for the
r200 chipset, and if so, h
> >>I'd say the best supported cards are r200 based radeons (8500 to 9200 cards).
> >>
Well, that's just great. I was hoping to see something other
than r200, as I have had nothing but trouble and poor performance with
them under DRI. I have yet to determine why, but since AGP people have re
I know you can't give details under NDA, but do the specs for the r200
show any API for PCI, or has the PCI work on the driver been done by
trial and error?
Also I am wondering if there is any word on specs for r250 and r300. If
not, I would like to know what the best supported card is in DRI with
> Can you try ATI's binary drivers for Linux, or are you not on x86?
ATI's FireGL drivers do not support PCI cards, lord knows I tried.
On Wed, 2003-11-26 at 10:32, Ian Romanick wrote:
> I'm pretty sure that anything that did that for PCI would also do it for
> AGP. I assume that would kill performance even more, yes?
>
> Could somebody with an actual PCI card try this with ATI's driver? If
> the performance is okay there, tha
On Tue, 2003-11-25 at 09:45, Roland Scheidegger wrote:
> Results (AXP 1600, 9000pro, 1GB sdram, KT133A Chipset):
>          glxgears   QuakeIII (1024x768, graphic options all set to high)
> AGP 4x   1910       62.5
> AGP 1x   1860       61.1
> PCI       200       16.8
My radeon 9000 PCI (with 64megs
On Sun, 2003-11-23 at 23:54, Dieter Nützel wrote:
> What do you get with "glxinfo"?
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_ra
It's the same situation in GL apps (which I did mention; I only used
glxgears as the prime example because it should be heaps faster,
considering it doesn't do texturing).
> In general I agree. However, it does usefully suggest that Chris isn't
> getting hardware acceleration. He needs to find out
I am wondering if there are any environment variables or host.def lines I can
add that will improve the performance of DRI. I have a Radeon 9000 PCI (in a
Celeron 500 PC) and glxgears only gives me 250 fps on average, and OpenGL
applications only perform a little better than they did with the Voodoo2
(in a p200 p
> Hmm, I can't reproduce this with my Radeon7500 and DRI disabled by not
> loading module dri in XF86Config-4. Seeing a backtrace after the
> segfault would be extremely helpful in locating the exact problem. Just
> type "bt" at the gdb prompt after the segfault.
>
> Regards,
> Felix
bt just re
OK, I should have mentioned: it's with the r200 driver, and confirmed again
with the CVS as of 30 minutes ago.
glXSwapBuffers segfaults GL applications if DRI is NOT enabled.
Confirmed by recompiling glxgears with debugging and running it through gdb
(see previous emails).
On Thu, 2003-10-30 at 20:43, Ronny V. Vindenes wrote:
> Card is 9000/128mb under linux 2.6.0-test9 (athlon64 running in pure
> 32bit mode) with an lcd connected to dvi.
>
Just checked latest DRI CVS with my radeon 9000 PCI, no errors. I know
it doesn't help but it might give you a better idea of
A further example, using glxgears recompiled for debugging:
Breakpoint 2, event_loop (dpy=0x804c050, win=31457282)
at /usr/src/xfree86-cvs/xc/programs/glxgears/glxgears.c:383
383 while (XPending(dpy) > 0) {
(gdb) p glXSwapBuffers
$1 = {void (Display *, GLXDrawable)} 0x40076010
(gdb) n
On Thu, 2003-10-30 at 17:08, Chris Ison wrote:
> I'm wondering what happened to the full software mode for 3D you could use
> before when DRI wasn't enabled. Now GL apps just segfault (including
> glxgears) where a few months ago you could still run them without DRI
> although
I'm wondering what happened to the full software mode for 3D you could use
before when DRI wasn't enabled. Now GL apps just segfault (including
glxgears), where a few months ago you could still run them without DRI,
although very slowly.
OK, problem solved. It looks like I encountered a bug that's been fixed in the
last couple of days; I cvs up'd, redid make World and make install, copied
across the radeon.o, and it all works fine now.
Thanks for everyone's help.
Does anyone know how to resolve the slowness of the current DRI
CVS? I'm only getting 30 fps with glxgears, compared to the 250+ fps with
the SF DRI CVS.
Note: direct rendering is enabled.
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server
> Have you done all the proper idiot checks?
>
> I hate to use the word that way, but that's how I'd phrase it if I were
> speaking to myself...
>
> Have you checked to make sure /dev/dri/card0 exists, has the proper
> major and minor numbers, and is readable and writeable by root?
>
ok, /dev
Forgot to mention: I don't use radeonfb, and lsmod shows the module not in
use.
(null):/home/ceison# lsmod | grep radeon
radeon  117648  0
(null):/home/ceison#
On Tue, 2003-10-21 at 21:29, Michel Dänzer wrote:
> On Tue, 2003-10-21 at 16:03, Chris Ison wrote:
> >
> > Oct 21 23:56:51 (null) kernel: radeon: no version magic, tainting
> > kernel.
>
> This happens here when I insmod radeon.o instead of radeon.ko with a 2.6
>
Please find attached the XFree86 log and config. Also note that if I use
the old CVS from SF, it works fine with the exact same config.
Oct 21 23:56:47 (null) kernel: [drm] Module unloaded
Oct 21 23:56:51 (null) kernel: radeon: no version magic, tainting
kernel.
Oct 21 23:57:13 (null) gconfd (ceis
> There was a thread a few months ago concerning writing a useful fallback path
> (ie not swrast) that generated larger points with various primitives. I don't
> think it actually reached a point where I saw a patch ready for inclusion in
> CVS, however.
>
> Keith
Found the thread you were t
On Mon, 2003-10-20 at 17:46, Keith Whitwell wrote:
> Chris Ison wrote:
> > I have found a possible bug in the DRI implementation of glPointSize()
> > ... it doesn't seem to work at all. I'm using the radeon.o DRM and the
> > r200 Mesa drivers.
> >
> > T
I have found a possible bug in the DRI implementation of glPointSize()
... it doesn't seem to work at all. I'm using the radeon.o DRM and the
r200 Mesa drivers.
The source I was trying (which works perfectly in Windows) is located at
the bottom of this page:
http://www.opengl.org/developers/code/mjkt
make[5]: Leaving directory
`/usr/src/dri-cvs/new/xc/xc/lib/freetype2/freetype/internal'
make[4]: Leaving directory
`/usr/src/dri-cvs/new/xc/xc/lib/freetype2/freetype'
make[3]: Leaving directory `/usr/src/dri-cvs/new/xc/xc/lib/freetype2'
cleaning in lib/expat...
make: *** expat: No such file or dire
I have a screenshot of screen corruption occurring with the radeon driver.
The screenshot is >500k with compression, so if you are interested,
please email me so I can forward it on (so I don't waste list
bandwidth).
I have a Radeon 9000 PCI. This corruption does not occur in Windows with
DirectX 9.0.
> The philosophy has _never_ been "screw you Billy Goat". That's Sun and
> Oracle.
>
lol, I expected this. I should have said the "older Linux fanatics", as
Linux was "sold" to me as a way to screw Bill Gates, and that was the feeling
at the time among many of the users I associated with.
Yes, at the ti
Over the years, the core Linux team has changed from fanatical hobbyists
to a coding production team with deadlines. A fair chunk of the team
seem to be from Red Hat, which has a financial interest in Linux.
Because of this, the core of the Linux team appear to be more concerned
about getting the job do
I'm wondering if there is a way to easily correct the S-Video output in
the radeon driver (r200). It's currently "out of sync" and flickers
badly, but it does try to display something. It works fine for Linux
consoles. Windows has the S-Video output set to 50 Hz, if that's of any
help.
loads at times and catch up when the load ain't so heavy.
Please can you point out where I have gone wrong, and possible
solutions?
PS: I tried that wave generator example in the source, but I ended up
getting the same broken sound result.
Hope you can help.
Thanks in advance
Chris Ison
char
I've looked and looked and looked and I can't work
out why, so maybe someone else with more experience can look at the Radeon
DRM and find the cause.
With trunk, I managed to get the Radeon 9000 to
work until I upgraded my computer and cvs up'd at the same time.
Now, with trunk and texmem, a
Do you know the packet sizes for R200_EMIT_PP_CUBIC_FACES_* and
R200_EMIT_PP_CUBIC_OFFSET_*?
> Probably because Brian wasn't aware of the sanity stuff when he added the
> cubemap packets. You can turn the sanity checks off as a workaround, or add
> the packets & submit a diff to this list.
>
> Ke
OK, I have a problem where, when I run QuakeForge, Mesa kills itself on
an invalid command packet (63). This turns out to be associated with
cube maps; the sanity checks have the cubic registers missing from the
list.
Also, QuakeForge doesn't do cube maps unless you explicitly tell it to,
and I did
Chris Ison wrote:
>
> Michel Dänzer wrote:
> >
> > On Sam, 2003-02-08 at 23:17, Chris Ison wrote:
>
> >
> > Does Option "XaaNoScanlineCPUToScreenColorExpandFill" work around it?
> >
>
> um, no ...
>
> >
> > Well, glxgears
>
> You say "the system locks." Is it a total-death hard lock? Do you have
> another box you could SSH in from? It might be interesting to run the app in
> gdb to see where it's wedged.
>
I think it could be an app lock, but as with GL, unless the screen is
reset, you can't tell just by looking at
Chris Ison wrote:
>
> here is the log as requested ... also pcigart is enabled in the source
> ...
>
> as for x11perf, I may have misunderstood the option, but it's at least 3
> *text* ones (after the 3rd I gave up), but the dots and lines ones
> worked.
>
>
OK, after a system upgrade, a cvs up and a recompile for the new system,
DRI trunk has a weird problem.
Whenever a GL app tries to use textures, the system locks; glxgears
runs fine, and x11perf runs fine until it hits the texture tests, then locks.
The system is now a P2 350 with a Radeon 9000 PCI; yes, the board
In the XFree86 log:
Symbol xf86strtof from module
/usr/X11R6/lib/modules/extensions/libGLcore.a is unresolved!
This function doesn't exist in XFree86 trunk, nor in DRI trunk (going by
grep); however, it is used in extras/Mesa/src/imports.c.
Did someone forget to commit its definition?
A couple of us QF coders are questioning this; it looks like gcc
optimizations or gdb are interfering, because given the way QF handles the
vertex array (with error checking), it's impossible for starters for it to
send a NULL indices pointer, not to mention a 0 count.
> #9 0x41620115 in _tnl_DrawElements (mode
> #12 0x415bb936 in neutral_DrawElements (mode=0, count=0, type=0, indices=0x0)
> at ../../../../extras/Mesa/src/vtxfmt_tmp.h:369
> #13 0x40026f38 in Draw_nString (x=6, y=1078070736,
> str=0x4144e780 "_histogram GL_EXT_packed_pixels GL_EXT_polygon_offset", ' '
>, "GL_EXT_rescale_normal GL
This has been seen before, and fixes a known issue with PCI cards
locking up; the same fix possibly fixes an issue occasionally seen with
AGP cards locking up.
It wasn't easy to track down, and the fix was only found by accident. It
could well have remained a lot longer if I hadn't gotten impatient and
Yeah, it works. I'm just a little wary of it because, remembering that
without the read it won't work, it still appears to work with the read at the
end.
>
> You mean like this? Chris, does that work for you?
>
> --
> Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
> XFree86 an
Please find attached a complete patch that allows PCI Radeon cards to
work with DRI. It was created against the DRI CVS xc branch/trunk.
Thanks to MrCooper and M. Harris for their help.
Note: it also contains the pcigart patch many often refer to.
Index: programs/Xserver/hw/xfree86/drivers/ati/radeo
I managed to get my PCI Radeon working with an evil hack. A proper fix
is needed, but after applying the pcigart patch, I did this in
programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/radeon_drv.h:
#define COMMIT_RING() do { \
+u32 test_read;
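For what it's worth, here is a sketch of what I understand the hack to be doing (illustrative names only, not the real radeon_drv.h code): PCI writes to the card are posted, so reading a register back after writing the ring pointer forces the write out to the hardware before the CPU continues:

```c
#include <stdint.h>

/* Illustrative stand-in for the card's memory-mapped registers --
 * ordinary memory here, so this only demonstrates the pattern. */
static volatile uint32_t mmio_regs[2];

/* Sketch of the workaround: write the new ring tail, then immediately
 * read a register back.  On real PCI hardware the write is posted (it
 * can sit in a buffer); the read can't complete until the posted write
 * has reached the device, so it acts as a flush. */
static uint32_t commit_ring(uint32_t tail)
{
    mmio_regs[0] = tail;      /* posted write of the ring tail pointer */
    return mmio_regs[0];      /* read back to flush the posted write   */
}
```

That would explain why removing the read makes it stop working: without it, the card can see the tail update late (or reordered), which matches the lockups described.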
the link is there, has to be something else.
Gregor Riepl wrote:
>
> I assume that's because there's no softlink
> /lib/cpp -> /usr/bin/cpp
> I don't know where this stupid requirement comes from, but I've noticed that
> quite a few programs expect the C preprocessor in /lib instead of /usr/bin..
I'm having trouble getting "make World" to work in the current DRI CVS.
./config/imake/imake does exist, and the error is unclear as to what's
missing:
make[2]: Leaving directory `/usr/src/dri/xc/xc/config/imake'
make -w xmakefile
make[2]: Entering directory `/usr/src/dri/xc/xc'
rm -f xmakefile
./c
but it ain't being cooperative. I hope what I have given is
helpful.
Thanks in advance
Chris Ison
---
This SF.NET email is sponsored by: FREE SSL Guide from Thawte
are you planning your Web Server Security? Click here to get a FREE
Thawte SSL guide and