[Dri-devel] r128, DRI and XaaNoPixmapCache

2002-07-01 Thread Peter Surda

Hi!

I just found out that unless you use the XaaNoPixmapCache option, DRI on r128
freezes (i.e. a complete system lockup) pretty often. I use yesterday's XF86 CVS.

Well, I thought it might be a good idea to put this on the webpage or into the
docs in BIG letters, because otherwise gaming is pretty much unusable if it
freezes every hour or so.
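For reference, the workaround goes into the Device section of the XF86Config
file; this is a minimal sketch (the Identifier string is a placeholder for
whatever your own config uses):

```
Section "Device"
    Identifier "ATI Rage 128"      # placeholder name
    Driver     "r128"
    Option     "XaaNoPixmapCache"  # disable the XAA pixmap cache
EndSection
```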

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  ...and that is how we know the Earth to be banana-shaped.





[Dri-devel] r128 freezing

2002-04-03 Thread Peter Surda

Hi!

Several weeks ago I reported that on screen initialization (q3, epsxe) my
machine freezes with about 10% probability. I got no meaningful reply, so I
upgraded the machine from RH 7.1 to 7.2 (including gcc) and the kernel to
2.4.19-pre5, and it still happens.

New hints?

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Microsoft does write free software. Not free as in free
   beer, or free speech, but Free Tibet.





[Dri-devel] freezes with r128

2002-03-26 Thread Peter Surda

Hi!

For several weeks/months now, running an OpenGL app (epsxe, quake3) has
sometimes completely locked up the machine. It is not easily reproducible:
about 5 times it works and then suddenly the 6th time it locks up in the same
situation. It happens just before a new screen is created (with epsxe when the
program finishes loading and the anti-piracy screen should appear, with q3
when it finishes loading and the arena should appear).

There aren't any warnings/kernel panics, it simply locks up, stops responding
to pings etc. Everything else is stable (I watch a lot of divx with aviplay).

I have rh7.2, a 2.4.18-pre4 kernel, alsa 0.9beta12, xf86 4.2.0, dri CVS from
today and gatos CVS from today, everything self-compiled. ECS K7S5A, Duron
900, 192MB RAM.

I'll try a newer kernel and then XF86 CVS, but perhaps someone has seen this
behaviour before...

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
NT, now approaching 23x6 availability.





Re: [Dri-devel] freezes with r128

2002-03-26 Thread Peter Surda

On Tue, Mar 26, 2002 at 10:34:10PM +0100, Felix Kühling wrote:
> Hi,
hi

> I had a problem with DRI locking up hard with a Matrox G200 when I
> compiled the DRM into the kernel statically. Since I made it a module
> everything works fine.
I do have it as a module :-(.

> Regards,
> Felix Kühling
Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  Hello, this is Bill Gates and I pronounce Monopoly, er, Windows as Windows.





[Dri-devel] Re: [GATOS]Re: Update #2 (was R128PutImage eating too much CPU, round 2 :-])

2002-02-26 Thread Peter Surda

On Tue, Feb 26, 2002 at 04:32:25PM +0100, Michel Dänzer wrote:
> > > > Hmm.. Michel, Peter, is it possible that poll function in DRM driver is
> > > > screwed up ? Though I thought that texture transfer goes through an ioctl..
> > > It does, and Peter says the cycles are wasted in user space, or what are
> > > you getting at?
> > Michel, correct me if I am wrong, but I thought the cycles will be counted
> > as system time only if we call schedule when inside the kernel, right ?
> Maybe, I don't know the hairy details.
>
> Anyway, I forgot to mention the main point: Peter observes the same
> effect with and without DRI so it can't be related to the DRM.
Yes indeed, it seems to eat roughly the same amount of CPU time regardless of
whether DRI is enabled or not. The time wasted is roughly equivalent to the
time it takes to transfer the data (memcpy or x*r128blittexture). The odd
thing is that, as I said, a wisely placed usleep in r128_video.c "fixes" it:
the time wasted is dramatically reduced, although it makes the video less
fluent and the judder more visible.

And yes, it occurs in userspace, in contrast to e.g. the load caused by
memcpy, which seems to show up as system time.

Executing PutImage on r128 takes a long time, about 11ms for a DVD-sized
picture. Could it be that something is waiting for PutImage to complete or
vice versa? Can X execute a driver function and something else simultaneously?
And what change was made around November that caused this?

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  Hello, this is Bill Gates and I pronounce Monopoly, er, Windows as Windows.





[Dri-devel] Re: [GATOS]Re: Update #2 (was R128PutImage eating too much CPU, round 2 :-])

2002-02-24 Thread Peter Surda

On Sun, Feb 24, 2002 at 02:40:50PM +0100, Michel Dänzer wrote:
> > > Would that usleep be an acceptable kludge until the real cause is found
> > > and fixed?
> > Yes, it is mostly ok. It definitely worsens the latency (judder), but this
> > has been noticeable before to some extent already; I think it's bearable.
> So do you think it should be added unconditionally or only as an option
> for CPU load saving freaks like you? ;)
It is a VERY BAD workaround (TM), it should not be committed.

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
Give a man a fish and you feed him for a day;
 teach him to use the Net and he won't bother you for weeks.





[Dri-devel] Re: Update #2 (was R128PutImage eating too much CPU, round 2 :-])

2002-02-22 Thread Peter Surda

On Fri, Feb 22, 2002 at 01:38:36AM +0100, Michel Dänzer wrote:
> > BEFORE the if, X load sinks by about 20% during video playing, BOTH when
> > using dri (25 -> 5) or not using DRI (50 -> 30)
> > When I put it AFTER the if, the load doesn't change (25 with dri, 50 without).
> Hmm. I don't suppose the R128DMA() call per se imposes such a high load?
No, I also tested it inside R128DMA (for the cases where DMA is working) and
inside the if branch for the cases where it isn't. It is not (directly)
R128DMA that is causing this, or memcpy, but as these functions take a lot of
time to complete (about 10ms for a DVD-sized picture), I guess it is something
that asynchronously does a busy loop waiting for R128PutImage to complete. But
why a wisely placed usleep seems to (mostly) cure the symptoms remains a
mystery to me.

> Can you verify by changing the #ifdef XF86DRI inside the function to
> #if 0?
Especially for you I did as requested; X eats roughly 50%, i.e. no change.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  To boldly go where I surely don't belong.





[Dri-devel] Re: Update #2 (was R128PutImage eating too much CPU, round 2 :-])

2002-02-22 Thread Peter Surda

On Fri, Feb 22, 2002 at 04:28:09PM +0100, Michel Dänzer wrote:
> > > Hmm. I don't suppose the R128DMA() call per se imposes such a high load?
> > No, I also tested it inside R128DMA (for the cases where DMA is working)
> > and inside the if branch for the cases where it isn't. It is not
> > (directly) R128DMA that is causing this, or memcpy, but as these functions
> > take a lot of time to complete (about 10ms for a DVD-sized picture), I
> > guess it is something that asynchronously does a busy loop waiting for
> > R128PutImage to complete. But why a wisely placed usleep seems to (mostly)
> > cure the symptoms remains a mystery to me.
> Indeed, especially considering that X is single-threaded...
Exactly.

> Would that usleep be an acceptable kludge until the real cause is found
> and fixed?
Yes, it is mostly ok. It definitely worsens the latency (judder), but this has
been noticeable before to some extent already; I think it's bearable.

> > > Can you verify by changing the #ifdef XF86DRI inside the function to
> > > #if 0?
> > Especially for you I did as requested,
> So I've been complaining about this all the time? ;)
Perhaps I should have written it more precisely: I did EXACTLY as you
requested. I had already done similar things before, but as you always seem to
complain about what I do, this time I did exactly as you said :-)

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  Hello, this is Bill Gates and I pronounce Monopoly, er, Windows as Windows.





[Dri-devel] Update #2 (was R128PutImage eating too much CPU, round 2 :-])

2002-02-20 Thread Peter Surda

Dear co-developers!

So, I did some deep research and found out the following:
- it doesn't happen inside the kernel, because top shows the load as user
  time, not system time
- I am able to determine where it happens, but not why :-(

In r128_video.c there is a short function called R128CopyData422, containing
roughly

if (!dmacopyworx) {
    memcpy the sucker
}

Ok, here comes the funny part. If I put a

usleep (1);

BEFORE the if, X load sinks by about 20% during video playing, BOTH when
using dri (25 -> 5) and when not using DRI (50 -> 30).

When I put it AFTER the if, the load doesn't change (25 with dri, 50 without).

I also tried rewriting R128PutImage so that every second call returns without
doing anything; this sank the X load dramatically as well, but I think that
was expected anyway.

Now gurus, show what you can :-)

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   My mind is like a steel trap - rusty and illegal in 37 states.





[Dri-devel] Update: R128PutImage eating too much CPU, round 2 :-]

2002-02-19 Thread Peter Surda

On Tue, Feb 19, 2002 at 08:31:03AM +0100, Peter Surda wrote:
> > If the CPU usage is really a problem, an interrupt is probably the way
> > to go; don't know if and how the chip supports that though.
> Sounds good.
Ok, I did some tests:
- it isn't DMA-specific. It also happens with DMA disabled, so interrupts
  won't help
- it isn't directly either R128WaitForIdle or R128CCEWaitForIdle. Conditional
  usleeps in the idle loops there don't change anything.
- by mistake I introduced an unconditional usleep (1) (i.e. a 10ms usleep)
  into R128CCEWaitForIdle. Although this slowed everything down and the video
  latency got worse, X suddenly eats about 20% less; still measurable though,
  about 4%.

Hence, I assume the problem is caused by an idle loop in something that calls
accel->Sync before the loop. I am still unable to find where exactly, though.
I need more ideas now :-)

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Reboot America.





[Dri-devel] R128PutImage eating too much CPU, round 2 :-]

2002-02-18 Thread Peter Surda

Dear co-developers,

Several months ago I noticed that even with working DMA, R128PutImage still
eats lots of CPU, up to 26%, depending on the video size. Several weeks ago I
complained about this already; at that time it seemed to be a problem with
aviplay or SDL. But since then I have been able to reproduce it with mplayer
with -vo xv as well.

The best problem description I can give you is this: when the player is doing
nothing for some time after returning from calling XvShmPutImage, top shows X
doesn't eat any CPU. However, if it DOES do something (e.g. if I run a
multithreaded player like aviplay or set a high postprocessing level in
mplayer), suddenly X eats 26% CPU. Even worse, if I disable dri (so that DMA
isn't used for this function), it eats up to 50%!

This must have been introduced roughly in November. I remember it worked
wonderfully right after Michel and I wrote the DMA patch in September: aviplay
was able to eat 99% and X ate nothing. I know now this isn't a hardware
problem; I also had it before I upgraded the motherboard, I just wasn't able
to reproduce it predictably (now I CAN reproduce it anytime).

You can reproduce it anytime as well: find a high quality divx, run a player
(aviplay or mplayer), and run top over ssh, or perhaps even in another window.
You'll notice that with no postprocessing and a reasonably fast cpu (> 500MHz)
everything is ok, but when you turn the postprocessing to the max, X will eat
a horrible lot of CPU. Setting postprocessing is easy: in aviplay you turn
autoquality off in the config and, while playing the file, right-click, choose
properties and slide the slider. With mplayer, use -pp 0 or -pp 4.

Zdenek's (the aviplay maintainer's) and my current theory is that
XvShmPutImage returns before it has actually run. A player then does XSync, so
that I can actually see the picture appear on screen, and calling another X
function before it's finished causes X to hang, and this eats CPU time.

Parameters to SDL_Init don't change anything and, as I said, I am able to
reproduce it with plain XvShmPutImage as well.

This is most probably not XF86 related but specific to r128, because I don't
remember upgrading X in between. In my hunt to exterminate this problem I
upgraded to XF86 4.2.0 (I had CVS 4.1.99 or something like that till then),
and CVS gatos from about a week ago. This didn't change anything.

So, I'd be very happy if someone could tell me whether Zdenek's and my
assumption is right (that XvShmPutImage on r128 returns immediately and this
was introduced around November last year), and how to fix it (or even perhaps
fix it himself/herself, though I'm confident that with a little hint I can do
it myself).

Thank you for your attention,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  The best things in life are free, but the
expensive ones are still worth a look.





[Dri-devel] Re: [GATOS]Re: R128PutImage eating too much CPU, round 2 :-]

2002-02-18 Thread Peter Surda

On Mon, Feb 18, 2002 at 02:04:29PM -0500, Vladimir Dergachev wrote:
> > But I still don't understand why _X_ should hog the CPU:
[cut]
> Could it be that X is waiting for the engine to become quiescent ? So if
> you scheduled a DMA transfer already it has to busy wait for the card to
> finish. Which
> a) creates unnecessary PCI traffic
> b) wastes time..
This (wasting time while waiting) is exactly what IMHO is causing this.

> The solution to this would be to not submit new frames faster than the
> graphics card can handle them.
Actually, I reproduced it now with mplayer (sdl or xv) with only twm running.
mplayer eats 20%, X also. DMA is working. This sucks, and I doubt it is
sending frames faster than it should (I vaguely remember my card should be
able to handle 3 times 25fps at DVD size and 16bpp). The tested video is 24
fps 640x480. It also happens with smaller ones, but X eats less.

> Peter - Am I right in thinking that you have a Rage128 card ?
Yes.

> Can you write a simple program to measure just how fast you can pump frames
> into the overlay ?
I could but I'm lazy :-)

I really think it is what Michel (?) said in another email: that XSync does a
busy-loop while waiting for the transfer to finish. I could rewrite it to do
usleep. I really don't care if it takes 10ms longer.

I'll report later.

>    Vladimir Dergachev
Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Press every key to continue.





[Dri-devel] Re: [GATOS]Re: R128PutImage eating too much CPU, round 2 :-]

2002-02-18 Thread Peter Surda

On Mon, Feb 18, 2002 at 07:38:01PM +0100, Michel Dänzer wrote:
> > This must have been introduced roughly in November. I remember it worked
> > wonderfully right after Michel and I wrote the DMA patch in September:
> > aviplay was able to eat 99% and X ate nothing. I know now this isn't a
> > hardware problem; I also had it before I upgraded the motherboard, I just
> > wasn't able to reproduce it predictably (now I CAN reproduce it anytime).
> What has changed since then is that the CCE is now used for 2D
> acceleration with DRI, and the info->accel->Sync() function waits for
> the CCE to go idle.
I have a feeling you found it.

> My first question is: Can you trust top?
Basically not, but this is such a large amount, and I don't understand why a
simple XSync should make a difference of 25% if it weren't guilty.
Surprisingly, the numbers are roughly the same as back in August, before we
made the DMACopyStuff422, even though I now have a 40% faster CPU. Don't you
think that is a strange coincidence? It would indicate that about the same
real time is being wasted waiting for something, although we got rid of the
stupid memcpy.

> Or could it be that CPU time from the client gets accounted to X?
No. It is client-independent.

> It doesn't make too much sense that whether the client 'does something' has
> influence on the amount of CPU X uses, does it?
No, it really doesn't, but if SOMETHING is calling X functions while an XSync
is pending, it would explain it.

> > Zdenek's (the aviplay maintainer's) and my current theory is that
> > XvShmPutImage returns before it has actually run.
> If you look at the code, you see that it returns after it has memcpy'd
> the data to the framebuffer (without DRI) / after it has fired the
> indirect DMA buffers for the image transfer (with DRI).
Aha, so the assumption is correct.

> > A player then does XSync, so that I can actually see the picture appear
> > on screen,
> This could be where X uses all the CPU. While waiting for the CCE to go
> idle, it can't do much but busy loop.
Why is it so much more difficult to do this correctly with the CCE when it
worked without? I'll look at the code.

> (The DRM uses a loop with udelay() actually; see r128_do_cce_idle() in
> r128_cce.c)
Ok, I'll grep for it and try some magic :-)

> > and calling another X function until it's finished causes X to hang
> > and this eats CPU time.
> It might be a good idea to test if XSync alone causes the same
> behaviour.
Yes it might, but I'm too lazy; I'll simply write some code and hope it solves
the problem :-)

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
 The computer revolution is over. The computers won.





[Dri-devel] Re: [GATOS]Re: R128PutImage eating too much CPU, round 2 :-]

2002-02-18 Thread Peter Surda

On Tue, Feb 19, 2002 at 02:21:57AM +0100, Michel Dänzer wrote:
> > Why is it so much more difficult to do this correctly with the CCE when it
> > worked without?
> It probably didn't work without. ;) I think when DMA was used for
> XvPutImage, but not the CCE yet for 2D, then a Sync didn't wait for the
> data transfer to actually finish. So it took less CPU waiting, but the
> result was potentially incorrect.
Indeed, calling a 2D accel function while DMA was in progress caused a total
system lockup. Still, the CPU load is too high a price to pay; there must be
another way.

> If the CPU usage is really a problem, an interrupt is probably the way
> to go; don't know if and how the chip supports that though.
Sounds good.

Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   My mind is like a steel trap - rusty and illegal in 37 states.





Re: [Dri-devel] Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-27 Thread Peter Surda

On Sun, Jan 27, 2002 at 06:03:42PM +0100, Michel Dänzer wrote:
> [ I assume you meant to follow up to the list as well ]
Yes, it is possible I failed to do so.

> > The first one definitely wasn't correct. A process with pid 0 doesn't
> > exist, but it has been handled as if it existed.
> The test ( buf->pid != current->pid ) isn't there for fun. The pid field
> of the buffer must contain the current process' pid.
Yes, exactly. But this test fails if buf->pid == 0, which is wrong. Hence I
added my test. Note: added, not deleted or replaced.

> Looking at r128_freelist_get(), ( buf->pid == 0 ) means the buffer is
> free, i.e. not supposed to be in use by any process.
Yes, that's exactly what I'm talking about; this isn't tested in the other
places though.

> Obviously _something_ related to it is broken, but does that mean the
> pending field shouldn't be used at all
Definitely, that's why I said my patch is most probably incorrect, but it
solved my problems nevertheless (at least I think so). Basically I am too
stupid to fix it correctly, so I'm just curing the symptoms.

> After digging around the code a bit, my current theory is that the
> indirect buffer is incorrectly reused after the start of a new server
> generation.
I too think this is the most probable cause.

> The only difference I see in the radeon driver (which I assume doesn't have
> the same problem?)
Well, no one reported it, so no idea.

> is in the LeaveServer() function, where it releases the indirect buffer.
> Can you try if that fixes the problem? Another idea is to set
> info->indirectBuffer to NULL in R128CCEAccelInit().
Sounds reasonable, I'll try it. I hope I'll be around at tomorrow's irc
meeting, we can try stuff in realtime then :-)

> Please test these ideas, hope one of them works.
Sure dude.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
0 and 1. Now what could be so hard about that?





Re: [Dri-devel] Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-27 Thread Peter Surda

On Sun, Jan 27, 2002 at 06:03:42PM +0100, Michel Dänzer wrote:
> After digging around the code a bit, my current theory is that the
> indirect buffer is incorrectly reused after the start of a new server
> generation. The only difference I see in the radeon driver (which I
> assume doesn't have the same problem?) is in the LeaveServer() function,
> where it releases the indirect buffer. Can you try if that fixes the
> problem?
Yes, this seems to have fixed it; here's the patch:

--- ati.2/r128_dri.c    Mon Dec 31 06:00:11 2001
+++ ati.2-shurdeek/r128_dri.c   Sun Jan 27 20:50:19 2002
@@ -308,6 +308,9 @@
         info->sc_bottom   = INREG(R128_SC_BOTTOM);
         info->aux_sc_cntl = INREG(R128_SC_BOTTOM);
     }
+} else {
+    R128CCEFlushIndirect(pScrn);
+    R128CCEReleaseIndirect(pScrn);
 }
 }

Would the responsible person in each project (dri, xf86, gatos) please apply
it?

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   The product Microsoft sells isn't the software; it's comfort.
 The product that Linux vendors usually sell is freedom.





Re: [GATOS]Re: [Dri-devel] Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-27 Thread Peter Surda

On Sun, Jan 27, 2002 at 08:15:29PM -0300, Davor Buvinic wrote:
> Works for me: ATI Xpert 128, XFree86 4.2.0, your patch against the GATOS ATI
> driver sources. No more messages in the kernel log like the following:
>
> [drm:r128_cce_indirect] *ERROR* process 1668 using buffer owned by 0
Yes, this is exactly what it was intended to fix, and several similar errors
as well.

> But if I play a video and run glxgears, X crashes. Option UseCCEFor2D
> didn't appear to help...
Hmm, I just tried aviplay together with glxgears without a crash (though gears
caused aviplay to crawl). What do the logs show?

> - Davor
Mit freundlichen Grüßen

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  ...and that is how we know the Earth to be banana-shaped.





Re: [Dri-devel] Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-26 Thread Peter Surda

On Sat, Jan 26, 2002 at 05:22:17PM +0100, Michel Dänzer wrote:
> On Sat, 2002-01-26 at 06:13, Peter Surda wrote:
> > Sorry that I'm not sending a patch, but I don't know if my solution is
> > correct.
> A patch might help to make a judgement. ;)
Ok, here goes:

--- drm-kernel/r128_state.c Thu Dec 13 02:26:00 2001
+++ drm-kernel-shurdeek/r128_state.c    Sat Jan 26 05:36:10 2002
@@ -626,7 +626,7 @@
 
    ADVANCE_RING();
 
-   buf->pending = 1;
+   buf->pending = 0;
    buf->used = 0;
    /* FIXME: Check dispatched field */
    buf_priv->dispatched = 0;
@@ -686,7 +686,7 @@
 
    ADVANCE_RING();
 
-   buf->pending = 1;
+   buf->pending = 0;
    buf->used = 0;
    /* FIXME: Check dispatched field */
    buf_priv->dispatched = 0;
@@ -769,7 +769,7 @@
 
    ADVANCE_RING();
 
-   buf->pending = 1;
+   buf->pending = 0;
    /* FIXME: Check dispatched field */
    buf_priv->dispatched = 0;
 }
@@ -831,12 +831,12 @@
    buf = dma->buflist[blit->idx];
    buf_priv = buf->dev_private;
 
-   if ( buf->pid != current->pid ) {
+   if ( buf->pid > 0 && buf->pid != current->pid ) {
        DRM_ERROR( "process %d using buffer owned by %d\n",
                   current->pid, buf->pid );
        return -EINVAL;
    }
-   if ( buf->pending ) {
+   if ( buf->pid > 0 && buf->pending ) {
        DRM_ERROR( "sending pending buffer %d\n", blit->idx );
        return -EINVAL;
    }
@@ -1334,12 +1334,12 @@
    buf = dma->buflist[vertex.idx];
    buf_priv = buf->dev_private;
 
-   if ( buf->pid != current->pid ) {
+   if ( buf->pid > 0 && buf->pid != current->pid ) {
        DRM_ERROR( "process %d using buffer owned by %d\n",
                   current->pid, buf->pid );
        return -EINVAL;
    }
-   if ( buf->pending ) {
+   if ( buf->pid > 0 && buf->pending ) {
        DRM_ERROR( "sending pending buffer %d\n", vertex.idx );
        return -EINVAL;
    }
@@ -1397,12 +1397,12 @@
    buf = dma->buflist[elts.idx];
    buf_priv = buf->dev_private;
 
-   if ( buf->pid != current->pid ) {
+   if ( buf->pid > 0 && buf->pid != current->pid ) {
        DRM_ERROR( "process %d using buffer owned by %d\n",
                   current->pid, buf->pid );
        return -EINVAL;
    }
-   if ( buf->pending ) {
+   if ( buf->pid > 0 && buf->pending ) {
        DRM_ERROR( "sending pending buffer %d\n", elts.idx );
        return -EINVAL;
    }
@@ -1552,12 +1552,12 @@
    buf = dma->buflist[indirect.idx];
    buf_priv = buf->dev_private;
 
-   if ( buf->pid != current->pid ) {
+   if ( buf->pid > 0 && buf->pid != current->pid ) {
        DRM_ERROR( "process %d using buffer owned by %d\n",
                   current->pid, buf->pid );
        return -EINVAL;
    }
-   if ( buf->pending ) {
+   if ( buf->pid > 0 && buf->pending ) {
        DRM_ERROR( "sending pending buffer %d\n", indirect.idx );
        return -EINVAL;
    }


> > So I looked at the code: pid 0 doesn't exist, and the r128 driver seems to
> > be using it to optimize searches for a free buffer. So I added a
> > buf->pid > 0 test
[cut]

> > Further investigations showed r128 and radeon were the only drivers that
> > actually did a buf->pending = 1, so I changed it to 0 and now the symptoms
> > aren't occurring anymore.

> Your changes sound dangerous. :) You're basically removing the tests for
> the errors, or am I missing something?
The first one definitely wasn't correct. A process with pid 0 doesn't exist,
but it has been handled as if it existed. As for the second one, I too am not
sure I'm not breaking something, but it fixed the problem and, as I said, r128
and radeon are the only drivers that actually set buf->pending to 1; no other
driver EVER does that. So I'm assuming r128 and radeon were trying to
implement something new, but it hasn't been completed and is broken.

> I've also experienced the problem with gdm, when I log out of a GNOME
> session. I suspect something (the freelist apparently?) doesn't get
> properly reset when starting a new X server generation.
Hmm, yes, indeed I think you are right.

> Let's investigate more.
Well, it's fixed for me, but you are free to go :-).

> > Supplemental question: I noticed that while watching videos X often takes
> > a lot of CPU EVEN when DMACopyblahblah is working (I added an xDrvMsg to
> > the driver to test). The funny thing is that e.g. with mplayer X eats
> > about 6%, and ON THE SAME FILE, with aviplay X eats about 25%. What could
> > be causing this? BTW this doesn't happen always, just mostly, and I am
> > unable to reproduce a situation where it doesn't happen; it simply
> > sometimes fixes itself.
> No idea about this, could you do some profiling to see where the time is
> wasted?
I could

[Dri-devel] Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-25 Thread Peter Surda

Hi!

Sorry that I'm not sending a patch, but I don't know if my solution is
correct.

Problem description (both with dri from CVS dri and CVS gatos): on r128, when
KDE is starting, the display is corrupt and lots of:
Jan 26 04:38:38 ten kernel: [drm:r128_cce_indirect] *ERROR* process 15795 using buffer owned by 0
's appear (I reported this several months ago already).

So I looked at the code: pid 0 doesn't exist, and the r128 driver seems to be
using it to optimize searches for a free buffer. So I added a
buf->pid > 0
test into r128_state.c in all the places where the pid was tested. This fixed
most of the corruption, but then
Jan 26 05:03:28 ten kernel: [drm:r128_cce_indirect] *ERROR* sending pending buffer 0
's started appearing.

Further investigations showed r128 and radeon were the only drivers that
actually did a buf->pending = 1, so I changed it to 0 and now the symptoms
aren't occurring anymore.

As for radeon, I assume as the code is very similar to r128 including what I
just fixed, it could help there too.

Supplemental question: I noticed that while watching videos X often takes a
lot of CPU EVEN when DMACopyblahblah is working (I added an xDrvMsg to the
driver to test). The funny thing is that e.g. with mplayer X eats about 6%,
and ON THE SAME FILE, with aviplay X eats about 25%. What could be causing
this? BTW this doesn't happen always, just mostly, and I am unable to
reproduce a situation where it doesn't happen; it simply sometimes fixes
itself.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
 They say when you play that M$ CD backward you can hear satanic messages.
 That's nothing. If you play it forward it will install Windows.





[Dri-devel] my take on DRI docs availability

2002-01-21 Thread Peter Surda

Hi!

I hope I'm not gonna be completely offtopic, but I wanted to say that it isn't
necessary to have 10+ years of X experience to contribute useful code to the
DRI project. The "Use the source, Luke" approach is ALMOST sufficient (the
docs available are not, and often contain obsolete information, but I think
that doesn't matter that much really), and as for the few (although sometimes
crucial) pieces I didn't understand, there were helpful people on the mailing
lists and irc (e.g. Mike Harris and Michel Dänzer) who provided me with enough
information so that I could get it. I wrote my first contribution to X (tvout
support for r128) in about 3 hours (including time spent on IRC) without ANY
prior knowledge and without any docs from ATI. I hadn't even written a client
app for X before.

This isn't to say the situation couldn't be better, only that it is sufficient
for the worthy ones :-), and has a healthy base. Having free access to
MODIFYING the code is simply unbeatable; I can't imagine living without it.

I think writing docs should be done by people who get paid for it. I know I
hate writing docs for the stuff I program for free. Or at least it should NOT
be done by developers, but by people specialized in documentation. This isn't
a developer issue, but a development organization issue.

My advice to wannabe developers: use the source, and when you get confused or
stuck, ask on the list or on irc. On how to ask, read a recent essay by ESR.
Decent docs may decrease the time you need to accomplish a task without prior
knowledge, but not really THAT much. A developer should trust the availability
of the source and not be scared to dig in. It ain't gonna bite you :-)

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   There's no place like ~





[Dri-devel] mach64_blit

2001-12-22 Thread Peter Surda

Hi!

I took a look at the source of the mach64-0-0-2 branch and noticed that
mach64_blit is defined, but not implemented yet. Any hints on how it's
progressing?

I also noticed that dma_dispatch_clear and dma_dispatch_swap are implemented,
so it seems DMA works to some extent. I also noticed they are done with
outreg. From r128 I learned that this should be done with the CCE (both in
dri and in the xf86 driver), because outreg collides with DMA. Any hints on
this one?

I'm writing this because I want to port R128DMA (from XVideo).

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
Disclaimer: This E-mail, because of shabby security on the Internet, in no way
reflects my thoughts or intentions. It may even not be from me!



msg02261/pgp0.pgp
Description: PGP signature


[Dri-devel] Re: dri trouble

2001-12-09 Thread Peter Surda

On Sun, Dec 09, 2001 at 04:24:00PM +0100, Michel Dänzer wrote:
  Syslog still complains CONTINUOUSLY though, but I think that isn't so important
  now.
 While others have reported the same errors, I haven't seen them here. Do
 they always appear, or only under certain circumstances, e.g. running GL
 or Xv clients or whatever ?
Now I tested it, and the ONLY way I can reproduce it (both the syslog bitching
and the display corruption) is:
- starting kdm
- logging in via kdm into KDE, i.e. the KDE splash screen.
It looks like this:
  http://ten.panorama.sth.ac.at/pictures/sshot.png
The background should be blue, there should be a KDE splash picture and
obviously the fonts shouldn't overlap. To repeat: disabling dri fixes it.

Afterwards I simply use X for a little time and apparently correct data finds
its way into videoram then :-). I am unable to reproduce either display
corruption or syslog entries later.

Perhaps there is another place where ModeInit is run AFTER some initialization
was done already?

Anyway it seems stable now, I am unable to reproduce any hangups I was
previously experiencing.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
Give a man a fish and you feed him for a day;
 teach him to use the Net and he won't bother you for weeks.



msg02108/pgp0.pgp
Description: PGP signature


[Dri-devel] dri trouble

2001-12-08 Thread Peter Surda

Hi ppl!

I have come to the conclusion that there is a bug somewhere (TM). I'm using
XF86 cvs from about 4 days ago and gatos cvs from the same time. The following
happens with the r128.o I get from either xf86 cvs or dri cvs:

-
ten kernel: [drm:r128_cce_indirect] *ERROR* process 2635 using buffer owned by 0
-
(X logs don't show anything)

This is IMHO the reason why, when using dri:
- there are minor display distortions (parts of the screen remain black;
  basically the kde splash screen is completely fscked, but it sorts itself
  out later).
- when I do a chvt 1;chvt 7, X starts eating all the CPU time and 2d
  accelerated functions cease working (the mouse can be moved but the kde
  panel doesn't pop up). killall X doesn't work, killall -9 X does. When this
  bug feels like being especially mean to me, this killall -9 X causes a
  complete machine lockup.

This also happens immediately after a fresh bootup when I haven't done
anything yet. Volodya: whether km is loaded doesn't matter.

These problems disappear when I comment dri in XF86Config.

Card: ATI AIW 128 16M AGP (non-pro). Kernel 2.4.14. RH 7.1 with current
patches.

Hints (besides not using dri)? I finally want to get better uptimes than 3
days.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  The best things in life are free, but the
expensive ones are still worth a look.



msg02104/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Peter Surda

On Mon, Oct 22, 2001 at 02:27:23AM -0400, [EMAIL PROTECTED] wrote:
 The biggest reason against this is that X (as it is now) support not only
 Linux but many other OSes: in particular BSD(s) and Solaris. Moving
 stuff into Linux kernel creates a fork of the drivers which is
 undesirable..
That's a lame excuse. I'm using Linux so I won't suffer from Windows; why
should I suffer because of BSD or Solaris?

<rant>
About the precise vsync thingy we're talking about on xpert: we need kernel
support anyway. So instead of calling a video driver in the kernel lame and
uncool and adding a strange inflexible function god-knows-where, why shouldn't
we move the whole driver structure into the kernel? Drivers for every other
device type are in the kernel. What would the anti-video-in-kernel guys think
if I claimed that network cards should have userspace drivers in some sort of
uber daemon, and that if an app wants to make a TCP connection it should
contact this uber daemon? I don't want to have staroffice in the kernel, but
the DRIVER STRUCTURE. For a great UI, we need DMA, vsync and devices
communicating with each other directly or with little overhead. Why insist on
doing this in userspace? The reason to put it into the kernel isn't speed, but
that it's much easier to add/maintain drivers, add functionality, share code
and do fancy stuff. DRI is a very good example of what I mean.
</rant>

A short explanation of the precise vsync thingy: for fluent video playback it
is necessary to precisely coordinate the number of frames the monitor
displays. It is very visible on a TV. When I have a 25fps video, it should be
EXACTLY one frame of data == one frame on the TV. Currently I can tell the
card (ATI) to blit on vsync (so it won't tear), but I can't tell it "don't
miss a frame" or "block until vsync". This results in visible jumps when
suddenly the same picture stays on screen for double the duration of the
others, and it sucks and I can't do anything about it without SOME kernel
support. Telling the Xserver to poll for vsync and eat CPU is lame.

Vladimir Dergachev
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Disc space - The final frontier.

 PGP signature


Re: [Dri-devel] my X-Kernel question

2001-10-21 Thread Peter Surda

On Sun, Oct 21, 2001 at 10:01:33PM -0700, Jeffrey W. Baker wrote:
 Send us a mail that isn't from a windows machine, and you might get an
 interesting discussion.  As it stands, I can barely tell what you are going
 on about.
Dude, I think Outlook is crap too; I had to administer a couple of them for a
year and it was a nightmare. But that isn't a reason to flame. Any decent
mailclient (such as mutt, which I'm using) can display mails with lines longer
than 72 chars and html attachments without hassle. I'm pretty sure there is a
way to tell your pine to do that as well. If there isn't, use the source and
make it so :-).

 -jwb
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   There's no place like ~

 PGP signature


Re: [Dri-devel] my X-Kernel question

2001-10-21 Thread Peter Surda

On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:
Would you consider it a good idea to make DRI part of the source of a
kernel? Direct 3d graphics supported from the boot sequence.
Hmm, I thought DRI was part of the kernel already? Perhaps you meant the DRM part of it.

I'm really concerned about your answer. There was a whole thread on
the linux-kernel mailing list about the hypothesis of the release of
an X-Kernel, a kernel which would include built-in desktop support.
I think it is a great idea to have a kernel implementation of the Xserver. But
it would have to be more modular than the current XF86, and also have a highly
flexible structure, so that adding new types of devices and functionality
wouldn't pose problems. I think this is currently XF86's biggest drawback.

It would allow many cool things that XF86 is now struggling with (e.g. check
xpert mailing list for thread about precise vsync coordination).

Each device would have flags like:
- can the device serve as a keyboard?
- can the device serve as a pointer (mouse, joystick, touchpad, ...)
- can it be used for video output?
- can it grab/capture?
- can it convert between colorspaces?
- can it do DMA?
This would make it easy to write drivers and also to support combined devices
(keyboard+touchpad, video+capture, ...).

Second: provide data structures
- keypress
- mouse movement
- image
- font
etc.

and hooks for these devices to:
- input data (e.g. keypress).
- output data (e.g. draw pixel)
- transfer data (from/to other devices, system RAM, etc).
- combination of those (e.g. transfer an image from system ram and draw it)
- process data internally (e.g. deinterlace?)
- report status (refresh rate, vertical retrace, ...)
- do something (e.g. wait for nth vsync)
(think ioctl). Currently in XF86 (IMHO) a new standard has to be made for each
new type of use. In this ioctl version you simply define a new value and add a
function to the driver that should handle it. Other drivers, or an older X (as
they would have something like switch (request) { ... default: return
E_UNSUPPORTED; }), will return an error, but nothing will crash or cease to
work.

Another thing is code reuse, so that several drivers can call generic
functions for doing the same thing (I think the combination of transfer +
output is a very good candidate for this). This is also a problem in XF86
imho.

Most people answered, no, this would be ridiculous,
I wouldn't put it on a server, because IMHO a server shouldn't even have a
monitor (mine don't). But for embedded and desktop, all the way.

But supposing you want to use a graphical interface on a box, then this kind
of stuff simply DOES belong to the kernel (no I'm not an idiot and I don't
have MSWindows anywhere on my computers).

others said, yes, but hardware manufacturers are too unhelpful, therefore
this would be a totally unstable release.
There isn't a reason why Xserver in kernel should be more unstable than
user-space Xserver. Both have direct access to all memory and hardware and can
lock up the machine.

One thing though: there should be an interface to reload a driver that is
currently in use, so that when developing it I wouldn't have to reboot every
time I recompile it.

Oh and one more thing: the driver should autodetect whether it is running on
the same videocard as the virtual terminal stuff, so that the first card will
simply open a new VT but a secondary card will run independently of this VT
stuff. This would finally allow a decent way to concurrently run 2 separate X
sessions on the same machine using local hardware.

Others said.. other various things.
Ok I'll check the thread.

So, what do you think?
So, what do YOU think? :-)

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Reboot America.

 PGP signature


[Dri-devel] Re: [Xpert]XVideo (memcoy) consuiming to much CPU (i810)

2001-10-13 Thread Peter Surda

On Sat, Oct 13, 2001 at 11:16:51PM +0200, Michael Zayats wrote:
 well back to our cows...
hi

 putting DMA might save about 25%...
A very reasonable assumption.

 another 2 questions:
 1) may be I should just use some optimized version of memcpy? someone knows
 of MMX or SSI uses in glibc? I have very defined hardware to run on...
I tried it (r128's xvshmputimage) before I was told I should try DMA and it
didn't change anything. mmx optimized memcpy helps if you are transferring
inside main memory, but not when the card or bus is the bottleneck. The code I
used is an optimized memcpy taken from mplayer source.

 2) offtopic: does somebody know how to access shared memory from kernel
 space ( may be I will fix bttv driver to write directly to shared memory,
 this will save me another 25%...)?
Well, unless you somehow tell the bttv driver to use DMA it will still consume
the same amount of CPU, just not in userspace but inside the kernel.

 any help?
Use DMA and tell everyone who claims otherwise that they are wrong and if they
don't believe you, they should ask in lkml :-)

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  ...and that is how we know the Earth to be banana-shaped.



msg02094/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] Using drm

2001-10-08 Thread Peter Surda

On Mon, Oct 08, 2001 at 06:14:13PM -0700, Sottek, Matthew J wrote:
 my apologies for the misunderstanding.
np, the point is we all learned something new :-)

   My point should still hold. The DRM should allow you to map
 an video area into a client accessible memory location.
You don't need DRM for plain memcpy. Pure X functions can do that.

 Your client could then read directly from the video memory and output
 the result to disk.
Yes, but this:
1. is slow as hell (so I've been told by volodya and RC, I don't have personal
experience with that)
2. eats CPU time as hell (this follows directly from 1.)

 This would require some sort of synchronization to make sure you are not
 reading a buffer that is being updated.
Actually not necessary, read further...

 Without capture:
 Allocate video memory for frames
 Set the capture to use this video memory
  (either a separate card or an on-chip capture)
 Set the overlay display hardware to display these buffers.
This is exactly how R128{Get,Put}Video (or something like that) currently works.

 With Capture:
 Allocate video memory for frames
 Set the capture to use this video memory
Just CutPaste from R128GetVideo.

 Use the DRM to map these into client memory
drmDMA;drmUberCoolCapture;memcpy;drmFree;

 After a frame is captured the client reads the data and stores it
 to disk. The client then does an XvMCPutSurface to display the
 frame on the overlay, since it is already in Video memory
 no copy is needed.
Not necessary, just call R128DisplayVideo422.

(for radeon or mach64 just replace the R128).

Hmm, perhaps we could have yet another function, XvShmGetImageAndVideo, that
combines capturing and displaying, because from outside of X there is no way
to call the internal functions?

 Again, some sync between the frame reading and the capture would
 have to be done.
I think this already works (i.e. simply getting the data from the tuner and
displaying it doesn't produce flicker).

 For cc then you could just write to the cc area of the frame.
 This all comes down to how bad the mmap'd read is compared to
 a DMA write from the Video card to system memory
Memcpy is so slow that apparently you can't even transfer the frames in
realtime. Not to mention that you'd probably want to compress them after the
transfer.

I hope I can find time to make some real experience soon :-)

 plus the extra read from the system memory in order to write it to disk.
IMHO negligible.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
Where do you think you're going today?



msg01848/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] Using drm

2001-10-08 Thread Peter Surda

On Mon, Oct 08, 2001 at 06:31:24PM -0700, Gareth Hughes wrote:
 I think it'll be hard to set up and initiate DMA transfers without 
 proper hardware documentation.  Do you have Radeon specs?
Yes, all the developers working on this (volodya, RC and I) have docs from ATI
under NDA (well, I don't have radeon docs, but I also don't have a radeon card
to test with). ATI is very friendly toward open source.

 -- Gareth
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Dudes! May the Open Source be with you.



msg01849/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] Radeon 8500, what's the plan?

2001-10-02 Thread Peter Surda

On Wed, Oct 03, 2001 at 01:17:03AM +, David Johnson wrote:
 Actually I think SiS offers an idct solution as well but beyond protecting 
 intellectual property there are potential legal issues with exposing how ATI 
 decodes copy righted, copy protected DVD.
I don't understand what this fuss about hardware accelerated idct is. In which
situation do you actually get any use out of it? When I play DVDs on my Duron
650 I get over 50% free CPU time with a software-only dvd decoder (vlc); the
card only does yuv-rgb conversion and scaling. It really only helps on older
computers, but why would anyone buy a radeon 8500 and put it in an old
computer?

 It may or may not be an issue but I understand why they don't want to
 necessarily play those games.  There are similar issues with releasing TV
 Out information.
Yes, there are problems with macrovision, i.e. the manufacturer shouldn't
give out the docs if they can't ensure control of macrovision. Fortunately
there has been some progress lately in the area of undocumented TV-Out
features, thanks to me <g>.

Hmm, isn't this dri-devel? Shouldn't we be talking about stuff like how to do
DMA efficiently and what new functions to add instead? This brings me back to
what I wrote a couple of weeks ago: there is no function in DRI that can
transfer data from the card to system memory, and such a beast could really
come in handy when doing video capturing.

 David
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
  To boldly go where I surely don't belong.

 PGP signature


Re: [Dri-devel] Re: Re: Radeon 8500, what's the plan?

2001-10-02 Thread Peter Surda

On Wed, Oct 03, 2001 at 12:57:55AM +, David Johnson wrote:
 Take a look at NVIDIA's linux driver website. 
 http://www.nvidia.com/view.asp?PAGE=linux  Is that confusing to a 
 non-technical user or what?  Is the average user going to know the 
 difference between Redhat 7.1 SMP Kernel vs RedHat 7.1, one CPU, 
 uniprocessor kernel vs RedHat 7.1, enterprise kernel?  Sorry, but that is 
 rediculous.
Indeed. But this isn't the fault of linux. It happens because nvidia's and
the kernel developers' ideas on how to do this don't mix.

 If you guys really want to see Linux become a gaming platform go out and
 solve these issues.
I could equally claim nvidia should solve the issues.

I don't think it is necessary for the manufacturers to develop drivers. A lot
of open source developers would be more than happy to get cool hardware before
it gets officially released, sign an NDA and release a driver when the thing
gets marketed. This has been done and it seems to work. And I am surely
happier signing an NDA with ATI and fixing the problems myself than
complaining to nvidia that their drivers crash (although I must confess nvidia
is doing pretty well considering their drivers are closed source).

 Develop the driver infrastructure so that the kinds of things above don't
 happen.
<rant>Turn everything to open-source so that the kinds of things above don't
happen.</rant> Matter of perspective.

 Develop the driver infrastructure that makes it easy for the hardware
 manufacturers to develop drivers and support their users.  That is how you
 will take Linux to the next level and make Linux a viable desktop/gaming
 platform.
I would agree with this, but my experience says it is more efficient not to
plan in much detail at the beginning, but rather to design a flexible scheme
so you can add new stuff later when the need arises. A lot of open-source
development follows this path.

I think this flexibility and ability to adapt is one of Linux's main
strengths, and I don't want to kill it in the name of gaming. Sure I want more
linux games, but I want it done the right way (TM), which means that there
would be a possibility for a normal guy like me to have access to the source
code of the drivers, even under NDA. I don't think I need source code to
games, if the manufacturer provides support. If a game crashes and I get
fragged, big deal. But if a driver crashes and the box freezes I'll be very
angry, because all the stuff I'm running at that time gets killed.

Oh no now I've done it again. I should stop this and actually do something
creative :-) So I'm gonna sleep now and will do some programming tomorrow.

 David
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   There's no place like ~

 PGP signature


Re: [Dri-devel] Re: Radeon 8500, what's the plan?

2001-09-28 Thread Peter Surda

On Thu, Sep 27, 2001 at 11:21:37PM -0700, David Bronaugh wrote:
 If you look at it from a purely monetary point of view, yes, you are most
 likely correct.
Not only from monetary point of view.

 However, one has to remember that a LOT of people that run Linux are
 computer people that users rely upon for advice. What card will we
 suggest, a card whose manufacturer actively supports linux, 
Yeah, which one supports open source driver development more than ATI? This
isn't a flame, I really want to find out; I need a card that has all the
features I want supported (xv, tvout, 2d, opengl, video capture, DMA).
Currently ATI's cards suit me best. Video capture isn't there yet, but the
developers do have enough docs and are working on it.

 Once you take this factor into account, there's a completely different kind
 of analysis needed.
Well, the larger the userbase (of ATI linux users), the larger the potential
developer base even if no money is in play. Developers get more bug reports
and more beta testers.

A month ago I was an angry user who wanted some features supported; now I'm a
happy developer who actually did add some of them, and ATI supports me with
docs. Sure I'd be happy if someone gave me money, but the lack thereof isn't
going to stop my development.

 David Bronaugh
Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
NT, now approaching 23x6 availability.



msg01786/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] Radeon 8500, what's the plan?

2001-09-27 Thread Peter Surda

On Thu, Sep 27, 2001 at 08:19:46PM +, David Johnson wrote:
Sure, that is a valid point but we need to remember that in the past
ATI has not been adverse to supporting open source drivers or to
releasing specs to qualified people. 
They are very friendly actually. They provided me with the mach64 and r128
docs (under NDA) within 24 hours after I registered with them (last week).
Although I must confess I had been recommended, it still shows that they are
completely OK. I don't see any problems on the communication level; perhaps
now that fewer people get paid for developing the drivers the pace will slow
down, but it won't stop.

What developers can do is recommend ATI cards to end-users, so there is a
larger need for the drivers and a larger chance someone would be willing to
pay for them.

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Dudes! May the Open Source be with you.



msg01779/pgp0.pgp
Description: PGP signature


Re: [Dri-devel] r128 DMA for Xv update

2001-09-21 Thread Peter Surda

On Sat, Sep 22, 2001 at 03:13:23AM +0200, Michel Dänzer wrote:
 I've put up an updated version of the patch at
 
 http://master.penguinppc.org/~daenzer/patches/r128-xv-dma.diff
 
 Peter Surda found the bug which caused corruption with some videos, it's
 pretty solid now. If noone objects, I'll commit this to the trunk tomorrow.
No objections from me. Except for variable names and some tidying up (which I
like very much BTW) it seems to be the same code we made 3 weeks ago. I've
been running it since then, unmodified, without any problems.

Side note: VBI synchronization seems to fail on TVOut (I've been told it works
on a monitor). I'm trying to make it work correctly; this may mean another
patch to the same code, but it should only be a couple of lines. I'll post it
when I get it (that is, IF I get it).

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
It is easier to fix Unix than to live with NT.

 PGP signature


[Dri-devel] DMA from card API

2001-09-19 Thread Peter Surda

Hello!

It's me again :-)

This time I need to utilize DMA to transfer data from the videocard into
system memory (i.e. the opposite direction), which is the only way to get
decent video grabbing from the ATI AIW (for those who don't know, the AIW is a
videocard with an onboard tv-decoder and other stuff). There is no v4l driver
for the AIW, I guess because it has to cooperate with XFree86.

From what I see in the docs, no such beast (videocard → RAM) exists. The
livid-gatos developers are developing a separate kernel module, but I think
this belongs in DRI even if it isn't used for OpenGL but for Xv, as a patch
posted by Michel Dänzer a couple of weeks ago successfully demonstrates.

Therefore I'd like to propose a change to the DRI definition: add a function
that transfers data over DMA into system memory. IMHO the transfers should
behave like this:

1. allocate buffers with drmDMA.
2. call the new function to do the DMA transfer into the allocated buffers.
3. now the data is in system memory and I can memcpy it wherever I need it;
this should eat much less CPU.
4. free the buffers.

What do you think?

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
0 and 1. Now what could be so hard about that?

 PGP signature