On 2002.06.11 22:12 Leif Delgass wrote:
> On Tue, 11 Jun 2002, José Fonseca wrote:
> 
> ...
> 
> This is where we have to make sure that any assumptions we make can be
> verified to be true.  I haven't done enough testing to really determine
> a surefire way of knowing that the card won't stop yet.  What I'm
> concerned about is that the card might be doing some read-ahead
> buffering that we don't know about.  That's why I was thinking we might
> have to see the card actually advance a couple of times before
> determining it won't stop.  The test I did with changing BM_GUI_TABLE
> from a buffer took a couple of descriptors to take effect.

I've already tested that before, and there didn't seem to be any 
significant buffering - at least with respect to the descriptor table. It 
would only matter if there is a lookahead value; in that case we could 
compare against BM_GUI_TABLE instead of the head. In any case we would 
need more testing to be sure.
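
For reference, the kind of check I tried looks roughly like this (a 
sketch only: MACH64_READ and MACH64_BM_GUI_TABLE are the driver's 
register accessors, but the helper name and the delay are invented):

/* Rough sketch, not driver code: sample BM_GUI_TABLE twice to see
 * whether the engine's descriptor fetch pointer is advancing. */
static int mach64_gui_table_advanced(drm_mach64_private_t *dev_priv)
{
	u32 before = MACH64_READ(MACH64_BM_GUI_TABLE);
	u32 after;

	DRM_UDELAY(10);		/* give the engine time to fetch */
	after = MACH64_READ(MACH64_BM_GUI_TABLE);

	/* If the register moved, the card is consuming descriptors. */
	return before != after;
}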

Another idea I had is, instead of having a flag, having a bookmark - the 
ring offset of the last committed buffer. Whenever we commit a buffer and 
the head is already past the bookmark, we set the bookmark to the 
beginning of the buffer being committed. When we need the card to 
complete, we just have to wait for the head to reach the bookmark 
(restarting the DMA whenever it stops before then). Once the head reaches 
the bookmark we can be sure the wait will succeed, because the ring table 
won't suffer any change from there until the end.
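
A minimal sketch of the idea (every name here - the ring_t fields, the 
helpers - is invented for illustration, not what's in the driver):

/* Illustrative types and assumed primitives (not driver code). */
typedef unsigned int u32;

typedef struct {
	u32 head, tail;		/* ring offsets, as today */
	u32 bookmark;		/* start of the last committed buffer */
} ring_t;

int ring_after(u32 a, u32 b);		/* wrap-aware "a is past b" */
int ring_reached(u32 head, u32 mark);	/* wrap-aware "head hit mark" */
void update_ring_head(ring_t *ring);	/* refresh the cached head */
int dma_stopped(ring_t *ring);		/* engine stalled? */
void restart_dma(ring_t *ring);		/* kick a stalled engine */
int wait_for_idle(ring_t *ring);	/* plain wait for engine idle */

/* On every commit: once the head has gone past the old bookmark, the
 * descriptors behind it are final, so the bookmark moves up to the
 * start of the buffer being committed now. */
static void bookmark_commit(ring_t *ring, u32 buf_start, u32 buf_end)
{
	if (ring_after(ring->head, ring->bookmark))
		ring->bookmark = buf_start;
	ring->tail = buf_end;		/* append the buffer as usual */
}

/* To let the card complete: until the head reaches the bookmark, the
 * engine may stall on a descriptor appended after it started, so keep
 * kicking it; past the bookmark nothing behind the head changes any
 * more, so a plain wait-for-idle is guaranteed to terminate. */
static int bookmark_complete(ring_t *ring)
{
	while (!ring_reached(ring->head, ring->bookmark)) {
		update_ring_head(ring);
		if (dma_stopped(ring))
			restart_dma(ring);
	}
	return wait_for_idle(ring);
}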

In any event this will take some experimentation, and from your comments 
below it isn't as high a priority as condensing the state buffers or 
doing customized vertex buffer templates for the Mach64 vertex format.

> ...
> 
> I don't think it's a problem if the head_addr is one behind the actual
> position if there are 2D commands still in the FIFO (which could only
> happen at the final descriptor on the ring).  We don't actually act on it
> until the card is idle.  It just means that the last buffer in the ring
> won't be reclaimed until the card is idle.  Actually, if you _did_

True.

> advance the head while the card is active, it would trigger the error
> check you added to freelist_get because head would equal tail, but the
> card would still be active.

I doubt it, because in that case we would wait for idle and _then_ 
restart DMA... (as is done now in CVS)
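
In other words, roughly this (reusing the illustrative helper names from 
my sketch above):

/* Sketch of the current CVS behaviour as I read it: if the engine
 * stalled, wait for idle first and only _then_ restart DMA, so head
 * can never equal tail while the card is still active. */
static void restart_after_idle(ring_t *ring)
{
	if (dma_stopped(ring)) {
		wait_for_idle(ring);	/* drain everything first */
		restart_dma(ring);	/* then start a fresh pass */
	}
}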

Please, let's not discuss this further. I think we both agree that using 
a variable is the best way to go, isn't it?

> 
> > ...
> >
> > I'm not sure if I understood correctly what you're saying. Note that
> > once we restart the card we can be sure that it won't stop until it
> > finishes _all_ buffers we supplied until _that_ moment.
> 
> I wasn't very clear here.  What I mean is that if the card is idle and
> we restart, we should be fine.  The problem is if we _only_ do that and
> do nothing if the card is active.

Ok.

> ...
> >
> > I think that having a flag indicating whether the card can stop or
> > not is more efficient. What do you think?
> 
> It depends on what's required to set a reliable flag.  That would have
> to be done every time we advance the ring tail, whereas a flush ioctl
> is less frequent.  We can remove the flush ioctls wherever they are
> followed by an idle ioctl with the current version of the idle ioctl
> (since it ensures _all_ buffers will complete), which would just leave
> the flush in DDFlush in the Mesa driver.  If an app calls glFlush, it's
> probably not doing it very often (maybe once or twice a frame?).

I really don't know... I don't even know why a regular application (not 
X) would call an idle if the flush wasn't implicit...

> 
> ...
> 
> The biggest problem with getting the client submitted buffers to be used
> more efficiently is state emits.  Client-side state emits aren't secure.
> The current code _does_ allow multiple primitives in a buffer as long as
> there is no new state between them.  The AllocDmaLow will use an existing
> vertex buffer until it's full or a state change causes a dispatch.
> 

Oh... I didn't have that impression... But even with that restriction, 
the buffers are a lot smaller than I would expect. I would have expected 
OpenGL applications to make fewer state changes than that...
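
If I follow the path you describe, it behaves something like this (a 
sketch only: AllocDmaLow is the real Mesa function, but the context 
fields and helpers here are invented):

/* Illustrative context (not the real Mesa driver structs). */
typedef struct {
	void *vb;		/* current vertex buffer */
	unsigned vb_used, vb_size;
	int dirty_state;	/* state change pending? */
} mach64_ctx_t;

/* Assumed to dispatch the pending primitives, grab a fresh buffer
 * and reset vb_used. */
void dispatch_vertex_buffer(mach64_ctx_t *ctx);
void emit_state(mach64_ctx_t *ctx);

/* Sketch of the policy: vertices accumulate in one DMA buffer across
 * primitives; the buffer is only dispatched when it fills up or when
 * new state has to be emitted first. */
static void *alloc_dma_low(mach64_ctx_t *ctx, unsigned bytes)
{
	if (ctx->dirty_state) {
		dispatch_vertex_buffer(ctx);	/* flush pending prims */
		emit_state(ctx);		/* then the new state */
		ctx->dirty_state = 0;
	}
	if (ctx->vb_used + bytes > ctx->vb_size)
		dispatch_vertex_buffer(ctx);	/* buffer full */

	void *ptr = (char *)ctx->vb + ctx->vb_used;
	ctx->vb_used += bytes;	/* several primitives share the buffer */
	return ptr;
}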

> ...
> >
> > Besides my above comment I noticed that you did something with
> > RING_SPACE_TEST_WITH_RETURN, but it wasn't clear what from the diff.
> > Is that code still needed or not? I've disabled it and I've been
> > doing fine without it.
> 
> Yeah, I meant to explain that.  I wasn't quite done there, I just have
> the macro enabled to make sure it didn't cause problems to remove it.
> The macro you disabled was in the old path, where it _is_ necessary
> because

Oops, I hadn't noticed that there were two of them! That explains why, 
when I removed them while debugging, I still got UPDATE_RING_HEAD called 
without BM enabled - at the time, RING_SPACE_TEST_WITH_RETURN was another 
of those situations where this could happen.

> buffers aren't added to the ring until a flush.  This path is probably
> broken now anyway.  In the new path, the only advantage I can see to
> having it is to reduce the number of times you call wait_ring if the
> ring is almost full.  Actually, the macro can also cause a new pass to
> be started if the card is idle (via UPDATE_RING_HEAD -- this is what I
> meant about the flush being 'hidden' in the UPDATE_RING_HEAD macro), so
> it might cause less idle time in some cases.  It shouldn't be required
> for things

If it were just for that, we could simply call UPDATE_RING_HEAD as the 
first thing in the ioctl.
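
Something like this (a sketch: the DRM ioctl skeleton is the usual one, 
but the macro's argument list here is a guess):

/* Sketch only: do the head refresh (and the 'hidden' restart of an
 * idle engine) unconditionally at ioctl entry, instead of leaving it
 * to RING_SPACE_TEST_WITH_RETURN as a side effect. */
int mach64_dma_vertex(DRM_IOCTL_ARGS)
{
	DRM_DEVICE;
	drm_mach64_private_t *dev_priv = dev->dev_private;

	UPDATE_RING_HEAD(dev_priv, &dev_priv->ring);	/* first thing */

	/* ...rest of the ioctl unchanged... */
	return 0;
}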

> to work, it's more a question of whether things perform better with or
> without it.

OK, fair enough.

José Fonseca
