Leif,

On 2002.05.12 19:15 Leif Delgass wrote:
> Jose,
>
> I've been experimenting with this too, and was able to get things going
> with state being emitted either from the client or the drm, though I'm
> still having lockups and things are generally a bit buggy and unstable
> still. To try client side context emits, I basically went back to having
> each primitive emit state into the vertex buffer before adding the vertex
> data, like the original hack with MMIO. This works, but may be emitting
> state when it's not necessary.
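The per-primitive emit Leif describes can be kept cheap by only writing the registers flagged in a dirty mask, which is what "only the dirty context was updated" refers to below. A minimal sketch of that idea, with an illustrative (index, value) pair encoding -- the struct layout, register count, and names are assumptions, not the actual mach64 context format:

```c
#include <assert.h>

/* Hypothetical context snapshot; CTX_REGS and the dirty bitmask
 * are illustrative, not the real mach64 state layout. */
#define CTX_REGS 8

struct ctx_state {
    unsigned int regs[CTX_REGS];
    unsigned int dirty;              /* bitmask: which regs changed */
};

/* Emit only the dirty registers into the DMA buffer as
 * (register index, value) pairs, then clear the dirty mask.
 * Returns the number of 32-bit words written, or -1 if the
 * buffer has too little space left. */
static int emit_dirty_state(struct ctx_state *ctx,
                            unsigned int *buf, int space)
{
    int n = 0;
    for (int i = 0; i < CTX_REGS; i++) {
        if (ctx->dirty & (1u << i)) {
            if (n + 2 > space)
                return -1;           /* out of buffer space */
            buf[n++] = (unsigned int)i; /* register index */
            buf[n++] = ctx->regs[i];    /* register value */
        }
    }
    ctx->dirty = 0;
    return n;
}
```

With this shape, a primitive that touches no state emits nothing, so the per-primitive approach only costs buffer space when something actually changed.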
I don't see how that would happen: only the dirty context was updated
before.

> Now I'm trying state emits in the drm, and

I think that doing the emits in the DRM gives us more flexibility than in
the client.

> to do that I'm just grabbing a buffer from the freelist and adding it to
> the queue before the vertex buffer, so things are in the correct order in
> the queue. The downside of this is that buffer space is wasted, since the
> state emit uses a small portion of a buffer, but putting state in a
> separate buffer from vertex data allows the proper ordering in the queue.

Is it a requirement that the addresses stored in the descriptor tables
must be aligned on some boundary? If not, we could use a single buffer to
hold successive context emits, and the first entry of each descriptor
table would point to a section of this buffer. This way there wouldn't be
any waste of space, and a single buffer would suffice for a large number
of DMA buffers.

> Perhaps we could use a private set of smaller buffers for this. At any
> rate, I've done the same for clears and swaps, so I have asynchronous DMA
> (minus blits) working with gears at least.

This is another way too. I don't know if we are limited to the kernel
memory allocation granularity, so unless this is already done by the
pci_* API, we might need to split buffers into smaller sizes.

> I'm still getting lockups with
> anything more complicated and there are still some state problems. The
> good news is that I'm finally seeing an increase in frame rate, so there's
> light at the end of the tunnel.

My time is limited, and I can't spend more than 3 hrs per day on this, but
I think that after the meeting tomorrow we should try to keep the CVS in
sync, even if it's less stable - it's a development branch after all, and
its stability is not as important as making progress.

> Right now I'm using 1MB (half the buffers) as the high water mark, so
> there should always be plenty of available buffers for the drm.
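The single-buffer idea above amounts to a small bump sub-allocator: successive context emits are carved out of one DMA buffer, and each descriptor-table entry points at its own section. A rough sketch, where STATE_BUF_SIZE and ALIGN are pure assumptions (ALIGN standing in for whatever boundary, if any, the descriptor table requires):

```c
#include <stddef.h>

/* Hypothetical sub-allocator over one DMA buffer, so state emits
 * don't each waste a whole buffer.  Sizes and alignment are
 * illustrative assumptions. */
#define STATE_BUF_SIZE 4096
#define ALIGN 16

struct state_pool {
    unsigned char buf[STATE_BUF_SIZE];
    size_t head;                 /* offset of the next free byte */
};

/* Return the offset of an ALIGN-aligned section of 'size' bytes,
 * or (size_t)-1 when the pool is exhausted and a fresh buffer
 * must be taken from the freelist. */
static size_t state_alloc(struct state_pool *p, size_t size)
{
    size_t off = (p->head + (ALIGN - 1)) & ~((size_t)ALIGN - 1);
    if (size > STATE_BUF_SIZE || off + size > STATE_BUF_SIZE)
        return (size_t)-1;
    p->head = off + size;
    return off;
}
```

The pool would be reset (head back to zero) once the hardware has consumed every emit in it, e.g. via the same aging scheme discussed below for vertex buffers.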
> To get
> this working, I've used buffer aging rather than interrupts.

Which register do you use to keep track of the buffers' age?

> What I
> realized with interrupts is that there doesn't appear to be an interrupt
> that can poll fast enough to keep up, since a VBLANK is tied to the
> vertical refresh -- which is relatively infrequent. I'm thinking that it
> might be best to start out without interrupts and to use GUI masters for
> blits and then investigate using interrupts, at least for blits.

That had crossed my mind too. I think it may be a good idea.

> Anyway,
> I have an implementation of the freelist and other queues that's
> functional, though it might require some locks here and there.
> I'll try to stabilize things more and send a patch for you to look at.

Looking forward to that.

> I've also played around some more with AGP textures. I have hacked up the
> performance boxes client-side with clear ioctls, and this helps to see
> what's going on. I'll try to clean that up so I can commit it. I've
> found some problems with the global LRU and texture aging that I'm trying
> to fix as well. I'll post a more detailed summary of that soon.

What can I say? Great work, Leaf! =)

> BTW, as to your question about multiple clients and state: I think this
> is handled when acquiring the lock. If the context stamp on the SAREA
> doesn't match the current context after getting the lock, everything is
> marked as dirty to force the current context to emit all its state.
> Emitting state to the SAREA is always done while holding the lock.

I hadn't realized that before. Thanks for the info.

Regards,

José Fonseca

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel