On Saturday 25 May 2002 01:14 pm, Leif Delgass wrote:
> I'm using the same model you had set up.  When a client submits a buffer,
> it's added to the queue (but not dispatched) and there's no blocking.
> The DRM batch submits buffers when the high water mark is reached or the
> flush ioctl is called (needed before reading/writing to the framebuffer,
> e.g.).  Clients have to wait for the lock to submit the buffer, but the
> ioctl quickly returns.  The only place where a client has to wait is in
> freelist_get if the freelist is empty.  That's where buffer aging or
> reading the ring head allows the call to return as soon as a single buffer
> is available, rather than waiting for the whole DMA pass to complete.
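For the archives, here's a minimal user-space sketch of the aging idea described above. All names here (`dma_buf`, `freelist_get`, `read_chip_age`, `fake_chip_age`) are hypothetical stand-ins, not the actual mach64 DRM code; the point is just that the caller gets a buffer back as soon as any single buffer has aged out, instead of waiting for the whole pass to drain.

```c
#include <assert.h>
#include <stddef.h>

#define NBUFS 4

/* Hypothetical buffer record: 'age' is the value the chip's age counter
 * reaches once the hardware has finished consuming this buffer. */
struct dma_buf {
    unsigned int age;
    int pending;    /* still owned by the hardware? */
};

static struct dma_buf bufs[NBUFS];

/* Stand-in for reading the chip's current age, e.g. from a scratch
 * register written by the DMA stream after each buffer, or derived
 * from the ring head pointer. */
unsigned int fake_chip_age;
static unsigned int read_chip_age(void)
{
    return fake_chip_age;
}

/* Return the first buffer the chip is done with, or NULL if none has
 * aged out yet.  Real code would sleep or poll instead of returning
 * NULL, but it would still only need ONE completed buffer to proceed. */
static struct dma_buf *freelist_get(void)
{
    unsigned int chip_age = read_chip_age();
    size_t i;

    for (i = 0; i < NBUFS; i++) {
        if (bufs[i].pending && bufs[i].age <= chip_age) {
            bufs[i].pending = 0;    /* reclaim it for the client */
            return &bufs[i];
        }
    }
    return NULL;
}
```

With four buffers aged 1 through 4 in flight and the chip age at 2, two successive calls reclaim the first two buffers and the third call finds nothing, which is exactly the "return as soon as a single buffer is available" behavior.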
I guess I misunderstood somewhere.  Then the only real question is, "can we
safely/stably manage aging, or do we need to do it the way I had planned?"
Got it.

> For vertex data, we can add the register commands based on the primitive
> type and buffer size.  By placing the commands, we can ensure that any
> commands in the buffer would just be seen as data.  This would require an
> unmap and loop through the buffer, but we wouldn't have to copy all the
> data.  I'm going to try doing gui-master blits using BM_HOSTDATA rather
> than BM_ADDR and HOST_DATA[0-15] and see if we can eliminate the register
> commands in the buffer.  We could also use system bus masters for blits,
> but that would require ending the current DMA op and setting up a new one
> for each blit, since blits done this way use BM_SYSTEM_TABLE instead of
> BM_GUI_TABLE.  With BM_HOSTDATA it would be a matter of changing the
> descriptors for blits, but they could co-exist in the same stream as
> vertex and state gui-master ops.

It's still something eating cycles that we wouldn't have to spend if we
could have secured the chip better.  Unmapping isn't a good thing to be
doing on a path you're trying to keep fast, and it's rough on the kernel
memory system.

> > Now, as to which is more efficient, that's still up for debate.  I can't
>
> As I explained above, serialization isn't needed.  It's really a question
> of which method of checking completion and dispatching buffers leaves the
> least amount of idle time.  Buffer aging could still be used in the
> interrupt-driven model, so that's not really a constraint of one approach
> versus the other.  I don't think it would be too difficult to test both
> methods without too much change in the basic code infrastructure.

Works for me.
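To make the "commands in the buffer would just be seen as data" point concrete, here's a hedged sketch of the framing idea. The macro layout and names (`CMD_DRAW_PRIM`, `frame_vertex_buffer`) are invented for illustration, and unlike the in-place insertion Leif describes, this version copies the payload for simplicity. The principle is the same: the DRM emits a single draw command whose dword count covers the entire client payload, so even a bit pattern that looks like a command is consumed as vertex data.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical command encoding: top bit marks a command, bits 24-30
 * carry the primitive type, low 16 bits carry the data dword count. */
#define CMD_DRAW_PRIM(prim, ndwords) \
    (0x80000000u | ((uint32_t)(prim) << 24) | ((uint32_t)(ndwords) & 0xffffu))

/* Write one draw header followed by the client payload into 'out'.
 * Because the header's count spans the whole payload, the engine treats
 * every following dword as data, never as a command.  Returns the total
 * number of dwords written. */
static size_t frame_vertex_buffer(uint32_t *out, unsigned int prim,
                                  const uint32_t *payload, size_t ndwords)
{
    size_t i;

    out[0] = CMD_DRAW_PRIM(prim, ndwords);
    for (i = 0; i < ndwords; i++)
        out[1 + i] = payload[i];    /* copied verbatim, untrusted bits and all */
    return ndwords + 1;
}
```

Even if the client sneaks a dword with the command bit set into the payload, it lands inside the header's data count and is interpreted as a vertex coordinate, not a register write.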
--
Frank Earl

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel