On Fri, 2010-11-05 at 09:37 -0700, Ben Jackson wrote:
> On Fri, Nov 05, 2010 at 09:46:00AM +0000, Peter Clifton wrote:
> >
> > Other than this, I don't know why glMapBuffer() doesn't "just work"
> > performantly
>
> I would expect it to map GPU memory directly via PCI, which is going to
> have much higher overhead than writing to a host buffer and letting the
> card DMA via subdata.
VBOs seem to be a big win for static data, not so much for dynamic. On my
card the problem was the use of lots of small buffers (~2 MB each). Since I
wasn't always making full use of each buffer, there was a lot of
memory-allocator overhead, and once the GPU starts to lag around a frame
behind, a large number of over-sized buffers are left hanging around. Given
the size of the graphics aperture on my machine, that started to cause
thrashing of objects in and out of the aperture, with lots of cost in
flushing CPU / GPU cache lines.

On Intel, subdata is implemented really nicely: the driver uploads the data
and queues a linear blit from the data buffer you uploaded into your main
rendering buffer. (Although, amazingly, the driver screwed this up and
corrupted the buffer - now fixed, but it had me scratching my head for a
while!)

--
Peter Clifton

Electrical Engineering Division,
Engineering Department,
University of Cambridge,
9, JJ Thomson Avenue,
Cambridge
CB3 0FA

Tel: +44 (0)7729 980173 - (No signal in the lab!)
Tel: +44 (0)1223 748328 - (Shared lab phone, ask for me)

_______________________________________________
geda-user mailing list
geda-user@moria.seul.org
http://www.seul.org/cgi-bin/mailman/listinfo/geda-user