Hi Tim,
I've given your suggestion some thought, and while it looks like something that would work, my schedule is currently too tight (regardless of the money involved) to implement support for the extension. I might think about it again in three months, if the extension hasn't been implemented by someone else by then. Thanks for bringing this up though; it's definitely an option I hadn't thought of, if only I had any time to spend on it :)

Best regards,
Tony



On 25.11.2013 22:45, Timothy Arceri wrote:
Hi Tony,

I'm not one of the main Mesa devs, just an independent developer who works on Mesa in my spare time. All I can suggest is that you have a go at implementing the features yourself. You obviously have a lot of talent, and I'm sure you would be able to accomplish the task.

If time is an issue, there is a lot of interest in the Linux community in improving Mesa, and I myself have run two successful crowdfunding campaigns to support some full-time work on Mesa. See: http://www.indiegogo.com/projects/improve-opengl-support-for-the-linux-graphics-drivers-mesa/x/2053460 Maybe you could do something similar. If you do decide to do this, I find it's useful to start working on the extension (showing work on GitHub etc.) before running the campaign, as people like to be sure you can accomplish what you are promising.

Anyway, this is just an option for you to think about.

Tim


On Sunday, 24 November 2013 11:57 PM, Tony Wasserka <neobra...@googlemail.com> wrote:
Hello everyone,
I was told on IRC that my question would get the most attention around here,
so bear with me if this is the wrong place to ask.

I'm one of the developers of the GC/Wii emulator Dolphin. We recently
rewrote our OpenGL renderer to use modern OpenGL 3 features; however, one
thing we stumbled upon is the lack of efficient (vertex/index)
buffer data streaming mechanisms in OpenGL. Basically, most of our
vertex data is used once and never again after that (we have to do this
for accurate emulation), so all vertex data gets streamed into one huge
ring buffer (and analogously for index data, which uses its own huge
ring buffer). For buffer streaming, we have multiple code paths using a
combination of glMapBufferRange, glBufferSubData, fences and buffer
orphaning, yet none of them comes anywhere close to the performance of
(legacy) rendering from a vertex array stored in RAM.
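For readers unfamiliar with the scheme: the ring-buffer streaming described above boils down to a simple sub-allocator. Here is a minimal sketch (names are illustrative, not Dolphin's actual code; the GL mapping/orphaning calls this would drive are only indicated in comments):

```c
#include <assert.h>
#include <stddef.h>

/* One large streaming buffer; transient vertex data is sub-allocated
 * from it front to back, wrapping when full. */
typedef struct {
    size_t size;    /* total buffer size in bytes */
    size_t offset;  /* next free byte */
    int    wraps;   /* how many times we restarted from 0 */
} StreamBuffer;

/* Reserve `bytes` aligned to `align` (a power of two); returns the
 * start offset of the reservation. In a real GL code path, the caller
 * would glMapBufferRange the returned region with
 * GL_MAP_UNSYNCHRONIZED_BIT, and on wrap either orphan the buffer
 * (glBufferData with NULL) or wait on a fence covering the old data. */
static size_t stream_alloc(StreamBuffer *sb, size_t bytes, size_t align)
{
    size_t start = (sb->offset + align - 1) & ~(align - 1);
    if (start + bytes > sb->size) {  /* full: wrap (orphan/fence here) */
        start = 0;
        sb->wraps++;
    }
    sb->offset = start + bytes;
    return start;
}
```

For example, three 300-byte allocations (16-byte aligned) from a 1024-byte buffer land at offsets 0, 304 and 608; a fourth wraps back to offset 0, which is the point where the driver round-trip (orphan or sync) eats the performance.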

There are two OpenGL extensions which would greatly help us in this situation:
AMD's pinned memory [1], and buffer storage [2] in GL 4.4. We currently
have no buffer storage code path, but using pinned memory gave us a
speedup of up to 60% under heavy workloads with AMD's
Catalyst driver under Windows. We expect a similar speedup from
buffer storage (specifically, we need CLIENT_STORAGE_BIT, if I recall
correctly).
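What buffer storage buys here is a persistently mapped buffer that never needs orphaning: the buffer stays mapped, and a fence per ring region replaces the driver round-trip. The synchronization logic can be sketched without a GL context by modeling fences as frame numbers (illustrative code, not Dolphin's; in real GL the comparison below would be a glFenceSync/glClientWaitSync pair on a glBufferStorage'd, persistently mapped buffer):

```c
#include <assert.h>

/* Triple-buffered persistent ring: the buffer is split into REGIONS
 * slices, written round-robin; a region is only reused once the GPU
 * has finished reading it. */
#define REGIONS 3

typedef struct {
    long fence[REGIONS];  /* frame when region was last submitted */
    long frame;           /* current frame number */
    int  waits;           /* how often we had to stall on the GPU */
} PersistentRing;

/* Returns the region index to write this frame's vertex data into.
 * gpu_done models the last frame the GPU has finished; in real code
 * the `>` test would be glClientWaitSync on the region's fence. */
static int acquire_region(PersistentRing *r, long gpu_done)
{
    int idx = (int)(r->frame % REGIONS);
    if (r->fence[idx] > gpu_done)  /* GPU still reading: would block */
        r->waits++;
    r->fence[idx] = r->frame++;
    return idx;
}
```

With three regions, a GPU lagging up to three frames behind never forces a stall; only a lag of four or more frames does, which is why this pattern can match RAM vertex arrays where orphaning cannot.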

So the natural question that arises is: is either of these two
extensions going to be supported in Mesa anytime soon, or are they of lower
priority than other extensions? Also, is the pinned memory extension AMD
hardware specific, or would it be possible to support it on other
hardware, too? I'm not sure whether buffer storage (being a GL 4.4 extension,
and I read that it might actually depend on some other GL 4.3 extension)
can be implemented on older hardware, yet it would be very useful
for us to have efficient streaming methods for old GPUs, too.

I hope this mail doesn't sound too demanding or anything; it's just
meant as a friendly question about improving the emulator experience
for our user base.
Thanks in advance!

Best regards,
Tony

[1] http://www.opengl.org/registry/specs/AMD/pinned_memory.txt
[2] http://www.opengl.org/registry/specs/ARB/buffer_storage.txt

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev



