On 11/01/2013 03:31 PM, Erik Faye-Lund wrote:

>>>>> Won't using the git-sha1 as a compatibility criterion cause issues
>>>>> for developers with local changes? I'm not so worried about this
>>>>> for OES_get_program_binary itself, but once the shader-cache is in
>>>>> place it sounds like a potential source of difficult-to-track-down
>>>>> misbehavior...

>>>> I agree it might be too aggressive a criterion, but it is hard to
>>>> come up with one that is better and as simple.
>>> That's not my objection. My objection is that this might give
>>> headaches for people with local modifications to the glsl-compiler.
>>> Local modifications do not affect the git-sha1.

>> For the automatic shader cache this headache could be helped a bit
>> with an environment variable or drirc setting that can be used during
>> development. On the other hand, an automatic cache must work in a
>> transparent way, so it should always be able to recover when it
>> fails; one should only ever see 'slower than usual' behaviour (since
>> recompilation/relinking is required). The WIP of the automatic cache
>> I sent some time earlier also marked (renamed) these 'problematic'
>> cached shaders so that they can be detected on later runs and the
>> cache can ignore them.

>> I agree that it might become problematic; on the other hand, it is
>> also easy to just wipe ~/.cache/mesa and disable the cache.
> That won't help for programs that maintain their own (explicit)
> shader-cache, which was the intention of introducing binary shaders
> to OpenGL ES in the first place.

Ok, we are talking about the extension; I thought you referred to the
automatic caching. For the extension to work, we need at least more
Piglit tests to ensure that it does not break. Of course, every time
you go and touch the code some functionality may break, be it this
extension or something else. I'm not sure if Chromium, Qt or other
users expect the glProgramBinary call to always succeed; hopefully
not.
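
For reference, the fallback pattern the extension expects from
applications looks roughly like this (a minimal sketch against the
OES entry points; load_program_from_source() is a made-up stand-in
for the application's normal compile-and-link path):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Hypothetical helper: the app's usual compile-and-link path. */
extern GLuint load_program_from_source(void);

GLuint
load_program_from_cache(GLenum format, const void *binary, GLint length)
{
   GLuint prog = glCreateProgram();
   GLint ok = GL_FALSE;

   /* A rejected binary is not a GL error; the driver just reports a
    * failed link and the application recompiles from source. */
   glProgramBinaryOES(prog, format, binary, length);
   glGetProgramiv(prog, GL_LINK_STATUS, &ok);

   if (!ok) {
      glDeleteProgram(prog);
      prog = load_program_from_source();
   }
   return prog;
}

So even if a mesa upgrade (or a local compiler change) invalidates a
stored binary, a correctly written application only pays a recompile.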

>> Not sure if Nvidia or Imagination try to handle these cases with
>> their cache implementations.
> I would assume they simply piggy-backed on their binary-shader
> support. But I somehow doubt they have such a "lazy" approach to
> binary shaders as this series attempts. I worked on
> ARM_mali_shader_binary for the Mali-200 drivers myself, and our
> approach was quite different from this, and surely much more robust.

With such a strong opinion, it would be nice to have some more
technical explanation: why was it "surely much more robust"? The
implementation itself is likely very different, as yours targeted
only a particular GPU while the approach here is meant to be more
generic. Please provide some more input and I can try to tackle the
weak spots.

> To be honest, I find the whole idea of serializing the IR quite
> repelling, as it goes against almost every intention of the
> extension. Add to that mesa's IR not at all being stable, well....
> yeah, that's a super-highway to disaster.

Again, I would appreciate a bit more technical input if possible; I
can't tell from this paragraph whether you have a better idea for the
architecture or not.
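
To make this more concrete, the kind of guard I have in mind is
roughly the following sketch; struct blob_header and
mesa_get_build_id() are hypothetical names here, not existing Mesa
API. The point is only that a blob from a different build is rejected
up front, so the worst case stays a recompile/relink:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical: the 40-char git-sha1 the library was built from. */
extern const char *mesa_get_build_id(void);

struct blob_header {
   char build_id[41];     /* git-sha1 of the build, NUL-terminated */
   uint32_t payload_size; /* cheap sanity check against truncation */
};

static bool
blob_is_compatible(const struct blob_header *hdr, size_t blob_size)
{
   if (blob_size < sizeof(*hdr))
      return false;
   /* A plain git-sha1 does not catch local modifications (your
    * earlier point); a build timestamp or a hash of the installed
    * driver binary could be folded into the id as well. */
   if (strncmp(hdr->build_id, mesa_get_build_id(), 40) != 0)
      return false;
   return hdr->payload_size == blob_size - sizeof(*hdr);
}

Whether the payload is serialized IR or something lower-level, this
check is what decides whether we ever look at it.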

Thanks for the feedback;

// Tapani

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
