On Wed, 15 Apr 2020 20:59:29 GMT, Kevin Rushforth <k...@openjdk.org> wrote:

>> I think @arapte has a MacBook Pro model similar to mine.
>> 
>> I think @prrace might be able to test it (I'll sync with him offline).
>
> Here are the results on Phil's machine, which is a MacBook Pro with a 
> graphics accelerator (Nvidia, I think).
> 
> Without the patch:
> 2000 quads average 8.805 fps
> 
> With the patch:
> 2000 quads average 4.719 fps
> 
> Almost a 2x performance hit.

Conclusion: The new shaders that support attenuation don't seem to have much of 
a performance impact on machines with an Intel HD, but on systems with a 
dedicated graphics accelerator they cause a significant slowdown.

So we are left with two choices: double the number of shaders (that is, a set 
of shaders with attenuation and a set without), or live with the performance 
hit (which will only be a problem for highly fill-limited scenes on machines 
with a dedicated graphics accelerator). The only way we can justify a 2x drop 
in performance is if we are fairly certain that this is a corner case, and thus 
unlikely to hit real applications.

If we do end up deciding to replicate the shaders, I don't think it is all that 
much work. I'm more worried about how well it would scale to subsequent 
improvements, although we could easily decide that, for spotlights say, 
attenuation is so common that we wouldn't create a version without it.

In the D3D HLSL shaders, ifdefs are used, so the work would be to restore the 
original code and add the new code under an ifdef, then double the number of 
gradle lines (at that point, I'd do it in a for-each loop), and then modify the 
logic that loads the shaders to pick the right one, roughly as in the sketch 
below.
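
Purely as a sketch (the class, method, and shader names here are illustrative 
assumptions, not the actual Prism code), that selection logic could look 
something like this:

```java
// Hypothetical sketch of the D3D side. The names ("Mtl1PS", "_Attn",
// "ATTENUATION") are made up for illustration; the real Prism naming
// scheme and loading code may differ.
final class D3DShaderNames {

    private D3DShaderNames() {}

    // The gradle for-each loop would compile each base shader twice, e.g.:
    //   fxc Mtl1PS.hlsl                -> Mtl1PS.obj
    //   fxc /D ATTENUATION Mtl1PS.hlsl -> Mtl1PS_Attn.obj
    // and at runtime we pick which compiled variant to load:
    static String pixelShaderName(String baseName, boolean needsAttenuation) {
        return needsAttenuation ? baseName + "_Attn" : baseName;
    }
}
```

The cost of this approach is combinatorial: each additional optional feature 
handled this way would double the variant count again, which is the scaling 
concern above.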

For GLSL, the different parts of the shader are in different files, so it's a 
matter of creating attenuation-handling versions of each of the three lighting 
shaders and choosing the right one at runtime, along the lines of the sketch 
below.
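
As a rough illustration (the resource path, file names, and attenuation flag 
are assumptions, not the actual names in the repo), the runtime choice might 
amount to:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: resolve and read one of the three GLSL lighting
// shaders, or its attenuation-aware twin, at runtime. Resource names
// are illustrative only.
final class GlslShaderSource {

    private GlslShaderSource() {}

    static String load(int numLights, boolean needsAttenuation) throws IOException {
        String name = "main" + numLights + "Light"
                + (needsAttenuation ? "_attenuation" : "") + ".frag";
        try (InputStream in =
                GlslShaderSource.class.getResourceAsStream("/glsl/" + name)) {
            if (in == null) {
                throw new IOException("Missing shader resource: " + name);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```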

-------------

PR: https://git.openjdk.java.net/jfx/pull/43
