I thought it might be a good idea to write up a few things I've tried recently 
and not seen in widespread use - so that either others know about them as well 
or I can find out what the pitfalls are.

Basically this is about reducing the number of varyings, which is desirable for 
at least two reasons. First, their total amount is quite limited (I think 32?). 
Second, they cause work per vertex and per pixel, so their cost always scales 
with the current bottleneck. Their actual workload is just a linear 
interpolation across a triangle though, so the optimizations I'm talking about 
here are maybe 10-20% gains all together, nothing dramatic, and saving a 
varying is not unconditionally a win if the additional workload in the 
fragment shader is substantial.

Also, the techniques are somewhat 'dirty' in the sense that they make it a bit 
harder to understand what is happening inside the shader.

* making use of gl_FrontColor and gl_BackColor -> gl_Color

As far as I know, these are built-in varyings which are there regardless of 
whether we use them or not. So if we don't use them at all because all color 
computations happen in the fragment shader, they can carry four components of 
geometry; if we do use a color but know the alpha in advance, one component 
can still be saved by using gl_Color.a to encode something else.

The prime example is terrain rendering, where we know that the alpha channel is 
always 1.0 since the terrain mesh is never transparent. In default.vert/frag 
gl_Color.a is used to transport the information whether a surface is front- or 
back-facing, but in terrain rendering we know we're always above the mesh, so 
all surfaces we see are front-facing, and we do backface culling in any case.
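
As a minimal sketch (the packed quantity and its name are just placeholders - 
the point is only that the alpha slot is free):

  // vertex shader: terrain alpha is known to be 1.0, so the slot can
  // carry some other scalar (hypothetical fog_factor) instead
  gl_FrontColor = vec4(gl_Color.rgb, fog_factor);

  // fragment shader: recover both
  vec4 color = vec4(gl_Color.rgb, 1.0); // alpha is known anyway
  float fog_factor = gl_Color.a;        // the smuggled scalar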

* making use of unit vectors

Direction vectors (normal, tangent, binormal, light direction, view direction, 
half vector,...) are unit vectors, i.e. they really are not vec3 but their 
information content is just vec2 because they can be represented by an azimuth 
and a polar angle.

That representation may not be good in practice, because linear interpolation 
on angles is not exact and converting back and forth may cost too much, but it 
means that, up to a sign, the third component is determined once the other two 
are known. And that sign we often do know:

In terrain rendering in world coordinates, the z-component of the normal is 
always positive. In terrain rendering in eye coordinates, the normal always 
points towards us, as we never see backfaces, so the sign of its z-component 
is known as well.

A unit vector interpolated component-wise into the fragment shader needs to be 
normalized there in any case, so for the same computational price tag the 
normalization can instead reconstruct the missing component.
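
A fragment shader sketch, assuming normal_xy is a varying carrying the x/y 
components of the unit normal and the sign of z is known to be positive:

  varying vec2 normal_xy;

  vec3 n;
  n.xy = normal_xy;
  // reconstruct z instead of normalizing a full vec3; the max() guards
  // against the interpolated squared length creeping above 1
  n.z = sqrt(max(1.0 - dot(n.xy, n.xy), 0.0));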

* cross products and orthonormal bases

In case one needs an ON base of normal, tangent and binormal in the fragment 
shader, the definition binormal = cross(normal, tangent) can be used to get it 
from the other two vectors rather than by interpolation and normalization 
(it's not a priori clear, but in my tests this even seems to be slightly 
faster than generating binormals at the vertices, interpolating them and 
normalizing them later). So the whole ON base can be constructed in the 
fragment shader at the expense of just 5 varyings rather than 9 if done 
naively (or even 4 if the tangent has a component with known sign, which 
isn't really clear to me), as sketched below.
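
A sketch of the 5-varying version (normal_xy as above, tangent_vec is a 
hypothetical vec3 varying):

  varying vec2 normal_xy;
  varying vec3 tangent_vec;

  vec3 N;
  N.xy = normal_xy;
  N.z = sqrt(max(1.0 - dot(N.xy, N.xy), 0.0));
  vec3 T = normalize(tangent_vec);
  vec3 B = cross(N, T); // binormal for free, no varying needed

Since N and T are unit vectors and stay close to orthogonal under 
interpolation, B comes out unit length to good accuracy without an extra 
normalize.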

* light in classic rendering

Leaving Rembrandt aside, the direction of the light source (the sun) is not 
really a varying but actually a uniform. In case we need it in world space in 
the fragment shader, doing a

lightdir = normalize(vec3(gl_ModelViewMatrixInverse * gl_LightSource[0].position));

in the vertex shader and passing this as a varying vec3 is quite overkill.

Due to the complexity of the coordinate system of the terrain, it's not clear 
to me how to get the world-space light direction into a uniform directly, but 
we do have the sun angle above the horizon as a property, which fixes the 
z-component, and we can pass that as a uniform. Since the light direction is 
a unit vector, only its azimuth then needs to be passed as a varying, saving 
two components.
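
A sketch of the reconstruction, assuming lightdir_z (derived from the sun 
angle) is available as a uniform and light_azimuth is a hypothetical varying 
set in the vertex shader:

  uniform float lightdir_z;    // sin of the sun angle above the horizon
  varying float light_azimuth;

  float r = sqrt(max(1.0 - lightdir_z * lightdir_z, 0.0));
  vec3 lightdir = vec3(r * cos(light_azimuth),
                       r * sin(light_azimuth),
                       lightdir_z);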

In particular for water reflections computed in world space, passing normal, 
view direction and light direction in world coordinates from the vertex shader 
(9 varyings) is really not efficient. The normal of a water surface in world 
space is (0,0,1) and not varying at all (formally we do have water on steep 
surfaces in the terrain, but we never render this correctly in any case - in 
reality rivers don't run up and down mountain slopes, and they foam when they 
run really fast on slopes - so worrying about getting the light reflection 
wrong when the whole setup is wrong is a bit academic). The light direction is 
really just its azimuth, as above. And since we later dot everything with the 
normal, we really only need the z-component of the half vector, and for that 
just two components of the view direction. So it can in principle be done with 
3 varyings rather than 9, as sketched below.
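
With lightdir reconstructed as above, a sketch of the remaining two varyings 
(viewdir_xy is hypothetical; since we look down onto the water, the 
z-component of the view vector is known to be positive):

  varying vec2 viewdir_xy;

  vec3 V;
  V.xy = viewdir_xy;
  V.z = sqrt(max(1.0 - dot(V.xy, V.xy), 0.0));
  vec3 H = normalize(lightdir + V);
  // with the water normal fixed at (0,0,1), dot(N, H) is just H.z
  float specular = pow(max(H.z, 0.0), gl_FrontMaterial.shininess);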

I've tested and used some of these; for instance, making use of the cross 
product or the alpha channel in terrain rendering performs just fine. A 
10%-ish performance gain for the terrain shader isn't something to be 
massively excited about, but in the end combining a few 10% gains from 
various tricks does make an impact. Well, anyway - a few people just might 
be interested.

Cheers,

* Thorsten