On Mon, 2011-09-19 at 12:52 -0600, Paul Martz wrote:
> On 9/19/2011 11:16 AM, Jeremy Moles wrote:
> > I can't help but wonder how to best do something like this in OSG. :)
> 
> The general solution -- OSG or whatever -- is to render your scene into
> one or more textures. Then draw a fullscreen quad to display the final
> image, but use a shader that takes your texture(s) as input to achieve
> the desired rendering effect. Depending on the effect or effects you
> want, you might need to draw multiple fullscreen quads with multiple
> different shaders, often using the rendered output of the previous quad
> as an input texture to the next. With a post-render pipeline such as
> this, you can create several visual effects, including depth of field,
> motion (or other) blur, glow, heat distortion, deferred lighting
> application, etc.
>     -Paul

Thanks Paul--I was certainly aware of this in the general sense, but
sought some confirmation. :)

What you've described is essentially what osgPPU does (or can do), yes?

And as far as performance is concerned: how often is this rendering
paradigm used in high-fps applications? Articles often just say
"post-process the default frame buffer," which led me to believe there
might be (possibly lesser) alternatives. But then, how would subsequent
shaders get access to the previous pass's color if it weren't stored in a
texture somewhere? So RTT really does seem to be the ONLY way, which is
what prompted this message.
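
For the archives, here is the rough skeleton I have in mind, in OSG terms.
This is an untested sketch on my part -- the texture size, the "sceneRoot"
node, the buildPostProcessGraph() helper, and the little desaturation
shader are all placeholders I made up, not anything from Paul's mail:

// Untested sketch: render the scene to a texture with a PRE_RENDER FBO
// camera, then draw a fullscreen quad that samples that texture through a
// post-processing shader. "sceneRoot" and the shader are placeholders.
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Group>
#include <osg/Program>
#include <osg/Shader>
#include <osg/Texture2D>
#include <osg/Uniform>

static const char* vertSrc =
    "void main() {\n"
    "    gl_Position    = ftransform();\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "}\n";

// Example effect: simple desaturation of the rendered scene.
static const char* fragSrc =
    "uniform sampler2D sceneTex;\n"
    "void main() {\n"
    "    vec4 c = texture2D(sceneTex, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(vec3(dot(c.rgb, vec3(0.299, 0.587, 0.114))), c.a);\n"
    "}\n";

osg::ref_ptr<osg::Group> buildPostProcessGraph(osg::Node* sceneRoot)
{
    // 1. Color texture the scene will be rendered into.
    osg::ref_ptr<osg::Texture2D> colorTex = new osg::Texture2D;
    colorTex->setTextureSize(1024, 768);
    colorTex->setInternalFormat(GL_RGBA);
    colorTex->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    colorTex->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

    // 2. Pre-render camera that draws the scene into the texture via FBO.
    osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
    rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
    rttCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    rttCamera->setViewport(0, 0, 1024, 768);
    rttCamera->attach(osg::Camera::COLOR_BUFFER, colorTex.get());
    rttCamera->addChild(sceneRoot);

    // 3. Fullscreen quad textured with the RTT result and shaded with the effect.
    osg::ref_ptr<osg::Geode> quad = new osg::Geode;
    quad->addDrawable(osg::createTexturedQuadGeometry(
        osg::Vec3(-1.0f, -1.0f, 0.0f),
        osg::Vec3( 2.0f,  0.0f, 0.0f),
        osg::Vec3( 0.0f,  2.0f, 0.0f)));

    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX, vertSrc));
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));

    osg::StateSet* ss = quad->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, colorTex.get());
    ss->setAttributeAndModes(program.get());
    ss->addUniform(new osg::Uniform("sceneTex", 0));
    ss->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
    ss->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);

    // 4. Ortho camera that draws the quad after the RTT pass.
    osg::ref_ptr<osg::Camera> quadCamera = new osg::Camera;
    quadCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    quadCamera->setProjectionMatrix(osg::Matrix::ortho2D(-1.0, 1.0, -1.0, 1.0));
    quadCamera->setViewMatrix(osg::Matrix::identity());
    quadCamera->setRenderOrder(osg::Camera::POST_RENDER);
    quadCamera->setClearMask(0);
    quadCamera->addChild(quad.get());

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild(rttCamera.get());
    root->addChild(quadCamera.get());
    return root;
}

Presumably chaining further effects just means attaching each intermediate
quad's camera to another texture and feeding that into the next pass, as
you described.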

Thanks for the info...
