Hi Marcus,
Marcus Lindblom wrote:
Oliver Kutter wrote:
Given support for MRT (multiple render targets, which would mean
using OSGPassiveWindow for now, methinks) you could use the vertex
program to read from the particle state texture(s) and simply
transform each point into screen space, then do the rendering _and_
calculate/update the state texture(s) for the next frame in the
fragment program.
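Roughly, such a combined fragment program might look like this in
GLSL (a sketch only: the sampler names, the uniforms and the Euler
step are invented for illustration, and the two outputs assume
ARB_draw_buffers and floating-point state textures):

    uniform sampler2D posTex;    // current particle positions
    uniform sampler2D velTex;    // current particle velocities
    uniform float     dt;        // time step

    varying vec2 particleCoord;  // texel of this particle in the state texture

    void main()
    {
        vec3 pos = texture2D(posTex, particleCoord).xyz;
        vec3 vel = texture2D(velTex, particleCoord).xyz;

        // render target 0: the visible particle color
        gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);

        // render target 1: the updated state for the next frame
        // (a placeholder Euler step stands in for the real update)
        gl_FragData[1] = vec4(pos + vel * dt, 1.0);
    }

Note that both targets get written at the same fragment position,
which is exactly the catch discussed further down.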
I use a second viewport with a PolygonForeground and a
TextureGrabForeground. That's the way I get my positions back to my
application.
But how can I read from the texture at the correct texture position
(where the current particle's position is stored) in the vertex
program? In the fragment program I use an offset texture to indicate
the position of the current particle, but I don't know how to do
this in the vertex shader.
You'll have to set up a vertex array (e.g. of positions) that holds
your texture coordinates (quite a boring array, though :). Then, if
you render your particles as points, you use the input (gl_Vertex in
this case) as the coordinate when looking up the position in the
texture in the vertex program, and output it both transformed (as
gl_Position) and in world coords (as a named varying, perhaps). In
the fragment program, use the varying to compute the next state and
the position for lighting/fog (if applicable).
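In GLSL such a vertex program could look roughly like this (just a
sketch; the sampler and varying names are made up, and texture2DLod
is used because vertex texture fetches need an explicit LOD):

    uniform sampler2D stateTex;  // particle state texture holding positions

    varying vec3 worldPos;       // hand the fetched position to the fragment program

    void main()
    {
        // gl_Vertex.xy carries this particle's texture coordinate,
        // fed in via the boring vertex array mentioned above
        vec3 pos = texture2DLod(stateTex, gl_Vertex.xy, 0.0).xyz;

        worldPos    = pos;
        gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
    }

On the application side, the array simply holds one texture
coordinate per particle, passed in as the vertex position.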
Hmmm.. This one-pass render-and-update technique only works if your
particles are single-pixel points _and_ you can affect the output
coordinate with MRT (to write the correct screen-space position for
the visualization and the correct position in the state texture). I
would assume that you don't do that, so you'll have to do two passes:
1. draw one big rectangle to update the particle state texture(s).
(The fragment shader does a lot of work, the vertex shader is idle;
see the sketch after this list.)
2. draw a set of points (or something) whose input indexes the
particles' positions. (The vertex shader does most of the work,
maybe lighting/texturing in the fragment shader.)
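The pass-1 fragment program could be as simple as this sketch (one
position and one velocity texture assumed; the Euler step is a
placeholder for whatever your real simulation does):

    uniform sampler2D posTex;   // current positions
    uniform sampler2D velTex;   // current velocities
    uniform float     dt;       // time step

    void main()
    {
        // the big rectangle's texture coordinates address one particle per texel
        vec2 coord = gl_TexCoord[0].xy;

        vec3 pos = texture2D(posTex, coord).xyz;
        vec3 vel = texture2D(velTex, coord).xyz;

        // placeholder Euler step; the real update rule goes here
        gl_FragColor = vec4(pos + vel * dt, 1.0);
    }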
Either way, you should be able to do texture fetches in the vertex
program. The tricky part is what you feed as input (your real
question), which would be the texture coords of the current 'live'
particles that you want to visualize.
I think there is a demo of this in Nvidia's SDK, which works with the
GeForce 6 series of cards.
Don't hesitate to ask more. :)
Best regards,
/Marcus
N.B. I've mostly only read about how to do this in online Nvidia
docs, but I feel quite confident about the capabilities. At any
rate, I will need to do something like this myself in the future, so
I'll gladly help you out.
Well, thank you for that and for your help. I'll try out something else
first, and if that does not work, I will ask more.