I'd think the combination of X and Y data would have to be done somewhere, so even if OSG or OpenGL provided some mechanism for letting you keep these as separate arrays, OSG or OpenGL would still have to do the combination (eventually), and presumably at the same cost as if you had done it yourself.

Though if OpenGL did this behind the scenes for me, I could simply work directly off the separate arrays and apply the low-level vectorized math libraries available to me when I need to transform my data. For some reason, I suspect OpenGL could still be more efficient than me at combining the data (allocating new memory for arrays, and iterating and copying between interleaved (Vec3Array) and non-interleaved formats).
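For what it's worth, the combining step itself is just a copy loop. Here is a minimal sketch (plain C++, no OSG types; the function and array names are made up for illustration) of packing separate X/Y arrays into the interleaved xyz layout a Vec3Array holds:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pack separate per-axis arrays into one interleaved xyz buffer --
// the memory layout an osg::Vec3Array would hold. This is the copy
// that would otherwise have to happen somewhere on your behalf.
std::vector<float> interleave(const std::vector<float>& xs,
                              const std::vector<float>& ys)
{
    assert(xs.size() == ys.size());
    std::vector<float> xyz;
    xyz.reserve(xs.size() * 3);
    for (std::size_t i = 0; i < xs.size(); ++i)
    {
        xyz.push_back(xs[i]);
        xyz.push_back(ys[i]);
        xyz.push_back(0.0f); // 2D plot: z stays zero
    }
    return xyz;
}
```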

But then I thought about this a little more, and I couldn't help but wonder if a vertex program might be able to help you out. I haven't experimented with this, and I don't even know if it's possible. But perhaps there's a way to keep a separate buffer object for each axis and combine them in a vertex program, or perhaps in a prerender pass? If so, this would probably execute much faster on modern graphics hardware than attempting to combine them on the CPU, due to parallel processing of vertices.

You could also use a vertex program to transform just one axis.

I'll let someone who has done more with vertex programs comment on this. If I'm full of it, don't hesitate to say so; I'll readily admit I haven't tinkered with vertex programs much.
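To make the idea concrete, here is an untested GLSL sketch of what such a vertex program might look like, assuming X and Y arrive as two separate generic vertex attributes bound from their own buffer objects (the attribute names are invented for illustration):

```glsl
// Hypothetical: each axis streams in from its own buffer object
// as a generic per-vertex attribute.
attribute float axisX;
attribute float axisY;

void main()
{
    // Combine the separate axes into one position on the GPU.
    vec4 v = vec4(axisX, axisY, 0.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * v;
    gl_FrontColor = gl_Color; // keep per-vertex color working
}
```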

Your vertex program suggestion intrigued me. Unfortunately, I think there is some other fallout if I adopt this approach. But for kicks, I thought I would try hacking in this approach to see it work. I am really behind the times and don't know shaders yet, so I muddled through a simple implementation. I was frustrated to find there is no log10 function (only natural log and log2), so I had to Google for a conversion because I forgot all my log properties :)
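For anyone else who has to look it up: the conversion is just the change-of-base identity, log10(x) = log(x) / log(10), or equivalently log2(x) / log2(10). A quick C++ check of the identity (function names are mine):

```cpp
#include <cmath>

// Change of base: GLSL has log() and log2() but no log10(),
// so divide by the log of 10 taken in the same base.
float log10_via_ln(float x)   { return std::log(x) / std::log(10.0f); }
float log10_via_log2(float x) { return std::log2(x) / std::log2(10.0f); }
```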

Anyway, for the basic display, the transform works, and I like the solution because I avoided writing a lot of the data-copying and for loops, and I don't have to worry about leaving hooks open (complexity) for optimization. I think it saved me a lot of lines of code.

But I have an implementation bug or shortcoming. I am not using a fragment program. I already specify an osg::Vec4Array of colors for my Geometry. And I also already apply a texture and use the texture MatrixMode to do a simple animated effect on my lines (which Don introduced me to). Something about all this seems not to work well with my vertex-only program. My line comes out black with occasional stippled white flickering. Is there something I can do to fix this problem?
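One known pitfall with a vertex-only program that may or may not be the culprit here: once you replace the fixed-function vertex stage, you also take over forwarding color and texture coordinates, and leaving them unwritten gives undefined (often black or flickering) results downstream. A hypothetical GLSL sketch of the forwarding, assuming the log10 transform is applied to Y:

```glsl
// Hypothetical vertex shader: apply the log10 transform and still
// forward color and texture coordinates. If gl_FrontColor and
// gl_TexCoord[0] are never written, the fixed-function fragment
// stage samples undefined values.
void main()
{
    vec4 v = gl_Vertex;
    v.y = log(v.y) / log(10.0); // log10 via change of base
    gl_Position = gl_ModelViewProjectionMatrix * v;
    gl_FrontColor = gl_Color;                                 // per-vertex Vec4Array colors
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0; // keep the texture-matrix animation
}
```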

Finally, I think there may be negative fallout for my specific case. One of the things I've been hoping to do is intersection/picking on my lines once Robert adds the support. I suspect that the intersection code will have no idea that a vertex program has modified the positions of everything. And in general, I need to watch out when making queries on the data I have in CPU land, because the GPU might be doing something I don't know about. (So, for example, my auto-scaling/zoom-to-best-fit code now breaks because my range queries are on the CPU data.) I don't know if there is a good solution to this.
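One partial workaround, at least for the range queries: since the shader's transform is a known pure function of the input data, CPU-side code can apply the same function before answering. A sketch (plain C++; the names are invented) of a range query that mirrors the GPU's log10 transform for the auto-fit case:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Mirror the vertex program's transform on the CPU so range queries
// (e.g. for auto-scaling/zoom-to-fit) agree with what the GPU draws.
// This only works because the transform is a pure function of the
// vertex data; picking/intersection would need the same trick.
float gpuTransform(float y) { return std::log(y) / std::log(10.0f); }

std::pair<float, float> displayedRange(const std::vector<float>& ys)
{
    assert(!ys.empty());
    float lo = gpuTransform(ys[0]);
    float hi = lo;
    for (float y : ys)
    {
        float t = gpuTransform(y);
        lo = std::min(lo, t);
        hi = std::max(hi, t);
    }
    return std::make_pair(lo, hi);
}
```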

Thanks,
Eric
_______________________________________________
osg-users mailing list
[email protected]
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/
