Dirk Reiners wrote:
On Wed, 2006-02-08 at 16:28 +0000, Vince Jennings wrote:
1) The geometry of the avatar classes could not be fully shared between
avatar instances, as each geometry node 'collected' by the renderAction
needs to be in its own deformed state in the rendering list. This is
clearly not possible when sharing geometry, so each instance requires at
least its own vertex and normal fields, which increases the memory
footprint and file size with every avatar instance (these are
level-of-detail avatars, with the highest level at ~8000 polygons).
The only way I see around that is to do the animation in a shader, or to
build synchronized groups of avatars that share data. But I don't see a
way around it in general, so it's not really an OpenSG-specific problem,
unless I missed something.
Unless you're doing skinning/animation on the GPU you can't share
geometry. If you are, then you need to precompute bounding volumes (or
set them to the maximum extent).
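The 'maximum extent' variant just means taking the union of the
per-frame boxes once and using that as a fixed, conservative bound for
every GPU-skinned instance. A minimal sketch (plain C++, no OpenSG
types; Box, maxExtent and frameBoxes are made-up names):

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  struct Box { float min[3], max[3]; };

  // Union of all per-frame boxes: a fixed, conservative bound that is
  // valid for every pose of the animation.
  Box maxExtent(const std::vector<Box> &frameBoxes)
  {
      Box b = frameBoxes.front();
      for (std::size_t f = 0; f < frameBoxes.size(); ++f)
          for (int i = 0; i < 3; ++i)
          {
              b.min[i] = std::min(b.min[i], frameBoxes[f].min[i]);
              b.max[i] = std::max(b.max[i], frameBoxes[f].max[i]);
          }
      return b;
  }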
I suppose what Vince might want is to compute the animation geometry on
the fly just before rendering each avatar, rather than computing all
avatars prior to rendering. That would be an interesting concept (if
generalized) to put in a scene graph. Creating a core which does this
should be feasible: it would need the source data (source geometry +
animation data) for each instance, plus a cache shared between instances
that could just be a void* to X MB (ideally allocated per-thread if we
allow multiple renders) that is filled/typed on each draw.
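To sketch the idea in plain C++ (none of these names are OpenSG classes,
and I'm using a thread_local for the per-thread cache just for brevity):

  #include <cstddef>
  #include <vector>

  struct SourceData                // shared by all instances of one avatar type
  {
      std::vector<float> bindVerts;    // rest-pose vertices
      // ... bone weights, animation tracks, normals ...
  };

  struct AvatarInstance
  {
      const SourceData *source;        // shared, read-only
      float             animTime;      // per-instance state only
  };

  // One scratch buffer per render thread, reused by every instance.
  std::vector<float> &scratchForThisThread()
  {
      static thread_local std::vector<float> buf;
      return buf;
  }

  void drawAvatar(const AvatarInstance &inst)
  {
      std::vector<float> &scratch = scratchForThisThread();
      scratch.resize(inst.source->bindVerts.size());

      // deform into the shared cache just before this instance is drawn:
      // skin(*inst.source, inst.animTime, scratch);   // hypothetical

      // ... then hand 'scratch' to the renderer for this one instance.
  }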
You can get the Transform updated by just invalidating the bvolume of
your Avatar every frame (Node::invalidateVolume()). That won't help with
being a frame late, though. The only way I see to fix that is to
artificially enlarge the bvolume so that it is big enough for whatever
movement happens in the next frame. That assumes that you know that
movement in advance, or that the avatars have a limited motion speed. It
will result in somewhat less efficient culling, but that's much better
than popping artifacts due to bad bvolumes.
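To put a number on 'big enough', assuming the avatars have a known
maximum motion speed: pad the box on every side by maxSpeed * frameTime.
A tiny sketch (plain C++, made-up names, not OpenSG API):

  struct Box { float min[3], max[3]; };

  // Pad the box on every side by the largest distance the avatar can
  // travel in one frame, so it stays valid for the next frame's motion.
  void enlargeForMotion(Box &b, float maxSpeed, float frameTime)
  {
      const float pad = maxSpeed * frameTime;
      for (int i = 0; i < 3; ++i)
      {
          b.min[i] -= pad;
          b.max[i] += pad;
      }
  }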
Precomputing the bounding box for each frame of the animation might be a
way around this, so that your animation controller component can set the
correct bvolume. This would of course require some caching scheme to
reduce load times, but if you have that in place (we have one to avoid
parsing huge VRML files on each load) then it might be worth doing.
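As a sketch of what the controller side could look like (plain C++, all
names hypothetical):

  #include <cstddef>
  #include <vector>

  struct Box { float min[3], max[3]; };

  // One precomputed box per animation frame, built offline or at load
  // time and cached next to the avatar file.
  struct VolumeTable
  {
      std::vector<Box> perFrame;

      const Box &boxForTime(float time, float framesPerSecond) const
      {
          std::size_t frame =
              static_cast<std::size_t>(time * framesPerSecond) % perFrame.size();
          return perFrame[frame];
      }
  };

  // The animation controller would then do something like:
  //   const Box &b = table.boxForTime(animTime, 25.0f);
  //   setAvatarVolume(avatarNode, b);   // hypothetical helper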
The real solution would be to decouple the animation from rendering by
having a separate update traversal. Given that traversals are not really
cheap in the current incarnation, and that only very few nodes would
need it (in most apps; yours is probably different), I've shied away
from it so far. One of the (many ;) things I want to revisit for 2.x...
Tagging nodes/cores as static/dynamic helps quite a lot with this, since
static nodes need not be visited during the volume-update traversal.
(Similar tagging applies to other types of traversals, of course.) But
perhaps you already had that in mind? :)
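Something along these lines (plain C++ sketch, made-up names, nothing
OpenSG-specific): give every node a dynamic flag that is propagated up
to its parents, and let the update traversal skip any subtree whose flag
is false.

  #include <cstddef>
  #include <vector>

  struct UNode
  {
      bool                 dynamic;    // true if anything in this subtree animates
      std::vector<UNode *> children;
  };

  void updateTraversal(UNode *n)
  {
      if (!n->dynamic)
          return;                      // entire static subtree skipped

      // update this node here (advance animation, recompute volume, ...)

      for (std::size_t i = 0; i < n->children.size(); ++i)
          updateTraversal(n->children[i]);
  }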
/Marcus