chris wrote:
> On 2/11/07, Ken Taylor <[EMAIL PROTECTED]> wrote:
> >
> > > if your physics - say bouncing boxes like my example - is performed in
> > > its own local coordinate space then it could be made consistent every
> > > time - but I can't see how you would combine the rendering of this in
> > > realtime with the rendering of the scene
> >
> > I don't see why transforming from "physics simulation space" to world
> > space for the purpose of rendering the frame is any more difficult than
> > transforming from, say, a model's local coordinate system to world space.
>
> if it's not part of the scene when you simulate the physics, then when
> you add the objects of the physics sim into the scene (assuming you
> have a way to do this over a series of frames) you could have all
> sorts of unrealistic things like objects passing through others, objects
> not occluding when they should, no shadows, etc.
>

Well, I could certainly see physics-related problems (passing through other
objects, collision detection, etc.), but shadows and occlusion should be
easy. Just change your transformation matrix while rendering the boxes to
move their vertices from physics space to rendering space -- exactly the
same as if you were rendering a model defined in its own space. They should
be rendered just the same as everything else in the world.

For example, if I wanted to put an md2 model in my world -- say, an
avatar -- it's defined in its own coordinate space, so I'd have to set up a
transformation to map from that space to where I want it to be in the world
I'm rendering. md2 models can still be rendered with shadows and proper
occlusion, so why couldn't a physics sub-system of a bunch of boxes?

Ken


_______________________________________________
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d