On 2/10/07, Peter Amstutz <[EMAIL PROTECTED]> wrote:
> On Fri, Feb 09, 2007 at 04:56:39PM +0100, Karsten Otto wrote:
> > I am not quite sure what kind of precision is necessary here... I'd
> > expect that it should be enough to center the current sector for
> > displaying purposes, and re-center to the new sector once you cross a
> > portal boundary. Considering relatively small sectors, I'd imagine an
> > error factor of a few centimeters is not too disturbing when the
> > objects in question are 50 meters away. That is probably about two
> > pixels of difference at an average display resolution. I can live with that :-)
>
> You've put your finger on it.  From a rendering perspective, you just
> need enough precision that the shared edges of adjacent triangles are
> actually drawn as shared, without visible seams or cracks.  If you make
> sure the world is set up in such a way that the camera never gets too
> far from the origin, then this isn't a problem.

That's basically right, though a recent experiment I did showed visible
rendering issues starting at 1 km. It all depends on how much time,
effort, and performance you want to devote to an ad-hoc solution:
there will always be ways to defeat it. But everyone else does it this
way - moves the viewpoint through the environment - so I'm really only
a single voice against all the conventional approaches :(
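
To make the trick concrete, here is a rough sketch of camera-relative
(floating origin) rendering - my own toy code and names, nothing from
VOS: keep the authoritative positions in double precision and subtract
the eye position *before* narrowing to float for the GPU, so the
rasteriser only ever sees small numbers.

// Minimal sketch of camera-relative ("floating origin") rendering.
// Authoritative positions stay in double precision; only the small
// camera-relative offsets are narrowed to float for the GPU.
#include <cstdio>

struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Subtract the eye position while still in double precision, *then* narrow.
Vec3f toCameraRelative(const Vec3d& world, const Vec3d& eye) {
    return Vec3f{ float(world.x - eye.x),
                  float(world.y - eye.y),
                  float(world.z - eye.z) };
}

int main() {
    Vec3d eye   {1.0e7, 0.0, 0.0};          // 10,000 km from the world origin
    Vec3d vertex{1.0e7 + 0.125, 0.0, 0.0};  // 12.5 cm in front of the eye

    // Narrowing first throws away the 12.5 cm entirely at this magnitude...
    float badEye = float(eye.x), badVert = float(vertex.x);
    printf("naive float delta:  %g\n", badVert - badEye);

    // ...while subtracting in double first keeps it exactly.
    Vec3f rel = toCameraRelative(vertex, eye);
    printf("camera-relative x:  %g\n", rel.x);
}

At 10,000 km out, single precision cannot even represent the 12.5 cm
offset, but the camera-relative value keeps it exactly.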
>
> (Strictly speaking, the camera transform causes the world to be oriented
> around the camera, not to orient the camera in the world -- but it's
> easier to speak about it as if it were the camera that is moving).
>
> Human scale is pretty easy to manage.  Geospatial scale, and
> particularly moving between a galactic scale down to a human scale
> smoothly is where things are really difficult.  I'm open to ideas as to
> how to split this, because frankly I don't know how best to compromise
> between a massive scale and a high-detail scale, particularly in terms
> of the representation that gets pushed over the network.

I would look at how they do it in eve-online or one of the other
star-system-based games, if you can find out, and make your object
system do something similar. The display system will always show a
subset, and that is where the optimisations occur. Another ref to look
at: I thought O'Neil's on-the-fly system was very good - for a
"conventional ad-hoc" approach.
>
>
> Also, on the topic of precision, something else that hasn't been
> mentioned is cumulative error -- depending on how you work with your
> coordinates, you may start getting artifacts due to the accumulation of
> roundoff errors.  This is fairly manageable by having some immutable
> "source" data from which you do your transforms, instead of doing
> incremental transformations, but it's still something to watch out for.

Hee hee! That's really what I aim to minimise with the floating origin!!
I don't believe it is so manageable in a conventional origin-relative system.
The spatial error that I talk about generally increases linearly with
distance from the origin, so it is not such a big problem if you move the
origin every now and then - like every 3 km in MS flight simulator.
But there are a few situations where it can increase in powers of 2 -
rare enough that you can discount them if you want (I don't, but that's me).
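
To put a number on "increases linearly with distance from origin", here
is a toy single-precision illustration (nothing engine-specific): the
gap between adjacent representable floats grows with magnitude, so a
position stored 10,000 km out cannot resolve much below a metre.

// Toy illustration: the gap between adjacent single-precision floats
// (the "ulp") grows roughly linearly with distance from the origin.
#include <cmath>
#include <cstdio>

int main() {
    for (double d = 1.0; d <= 1.0e7; d *= 10.0) {
        float x = float(d);
        // Smallest representable step upwards from x:
        float step = std::nextafterf(x, INFINITY) - x;
        printf("at %10.0f m, smallest step = %g m\n", d, step);
    }
}

On my reading of IEEE single precision this prints roughly 1.2e-07 m at
1 m and a full 1 m at 1e7 m - the step grows by about the same factor as
the distance.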

These spatial errors are one of the many contributors to relative
error. The relative error propagates exponentially - and that is the
main problem. So, my contention is: if you minimise the error input
into the relative error equation, you minimise the exponential error
from relative error propagation - which is what affects things most in
the end, when values reach the last stage of the graphics pipeline. The
more complex your simulation systems become in the future, the more
relative error impinges on the quality of the sim. So the foundation
you choose to build upon is important.
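
A tiny demo of the "immutable source data" point, in case it helps (my
own toy, deliberately single precision): rotating a vertex incrementally
every frame lets roundoff pile up in both the angle and the length,
while rotating the pristine source vertex by the accumulated angle each
frame stays within a few ulps of a unit-length result.

// Toy demo of cumulative roundoff: incrementally rotating a point by a
// small angle every frame drifts, while re-deriving it from immutable
// "source" data each frame does not accumulate error the same way.
#include <cmath>
#include <cstdio>

int main() {
    const float step = 0.001f;               // 1 mrad per frame
    const int   frames = 100000;

    float sx = 1.0f, sy = 0.0f;              // immutable source vertex
    float ix = sx,  iy = sy;                 // incrementally updated copy

    float angle = 0.0f;
    for (int f = 0; f < frames; ++f) {
        angle += step;
        // Incremental: rotate last frame's result by a small step.
        float c = std::cos(step), s = std::sin(step);
        float nx = c * ix - s * iy;
        float ny = s * ix + c * iy;
        ix = nx; iy = ny;
    }
    // From source: rotate the pristine vertex once by the total angle.
    float fx = std::cos(angle) * sx - std::sin(angle) * sy;
    float fy = std::sin(angle) * sx + std::cos(angle) * sy;

    printf("incremental: (%.6f, %.6f)  length %.6f\n", ix, iy, std::hypot(ix, iy));
    printf("from source: (%.6f, %.6f)  length %.6f\n", fx, fy, std::hypot(fx, fy));
}

How big the drift gets depends on the step size and frame count, but the
incremental copy is the one whose length wanders away from 1.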

>
> > Regarding physics simulation, which (if I understand you correctly)
> > suffers the most from matrix creep, well... I am no expert, but
> > couldn't you calculate this in a virtual coordinate space, derived
> > from the world coordinates in such a fashion that all objects
> > involved are close to the center? And then, once you reach some
> > "stable" result, convert the virtual coordinates back to world
> > coordinate space and continue from there? That may not be
> > particularly precise or realistic, but again, as long as the system
> > behaves more or less consistently, I can live with it.
>
> Hmm.  Yes, you probably could do something like that, if you can divide
> up your interacting objects into isolated groups.  Then you could
> simulate those objects independently at the origin and transform them
> back to their original positions.  Seems like a hassle, though.
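
For what it's worth, that "isolated groups at the origin" idea can be
sketched roughly like this (completely hypothetical code, not from any
real physics engine): take the island's centroid in double precision,
shift everything so the island sits at the origin, step the solver on
the small local coordinates, then shift the results back.

// Hypothetical sketch of simulating an isolated group near the origin:
// subtract the group's centroid (in double), step physics on the small
// local coordinates, then add the centroid back afterwards.
#include <cstdio>
#include <vector>

struct Body { double pos[3]; double vel[3]; };

// Placeholder for whatever the real solver does on origin-centred bodies.
void stepPhysicsLocal(std::vector<Body>& bodies, double dt) {
    for (auto& b : bodies)
        for (int i = 0; i < 3; ++i)
            b.pos[i] += b.vel[i] * dt;      // trivial integration stand-in
}

void stepIsland(std::vector<Body>& island, double dt) {
    double centroid[3] = {0, 0, 0};
    for (const auto& b : island)
        for (int i = 0; i < 3; ++i) centroid[i] += b.pos[i] / island.size();

    for (auto& b : island)                  // shift the island to the origin
        for (int i = 0; i < 3; ++i) b.pos[i] -= centroid[i];

    stepPhysicsLocal(island, dt);           // solver sees only small numbers

    for (auto& b : island)                  // shift the results back
        for (int i = 0; i < 3; ++i) b.pos[i] += centroid[i];
}

int main() {
    std::vector<Body> island = { {{5.0e6, 2.0e6, 0.0}, {1.0, 0.0, 0.0}},
                                 {{5.0e6 + 3.0, 2.0e6, 0.0}, {-1.0, 0.0, 0.0}} };
    stepIsland(island, 0.016);
    printf("body 0 x after step: %.3f\n", island[0].pos[0]);
}

The awkward parts in practice are deciding what counts as an island and
keeping contacts and joints consistent across the shift, which is
probably the hassle being referred to.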
>
> In practice, precise consistency is the *least* of people's worries when
> setting up a rigid body physics simulation; setting up the system takes
> a lot of tweaking just to prevent it from flipping out.
>
> Here's a harder question: how do you handle the physics for an object
> that's passing through or straddling a portal?

:) amen!

chris

_______________________________________________
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d
