> Oooh... that is a nice idea. Would that work? Steve enters 
> the world with
> an avatar which has a viewpoint which is sitting in a 
> position two meters
> away from his avatar's face and pointing backwards. Miriam enters the
> world, looks around, can't see Steve, and so selects the 
> "Steve" viewpoint
> and zooms across the landscape to stand in front of Steve.

 the problem with that is that if everyone started doing it, the 
viewpoint list could become cluttered.  you would need two views -
one "between the eyes" and one for a third-person view.
i've got the viewpoints in the fractal gallery we're building
attached to LODs in a hierarchical structure... i.e. hit the
gallery viewpoint and you get close enough to "see" the viewpoints
in the gallery.  this way i can define viewpoints that are useful
for the area they were designed for and have them hide themselves
when they're not needed, so they don't clutter the viewpoint list.
maybe that strategy would work to keep the list uncluttered, but
it wouldn't help you "find" someone in the environment.
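
a rough sketch of what i mean - node names, positions and ranges
are invented, and whether a browser really drops the far-level
viewpoints from its list when the LOD switches is, as far as i
know, up to the browser:

  DEF GALLERY Transform {
    translation 50 0 -120
    children [
      LOD {
        range [ 30 ]             # switch distance in metres
        level [
          Group {                # near level: local viewpoints exist
            children [
              Viewpoint { position 0 1.6  5 description "gallery: entrance" }
              Viewpoint { position 3 1.6 -2 description "gallery: fractal wall" }
              # ... gallery geometry itself ...
            ]
          }
          Group { }              # far level: empty, so the local
                                 # viewpoints vanish from the list
        ]
      }
    ]
  }

  # one always-visible viewpoint that gets you close enough
  # for the LOD to switch and "reveal" the local ones
  Viewpoint {
    position    50 1.6 -95
    description "gallery"
  }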

this brings up another issue that i have thought about but didn't
post because i thought i would jokingly "illustrate" it later.  i
think it would be possible to construct an avatar with sensors
so that it could set off an event that would only affect the client
that activated the sensor (unless persistence was implemented).
for example: an avatar that would turn into a buttload of light
sources when a proximity sensor was activated, or a proximity
sensor that would bind the "activatee" to a viewpoint and set off
an animation of that viewpoint - sending them off into near-infinite
VRMLspace...  i kinda figured that would come in handy if someone
was being a royal pest ;-)  but we all know how arms races and
viruses propagate....
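
for the curious, here's roughly how i imagine the booby-trap
working - everything here is invented for illustration, and note
that ProximitySensor events are generated per-browser anyway,
which is what keeps the prank client-side:

  DEF PROX ProximitySensor {
    center 0 1 0
    size   4 4 4                 # trip zone around the avatar
  }

  # prank #1: lights come on while the visitor stands in the
  # zone (multiply to taste for the full buttload)
  DEF ZAP PointLight {
    on        FALSE
    intensity 1
    location  0 2 0
  }
  ROUTE PROX.isActive TO ZAP.set_on

  # prank #2: bind the intruder to a viewpoint and fly it away
  DEF TRAPVP Viewpoint {
    position    0 1.6 2
    description "gotcha"
  }
  DEF TIMER TimeSensor { cycleInterval 10 }
  DEF MOVER PositionInterpolator {
    key      [ 0, 1 ]
    keyValue [ 0 1.6 2,  0 1.6 10000 ]   # off into VRMLspace
  }
  ROUTE PROX.isActive          TO TRAPVP.set_bind
  ROUTE PROX.enterTime         TO TIMER.set_startTime
  ROUTE TIMER.fraction_changed TO MOVER.set_fraction
  ROUTE MOVER.value_changed    TO TRAPVP.set_position

  # caveat: once the ride leaves the sensor box, isActive goes
  # FALSE and unbinds TRAPVP; a small Script that latches TRUE
  # on enterTime would be needed to hold the bind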

jon meadows
