Jonathan Jones wrote:

>Ken Taylor wrote:
...
>>A good compromise may be to have certain movements be activated by
>>higher-level scripting (such as walking animations), and others be fully
>>actuated (such as which direction the head is looking). Of course, as
>>motion-sensing VR type hardware becomes more common, more people will want
>>higher actuation in their avatars for immersion purposes. The amount of
>>real-time actuation to use should probably be configurable by the clients
>>and the servers. For instance, the client controlling the avatar can set up
>>how much actuation to send out on the network, the server running the space
>>can have a quota or limit of the amount of actuation bandwidth allowed per
>>client and the types of actuation allowed, and another viewing client can
>>tell the server what kinds of actuation and how much bandwidth it wants to
>>receive. This way, users with the bandwidth can have a rich experience,
>>while those with slower connections don't get totally left behind.
...
>Can I propose a change in nomenclature?
>
>As has been pointed out before, we're talking about clients and servers,
>but VOS is technically P2P, so I propose we talk about fast-side and
>slow-side, or local and remote. Hopefully this conveys what we mean by
>client and server, but fits in better with the P2P architecture.

VOS can be used to implement both P2P architectures and client-server
architectures. See this FAQ:
http://interreality.org/static/docs/manual-html/faq.html#AEN1379

The 3d virtual world model in the current implementation is more
client-server than P2P, so I think the nomenclature is appropriate. The 3d
space is controlled by a server, which relays state updates between the
various clients connected to it. My point in mentioning clients and servers
above was not that there's some intrinsic speed difference between them,
but rather that the server controls and marshals all state info being
propagated through the space, and so could have its own policies on how
much animation bandwidth is allowed per client.
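
To make that concrete, here's a rough sketch (in C++, with made-up names --
this isn't the actual VOS API) of the kind of per-client quota a space
server could consult before relaying actuation updates:

// Hypothetical per-client actuation policy on the space server (assumed
// names; not the actual VOS API).
#include <cstddef>
#include <map>
#include <string>

struct ActuationQuota {
    std::size_t maxBytesPerSecond;  // bandwidth ceiling for this client
    bool allowFullBodyActuation;    // arbitrary joints vs. canned gestures only
};

class SpaceServerPolicy {
public:
    void setQuota(const std::string& clientId, ActuationQuota q) {
        quotas_[clientId] = q;
    }

    // Decide whether an actuation update of 'bytes' size from clientId
    // may be relayed to the other clients during the current second.
    bool allowUpdate(const std::string& clientId, std::size_t bytes) {
        auto it = quotas_.find(clientId);
        if (it == quotas_.end())
            return true;                       // no policy set: just relay
        usedThisSecond_[clientId] += bytes;
        return usedThisSecond_[clientId] <= it->second.maxBytesPerSecond;
    }

    void resetSecond() { usedThisSecond_.clear(); }  // call once per second

private:
    std::map<std::string, ActuationQuota> quotas_;
    std::map<std::string, std::size_t> usedThisSecond_;
};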

>
>I don't really see why most peers need to know *exactly* where someone is
>looking, if you have a couple of defined look-positions, and then a
>look-at (object), most of the hard stuff can be done fast-side.

For immersion purposes, an approximate look direction is sufficient. But it
seems like having a "look-at" command wouldn't be any simpler than just
passing along a look vector. Notably, most FPS games nowadays actuate
where each player is looking, and it does help to create a more believable
playing experience. Actually, maybe "the direction the head is looking"
isn't a good example, since it is pretty trivial (it really boils down to
just one vector) -- something like arbitrary arm actuation would be more
complex. Though the case for arbitrary arm actuation is harder to make:
most people don't have motion capture devices, and a collection of
pre-animated "gestures" may be enough for basic body language and immersion
purposes.
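
For comparison, here's roughly what the message payloads might look like
(hypothetical structs, not the current VOS protocol). A fully actuated look
direction is about as cheap as a look-at target, a canned gesture is just an
ID, and arbitrary arm actuation needs a whole bundle of joints:

// Hypothetical message payloads, just to compare sizes (not the current
// VOS protocol).
struct Vec3 { float x, y, z; };

struct LookUpdate {       // fully actuated head: one vector per update
    Vec3 direction;
};

enum class Gesture { Wave, Nod, Shrug, Point };

struct GestureUpdate {    // scripted body language: a small ID per update
    Gesture gesture;
};

struct ArmUpdate {        // arbitrary arm actuation: several joints per update
    Vec3 shoulder, elbow, wrist;   // e.g. joint rotations as Euler angles
};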

Eventually, when we all don our cyber mocap suits and VR helmets, hopefully
networks will be fast enough not to worry about arbitrary joint actuation
across the network :)

But the protocol should be forward-looking and allow for total model
actuation if the server and receiving clients are happy with that. In the
near future, though, most servers would probably be set up to say "that's
way too many movement vectors, buddy -- simplify it a little," and the
animation model should be able to scale down as needed.
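
Scaling down could look something like this on the sending side (again just
a sketch with assumed names, not an existing VOS interface): drop the less
important actuation channels until the update fits under whatever ceiling
the server or receiving client advertised:

// Sketch of "scale down as needed" on the sender (assumed names, not an
// existing VOS interface).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Channel {
    const char* name;              // e.g. "head look", "left arm", "fingers"
    std::size_t bytesPerUpdate;
    int priority;                  // lower = more important
};

// Keep the highest-priority channels that fit under the advertised budget.
std::vector<Channel> fitToBudget(std::vector<Channel> channels,
                                 std::size_t budgetPerUpdate) {
    std::sort(channels.begin(), channels.end(),
              [](const Channel& a, const Channel& b) {
                  return a.priority < b.priority;
              });
    std::vector<Channel> kept;
    std::size_t used = 0;
    for (const auto& c : channels) {
        if (used + c.bytesPerUpdate <= budgetPerUpdate) {
            kept.push_back(c);
            used += c.bytesPerUpdate;
        }
    }
    return kept;   // e.g. keep the head look vector, drop per-finger joints
}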

Ken

