On Fri, Dec 01, 2006 at 10:47:37PM -0800, Ken Taylor wrote:

> For immersion purposes, an approximate look direction is sufficient. But it
> seems like having a "look-at" command wouldn't be any simpler than just
> passing along a look vector. Notably, most FPS games nowadays animate
> where each player is looking, and it does help to create a more believable
> playing experience. Actually, maybe "the direction the head is looking"
> isn't a good example, since it is pretty trivial (it really boils down to
> just one vector) -- something like arbitrary arm actuation would be more
> complex. The case for arbitrary arm actuation is harder to make, though, as
> most people don't have motion capture devices, and a collection of
> pre-animated "gestures" may be enough for basic body language and immersion
> purposes.
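
To make the quoted point concrete: the per-avatar "look" update really
could be as small as a single unit vector.  A minimal sketch in C++
(hypothetical struct, not an actual VOS message type):

    // Direction the avatar's head is facing, in world coordinates.
    // The sender normalizes; the receiver can sanity-check the length.
    struct LookUpdate {
        float x, y, z;   // unit vector
    };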

In s4, for avatars we basically cheated -- the avatar was a file in the 
.md2 (Quake 2!) format.  There was an "actor" interface that let you 
specify which animation loop the avatar should display.
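
For reference, that "actor" interface was really just about picking
canned loops.  A rough sketch of the shape of it in C++ (hypothetical
names, not the actual s4 code):

    #include <string>

    class Actor {
    public:
        virtual ~Actor() {}
        // Start one of the model's pre-baked animation loops
        // ("stand", "run", "wave", ...), optionally repeating.
        virtual void playAnimation(const std::string &loop,
                                   bool repeat) = 0;
    };

Note what's missing: there is no way to drive individual joints, which
is exactly why head tracking and arbitrary articulation were off the
table.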

This was done for expediency reasons, of course.  The problem with 
using an opaque 3rd-party file format is precisely that we couldn't 
do what you are talking about here -- we couldn't move the head to 
look where the user was looking, we couldn't have arbitrarily 
articulated limbs, etc.

With regard to limb movement, one compromise that gives you limb 
movement without needing special input is inverse kinematics: you say 
"I want my guy to touch *here*" and it moves the arm to touch that 
place.  I think Second Life recently introduced a feature like this?
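
For the simple two-bone case (shoulder + elbow reaching a point in a
plane) the solution is closed-form, just the law of cosines.  A minimal
C++ sketch, assuming a 2D skeleton (names like solveArmIK are made up,
not VOS API):

    #include <cmath>

    struct ArmAngles { double shoulder, elbow; };  // radians

    // Solve joint angles so the wrist lands at (x, y), with upper-arm
    // length L1 and forearm length L2.  Returns false if the target
    // is out of reach (a real solver would clamp to the nearest
    // reachable point instead).
    bool solveArmIK(double x, double y, double L1, double L2,
                    ArmAngles &out)
    {
        double d2 = x*x + y*y;
        double d  = std::sqrt(d2);
        if (d > L1 + L2 || d < std::fabs(L1 - L2))
            return false;  // unreachable

        // Law of cosines gives the elbow bend...
        double cosElbow = (d2 - L1*L1 - L2*L2) / (2.0 * L1 * L2);
        out.elbow = std::acos(cosElbow);

        // ...and the shoulder aims at the target, corrected for the
        // offset introduced by the bent elbow.
        double k1 = L1 + L2 * cosElbow;
        double k2 = L2 * std::sin(out.elbow);
        out.shoulder = std::atan2(y, x) - std::atan2(k2, k1);
        return true;
    }

Full 3D arms add a swivel degree of freedom, but the idea is the same:
specify the goal, derive the pose.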

> But the protocol should be forward-looking, and allow for total model
> actuation if the server and receiving clients are happy with that. In the
> near future, though, most servers would probably be set up to say "that's
> way too many movement vectors, buddy -- simplify it a little," and the
> animation model should be able to scale down as needed.

Like I said, a live data feed is actually easier in a lot of ways -- 
it's storing and playing back animation data that's hard.  Entirely 
solvable, but still hard.
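
The "scale down as needed" part can be pretty mechanical, too.  One way
to do it (a sketch with hypothetical types, not anything VOS defines
today): order the streamed joint rotations most-important-first and let
the server truncate at its budget:

    #include <cstdint>
    #include <vector>

    struct JointRotation {
        uint16_t jointId;        // index into the avatar's skeleton
        float qx, qy, qz, qw;    // orientation quaternion
    };

    // Joints arrive pre-sorted by importance (root, head, hands, ...),
    // so truncating the list keeps the most meaningful motion.
    std::vector<JointRotation>
    clampToBudget(const std::vector<JointRotation> &in,
                  std::size_t maxJoints)
    {
        if (in.size() <= maxJoints)
            return in;
        return std::vector<JointRotation>(in.begin(),
                                          in.begin() + maxJoints);
    }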

[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]
