This is my first post and I hope I am not far off topic here. If I am, I ask your tolerance.
I have been following this thread about user tracking to animate avatars, and I would like to point out an interesting approach used by There.com: it uses speech recognition to extract cues for animating the avatars. As one speaks, the avatar makes very lifelike movements. Lip sync is excellent, and facial expressions, posture, and hand gestures follow the content of the speech quite realistically. While this may not be as sophisticated as head tracking, it lends a great deal of realism to the experience.
It would seem to me that this method of avatar animation would be easier to implement than tracking the user's head movements and facial expressions and then translating those data into avatar movements.
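To make the idea concrete, here is a minimal sketch in Python of the general technique, assuming a speech recognizer that emits timed phonemes (ARPABET-style labels here). This is not There.com's actual pipeline, and the phoneme-to-viseme table is illustrative rather than exhaustive; the point is just that the mapping from recognized speech to mouth-shape keyframes can be a simple lookup.

# Sketch: map timed phonemes from a recognizer to viseme (mouth-shape)
# keyframes an avatar face rig could blend between. Illustrative only.

VISEME_FOR_PHONEME = {
    "AA": "open",       # as in "father"
    "IY": "wide",       # as in "see"
    "UW": "round",      # as in "boot"
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth-lip", "V": "teeth-lip",
    "SIL": "rest",      # silence
}

def visemes_from_phonemes(timed_phonemes):
    """Convert (start_time, phoneme) pairs into (start_time, viseme)
    keyframes, collapsing consecutive identical visemes so the animation
    layer only blends when the mouth shape actually changes."""
    keyframes = []
    for start, phoneme in timed_phonemes:
        viseme = VISEME_FOR_PHONEME.get(phoneme, "rest")
        if not keyframes or keyframes[-1][1] != viseme:
            keyframes.append((start, viseme))
    return keyframes

if __name__ == "__main__":
    # Hypothetical recognizer output for the word "beam": B IY M
    recognized = [(0.00, "B"), (0.08, "IY"), (0.25, "M"), (0.40, "SIL")]
    for t, shape in visemes_from_phonemes(recognized):
        print(f"{t:.2f}s -> {shape}")

Compare that to head tracking, which needs a camera, a computer vision pipeline, and a way to retarget the recovered pose onto the avatar; the speech-driven route rides on data the client already has (the voice stream), which is part of why it seems easier to implement.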
As a slight aside, avatars in There.com frequently make eye contact. Its absence in Second Life is sadly depersonalizing.
I enjoy this mailing list a great deal and I know I am among esteemed company here.
Robert Allen
