This discussion reminds me of the idea of providing an API for LL's
puppeteering project so people could write different plug-in applications
to power avatar movement, rather than relying on lackluster mouse control.
An example implementation would be camera recognition of hand position:
even if the user had to wear colored tape on their fingers, direct
control of an avatar's limbs would be rather liberating (with 3D movement
calculated from the apparent scale of the hand and its x, y position, or
with a depth-sensitive camera).
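
Something along these lines would do it (a rough, untested sketch using
OpenCV and an ordinary webcam, not anything from LL's code; the green-tape
color range and the reference marker area are placeholder values you would
have to calibrate):

import cv2

# Assumed calibration constant: marker area in pixels^2 at a known distance.
MARKER_AREA_AT_REF_DEPTH = 900.0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Isolate the tape color (here a saturated green; tune for the actual tape).
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        marker = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(marker)
        if area > 50:  # ignore speckle noise
            m = cv2.moments(marker)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # Apparent size falls off with distance, so relative depth can be
            # estimated as sqrt(reference area / observed area).
            depth = (MARKER_AREA_AT_REF_DEPTH / area) ** 0.5
            print(f"marker at x={cx:.0f}, y={cy:.0f}, relative depth={depth:.2f}")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()

The x/y centroid and the depth estimate are exactly the three numbers a
plug-in would feed into a limb-positioning API, if one existed.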
Head position and point tracking are two relatively well-established
techniques; while they can still be finicky, like most image-recognition
technology, both could potentially provide pathways to a more expressive
Second Life.
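
The head-tracking side is even simpler to prototype, for example with
OpenCV's stock Haar face detector (again just an illustration; the cascade
path assumes the standard opencv-python package layout, and mapping the
offsets onto avatar joints is left to the hypothetical plug-in):

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Offset of the face center from the frame center gives a crude
        # head-position signal; the rectangle size is a rough distance cue.
        dx = (x + w / 2) - frame.shape[1] / 2
        dy = (y + h / 2) - frame.shape[0] / 2
        print(f"head offset dx={dx:.0f}, dy={dy:.0f}, apparent size={w}x{h}")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()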