I think it would be better if we could shape-blend individual parts of
the face independently. Actually I think Second Life uses a hack like
that for lip-synching.
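Something along these lines, roughly (a minimal sketch in Python; the morph
and region names are invented, not SL's actual set):

    # Sketch: morph (blend shape) weights grouped by face region, so that
    # lip-sync can drive the mouth while the eyes are controlled independently.
    REGIONS = {
        "mouth": ["jaw_open", "lip_pucker"],
        "eyes":  ["blink_left", "blink_right"],
        "brows": ["brow_raise"],
    }

    # One weight per morph, all starting neutral.
    weights = {name: 0.0 for names in REGIONS.values() for name in names}

    def set_region(region, values):
        """Overwrite only the morph weights belonging to one region."""
        for name, value in zip(REGIONS[region], values):
            weights[name] = max(0.0, min(1.0, value))  # clamp to [0, 1]

    set_region("mouth", [0.7, 0.2])  # viseme from the lip-sync analyser
    set_region("eyes",  [1.0, 0.0])  # wink, unaffected by what the mouth does
    print(weights)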

2008/9/25 Philippe Bossut <[EMAIL PROTECTED]>:
> Hi Carl,
> On Sep 24, 2008, at 12:28 PM, Carl Kenner wrote:
>
> I disagree with the call for recognition+trigger. I don't really want
> tracking of 3D points, but rather tracking of whether each eye is open or
> closed, how much your eyebrows are raised, where your eyes are
> looking, how much you are smiling, that sort of thing. I don't think
> there is all that much potential for incomprehensible poses. I want
> people to be able to keep one eye closed, for example, rather than just
> triggering a wink, and to be able to smile a certain amount for a
> certain length of time, rather than just triggering a smile gesture.
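> To make the distinction concrete, here is the kind of continuous state
> I mean, as opposed to one-shot gesture triggers (a sketch; the field
> names are mine, not any existing protocol):
>
>     from dataclasses import dataclass
>
>     @dataclass
>     class FaceState:
>         # Continuous values sampled every frame, not one-shot triggers.
>         left_eye_open: float = 1.0   # 0 = closed, 1 = fully open
>         right_eye_open: float = 1.0
>         brow_raise: float = 0.0      # 0 = neutral, 1 = fully raised
>         gaze_x: float = 0.0          # -1 = far left, +1 = far right
>         gaze_y: float = 0.0          # -1 = down, +1 = up
>         smile: float = 0.0           # 0 = neutral, 1 = broad smile
>
>     # Keeping one eye closed is a sustained state, not a "wink" event:
>     state = FaceState(left_eye_open=0.0, smile=0.4)
>     print(state)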
>
> You could be right. Maybe I'd be happily surprised by the robustness of
> face tracking methods. I should also mention that I envisioned
> "recognition+trigger" because it would be a solution requiring no
> change on the server side. Your point about the length of the gestures,
> though, is well taken.
>
> I vote for default motions too; I assume they just haven't been added
> yet because they require decisions about which parts of the face
> are controllable.
>
> Jani was talking about work done on the bone structure of faces.
> One could also use shape blending for face anims (as SL does, I think). That's
> good for canned anims but not for free-form animation (or not easily, that is:
> one could imagine reducing the anim to a list of references to known preloaded
> meshes with an appropriate weight for each). I've been discussing this with
> some folks who know a lot about facial animation; they prefer shape-blending
> approaches because they produce more pleasing results, and they claim that a
> set of 60 meshes would cover all the emotional needs. That's debatable.
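> (To spell out the mechanics: the blended result is the base mesh plus a
> weighted sum of per-vertex offsets, so a free-form frame reduces to exactly
> that list of (mesh reference, weight) pairs. A toy sketch with made-up data:)
>
>     # final vertex = base vertex + sum(weight_i * offset_i)
>     base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]          # base mesh vertices
>     targets = {                                        # per-vertex offsets
>         "smile": [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)],
>         "jaw":   [(0.0, -0.2, 0.0), (0.0, -0.2, 0.0)],
>     }
>
>     def blend(weights):
>         return [tuple(v[a] + sum(w * targets[name][i][a]
>                                  for name, w in weights.items())
>                       for a in range(3))
>                 for i, v in enumerate(base)]
>
>     # A streamed free-form frame is just this dict of (mesh ref -> weight):
>     print(blend({"smile": 0.5, "jaw": 0.25}))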
> One question for Jani and Tomi though: I understand shape blending's
> shortcomings compared to skeletal animation (we don't want that) but, is
> shape blending something under consideration for rex avatars' faces?
> Cheers,
> - Philippe
>
> 2008/9/25 Philippe Bossut <[EMAIL PROTECTED]>:
>
> Hi guys,
> Thanks for all the answers. This is very interesting. There are
> actually two dimensions to this discussion:
> 1- mocap (motion capture) or emotion capture: It's interesting to see
> that you thought about mocap from the get-go. As you know (at least
> the guys I met in LA at VWExpo :) ), I have a vested interest in making
> something like that work (see my work at http://www.handsfree3d.com/)
> and it's only fitting that Jani challenges me to "just do it" :) We're
> trying to bring a much livelier experience to VWs and are considering
> moving our work to rex because of its superior avatar model compared
> to SL. Porting the puppeteering is one thing, but the face animation is
> quite another. One idea I had was to "recognize and trigger" emotions
> rather than do pure mocap (for the face, that is; the body is another
> problem). The reason behind this is that precise tracking of all the
> elements of a face is rather difficult, and bad tracking of just a
> couple of points could result in very unpleasing and incomprehensible
> poses. That's okay when animating a robot or a snowman but, if you use
> your own FaceGen mesh with your own face, it could be very unpleasant
> (you don't want your avatar's face to look contorted...). On the other
> hand, existing emotion anims (like the ones in SL) are perfectly
> understandable by others seeing them. Since the goal is to communicate
> emotion in a meaningful and pleasing way, it seems that
> "recognition+trigger" rather than "tracking" would be more efficient in
> that case. There are techniques developed to do just that (recognition,
> I mean) which seem to work well under a wide range of capture
> conditions (see the talk "The Human Face" at
> http://www.photomarketing.com/6sightliveUpload/Default/day1/day1_video_11.html).
> Also, thanks to Carl for suggesting some other pointers.
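> (Here's a rough sketch of the trigger logic I have in mind; the
> recognizer itself is a black box here, and the threshold is an
> arbitrary placeholder:)
>
>     import random
>
>     EMOTION_ANIMS = {"smile": "anim_smile", "frown": "anim_frown"}
>     THRESHOLD = 0.8   # below this confidence, leave the face alone
>
>     def recognize(frame):
>         """Stand-in for a real recognizer: returns (label, confidence)."""
>         return random.choice(list(EMOTION_ANIMS)), random.random()
>
>     def update(frame, current_anim):
>         label, confidence = recognize(frame)
>         # Only a confident match triggers a canned anim; a noisy tracking
>         # result never reaches the face, so it can't contort it.
>         if confidence >= THRESHOLD:
>             return EMOTION_ANIMS[label]
>         return current_anim
>
>     anim = None
>     for frame in range(30):          # stand-in for webcam frames
>         anim = update(frame, anim)
>     print("current anim:", anim)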
> 2- default motion: even if we're successful in implementing webcam
> emotion capture, we (as a group) need to recognize that not everybody
> will have such a camera and, even then, not everybody will *want*
> to have it plugged in while in a VW (sometimes it's appropriate,
> sometimes it's not). This is where a good "default motion" is
> important, as it improves the copresence feel a great deal. The anims
> currently available in SL are rather pleasing and I'd encourage the
> rex community to implement something similar. The eyes in particular
> are very important for this feeling of copresence to set in. For more
> on "copresence", check out the work of Jeremy Bailenson at
> http://vhil.stanford.edu/. Read in particular "The Independent and
> Interactive Effects of Embodied-Agent Appearance and Behavior on
> Self-Report, Cognitive, and Behavioral Markers of Copresence in
> Immersive Virtual Environments".
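> (Even a loop as simple as this per-frame idle update helps a lot; the
> timing constants below are just plausible guesses:)
>
>     import random
>
>     class IdleFace:
>         """Default motion: blinks at random intervals plus small saccades."""
>         def __init__(self):
>             self.to_blink = random.uniform(2.0, 6.0)   # seconds to next blink
>             self.to_saccade = random.uniform(0.5, 3.0)
>
>         def update(self, dt, avatar):
>             self.to_blink -= dt
>             if self.to_blink <= 0.0:
>                 avatar.play("blink")                   # short canned anim
>                 self.to_blink = random.uniform(2.0, 6.0)
>             self.to_saccade -= dt
>             if self.to_saccade <= 0.0:
>                 avatar.look(random.uniform(-0.3, 0.3),
>                             random.uniform(-0.2, 0.2))
>                 self.to_saccade = random.uniform(0.5, 3.0)
>
>     class FakeAvatar:                                  # stub for the demo
>         def play(self, name): print("play", name)
>         def look(self, x, y): print("look", round(x, 2), round(y, 2))
>
>     idle, avatar = IdleFace(), FakeAvatar()
>     for _ in range(300):                               # ~10 s at 30 fps
>         idle.update(1.0 / 30.0, avatar)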
> Cheers,
> - Philippe
> On Sep 24, 2008, at 9:26 AM, Antti Ilomäki wrote:
>
>
> Having your actual facial expressions modeled in virtual reality
> would be great, but in the meantime simply having the avatar do some
> simple stuff by itself would probably be a cost-effective immersion
> booster.
> 2008/9/24 Carl Kenner <[EMAIL PROTECTED]>:
>
> The Emotiv Epoc can record the user's face movements. It can record
> eyebrow position, eyelid position, horizontal eye position, how much
> they are smiling, whether they are clenching their teeth, how much they
> are smirking to the left or right side, and whether they are laughing.
> On the other hand, the Neural Impulse Actuator can only record
> horizontal eye position, and it records that badly.
> So if you can make sure those control points are implemented, people
> with an Emotiv Epoc will be able to use it.
> I'm not sure how best to do it with a webcam though.
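> (One way to keep this device-agnostic: agree on the set of control
> points once and let each device fill in whatever subset it can, with
> everything else falling back to default motion. This is a hypothetical
> interface, not the actual Emotiv SDK:)
>
>     CONTROL_POINTS = [
>         "brow_raise", "left_eyelid", "right_eyelid", "gaze_x",
>         "smile", "clench", "smirk_left", "smirk_right", "laugh",
>     ]
>
>     class EpocSource:
>         # A real implementation would read these from the Emotiv SDK.
>         supported = set(CONTROL_POINTS)
>         def read(self):
>             return {name: 0.0 for name in self.supported}
>
>     class NIASource:
>         supported = {"gaze_x"}   # the NIA only gives (noisy) horizontal gaze
>         def read(self):
>             return {"gaze_x": 0.0}
>
>     def apply(source, params):
>         params.update(source.read())   # device-driven points win;
>         return params                  # the rest keep their idle values
>
>     print(apply(NIASource(), {name: 0.0 for name in CONTROL_POINTS}))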
> 2008/9/24 Jani Pirkola <[EMAIL PROTECTED]>:
>
> Hi Philippe,
> we are exploring possibilities for animating faces. Currently the
> basic woman has bones inside her face, so she can be animated; the man
> does not, because we first want to figure out what the bone structure
> should be to make it work well. Our original plan was to get that
> working during the spring, but we had to postpone that work. The idea
> at the time was to integrate a web camera, record the user's face
> movements and overlay them onto the avatar using control points.
> There seem to be only proprietary solutions for that, so our GPL
> license for the viewer does not help either.
> We do not have exact plans yet for when, and by whom, that work will
> be done, but it is like you said: it greatly enhances the feeling of
> presence and needs to be done at some point. Help would be appreciated!
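> For what it's worth, the overlay step could be as simple as mapping
> each normalized control point onto a bone's range of motion (a sketch;
> the bone names and ranges below are placeholders):
>
>     # Map a normalized control point value (0..1) onto a face bone's
>     # translation range via linear interpolation.
>     BONE_RANGES = {
>         "jaw":    ("jaw_open",    (0.0, -2.5)),   # cm along the open axis
>         "brow_l": ("brow_raise",  (0.0,  0.8)),
>         "lid_l":  ("left_eyelid", (0.0, -0.4)),
>     }
>
>     def pose_bones(control_points):
>         pose = {}
>         for bone, (point, (lo, hi)) in BONE_RANGES.items():
>             t = max(0.0, min(1.0, control_points.get(point, 0.0)))
>             pose[bone] = lo + t * (hi - lo)
>         return pose
>
>     # e.g. the webcam tracker reports jaw half open, brows slightly raised:
>     print(pose_bones({"jaw_open": 0.5, "brow_raise": 0.2}))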
> Best regards,
> Jani
> 2008/9/24 Philippe Bossut <[EMAIL PROTECTED]>
>
> Hi,
> I have a generic question about rex avatars (I tried the same one on
> IRC but no one answered, so I'm trying here). I notice that, contrary
> to SL's avatars, rex's avatars do not "blink". Actually, they neither
> blink nor move their eyeballs nor move their heads about. This gives
> the avatar very little "copresence" (no feeling of "being there with
> someone else"). Any idea why it is that way? Was there a conscious
> decision to leave that out of the rexviewer?
> On the other hand, I noticed that anims like "breathing" are on...
> Cheers,
> - Philippe
