As I said, you can extract the high-level phoneme info as function
curves and put them on any parameter anywhere you choose...

# si is the Softimage Application object
targetNames = ["open", "W", "ShCh", "PBM", "FV", "wide", "tBack", "tRoof",
               "tTeeth", "IY"]

# arguments for the FaceFX commands
groupName = "Default"
animName = "AudioTrack1"

for targetName in targetNames:
    keyTimes = si.FaceFXGetBakedCurveKeyTimes(groupName, animName, targetName)
    keyValues = si.FaceFXGetBakedCurveKeyValues(groupName, animName, targetName)
    keySlopeIn = si.FaceFXGetBakedCurveKeySlopeIn(groupName, animName, targetName)
    keySlopeOut = si.FaceFXGetBakedCurveKeySlopeOut(groupName, animName, targetName)
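Once you have the times, values, and slopes, you can sample the baked curve at whatever rate you want and key the result onto any parameter. A minimal sketch of the sampling step, assuming the per-key slope-in/slope-out values describe cubic Hermite segments (that interpretation is my assumption, not something the FaceFX docs state here):

```python
def evaluate_curve(keyTimes, keyValues, slopeIn, slopeOut, t):
    """Sample a baked curve at time t using cubic Hermite segments."""
    # clamp outside the keyed range
    if t <= keyTimes[0]:
        return keyValues[0]
    if t >= keyTimes[-1]:
        return keyValues[-1]
    # find the segment containing t and evaluate the Hermite basis
    for i in range(len(keyTimes) - 1):
        t0, t1 = keyTimes[i], keyTimes[i + 1]
        if t0 <= t <= t1:
            dt = t1 - t0
            s = (t - t0) / dt
            h00 = 2*s**3 - 3*s**2 + 1
            h10 = s**3 - 2*s**2 + s
            h01 = -2*s**3 + 3*s**2
            h11 = s**3 - s**2
            return (h00 * keyValues[i] + h10 * dt * slopeOut[i]
                    + h01 * keyValues[i + 1] + h11 * dt * slopeIn[i + 1])
```

From there you could, for example, sample at every frame and key a custom parameter on your own rig with si.SaveKey, so the lip sync solve lives wherever you want it to.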

Let me say this... for me it came down to the quality of the auto lip
sync. Voice-O-Matic failed consistently on some phonemes, whereas FaceFX
worked a lot better, and its high-level phoneme editor (like the lip sync
view in Softimage/Face Robot) was easy to use. So flexibility means
nothing if the solve is crap.

I believe both FaceFX and Softimage use 'Fonix' (http://www.speechfxinc.com/),
so Mirko's suggestion to use Face Robot could give you similar results...
For us, I didn't have the time or the desire to rig the character in Face
Robot, but you might be able to use one of the default characters just to
get the auto lip sync results out.

s

On Wed, Feb 12, 2014 at 1:07 PM, Tim Leydecker <bauero...@gmx.de> wrote:

>
> The reason why I'm still leaning towards voice-o-matic is pretty nicely
> summed up in this example of storing keyframe data on "proxies":
>
>
