I have a face that uses computeSpectrum to sync a mouth with dynamic,
vocal-only MP3s. It works, but it looks like a robot mouth: the jaw just
opens by an amount proportional to volume.

I'm trying to get rough vowel approximations so I can fire events to update
the mouth UI. Does anyone have an algorithm that can get close-enough
readings from the audio to detect vowels? Anything beyond raw volume to
drive the mouth shape would go miles toward making the face look more
realistic.

Thanks for any insights.

Eric
_______________________________________________
Flashcoders mailing list
Flashcoders@chattyfig.figleaf.com
http://chattyfig.figleaf.com/mailman/listinfo/flashcoders
