Eric E. Dolecki wrote:
I have a face that uses computeSpectrum in order to sync a mouth with
dynamic vocal-only MP3s... it works, but it behaves much like a robot mouth:
the jaw just opens by an amount based on volume.

I am trying to somehow get vowel approximations so that I can fire off some
events to update the mouth UI. Does anyone have any kind of algo that can
get close-enough readings from the audio to detect vowels? Anything I
can do besides random adjustments to the mouth shape will go miles toward
making my face look more realistic.


You really just need to collect profiles to match against. Record people saying the target sounds, build reference spectra from those recordings, and compare them against the live spectrum data. When a profile matches, you know which vowel the vocal is hitting.
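As a rough sketch of that profile-matching idea (in Python rather than ActionScript, purely for illustration): store one reference spectrum per vowel, then classify each live computeSpectrum-style frame by its nearest stored profile. The vowel names, bin counts, and toy spectra below are invented assumptions, not data from any real recording.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length spectrum frames.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def classify_frame(frame, profiles):
    # Return the vowel whose stored reference spectrum best matches
    # this live frame (nearest-profile match by cosine similarity).
    return max(profiles, key=lambda vowel: cosine_similarity(frame, profiles[vowel]))

# Toy 8-bin "profiles": back vowels like "oo" concentrate energy in low
# bins; front vowels like "ee" carry more high-frequency energy.
profiles = {
    "oo": [0.9, 0.8, 0.4, 0.2, 0.1, 0.1, 0.0, 0.0],
    "ee": [0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 0.9, 0.7],
}

# A live frame with mostly low-frequency energy matches "oo".
live = [0.8, 0.7, 0.5, 0.2, 0.1, 0.0, 0.0, 0.1]
print(classify_frame(live, profiles))  # → oo
```

In Flash you would build the profiles by averaging computeSpectrum frames while someone holds each vowel, then run the same nearest-match comparison per frame and dispatch a mouth-shape event when the best match changes. Averaging a few frames before classifying also keeps the mouth from flickering.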
_______________________________________________
Flashcoders mailing list
Flashcoders@chattyfig.figleaf.com
http://chattyfig.figleaf.com/mailman/listinfo/flashcoders
