It's using dynamic text-to-speech, so I wouldn't be able to use cue points
reliably.
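
For reference, the volume-driven jaw mentioned in my original message below
boils down to roughly this (just a sketch - the jawClip instance and the
scaling factor are placeholders, not the real code):

    import flash.events.Event;
    import flash.media.SoundMixer;
    import flash.utils.ByteArray;

    var spectrum:ByteArray = new ByteArray();

    addEventListener(Event.ENTER_FRAME, animateJaw);

    function animateJaw(e:Event):void {
        // FFT mode: 512 floats (256 left + 256 right channel), each 0..1,
        // covering roughly 0 - 11.025 kHz per channel.
        SoundMixer.computeSpectrum(spectrum, true, 0);

        var sum:Number = 0;
        for (var i:int = 0; i < 256; i++) {
            sum += spectrum.readFloat(); // left channel only
        }
        var loudness:Number = sum / 256; // crude overall volume, 0..1

        // "jawClip" is assumed to be a MovieClip instance on the stage;
        // the mapping to rotation is a placeholder.
        jawClip.rotation = loudness * 30;
    }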

On Thu, Jun 3, 2010 at 4:09 AM, Glen Pike <g...@engineeredarts.co.uk> wrote:

> If your MP3s are pre-recorded rather than generated dynamically, could you
> use cue points?
>
>
> On 02/06/2010 20:57, Eric E. Dolecki wrote:
>
>> I have a face that uses computeSpectrum to sync a mouth with dynamic
>> vocal-only MP3s... it works, but it behaves much like a robot mouth: the
>> jaw opens by an amount based on volume.
>>
>> I am trying to get vowel approximations so that I can fire off events to
>> update the mouth UI. Does anyone have any kind of algorithm that can get
>> close enough readings from the audio to detect vowels? Anything I can do
>> beyond random adjustments to the mouth shape will go miles toward making
>> my face look more realistic.
>>
>> Thanks for any insights.
>>
>> Eric
>
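
The kind of thing I'm imagining is splitting the same FFT data into a few
frequency bands and using the energy ratios as a coarse vowel guess (open
vowels like "ah" seem to carry more low-mid energy, closed vowels like "ee"
relatively more high-frequency energy). Just a rough sketch - the band edges
and thresholds below are guesses that would need tuning by ear against the
actual TTS voice:

    import flash.media.SoundMixer;
    import flash.utils.ByteArray;

    // Each of the 256 left-channel FFT bins spans roughly 43 Hz (11025 / 256).
    function guessMouthShape():String {
        var spectrum:ByteArray = new ByteArray();
        SoundMixer.computeSpectrum(spectrum, true, 0);

        var low:Number = 0;   // bins 0-15,   roughly 0 - 700 Hz
        var mid:Number = 0;   // bins 16-59,  roughly 700 - 2600 Hz
        var high:Number = 0;  // bins 60-255, roughly 2600 - 11000 Hz
        for (var i:int = 0; i < 256; i++) {
            var v:Number = spectrum.readFloat();
            if (i < 16)      low  += v;
            else if (i < 60) mid  += v;
            else             high += v;
        }

        var total:Number = low + mid + high;
        if (total < 0.5)        return "closed"; // near silence - threshold is a guess
        if (high / total > 0.4) return "ee";     // high band dominant - guess
        if (low / total > 0.5)  return "oo";     // energy concentrated low - guess
        return "ah";                             // broad low-mid energy
    }

Smoothing the guess over a few frames (only switching shapes when the same
value comes back two or three frames in a row) should keep the mouth from
flickering.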



-- 
http://ericd.net
Interactive design and development
_______________________________________________
Flashcoders mailing list
Flashcoders@chattyfig.figleaf.com
http://chattyfig.figleaf.com/mailman/listinfo/flashcoders
