Sure, let synthesizers handle ASCII text, but give them the textual
        pronunciation of Unicode characters, such as "smiling face".
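
As a minimal sketch of that idea (assuming Python and its standard
unicodedata module; the function name is mine, not from any screen
reader), an intermediate layer could pass ASCII straight through and
hand the synthesizer the Unicode character name as plain text:

```python
import unicodedata

def spoken_name(ch):
    """Return a speakable form of a character: ASCII is passed
    through for the synthesizer to pronounce itself; anything else
    is replaced by its Unicode character name as text."""
    if ch.isascii():
        return ch  # e.g. "~" stays "~", so the synth decides how to say it
    return unicodedata.name(ch, "unknown character").lower()

print(spoken_name("~"))    # ~
print(spoken_name("\U0001F642"))  # slightly smiling face
```

This way the synthesizer keeps full control over how symbols like the
tilde are pronounced, and the layer above only supplies readable text
for characters the synthesizer may not know.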
-- 
Sent from Discordia using Gnus for Emacs.
Email: r.d.t.pra...@gmail.com
Long days and pleasant nights!

Linux for blind general discussion <blinux-list@redhat.com> writes:

> I would argue that the pronunciation of symbols should most certainly
> be handled by the synthesizer rather than any intermediate layer.
> Letting the intermediate layers handle symbol pronunciation will only
> cause lots of problems similar to the "tiflda" problem we have in
> Speakup to this day. Most synthesizers have no trouble pronouncing the
> ~ (tilde) character, but they all get it horribly wrong using Speakup,
> because the pronunciation is hard coded in Speakup itself, and is
> quite wrong for most speech synthesizers.
> ~Kyle
>
> _______________________________________________
> Blinux-list mailing list
> Blinux-list@redhat.com
> https://www.redhat.com/mailman/listinfo/blinux-list

