Re: Build a speech generating device

2011-07-31 Thread Patrick Welche
On Sat, Jul 30, 2011 at 02:22:59PM +0200, Frederik Elwert wrote:
   * Dasher, as a completely different approach. It might be a good
 replacement for a regular virtual keyboard once mobility
 decreases to a level where a regular keyboard is hard to handle.
 But it seems not to be very well maintained, it’s quite unstable,
 and I did not manage to get all of its functionality working

I'm sorry to hear this - please let me know what problems you are having...

We have already put together dasher running on Android, writing into
talkadroid, to provide mobile speech generation (both are available
from the Android market place).

The most recent dasher in the git repository
(git clone git://git.gnome.org/dasher) will use speechdispatcher or
gnome speech if it is installed on your system, and in combination
with control mode will speak what you write.
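
Roughly, this is what that looks like from a client's side, assuming the
Python bindings shipped in the python-speechd package are installed (a
sketch of the speech-dispatcher client API, not dasher's actual code):

    # Sketch: hand a string to speech-dispatcher via its Python bindings
    # (python-speechd). Not taken from dasher; just the client-side idea.
    import speechd

    client = speechd.SSIPClient('sgd-sketch')  # connection name is arbitrary
    client.set_language('de')                  # ask for a German voice
    client.speak('Hallo Welt')                 # queue the text for speaking
    client.close()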

Best wishes,

Patrick
(dasher maintainer)



Re: Build a speech generating device

2011-07-30 Thread Frederik Elwert
Dear Patrick,

On Saturday, 30.07.2011, at 14:59 +0100, Patrick Welche wrote:
 On Sat, Jul 30, 2011 at 02:22:59PM +0200, Frederik Elwert wrote:
   * Dasher, as a completely different approach. It might be a good
  replacement for a regular virtual keyboard once mobility
  decreases to a level where a regular keyboard is hard to handle.
  But it seems not to be very well maintained, it’s quite unstable,
  and I did not manage to get all of its functionality working
 
 I'm sorry to hear this - please let me know what problems you are having...

With the version in the Ubuntu repository I couldn’t get speech output
working, and the direct mode was unreliable (only a few of the
characters were actually passed to the target application).

 We have already put together dasher running on Android, writing into
 talkadroid, to provide mobile speech generation (both are available
 from the Android market place).
 
 The most recent dasher in the git repository
 (git clone git://git.gnome.org/dasher) will use speechdispatcher or
 gnome speech if it is installed on your system, and in combination
 with control mode will speak what you write.

Okay, that sounds interesting. I’ll try out the latest code, maybe it
already solves the issues I had. Otherwise, I’ll report back.

In the documentation for dasher[1], it says that it is also possible to
speak each word or to speak on stop, not just in control mode. But I
didn’t find any way to configure that. (Since control mode already
failed, I didn’t investigate further.)

I also just found a blog article that describes how to set up OpenMary
as a speechdispatcher module.[2] That would probably make it easy to
integrate dasher with OpenMary.
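
Once such a module is registered, any speech-dispatcher client should be
able to route text through it just by selecting the output module. A small
sketch, assuming python-speechd is installed and that the article's
configuration registers the module under the name 'openmary' (that name is
only an assumption):

    # Sketch: select an OpenMary output module in speech-dispatcher.
    # The module name "openmary" is an assumption; use whatever name the
    # configuration from the blog article actually registers.
    import speechd

    client = speechd.SSIPClient('sgd-sketch')
    client.set_output_module('openmary')  # route through the OpenMary module
    client.set_language('de')
    client.speak('Guten Tag')
    client.close()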

On the other hand, if I write a speech synthesis frontend for normal
keyboard use anyway, I might also just use dasher for text input and
leave the rest to that application. I’ll see what works best.

Thanks,
Frederik


[1] http://library.gnome.org/users/dasher/unstable/reallife.html.en
[2] http://www.theopensourcerer.com/2011/05/05/speak-to-me/




Re: Build a speech generating device

2011-07-30 Thread Frederik Elwert
Hi Justin,

(I’m cc’ing the lists, as this might be valuable information for others.
Hope that’s okay.)

On Saturday, 30.07.2011, at 09:50 -0400, Justin Duperre wrote:
 Hi Frederik - I briefly worked on GNOME Caribou for a senior project
 in college. I am not sure of the state of presage integration, but
 what I can tell you is that a lot of work has been done on Caribou
 lately. In the past six months there have been major contributions to
 the code. It would definitely be a good choice for your project as the
 team is very active.

Yes, I also saw that Caribou got a lot of attention in the course of
GNOME 3. Since I couldn’t find an official release, I was just
wondering how far it is from completion, so that one can actually
start using it, and whether word prediction was just an experiment or
part of the recent development efforts.

 I have also used Dasher, and I agree with everything you said about
 it.

I think both cover slightly different usage scenarios, so it’d be great
to have both available.

 Are you planning on making this a public open source project?

Currently, I’m only doing a bit of research. My primary aim is to use
existing and stable software. But the parts I might end up writing
myself will be open source. I’m thinking primarily of an improved
gespeaker or a new speechd frontend. (Having OpenMary support in
speechd will probably make things much easier.)

Besides actually writing code, I am planning to document the project, so
that others can benefit from my findings and experience.

Regards,
Frederik




Re: Build a speech generating device

2011-07-30 Thread Alan Bell

On 30/07/11 15:39, Frederik Elwert wrote:

 I also just found a blog article that describes how to set up OpenMary
 as a speechdispatcher module.[2] That would probably make it easy to
 integrate dasher with OpenMary.

I wrote that article; give me a shout or find me in
#ubuntu-accessibility on freenode if you want any help setting it up.


Alan.

--
The Open Learning Centre is rebranding; find out about our new name and look at
http://libertus.co.uk




Re: Build a speech generating device

2011-07-30 Thread Hugh Sasse

On Sat, 30 Jul 2011, Frederik Elwert wrote:

 there are only a few speech generating devices (SGDs) available on the
 market, and those are as limited as they are expensive, I plan to build
 a custom SGD using a tablet computer as a basis and applying available

I don't know about the hardware.  I knew people who used Texas
Instruments chips, but I don't expect they'd include all the German
phonemes.

 The primary components I identified to be necessary are
 
   * a virtual keyboard with word prediction 

OK, the only one I know of is Dasher, which you have found.  
The Inference Group have a thing called Tapir, which is designed for
on-screen text entry, like texting.  I don't think it will do
all the symbols on the keyboard, but it is at
http://www.inference.phy.cam.ac.uk/tapir/
It didn't seem to be a quick way of getting text in, but for people
with low mobility it may have some use.

In the book Beautiful Code [Andy Oram, Greg Wison, O'Reilly, 2007,
ISBN:9780596510046]  Chapter 30 When a button is all that connects
you to the world discusses the software used by Professor Hawking.
It claims the download is at
http://holisticit.com/eLocutor/elocutorv3.htm 
although I can find nothing useful there.  The search engines take
me to
http://hawking.sourceforge.net/
and it appears that the download is available as an executable or a
Zip file, so I suspect it is Windows only.

For prediction there is also Presage 
http://presage.sourceforge.net/
which is really a library, so it could be attached to something else.
It does have some wxPython demos, which I can't get working [on
Cygwin], though your experience on Linux could well be better.
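
In principle the library can also be driven directly from Python without
the demos. A rough sketch, assuming the SWIG-generated bindings mirror the
C++ API (the class and method names below are from memory and may need
adjusting):

    # Rough sketch of word prediction with presage's Python bindings,
    # assuming they mirror the C++ API (PresageCallback, Presage.predict);
    # exact names may differ.
    import presage

    class BufferCallback(presage.PresageCallback):
        """Feeds presage the text typed so far."""
        def __init__(self, text):
            presage.PresageCallback.__init__(self)
            self.text = text

        def get_past_stream(self):
            return self.text   # text to the left of the cursor

        def get_future_stream(self):
            return ''          # nothing to the right in this sketch

    callback = BufferCallback('The quick brown f')
    engine = presage.Presage(callback)
    print(engine.predict())    # a list of suggested completions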

Way back, there used to be a program called reactivekbd, which was
a predictive text entry system that could be used from the shell.

It seems to be here:
http://ftp.sunet.se/pub/usenet/ftp.uu.net/comp.sources.unix/volume20/reactivekbd/
I had that working under Sunos 4.mumble, but have not retried recently.

The dasher project does have the Tcl/Tk dasher, which may still be
useful if you can't get the rest to build and work, but that ought to
be reasonably easy to connect to the HTTP interface of OpenMary.
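
For reference, talking to that HTTP interface is just a request against
the MARY server's /process resource. A sketch, assuming a default OpenMary
server listening on localhost:59125 and the MARY 4.x parameter names
(Python 2 style urllib, with aplay for playback):

    # Sketch: request German speech from a local OpenMary (MARY TTS) server
    # over HTTP and play the returned WAV with ALSA's aplay.
    # Assumes the default server address localhost:59125.
    import subprocess
    import urllib
    import urllib2

    params = urllib.urlencode({
        'INPUT_TEXT': 'Guten Tag, wie geht es Ihnen?',
        'INPUT_TYPE': 'TEXT',
        'OUTPUT_TYPE': 'AUDIO',
        'AUDIO': 'WAVE',
        'LOCALE': 'de',
    })
    wav = urllib2.urlopen('http://localhost:59125/process?' + params).read()

    with open('/tmp/mary-output.wav', 'wb') as out:
        out.write(wav)
    subprocess.call(['aplay', '/tmp/mary-output.wav'])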

   * pre-defined text snippets 
   * a speech synthesizer backend (for German language output) 

I think espeak supports German, but I'm not in a position to
comment on the quality.
http://espeak.sourceforge.net/
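
A trivial way to try it out is to call the espeak command with its German
voice (-v de), e.g. from Python:

    # Sketch: have the espeak command speak a German sentence with its
    # German voice.
    import subprocess

    subprocess.call(['espeak', '-v', 'de', 'Guten Morgen, wie geht es dir?'])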

   * a frontend to the speech synthesizer

Both OpenMary and Espeak have front ends you can type into.  The
OpenMary example client is in Java, and there is a Ruby one and a Python
one in the repository now.  They will need more work for non-Windows
platforms: there are lots of choices for sound on Linux.

 
 For the speech synthesizer, I currently plan to use OpenMary[1], since
 its output quality is significantly better than espeak?s, even with
 mbrola voices.

I think they are dropping mbrola voices because they need a non-Java
backend for it, and they mostly have prosody working now.
See Msg Id: 4da410ad.8040...@dfki.de posted to Mary-users on 12 APR 2011.

 
 For the speech synthesizer frontend, I plan to either adapt gespeaker,

I don't know about gespeaker, so I searched, and found this:
http://alternativeto.net/software/gespeaker/
thereby finding Kmouth
http://www.schmi-dt.de/kmouth/index.en.html
which claims to have word completion and a phrase book, as well as
history.

[The rest trimmed]

Hope some of that helps,
Hugh



Re: Build a speech generating device

2011-07-30 Thread Hugh Sasse

On Sat, 30 Jul 2011, Hugh Sasse wrote:

 
 In the book Beautiful Code [Andy Oram, Greg Wison, O'Reilly, 2007,

s/Wison/Wilson/

 ISBN:9780596510046]  Chapter 30 When a button is all that connects

Hugh

-- 
Ubuntu-accessibility mailing list
Ubuntu-accessibility@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility