Speech Dispatcher moves to GitHub under a new maintainer

2018-01-23 Thread Hynek Hanke

Dear all,

we are pleased to let you know that Speech Dispatcher, a common high-level 
interface to speech synthesis, has now been moved to a new repository on 
GitHub. This brings easier code review, a smoother contribution process and 
convenient issue tracking on a platform already well known to many. We hope 
this will make it easier for others to contribute to its development, and we 
invite everyone interested to cooperate.

The new repository is:
https://github.com/brailcom/speechd

Bugs can be reported and suggestions can be provided at:
https://github.com/brailcom/speechd/issues

Also speechd-el, the Emacs speech and Braille output interface, as well as a 
few other projects related to Speech Dispatcher, were moved to GitHub:
https://github.com/brailcom/

As some of you already know, the Speech Dispatcher project has also recently 
changed its maintainer. The new leader of the project is now Samuel Thibault. 
We would like to use this opportunity to repeat our many thanks to Luke 
Yelavich, who previously served in this important role, and to wish Samuel and 
the Speech Dispatcher project all the best for the future.

Hynek Hanke & the team from BRAILCOM, o.p.s.
Samuel Thibault & Hypra


-- 
Ubuntu-accessibility mailing list
Ubuntu-accessibility@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility


Re: It is time for me to depart.

2017-11-20 Thread Hynek Hanke
Dear Luke, Dear Samuel, Dear All,

We in BRAILCOM are very thankful for the good work that Luke Yelavich has
put into the Speech Dispatcher project, contributing to its development
as well as guiding and helping other contributors and putting their
improvements together to produce solid and stable releases. Luke, we thank
you very much for all of this, it was not a small thing, and we wish you
lots of luck with whatever projects or activities you pursue in the future.
At the same time, if you ever wish to contribute again to the collective
effort behind the development of Speech Dispatcher, you are warmly welcome,
do not doubt it.

Samuel Thibault stepped forward and kindly offered help with maintaining
the Speech Dispatcher project in the future. We gladly accept this offer
as Samuel is also a long-term contributor to the project, knowledgeable
not only about its technical implementation but also about its design,
intents and philosophy. To you too, Samuel, we extend our thanks,
and we wish that this work brings you joy.

We understand that the time that Samuel can spend on the project
is limited and we hope others will continue helping with their
contributions and improvements which are indispensable for this
collaborative project.

We still think that Speech Dispatcher is very relevant, perhaps even more
than before. As both speech synthesis solutions and assistive technologies
such as screen readers continue improving, user interactions with computers
are getting more sophisticated, and the need for effective coordination of
assistive messages is becoming ever more important. The same is true for an
easy and effective high-level interface between ATs and speech.

The last thing I want to mention: we think it would be fitting
at this point to move the project codebase to GitHub, as it would make
it technically easier to contribute and to track issues, especially for
new people. We will prepare this transition and coordinate it
with Samuel.

Kind regards to you all,

Hynek & the team at BRAILCOM

-- 
Mgr. Hynek Hanke | BRAILCOM, o.p.s.
ha...@brailcom.org | http://www.brailcom.org



> On 8. 11. 2017 at 2:46, Luke Yelavich wrote:
> 
> It is with an extremely heavy heart that I write to you all to announce my 
> departure from free and open source software development. GNU/Linux and free 
> and open source software development has been a part of my life for well over 
> a decade, some high points being my employment at Canonical for over 9 years, 
> and the opportunity to maintain a free software project, Speech Dispatcher.
> 
> I care very deeply about GNU/Linux accessibility, and free and open source 
> software. I strongly believe that the philosophy behind free software is key 
> to a better future for this world. However, I have lacked motivation of late, 
> and the current state of accessibility on GNU/Linux, as well as the lack of 
> funding for it, has not helped. I also would like to spend more time on other 
> talents I have, which have been neglected somewhat until recently, and are 
> more likely to bring in a source of income in the future.
> 
> I am sure I will return one day, with renewed motivation, enthusiasm, and a 
> desire to contribute again. I am also sure I will be keeping watch on what 
> transpires in this community, and since I will still be using GNU/Linux, I 
> may still submit a bug fix from time to time for anything that I find 
> particularly annoying.
> 
> I step down from my positions as Vinux lead developer, and as Speech 
> Dispatcher maintainer with pride and joy at what has been achieved. I am 
> sorry that I have not fully helped to realize a renewed Vinux distribution 
> based on Fedora, but I am sure that no matter what direction the Vinux 
> project chooses to go, it will be led well, and received well by the 
> community.
> 
> I will be closing my Patreon campaign. To those who have supported me 
> financially, I thank you deeply. Your support has been much appreciated. You 
> know who you are.
> 
> I am so grateful for the time I have spent in this community. I have learnt 
> much, and have shared knowledge with others, and both the learning and 
> sharing have always been a pleasure and a joy. It has also been a pleasure to 
> talk to, and work with the free software community at large, but I would 
> particularly like to thank a few people.
> 
> To Rob Whyte, leader of the Vinux project, I owe a particularly heartfelt 
> thank you. You have been a rock and confidant when I have needed someone to 
> talk to, as well as someone who I could blow off steam with, when things have 
> been rough. It has been an honour, and a pleasure, to work with, and get to 
> know you. Feel free to contact me any time if you want to chat.
> 
> To everybody at Brailcom, particularly Hynek Hanke, Toma

Speech Dispatcher 0.7 Released

2010-06-16 Thread Hynek Hanke

Speech Dispatcher 0.7 Released


The Brailcom organization is happy to announce the availability of
Speech Dispatcher 0.7 developed as a part of the Free(b)Soft
project. Please read `NOTES' below.

* What is new in 0.7?

  * Speech Dispatcher uses UNIX style sockets as the default means
of communication, thus avoiding the necessity to choose a numeric
port, greatly easing session integration and addressing several
security issues

  * Autospawn -- the server is started automatically when a client
requests it. This can be forbidden in the appropriate server
configuration file (thanks to Luke Yelavich)

  * Pulse Audio output reworked and fixed (thanks to Rui Batista)

  * Dispatcher runs as a user service (not a system service) by default
and doesn't require the prior presence of the ~/.speech-dispatcher
directory

  * Graceful audio fallback (e.g. if pulse is not working, use Alsa...)
(thanks to Luke Yelavich)

  * Various bugfixes and fine-tunings

  * Updated documentation
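
As an illustration of how the autospawn behavior listed above is
controlled, the server configuration file might contain something like
the following sketch (the DisableAutoSpawn option name is our best
recollection of the shipped speechd.conf; please verify against the
file in your installation):

```
# speechd.conf (illustrative sketch)
# Uncommenting this line forbids clients from starting
# the server automatically:
# DisableAutoSpawn
```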

NOTES for packagers: The communication mechanism of Speech Dispatcher
and the way it is started have been substantially reworked in this release.
Some ./configure variables, such as PIDPATH, are no longer relevant. It is
highly recommended to start Speech Dispatcher per user in the user session
and avoid starting it as a system service via /etc/init.d/. Please check
the updated documentation, especially the Technical Specification part.

* Where to get it?

   You can get the distribution tarball of the released version from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.7.tar.gz

   We recommend fetching the sound icons for use with Speech Dispatcher.
   They are available at
   http://www.freebsoft.org/pub/projects/sound-icons/sound-icons-0.1.tar.gz

   Corresponding Debian, Gentoo and Ubuntu packages will soon be available at
   your distribution mirrors.

   The home page of the project is http://www.freebsoft.org/speechd

* What is Speech Dispatcher?

   Speech Dispatcher is a device independent layer for speech
   synthesis, developed with the goal of making the usage of speech
   synthesis easier for application programmers. It takes care of most
   of the tasks that need to be solved in speech-enabled applications. What
   is a very high level GUI library to graphics, Speech Dispatcher is
   to speech synthesis.

   Key Speech Dispatcher features are:

   - Message priority model that allows multiple simultaneous
 connections to Speech Dispatcher from one or more clients
 and tries to provide the user with the most important messages.

   - Different output modules that talk to different synthesizers
 so that the programmer doesn't need to care which particular
 synthesizer is being used. Currently Festival, Flite, Epos, Espeak
 and the non-free DECtalk software and IBM TTS are supported. Festival
 is an advanced Free Software synthesizer supporting various languages.
 Espeak is a very fast multi-lingual synthesizer.

   - Client-based configuration allows users to configure different
 settings for different clients that connect to Speech Dispatcher.

   - Simple interface for programs written in C, C++ provided through a
  shared library. Python, Common Lisp and Guile interfaces. An Elisp
  library is developed as a separate project, speechd-el. Possibly an
  interface to any other language can be developed.
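
Under the hood, all these client interfaces speak SSIP, a simple
line-oriented text protocol. The following is only an illustrative
sketch of how a client might frame requests (the dot-escaping rule is
assumed to follow SSIP's SMTP-like data framing; consult the SSIP
documentation for the authoritative details):

```python
def frame_speak(text: str) -> str:
    """Frame message text for an SSIP SPEAK request.

    SSIP is line-oriented: commands end with CRLF, the message body
    follows SPEAK, and a line containing a single dot terminates it.
    Body lines starting with a dot are escaped by doubling the dot
    (assumed SMTP-like convention).
    """
    lines = []
    for line in text.split("\n"):
        if line.startswith("."):
            line = "." + line
        lines.append(line)
    return "SPEAK\r\n" + "\r\n".join(lines) + "\r\n.\r\n"


def set_priority(priority: str) -> str:
    # SSIP priorities include e.g. important, message, text,
    # notification, progress.
    return f"SET self PRIORITY {priority}\r\n"
```

In a real client these strings would be written to the Speech
Dispatcher socket; the client libraries hide this framing entirely.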

* How to report bugs?

   Please report bugs at . For other
   contact please use 


Best Regards,
Brailcom, o.p.s.
http://www.brailcom.org




Re: [orca-list] OpenTTS 0.1 released.

2010-06-08 Thread Hynek Hanke
On 7.6.2010 18:24, Bill Cox wrote:
> Hynek, as the original author and maintainer, you have as much or more to 
> contribute than anyone
> else.  Why not join the OpenTTS team?

Sorry, but you do not understand. We work as a team. There is
not just me.

Best regards,
Hynek Hanke




Re: OpenTTS 0.1 released.

2010-06-07 Thread Hynek Hanke
On 6.6.2010 05:09, Luke Yelavich wrote:
> I am proud to announce the very first release of OpenTTS, version 0.1.

Dear Luke Yelavich,

It is pure nonsense that the development of OpenTTS
started two months ago. You know very well that it is a fork
of the Speech Dispatcher project, which has been developed
over the course of the past 10 years. The real developers are
different from those you listed in your announcement.

The project was not started to become a replacement for
gnome-speech, as you state; rather, it is a generic speech
service available to different client applications, Gnome and Orca
being among them. The decision of Orca to migrate to Speech
Dispatcher was made over the past year and was announced
before OpenTTS was ``created'', as is described on Bugzilla:
https://bugzilla.gnome.org/show_bug.cgi?id=606975

Please let us remind you that the architecture and design are
what is most important. It took us time and resources to come
up with the current architecture and we don't think OpenTTS is
introducing anything significantly new over Speech Dispatcher.
Although some work has been done in the fork, in the big picture
these have so far been only minor code improvements.

Before the fork, there was a face-to-face agreement between
you and Brailcom that you would keep an unofficial development
repository where we would gather patches and changes, which would
later be reviewed and released in the official version. While doing so,
we evaluated the quality of the patches gathered in this way
and found that it varied greatly. There were good code improvements,
but they were mixed with very amateurish hacks (such as totally random
port assignment or completely missing documentation), which we could
not release as serious software without first finding time to rework
them significantly, which takes resources and time.

The whole fork is an unnecessary fragmentation of the limited
resources we all have for accessibility. If there was capacity
to move Speech Dispatcher forward, it could have been done in
the same project.

We still continue the development of Speech Dispatcher from our
own resources for the benefit of all of you, in our own serious
and systematic way, and are preparing the 0.7 release. It is however
now impossible for us to use fixes and improvements from the OpenTTS Git
due to the reorganization of its code and especially the unnecessary
renaming of all identifiers in the code.

As the real developers of Speech Dispatcher, which some of you now
call OpenTTS, we must say that a lot of work is currently being wasted.
We continue to see this as very bad.

The new name OpenTTS is very unfortunate, because it is technically
wrong. Speech Dispatcher/OpenTTS doesn't do and shouldn't do any TTS
(Text-to-Speech). It is merely an interface between applications and
Text-to-Speech engines. Serious developers must understand and use 
terminology correctly.

We still don't see a technical reason for such duplication of effort.
We asked for responsibility and cooperation several times publicly and
also privately, without any effect. We are a non-profit organization
and our main goal is to help visually impaired people. All our
projects are strictly Free Software projects, and Speech Dispatcher has
been one of our key long-term projects for the last 10 years. We believe
it is a major benefit for every project to be backed by a stable
organization which provides quality control and stability of future
development. We don't understand the motivations behind rebranding
Speech Dispatcher to OpenTTS.

We believe that a constructive solution is still possible. Our offer
of cooperation is still valid, nothing has changed. There are two
possibilities for cooperation: as a volunteer contributor and in the
future as a paid member of our development team. Anyone who wishes to
cooperate, please contact us, we are very open.

Best regards,
Hynek Hanke
Speech Dispatcher maintainer
Brailcom, o.p.s.




Speech Dispatcher 0.7 Beta3 -- Please help with testing

2010-05-07 Thread Hynek Hanke

Dear all,

we have uploaded a second public beta version for the 0.7
release of Speech Dispatcher.

The Beta 3 differs from Beta 1 in the following aspects:
  * Unix sockets are by default placed in ~/.speech-dispatcher/,
thus fixing a DoS security concern; libraries now respect
the SPEECHD_SOCK environment variable
  * Speech Dispatcher now compiles on MacOS
  * Generic module fixed to respect new audio setting mechanism
  * Bugfixes
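
The address resolution described in the first item above can be
sketched as follows (the socket file name speechd.sock inside
~/.speech-dispatcher/ is an illustrative assumption; only the
directory and the SPEECHD_SOCK variable are stated above):

```python
import os

def resolve_socket_path(env):
    """Pick the Speech Dispatcher socket address: an explicit
    SPEECHD_SOCK environment variable wins, otherwise fall back
    to the per-user ~/.speech-dispatcher/ directory."""
    explicit = env.get("SPEECHD_SOCK")
    if explicit:
        return explicit
    # Hypothetical default file name inside the per-user directory:
    return os.path.join(os.path.expanduser("~"),
                        ".speech-dispatcher", "speechd.sock")
```

A client library following this scheme needs no pre-configuration:
server and clients independently compute the same path.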

We would like to ask you to help us with testing and report
any issues so that we can fix them before the final release.

You can find the 0.7 Beta 3 version here:

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.7-beta3.tar.gz

This release is based on the great work done in the unofficial
development branch managed by Luke Yelavich, but some parts
needed to be reworked before an official release to ensure a cleaner
design, conformance to standards and smoother interoperability with
the rest of the system; the new changes were also documented etc.

Most important improvements in the 0.7 version are:

* Speech Dispatcher uses UNIX style sockets as the default means
  of communication, thus avoiding the necessity to choose a numeric
  port and greatly easing session integration. Inet sockets are however
  still supported for communication over the network.

* Autospawn -- the server is started automatically when a client requests it.
   This can be forbidden in the appropriate server configuration file.

* Pulse Audio output reworked and fixed

* Dispatcher runs as a user service (not a system service) by default
   and doesn't require the prior presence of the ~/.speech-dispatcher
   directory

* All logging is now managed centrally, not by separate options

* Graceful audio fallback (e.g. if Pulse is not working, use Alsa...)

* Various bugfixes and fine-tunings

* Updated documentation

For a more detailed description of the changes, please see the Git log:
  http://git.freebsoft.org/?p=speechd.git

The documentation can be found in the doc/ directory of the .tar.gz package.

With Best regards,
Hynek Hanke
Brailcom, o.p.s.





Re: Speech Dispatcher 0.7 Beta -- Please help with testing

2010-04-28 Thread Hynek Hanke

Hello all,

>There is a rather large local security problem with your use of unix sockets. 
>It is very easy for a local hostile user to cause a denial of service

Thanks for pointing this out. I think your concern is valid and
we will fix it, though I don't think it's one of the most important
problems for accessibility today. The situation is as follows:

1) The described DoS was just as easy with the former inet socket
implementation that we have used till now (any hostile user can open
the port first and thus block it). So this is actually nothing
new.

2) With session integration as done by Luke Yelavich (e.g. assigning port
numbers as BASE_PORT+uid), we get problems even in the no-attack case,
since there is no guarantee that all 7560+ ports will be free to use
and not blocked by some other service.

3) With ports and without authentication (the former situation), in
most current installation setups any local user could connect to a
session run by any other user, which was a large documented problem;
it was removed by the use of unix sockets with correct permissions.

4) The reason why the socket name is predictable is that clients
can predict it and connect to it without having to refer to a third
party. If someone can suggest a good and universal (not Gnome or X based)
mechanism by which Speech Dispatcher knows which address to run on and the
clients know which address to connect to, without any need for
pre-configuration (like the ever problematic SPEECHD_PORT variable),
please send it to us!

5) We might as well try to use another destination, namely
~/.speech-dispatcher, as for all other speechd stuff (predictable,
but only writable by the given user).

6) We fully support a DBus interface (it was our plan, had there been
more funding), but I think it is necessary to also have a lower-level
system communication mechanism for clients like speechd-el or clients
running outside of X.
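
The permission-based protection mentioned in points 3 and 5 can be
sketched like this (paths are illustrative; the point is that a
mode-0700 directory makes the socket inside unreachable for other
local users):

```python
import os
import socket
import stat
import tempfile

# Create a private per-user runtime directory, as a stand-in for
# ~/.speech-dispatcher:
runtime_dir = tempfile.mkdtemp(prefix="speechd-demo-")
os.chmod(runtime_dir, 0o700)  # owner-only: others cannot traverse it

# Bind a unix socket inside it; other users can neither connect to
# nor squat on this path, because they cannot enter the directory.
sock_path = os.path.join(runtime_dir, "speechd.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

mode = stat.S_IMODE(os.stat(runtime_dir).st_mode)  # 0o700

# Clean up the demo resources.
server.close()
os.unlink(sock_path)
os.rmdir(runtime_dir)
```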


I suggest we move this, by now quite technical, discussion to 
spee...@lists.freebsoft.org .

Best regards,
Hynek Hanke




Speech Dispatcher 0.7 Beta -- Please help with testing

2010-04-27 Thread Hynek Hanke

Dear all,

we are preparing a new release of Speech Dispatcher, which
brings significant improvements, particularly in audio output,
stability, security, easier installation and closer integration
into the system.

We would like to ask you to help us with testing and report
any issues so that we can fix them before the final release.

You can find the 0.7 Beta version here:
   
http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.7-beta.tar.gz

This release is based on the great work done in the unofficial
development branch managed by Luke Yelavich, but some parts
needed to be reworked before an official release to ensure a cleaner
design, conformance to standards and smoother interoperability with
the rest of the system; the new changes were also documented etc.

Most important improvements in this release:

* Speech Dispatcher uses UNIX style sockets as the default means
  of communication, thus avoiding the necessity to choose a numeric
  port and greatly easing session integration. Inet sockets are however
  still supported for communication over the network.

 * Autospawn -- the server is started automatically when a client requests it.
   This can be forbidden in the appropriate server configuration file.

 * Pulse Audio output reworked and fixed

 * Dispatcher runs as a user service (not a system service) by default
   and doesn't require the prior presence of the ~/.speech-dispatcher
   directory

 * All logging is now managed centrally, not by separate options

 * Graceful audio fallback (e.g. if Pulse is not working, use Alsa...)

 * Various bugfixes and fine-tunings

 * Updated documentation

For a more detailed description of the changes, please see the Git log:
  http://git.freebsoft.org/?p=speechd.git

The documentation can be found in the doc/ directory of the .tar.gz package.

With Best regards,
Hynek Hanke
Brailcom, o.p.s.




Re: this may save some of you the time

2008-12-04 Thread Hynek Hanke
mike:
> Hi, for anyone wanting to try blindubuntu. It isn't in English

It basically is, because all the components are international;
only the documentation needs to be translated and some
configuration modified a bit. I don't think it's useful
for non-English speakers right now, but I'm pretty sure
that the amount of work needed to make it international
is very small (no programming etc.).

With regards,
Hynek Hanke



Re: Speech-dispatcher as a service?

2008-12-04 Thread Hynek Hanke
Isaac Porat wrote:
>> I have recently released (in the loosest possible sense of the word) a 
>> customised version of Ubuntu called Vibuntu (or Vinux - can't decide!) 
>> which is aimed at visually impaired users. It is still very early 
>> days, but I decided to make it available straight-away so that I could 
>> collect feedback, suggestions and advice from interested parties 
>> rather than keep it hidden away until it is finished (alledgedly).
>> 
>
> Then I invite you to let me know what you are changing in your version of
> Ubuntu, so we can include it in the main Ubuntu release. 

Luke, Isaac,

a clone of Ubuntu which is perfectly tuned for use with
assistive tools for visually impaired users already exists.
It's called Blind Ubuntu and contains Orca, Speech Dispatcher,
Yasr, Brltty, speechd-el, espeak, Festival and other tools,
all configured to work together straight out of the box.
It also solves the audio problems etc.

Another good thing is that, apart from installing it as
a separate system, there is a repository and a package that
you can just install and use in ordinary Ubuntu,
and it works great!

One problem is that it is currently set up by default for the
Czech language and the documentation is also in Czech,
but we in Brailcom speak Czech, so we can help
bridge the gap. All the components themselves
are international, so only slight configuration
modifications are needed.

I know the author, Martin Sukany, and he would very much
like his work to be included in Ubuntu.

So would it make sense to start from this, which is already
ready and being successfully used here in the Czech Republic,
bring it up to Ubuntu standards with the help of you, Luke
and Isaac, solve the remaining issues, and then put
these packages into the official Ubuntu repositories (at least
universe, contrib or something)?

Because that would be a huge step forward. People have
great difficulties putting all these things together, so although
we have good software, it is not really useful to users
as it is now. Especially since the default audio is currently
all broken on Intrepid with regards to accessibility. These
meta packages have the power to solve these things in an easy
way without really coming into conflict with anything.

Luke, should we perhaps have a discussion about this
on Ubuntu Accessibility IRC or on Jabber?

With regards,
Hynek Hanke










Speech Dispatcher 0.6.7 Released

2008-08-04 Thread Hynek Hanke

Speech Dispatcher 0.6.7
=======================

The Brailcom organization is happy to announce the availability
of Speech Dispatcher 0.6.7 developed as a part of the Free(b)Soft
project. This is a minor release. Please read 'What is new'
and 'NOTES' below.

* What is Speech Dispatcher?

  Speech Dispatcher is a device independent layer for speech
  synthesis, developed with the goal of making the usage of speech
  synthesis easier for application programmers. It takes care of most
  of the tasks that need to be solved in speech-enabled applications. What
  is a very high level GUI library to graphics, Speech Dispatcher is
  to speech synthesis.

  Key Speech Dispatcher features are:

  - Message priority model that allows multiple simultaneous
connections to Speech Dispatcher from one or more clients
and tries to provide the user with the most important messages.

  - Different output modules that talk to different synthesizers
so that the programmer doesn't need to care which particular
synthesizer is being used. Currently Festival, Flite, Epos, Espeak
synthesizer is being used. Currently Festival, Flite, Epos, Espeak
is an advanced Free Software synthesizer supporting various
languages. Espeak is a very fast multi-lingual synthesizer.

  - Client-based configuration allows users to configure different
settings for different clients that connect to Speech Dispatcher.

  - Simple interface for programs written in C, C++ provided through
a shared library. Python, Common Lisp and Guile interface. An Elisp
library is developed as a separate project speechd-el. Possibly
an interface to any other language can be developed.

* What is new in 0.6.7?

- Setting of the preferred audio output method is now centralized in
   speechd.conf instead of being scattered across the various
   output module configurations.

- 'spd-conf' configuration, diagnostics and troubleshooting tool
   now makes it easy to create a user configuration for Speech Dispatcher
   or to send a request for help with all appropriate logging information

- Dummy output module which attempts to play a pre-recorded help
   message via various sound systems when all other modules fail

- Possibility to switch on verbose logging over SSIP
   for easy debugging and bug-reporting from client applications

- Volume settings in Pulse Audio and avoidance of having to reopen
   the PA connection on every synthesis request

- Punctuation mechanism in IBM TTS is now configurable

- New generic output modules for Espeak with Mbrola
   and for Cepstral Swift.

- Bugfixes.
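
A sketch of the centralized audio setting from the first item above
(the AudioOutputMethod option name is our best recollection of the
speechd.conf of that era; check the file shipped with the package):

```
# speechd.conf (illustrative sketch)
# One central place to choose the audio backend for all
# output modules:
AudioOutputMethod "pulse"
```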

NOTES (0.6.7)

   - There are changes in the configuration file since the previous
 release. It is highly recommended to replace your speechd.conf
 file with the speechd.conf provided in this package and copy your
 settings there if you are upgrading from 0.6.6 or any older
 version. The old configuration file should also work, but
 audio output method settings in the output module configurations
 will have no effect.

   - By default, the communication port of Speech Dispatcher is only
 opened for localhost connections. Please see the
 LocalhostAccessOnly option in speechd.conf for information on how
 to allow connections from other machines as well.

* Where to get it?

  You can get the distribution tarball of the released version from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.6.7.tar.gz

  We recommend fetching the sound icons for use with Speech Dispatcher.
  They are available at
  http://www.freebsoft.org/pub/projects/sound-icons/sound-icons-0.1.tar.gz

  Corresponding Debian, Gentoo and Ubuntu packages will soon be available
  at your distribution mirrors.

  The home page of the project is http://www.freebsoft.org/speechd

* How to report bugs?

  Please report bugs at <[EMAIL PROTECTED]>. For other
  contact please use <[EMAIL PROTECTED]>


Happy synthesizing!






Re: Text to speech italian

2008-05-12 Thread Hynek Hanke
Angelo Marra wrote:
> I need to use GOOD voices (such as sapi5) in UBUNTU.
> Is there any ITALIAN and ENGLISH voice I can use?
> Which software can I use in Linux similar to DSpeech or Textalou
Hello Angelo,

there are Italian voices in Festival:

festlex-ifd - Italian support for Festival
festvox-italp16k - Italian female speaker for Festival
festvox-itapc16k - Italian male speaker for Festival

Though not great, they are not so bad either.

With regards,
Hynek Hanke




Re: problems running speech-dispatcher 0.6.6

2008-02-18 Thread Hynek Hanke
Sérgio Neves wrote:
> [spd-say] says:
> spd-say: error while loading shared libraries: libspeechd.so.2: cannot open 
> shared object file: No such file or directory.
>
>   
You need to install the libspeechd2 package:
sudo apt-get install libspeechd2
> [Sun Feb 17 11:55:29 2008 : 915097] speechd: LINE here:|200-default en none|
> What's the meaning of "none" word?
It is the language variant. 'none' is fine; this isn't an error.

With regards,
Hynek Hanke




Speech Dispatcher 0.6.6 Released

2008-02-14 Thread Hynek Hanke

Speech Dispatcher 0.6.6
=======================

The Brailcom organization is happy to announce the availability of
Speech Dispatcher 0.6.6 developed as a part of the Free(b)Soft
project. This is a minor release, it contains mostly bugfixes.
Please read `NOTES' below.

* What is Speech Dispatcher?

  Speech Dispatcher is a device independent layer for speech
  synthesis, developed with the goal of making the usage of speech
  synthesis easier for application programmers. It takes care of most
  of the tasks that need to be solved in speech-enabled applications. What
  is a very high level GUI library to graphics, Speech Dispatcher is
  to speech synthesis.

  Key Speech Dispatcher features are:

  - Message priority model that allows multiple simultaneous
connections to Speech Dispatcher from one or more clients
and tries to provide the user with the most important messages.

  - Different output modules that talk to different synthesizers
so that the programmer doesn't need to care which particular
synthesizer is being used. Currently Festival, Flite, Epos, Espeak
and the non-free DECtalk software and IBM TTS are supported. Festival
is an advanced Free Software synthesizer supporting various
languages.  Espeak is a very fast multi-lingual synthesizer.

  - Client-based configuration allows users to configure different
settings for different clients that connect to Speech Dispatcher.

  - Simple interface for programs written in C, C++ provided through a
shared library. Python, Common Lisp and Guile interface. An Elisp
library is developed as a separate project, speechd-el. Possibly
an interface to any other language can be developed.

* What is new in 0.6.6?

- Bugfixes (SMP related, ALSA output, libspeechd reconnection
and others)

NOTES (0.6.6)

   - There are changes in the configuration file since the previous
 release. It is highly recommended to replace your speechd.conf
 file with the speechd.conf provided in this package and copy your
 settings there if you are upgrading from 0.6.3 or any older
 version. The old configuration file should, however, also work.

   - By default, the communication port of Speech Dispatcher is only
 opened for localhost connections. Please see the
 LocalhostAccessOnly option in speechd.conf for information on how
 to allow connections from other machines as well.

* Where to get it?

  You can get the distribution tarball of the released version from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.6.6.tar.gz

  We recommend that you fetch the sound icons for use with Speech
  Dispatcher. They are available at
  http://www.freebsoft.org/pub/projects/sound-icons/sound-icons-0.1.tar.gz

  Corresponding Debian, Gentoo and Ubuntu packages will soon be
  available at your distribution mirrors.

  The home page of the project is http://www.freebsoft.org/speechd

* How to report bugs?

  Please report bugs at <[EMAIL PROTECTED]>. For other
  contact please use <[EMAIL PROTECTED]>


Happy synthesizing!









Speech Dispatcher 0.6.5 released

2007-11-30 Thread Hynek Hanke

Speech Dispatcher 0.6.5
=======================

The Brailcom organization is happy to announce the availability of 
Speech Dispatcher 0.6.5 developed as a part of the Free(b)Soft project. 
This is a minor release; it contains mostly bugfixes and minor
improvements. Please read `What is new' and `NOTES' below.

* What is Speech Dispatcher?

   Speech Dispatcher is a device-independent layer for speech
   synthesis, developed with the goal of making the use of speech
   synthesis easier for application programmers. It takes care of most
   of the tasks that speech-enabled applications would otherwise need
   to solve themselves. What a high-level GUI library is to graphics,
   Speech Dispatcher is to speech synthesis.

   Key Speech Dispatcher features are:

   - Message priority model that allows multiple simultaneous
 connections to Speech Dispatcher from one or more clients
 and tries to provide the user with the most important messages.

   - Different output modules that talk to different synthesizers
 so that the programmer doesn't need to care which particular
  synthesizer is being used. Currently Festival, Flite, Epos, eSpeak
  and the non-free DECtalk and IBM TTS engines are supported. Festival
 is an advanced Free Software synthesizer supporting various
 languages. Espeak is a very fast multi-lingual synthesizer.

   - Client-based configuration allows users to configure different
 settings for different clients that connect to Speech Dispatcher.

   - Simple interfaces for programs written in C and C++, provided
     through a shared library; Python, Common Lisp and Guile interfaces;
     and an Elisp library developed as a separate project, speechd-el.
     An interface to virtually any other language can be developed.

* What is new in 0.6.5?

  - Pulse Audio output module (thanks to Gilles Casse)

  - Speech Dispatcher is now adapted for easy setup and use
    under ordinary system user accounts

  - Bugfixes

  NOTES (0.6.5)

- There are changes in the configuration file. It is highly
  recommended to replace your speechd.conf file with the
  speechd.conf provided in this package and copy your settings
  there. The old configuration file should however also work.

- Default output module has been switched to espeak.

- By default, the communication port of Speech Dispatcher
  is only opened for localhost connections. Please see
  the LocalhostAccessOnly option in speechd.conf for
  information on how to allow connections from other
  machines as well.

* Where to get it?

   You can get the distribution tarball of the released version from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.6.5.tar.gz

   We recommend that you fetch the sound icons for use with Speech Dispatcher.
   They are available at
   http://www.freebsoft.org/pub/projects/sound-icons/sound-icons-0.1.tar.gz

   Corresponding Debian, Gentoo and Ubuntu packages will soon be
   available at your distribution mirrors.

   The home page of the project is http://www.freebsoft.org/speechd

* How to report bugs?

   Please report bugs at <[EMAIL PROTECTED]>. For other
   contact please use <[EMAIL PROTECTED]>


Happy synthesizing!

___
Speechd mailing list
[EMAIL PROTECTED]
http://lists.freebsoft.org/mailman/listinfo/speechd




Speech Dispatcher 0.6.5rc1 (help with testing)

2007-11-26 Thread Hynek Hanke

Hello,

we are preparing a 0.6.5 release of Speech Dispatcher and would
like to ask you for help with testing it before the final release.
Below is a short description of the improvements.

You can download the .tar.gz from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.6.5rc1.tar.gz

(we would prefer that you use this version for testing rather than CVS)

* What is new in 0.6.5?
- Pulse Audio sound output support.
- Speech Dispatcher is now adapted for easy setup and use
   under ordinary system user accounts.
- Bugfixes.

NOTES (0.6.5)
   - There are changes in the configuration file. It is highly
 recommended to replace your speechd.conf file with the
 speechd.conf provided in this package and copy your settings
 there. The old configuration file should however also work.
   - Default output module has been switched to espeak.

Please report any bugs at <[EMAIL PROTECTED]>. For other
contact please use <[EMAIL PROTECTED]>.

You can learn more about Speech Dispatcher at
http://www.freebsoft.org/speechd

Thank you very much,
Hynek Hanke
Brailcom, o.p.s.







Re: eSpeak and Screen readers?

2006-11-01 Thread Hynek Hanke
> but I don't know what should be changed since I use English.  Both files
> are well commented, so you should have no trouble finding the needed
> lines to change.

If you want to use a language other than English with eSpeak, you need
to figure out the name of the voice(s) for that language and add them
to the espeak.conf file, one by one, like

AddVoice "en" "MALE1" "en"
AddVoice "en" "MALE2" "en-b"

etc.

Do not modify the existing lines, they are right. Add new lines.
Then, if you want to use another language as the default with Speakup,
you must set it as the default language inside Speech Dispatcher
(this is because Speakup doesn't support runtime language switching
yet). You can do it in speechd.conf via the DefaultLanguage option
(a 2-character ISO language code is expected).
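Putting both steps together, the edits might look like the following
sketch. The "cs" voice name here is only an illustration of a
non-English language; check the voice names your eSpeak actually
provides before copying it:

```text
# espeak.conf: add voices for the new language (illustrative names)
AddVoice "cs" "MALE1" "cs"

# speechd.conf: make that language the default
DefaultLanguage "cs"
```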

> Once you have speech-dispatcher working with espeak, you can tell orca
> to use the gnome-speech speech-dispatcher driver. 

I'd highly recommend using the Speech Dispatcher driver for
Orca directly instead. You can download it here:
http://www.freebsoft.org/~cerha/orca/speech-dispatcher-backend.html
and the installation is simple.

With regards,
Hynek Hanke




Re: eSpeak Problem: PaHost_OpenStream: could not open /dev/dsp for O_WRONLY

2006-09-30 Thread Hynek Hanke
> So I fetched speak-1.14-linux.zip on the Web site and unzipped it directly
> in my home folder. Upon running it, however, I get the following error:
>
> [EMAIL PROTECTED]:~$ ./speak "this is a test"
> PaHost_OpenStream: could not open /dev/dsp for O_WRONLY
> PaHost_OpenStream: ERROR - result = -1
> Apparently it cannot access the sound card for some reason. This is a
> separate terminal via which I logged in. I also have a Gnome session
> running and had disabled audio in it as someone advised it might help with
> Gnopernicus problems. I re-enabled the Gnome sound effects and tried again
> with no changes. I've tested and Gnome happily plays its own sound effects
> like it should.

Hi,

if you plan to use Speech Dispatcher, I advise you to just skip this test:
[EMAIL PROTECTED]:~$ ./speak "this is a test"

The eSpeak configuration file for Speech Dispatcher uses a binary like
play or aplay to play the sound. So if either of these works for you and
eSpeak is able to produce audio data (but not play it), you should
be fine.
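As a concrete illustration, the generic output module is configured via
a shell command template; a simplified, hypothetical sketch follows.
The GenericExecuteSynth option and the $DATA/$TMPDIR substitution
variables come from the generic module configuration, but the exact
command should be checked against the espeak-generic.conf shipped with
your version:

```text
# Hypothetical fragment in the style of espeak-generic.conf:
# synthesize to a wav file, then play it with aplay.
GenericExecuteSynth \
"espeak -w $TMPDIR/espeak.wav \'$DATA\' && aplay $TMPDIR/espeak.wav"
```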

Sorry, I have no experience with vmware, so I can't give a more concrete
hint.

With regards,
Hynek Hanke




Speech Dispatcher 0.6.1 Released

2006-07-25 Thread Hynek Hanke

Speech Dispatcher 0.6.1
=======================

The Brailcom organization is happy to announce the availability of
Speech Dispatcher 0.6.1 developed as a part of the Free(b)Soft project.
This is a minor release; it contains mostly bugfixes and support for new
synthesizers. Please read `What is new' and `NOTES' below.

* What is Speech Dispatcher?

  Speech Dispatcher is a device-independent layer for speech
  synthesis, developed with the goal of making the use of speech
  synthesis easier for application programmers. It takes care of most
  of the tasks that speech-enabled applications would otherwise need
  to solve themselves. What a high-level GUI library is to graphics,
  Speech Dispatcher is to speech synthesis.

  The architecture of Speech Dispatcher is based on a proven
  client/server model. The basic means of client communication
  with Speech Dispatcher is through a TCP connection using the Speech
  Synthesis Interface Protocol (SSIP).
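  For illustration, a complete SSIP exchange over such a TCP connection
  might look like the following sketch (client lines only; the numeric
  server reply codes are omitted because they vary between versions):

```text
SET SELF CLIENT_NAME "joe:example:main"
SET SELF RATE 20
SPEAK
Hello from SSIP.
.
QUIT
```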

  Key Speech Dispatcher features are:

  - Message priority model that allows multiple simultaneous
connections to Speech Dispatcher from one or more clients
and tries to provide the user with the most important messages.

  - Different output modules that talk to different synthesizers
so that the programmer doesn't need to care which particular
    synthesizer is being used. Currently Festival, Flite, Epos and
    the non-free DECtalk software are supported. Festival is an
advanced Free Software synthesizer supporting various languages.

  - Client-based configuration allows users to configure different
settings for different clients that connect to Speech Dispatcher.

  - Simple interfaces for programs written in C and C++, provided
    through a shared library; Python, Common Lisp and Guile interfaces;
    and an Elisp library developed as a separate project, speechd-el.
    An interface to virtually any other language can be developed.

* What is new in 0.6.1?

 - Bug fixes

 - Generic output module support for the eSpeak synthesizer
(free English speech synthesizer, GPL)

 - Output module for Cicero (French TTS, GPL but requires MBROLA)
(thanks to Olivier Bert)

 - Output module for IBM TTS (IBM TTS is non-free)
(thanks to Gary Cramblitt)

 - Revision and stabilization of the Python interface

 NOTES (0.6.1 together with notes for 0.6)

   - A Gnome Speech output module was developed which allows you to use
     Gnopernicus with Speech Dispatcher; it is available in the Gnome
     Speech distribution.

   - An experimental module for Orca provides support for
     Speech Dispatcher:
       http://www.freebsoft.org/~cerha/orca/speech-dispatcher-backend.html
     (this version of Speech Dispatcher, 0.6.1, is required)

   - ALSA audio output is not turned on by default. If you like,
 go to etc/speech-dispatcher/modules and turn it on for your
 output module.

   - If you are using speechd-up, you likely need to upgrade to
 speechd-up-0.3 due to a bug in speechd-up. Speechd-up 0.3
 also brings new capabilities, notably support for the ``Read all''
 function in Speakup.

   - Although not necessary, we highly recommend that you install
     festival-freebsoft-utils 0.6, available at
     http://www.freebsoft.org/pub/projects/festival-freebsoft-utils/

* Where to get it?

  You can get the distribution tarball of the released version from

http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.6.1.tar.gz

  We recommend that you fetch the sound icons for use with Speech
  Dispatcher. They are available at

http://www.freebsoft.org/pub/projects/sound-icons/sound-icons-0.1.tar.gz

  Corresponding Debian packages will soon be available at your
  Debian distribution mirror.

  The home page of the project is http://www.freebsoft.org/speechd

* How to report bugs?

  Please report bugs at <[EMAIL PROTECTED]>. For other
  contact please use <[EMAIL PROTECTED]>
  
Happy synthesizing!





Re: [g-a-devel] Happy patch bonanza

2006-06-29 Thread Hynek Hanke
> So I guess the Italian voice doesn't define that.  If you give me a
> piece of lisp code that adds that definition, I can add it to the Debian
> and Ubuntu package, and try to push it upstream.

Would it make sense to also include definitions for the other
voices in the list that Gary Cramblitt posted here?

With regards,
Hynek


-- 
Ubuntu-accessibility mailing list
Ubuntu-accessibility@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility


[Fwd: Speech Dispatcher backend for Orca]

2006-06-29 Thread Hynek Hanke
 Forwarded message 
From: Tomas Cerha <[EMAIL PROTECTED]>
To: gnome-accessibility-devel@gnome.org, [EMAIL PROTECTED],
[EMAIL PROTECTED]
Subject: Speech Dispatcher backend for Orca
Date: Thu, 29 Jun 2006 17:00:50 +0200

Hello,

I would like to announce the availability of an EXPERIMENTAL Speech
Dispatcher backend for Orca.  Please see
http://www.freebsoft.org/~cerha/orca/speech-dispatcher-backend.html for
more information.  Any feedback is welcome, however, please note that I
will not be on-line until July 10.

Kindest regards, Tomas





Re: [g-a-devel] Happy patch bonanza

2006-06-28 Thread Hynek Hanke
Bill Haneman wrote on Wed, 28 Jun 2006 at 18:57 +0100:
> > recode.patch
> >   Patch for gnome-speech festival driver.  When one of the Italian
> >   voices is requested, switches the g_io output channel to latin1
> >   instead of utf-8.
> Are you sure this is sufficient?  Don't you need to call g_convert in
> order to convert the strings from the gnome-speech client to latin1
> before passing them to the engine?

>That's great news.  But it seems to me that there should be something
>more general and robust than just checking the voice string, in order
>to determine the correct encoding which the festival engine/voice
> expects.
> [...]
> Note also that this problem seems to have been introduced on May 14 when
> Will explicitly changed the encoding on the iochannel from ISO-8859-1 to
> UTF-8.  Will, can you explain why you did that?

Hello,

if you use festival-freebsoft-utils to communicate with Festival, then
you can send all the input in UTF-8 through the appropriate functions
and let Festival take care of the necessary conversions between encodings.
Encodings can be easily defined by the user in the configuration file,
or can be specified by the author of the voice, as is the case with
festival-czech. It has a dependency on the 'recode' utility.

festival-freebsoft-utils also provides other nice features (such as
partial SSML support) and a coherent API. I'm CCing its developer,
Milan Zamazal, who can answer further questions better than I can.

If you want to go some other way, I'd highly recommend that the encoding
used for different voices be easily configurable by the user. I think
there is no way to determine the encoding of a given voice in
Festival automatically (which is of course broken :( ), so giving the
user the power to fix the problem without recompiling anything is very
important.

With regards,
Hynek Hanke





Re: Common AT config panel

2006-04-29 Thread Hynek Hanke
Henrik wrote on Sun, 23 Apr 2006 at 17:36 +0100:
> There are several new AT apps coming on line that need settings panels. 
>  From the user's perspective it would be preferable to have a single 
> interface for all the AT on the free desktop. The challenge of course is 
> that we are dealing with apps written in a variety of languages running 
> on different platforms using different config systems including dotconf, 
> gconf and a raw python file.

Hello Henrik and all,

I'd like to propose moving this thread to
[EMAIL PROTECTED]
so that it is not spread across several mailing lists,
people know where to post replies, and we have an archive.

The mailing list at Freedesktop currently has subscribers
from a wide range of accessibility projects, both from graphical
desktops and from the console, and is a place where other
similar points of cooperation are being discussed. I also
invite everyone who is interested to join if they haven't done
so already. It is a low-traffic list.
More information is on
http://lists.freedesktop.org/mailman/listinfo/accessibility

I'd like to contribute to the discussion, but I currently
don't know where to post my reply.

If this suggestion is accepted, it would be great if the
original email were resent to Freedesktop and information
about it sent to the mailing lists that are currently
in CC.

With regards,
Hynek Hanke




Re: [Kde-accessibility] Common AT config panel

2006-04-23 Thread Hynek Hanke
Gary Cramblitt wrote on Sun, 23 Apr 2006 at 14:52 -0400:
> Once KTTS migrates to using Speech Dispatcher as its backend, I envision a 
> single GUI for configuring it and Speech Dispatcher, since they will be 
> closely related.  I'm thinking therefore, that instead of two icons "KTTS" 
> and "Speech Dispatcher", there should be a single icon "TTS".

Hello,

I like the original idea better. KTTS and Speech Dispatcher will be
closely related, and I like the idea of having a common configuration
method for them, as was the original proposal. On the other hand,
KTTS, Speech Dispatcher (and TTS API) will remain different things even
in the future, even if they are run in a chain. They should have
separate entries inside the common configuration tool.

It is important not to mix them up. The user must always know which
component on the system a given option belongs to.

> OTOH, what if the system has multiple TTS interfaces, say GNOME Speech
> and KTTS/Speech Dispatcher, or SpeakUP and Speech Dispatcher?

I'd add entries for Gnome Speech and SpeakUP then.

With regards,
Hynek Hanke
Brailcom




Re: Edgy Accessibility features

2006-04-22 Thread Hynek Hanke
> Right. Is the preferred way to talk to SD via libspeechd, or a TCP
> connection to the server?

The underlying thing is SSIP, the TCP protocol. The convenience
libraries (you mentioned libspeechd, which is for C, but the same
applies to the Python library) are developed as wrappers over the TCP
protocol. I think the Python library distributed along with Speech
Dispatcher is a good option for this purpose. If it is lacking
something, it can be enhanced or fixed.
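Since SSIP is a plain-text protocol, those wrapper libraries mostly just
format requests on the wire. Here is a hypothetical helper (not part of
libspeechd) sketching how a SPEAK request is framed, including the
SMTP-style escaping of body lines that begin with a dot:

```python
def ssip_speak_request(text: str) -> str:
    """Format an SSIP SPEAK request for the given message text.

    SSIP terminates the message body with a line containing a single
    dot, so body lines that themselves start with "." are escaped by
    doubling the leading dot (the same convention SMTP uses).
    """
    lines = []
    for line in text.split("\n"):
        if line.startswith("."):
            line = "." + line  # escape a leading dot in the body
        lines.append(line)
    return "SPEAK\r\n" + "".join(l + "\r\n" for l in lines) + ".\r\n"
```

A client would send the returned string over the TCP connection after
setting its client name; a real wrapper of course also reads and parses
the server's reply codes.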

With regards,
Hynek Hanke




Re: Edgy Accessibility features

2006-04-22 Thread Hynek Hanke
> > Henrik writes:
> > Speech dispatcher -- According to their website the SD gnome-speech 
> > driver will soon be shipped with gnome-speech. How mature is this? 

Hello,

it is true that a driver for Speech Dispatcher that we at Brailcom
developed will be shipped with Gnome Speech. It is already in CVS and
will be in the next releases. It is useful for now, but it is by no
means a great solution. It has some drawbacks, some of which result
from trying to glue together two things with somewhat different
designs that were originally intended to sit at the same level. Other
drawbacks are actually drawbacks of Gnome Speech itself. The worst is
that it is currently impossible to switch languages at runtime in this
setup.

We think a better approach for the future would be to add Speech
Dispatcher support to Orca. It does not mean Orca would need to
abandon Gnome Speech for that reason. To my knowledge, Orca
is prepared to have multiple backends. It would be great if
this were done.

> > Should we standardise on SD for speech output in Ubuntu? What else is 
> > needed, configuration interfaces? 

Yes, a configuration interface would be nice. Not just for Dispatcher
but also for things like Festival.

Luke Yelavich writes:
> KDE are planning to use speech-dispatcher for their back-end. IMO Sun 
> were stupid in creating gnome-speech, although I am pretty sure they did 
> it at a time when there wasn't really anything else.

Not stupid. The two things started to be developed in parallel,
without knowing much about each other. Since then, the intention has
been to unify them, but that is complicated by their different
designs. Now we have at least all decided to create TTS API, a
low-level interface to the speech engine drivers, and to share the
drivers.

> IMO we rip the gnome-speech support out of orca, and use speech-dispatcher
> directly. I am pretty sure SD has python bindings, and speech-dispatcher
> as far as I have seen is a little easier to program for.

As I explained above, there is no need to rip gnome-speech support out
of Orca to be able to add direct support for Speech Dispatcher. There
are Python bindings for Speech Dispatcher. I don't think the whole
task is very difficult.

> I think there is also an FSG accessibility mailing list, although I
> haven't looked for it yet. I will probably look into that sometime soon.

There is also the ongoing work on TTS API from Brailcom, Gnome, KDE and
several other accessibility projects. The main discussion place
for that is the [EMAIL PROTECTED] mailing list hosted
at www.freedesktop.org. If somebody is interested in helping, please
join the list.

With regards,
Hynek Hanke

