Hopefully, someone here can either help or point me in the right direction.

As some of you know, I use speech recognition in order to be able to work with computers. I'm looking for a way to direct the actions of speech recognition onto a Linux machine. There are two components: speech and commands. The way many of us create commands is via a NaturallySpeaking Python link, and that link is created through a COM interface. The first step in making those actions show up in a Linux environment is to move this NaturallySpeaking Python link to the Linux side. In order to do that, I would need a proxy to bridge the COM interface to the Linux environment.
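To make the idea concrete, here is a minimal sketch of what the Windows end of such a proxy might look like: it receives one JSON request per line over TCP and forwards each call to a COM automation object via pywin32. The ProgID "Natspeak.Application", the port, and the JSON-over-TCP wire format are all made up for the example, not anything NaturallySpeaking actually ships.

    import json
    import socket

    import win32com.client


    def serve(host="0.0.0.0", port=8765):
        # "Natspeak.Application" is a placeholder ProgID; substitute whatever
        # COM object your NaturallySpeaking/NatLink install actually registers.
        com_obj = win32com.client.Dispatch("Natspeak.Application")
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                line = conn.makefile().readline()             # one JSON request per line
                request = json.loads(line)
                method = getattr(com_obj, request["method"])  # look up the COM method by name
                result = method(*request.get("args", []))     # forward the call to COM
                # Assumes the result is JSON-serializable; real code would need
                # to marshal COM types more carefully.
                conn.sendall((json.dumps({"result": result}) + "\n").encode())


    if __name__ == "__main__":
        serve()

That's only meant to show the shape of the bridge, not to handle errors, security, or callbacks from the recognizer back to the Linux side.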

1) Does that kind of bridge exist?
2) If not, is it possible to build one?
3) (and you knew this was coming) Feel like helping a bunch of crips?
3a)  There are some political benefits nfpc.

I'm thinking about operating in a Windows-host/Linux-guest virtual machine environment, not over any great extent of network. I chose the Windows host because that way we get the best performance out of speech recognition, and if it's just running a virtual machine, it's pretty stable and safe from attack.
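For completeness, the Linux-guest side of the sketch above could be as small as the following. It assumes the same made-up JSON-over-TCP protocol, and that the Windows host is reachable at 10.0.2.2 (the usual default gateway address for a NAT'd VM); the method name in the example is illustrative only.

    import json
    import socket


    def call(method, *args, host="10.0.2.2", port=8765):
        # Send one JSON request per line to the Windows-side proxy
        # and read back the JSON reply.
        with socket.create_connection((host, port)) as conn:
            payload = json.dumps({"method": method, "args": list(args)}) + "\n"
            conn.sendall(payload.encode())
            return json.loads(conn.makefile().readline())["result"]


    # Example (method name is illustrative only):
    #   call("PlayString", "ls -l\n")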

--- eric
