I'm working on an accessibility app for the visually impaired and was hoping to use NSSpeechRecognizer.
I've found it extremely difficult to get NSSpeechRecognizer to behave predictably on my system. Does anyone on the list have experience with this class and success with the Speech Recognition preference pane? Any tips or tricks?

The calibration dialog in the Speech Recognition settings doesn't work at all for me. I'm using a fairly standard external microphone (built into a Logitech webcam) with an Intel Mac mini. I can see my signal just fine, and I'm speaking clearly in as accent-neutral a way as I can, yet none of the test sentences ever highlights. Is a headset mic typically required, or is there some other gotcha here?

When I give NSSpeechRecognizer a very small and unambiguous command set, it badly misses the mark. For example, with "Play", "Next", and "Stop" in my command set, it will interpret "Next" as "Play", but it will never recognize "Play" at all - pretty much unusable. I'm hoping it's just a calibration issue.

One last question: is there any way to do proper dictation with this class, or will it only recognize the preset command list you give it? I'm thinking, for example, of prompting for a file name to save to, or a term to search on. True dictation would be nice; otherwise I'll resort to providing an alphabet as a command set so the user can spell it out (assuming I can get that to work).

TIA,
Chris

_______________________________________________
Cocoa-dev mailing list ([email protected])
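For reference, here's roughly what my setup code looks like - a minimal sketch of the command-set approach described above (the CommandListener class name is just illustrative):

```objc
#import <AppKit/AppKit.h>

@interface CommandListener : NSObject <NSSpeechRecognizerDelegate> {
    NSSpeechRecognizer *recognizer;
}
@end

@implementation CommandListener

- (id)init {
    if ((self = [super init])) {
        recognizer = [[NSSpeechRecognizer alloc] init];
        // The small, unambiguous command set that is being misrecognized
        [recognizer setCommands:[NSArray arrayWithObjects:@"Play", @"Next", @"Stop", nil]];
        [recognizer setDelegate:self];
        [recognizer setListensInForegroundOnly:NO];   // keep listening in the background
        [recognizer setBlocksOtherRecognizers:YES];   // don't compete with other recognizers
        [recognizer startListening];
    }
    return self;
}

// Delegate callback fired when one of the commands above is recognized
- (void)speechRecognizer:(NSSpeechRecognizer *)sender didRecognizeCommand:(id)command {
    NSLog(@"Recognized command: %@", command);
}

- (void)dealloc {
    [recognizer stopListening];
    [recognizer release];
    [super dealloc];
}

@end
```

With this in place, "Next" routinely arrives in the delegate callback as "Play", and "Play" never arrives at all.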
