Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@32
Certainly it could be used in other types of applications; there is nothing limiting it to games. It could also be wrapped fairly easily to work in other programming languages, if desired.

Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/489026/#p489026




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : devinprater via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Could this library be used in more than just games? For example, I could see it being used in RetroArch for Windows users, to hopefully cut down on the libraries needed for access. RetroArch is written in C/C++, so it shouldn't be hard to integrate there.

Also, having speech be controllable by the developers would be great: one could use 3D audio libraries to tell the user where things are by "pointing" at them, instead of saying "Ex-core: 5 full dashes away in the left corner at the north end of the stage" and such.
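A rough sketch of that "pointing" idea: speak the object's name on a speech channel positioned at its bearing instead of describing the position in words. speak_positioned() here is a hypothetical stand-in for whatever HRTF or panning facility the engine provides, not a real API.

// Sketch only: speak_positioned() is a hypothetical placeholder.
#include <cmath>
#include <cstdio>
#include <string>

struct Vec2 { float x, z; };

// Stub: a real implementation would synthesize `text` and play it HRTF-positioned
// at `azimuth` radians (0 = straight ahead, positive = to the right).
void speak_positioned(const std::string& text, float azimuth)
{
    std::printf("speaking \"%s\" at %.2f rad\n", text.c_str(), azimuth);
}

void announce_object(const Vec2& player, float heading, const Vec2& object,
                     const std::string& name)
{
    // Bearing from the player to the object, relative to the way the player faces.
    float absolute = std::atan2(object.x - player.x, object.z - player.z);
    speak_positioned(name, absolute - heading);
}

int main()
{
    announce_object({0.0f, 0.0f}, 0.0f, {3.0f, 4.0f}, "Ex-core");
    return 0;
}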

URL: https://forum.audiogames.net/post/489004/#p489004




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : CAE_Jones via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@24: Using both to get multiple information channels is brilliant. Information bandwidth is our biggest weakness compared to vision; tricks like this to get more manageable information flows are exactly what we need.

@30: Oh, well, in that case, yes, please do that. The ability to actually put TTS audio into the world, rather than as an overlay, is a feature that would also be very nice to have. It can contribute to that bandwidth thing, but mostly it'd just be convenient and interesting.

URL: https://forum.audiogames.net/post/488993/#p488993




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Thanks everyone for your great feedback! It's very valuable to me to get this information at an early stage in my development, as I can then make choices that will hopefully satisfy as many people as possible.

Now, to explain a little further what I have implemented thus far, I can say that I now know way more about the low-level Sapi and OneCore interfaces than I ever cared to. I am basically writing a very lightweight, portable C library for text to speech output. However, it has a twist. The main focus is not on playing audio, only on generating it. In short, the audio does not get sent directly to the audio device - it gets sent to a memory buffer of raw samples. This means that the developer is then free to do any kind of digital signal analysis/processing on the resulting audio that they wish, and it allows them to make the speech an integral part of the game soundscape. Doing multiple speech channels is easy, including things such as applying different effects and HRTF positioning to different speech output streams. I can also get rid of latency at the beginning of the speech, so at this point my latency is much lower than NVDA and Jaws for most voices. I am not sure if NVDA's new speech refactoring effort will do something similar, but if it does, we will see a drastic drop in latency for voices other than ESpeak and Eloquence.

Also, the OneCore interface does actually enable me to change the length of pauses at the end of the speech, as well as the pause that is inserted between punctuation. This was not available in the original interface, but it can be easily queried at run-time, so it will work out of the box if you have a recent build of Windows 10.

As things currently stand, when I use the existing screen reader interfaces, I run into the following issues:

1. I cannot tell when the voice is speaking. This requires a lot of extra work in terms of design choices along the way.
2. I cannot do any kind of processing on the speech output, such as a compressor/limiter, or even a volume change to make it fit into the game's overall soundscape.
3. I don't know what language the voice is speaking.
4. I don't know how loudly the voice is speaking, so I cannot automatically adjust the volume of the game audio to fit the speech.
5. Latency will generally be higher, because the screen readers currently do not trim the speech.

But there are also some benefits that screen reader output has:

1. I may not have access to the voice that the user wants, as there are voice packages that only interface with a specific screen reader.
2. The user may have preexisting speech dictionaries that I don't have access to, improving pronunciation for the specific voice they've chosen.

I can easily solve the second point by allowing the user to have a dictionary for the game, which is literally just a few lines of code, but it requires the user to customize this for the specific game, which is not always practical.

And of course, changing the pitch and the speech rate is something that games should allow if they make use of a lot of text to speech, so screen reader output offers no benefit in that regard as long as the game developer spends the time to implement these features. With the library I am developing, this will be very simple for all the speech engines that support it.

As for having fallback translations that are not provided by the game, I believe it would be a better option to simply make a translation based entirely on Google Translate and have the game load that, rather than intercepting the text at run-time and attempting to translate it using the Internet-based API. The game would of course have to make it possible for users to translate all the text, and this is exactly what I have implemented. So while I can see a strong use case for this approach in games that don't offer translations, its usefulness is reduced if not eliminated altogether when you have a game that offers native translation support out of the box.

In summary, I will add support for screen readers to my library, as it is trivial to implement, but as for whether I will expose this in all of my future games, I cannot be sure. It really depends on the type of game, and on whether the limitations outlined above present enough of a problem for me. I will make a great effort to support the screen reader interfaces as far as is practical for each particular game, but there are definite tradeoffs, as covered above, which make this far from a trivial decision. If the NVDA speech refactor improves the API, this will definitely encourage me, and I'm sure other developers as well, to integrate it.

This ended up being a far longer post than I had intended, but I wanted to cover as many of your points as I could, and also outline my current thoughts. Thanks again for taking the time to give feedback!

Kind regards,
Philip Bennefall
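A minimal sketch of the buffer-based design described above. The names (tts_synthesize, etc.) are hypothetical, not the actual library API, and the stub simply returns a short tone so the example runs; the point is what a game can do once speech arrives as raw samples rather than being played by the engine: measure loudness, know the exact duration, and process or position the audio itself.

// Hypothetical buffer-based TTS sketch; not the actual library API.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

constexpr int kSampleRate = 22050;

// Stub standing in for a real Sapi/OneCore synthesis call.
std::vector<float> tts_synthesize(const std::string& text)
{
    std::vector<float> pcm(kSampleRate / 2);  // half a second of placeholder audio
    for (size_t i = 0; i < pcm.size(); ++i)
        pcm[i] = 0.25f * std::sin(2.0f * 3.14159265f * 440.0f * i / kSampleRate);
    (void)text;
    return pcm;
}

int main()
{
    std::vector<float> speech = tts_synthesize("You found a key.");

    // Because the game holds the samples itself, duration and level are known up
    // front, so it can duck its music and schedule the next event precisely.
    float peak = 0.0f;
    for (float s : speech) peak = std::max(peak, std::abs(s));
    double seconds = static_cast<double>(speech.size()) / kSampleRate;

    std::printf("utterance: %.2f s, peak level %.2f\n", seconds, peak);
    // From here the buffer could be run through a limiter, HRTF-positioned,
    // or mixed into the game's own output stream.
    return 0;
}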

URL: https://forum.audiogames.net/post/488983/#p488983




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Ethin via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

My reasons for using a screen reader are pretty much all the reasons listed in this topic thus far. The issue with using screen readers is that not all screen readers have APIs exposed, so you may not be able to interface with them. It's much easier to interface with the platform's native TTS engine than it is to interface with every screen reader. This is definitely a point to keep in mind for cross-platform games.

@28, this is definitely possible (I've written code to do just what you describe in your last paragraph, in a little game engine project-ish thing I was working on about a year back). You can do everything a screen reader can do with SAPI/OneCore/etc.; it just takes a lot of time to implement all of that.
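One way to handle the cross-platform point is to put a single speech interface in front of whichever backends exist on each platform. The sketch below is illustrative only; the class and method names are assumptions, not from any specific library.

// One abstract speech interface; each platform or screen reader backend implements it.
#include <string>

class SpeechOutput {
public:
    virtual ~SpeechOutput() = default;
    virtual bool speak(const std::string& text, bool interrupt) = 0;
    virtual void stop() = 0;
    virtual bool is_speaking() const = 0;  // native engines can answer this; most screen reader APIs cannot
};

// Possible backends: SAPI/OneCore on Windows, AVSpeechSynthesizer on Apple
// platforms, Speech Dispatcher on Linux, plus optional screen reader backends
// (for example the NVDA controller client) where an API is actually exposed.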

URL: https://forum.audiogames.net/post/488954/#p488954




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Xoren via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Greetings Philip.

I'm going to give you an account of my experience with trying to use built-in voices, as well as my personal opinion on preferences and the reasons for them.

A while back, I worked with Aprone on making Castaways 2. For that game, Aprone and I shelled out money to hire someone to record professional voice clips just for menu items as you arrow over them, as well as to speak a few quick prompts (the rest of the game used a screen reader API). Even with the high quality voice clips that I honestly think made the game feel a lot more alive and interesting, people continually requested that Aprone add a choice to turn those voice clips off and just have them spoken by the screen reader API. It's about the only time I disagreed with Aprone, as we had spent money on the recordings, and capitulating to those requests basically made our expense a waste. The moral of the story here is that, at least in my experience, most blind players want to have speech piped through their screen reader, no matter if that makes the game less theatrical.

This can also be seen in Manamon and its sequel, where the game fully supports SAPI, and indeed prefers that engine, but overwhelming demand from the user base has mandated that the developer also implement a screen reader API.

My own views on the matter are a bit less black and white, as I can see the merits of both approaches.

In favour of speech delivered by SAPI and similar engines: you are able to monitor when speech ends, have fewer DLLs to contend with, etc. I honestly don't mind this approach, nor do I mind pre-recorded speech for menus and prompts, as you can create a much more immersive atmosphere this way.

In favour of the screen reader, though, is that the gamer has free rein to navigate and explore the contents of the game. That is, a player has the facilities built in to review a name's spelling, as well as to customize the manner in which punctuation and other types of data are expressed.

I'm not in favour of one or the other, but I definitely recognize the benefits of both. Now, if you could easily add a facility to cover the review capabilities of a screen reader as part of the game (spelling names, copying text, etc.), then I definitely think there's no reason why using SAPI or other more interactable engines would be a bad thing.

Kai

URL: https://forum.audiogames.net/post/488952/#p488952




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Chris via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

The biggest issue for me is that SAPI cannot be interrupted while a screen reader can. For example, playing a game like Entombed is ridiculous with SAPI, because I have to sit and wait until it's done reading everything. However, I assume speech interrupt for SAPI can be added to a program? I know Perilous Hearts does this, but all other games and programs I've used that utilize SAPI don't, so I can only conclude it's laziness or lack of knowledge on the part of the developer.

What exactly is OneCore? My understanding is that it's a completely different speech synthesizer that's not at all related to SAPI. It would be interesting to see this implemented into games going forward, provided the speech could be interrupted at any time.

I have to agree about the timing issues with screen readers. I don't find it an issue most of the time, but I definitely understand the problems you can run into. I wonder how NVDA's new speech system will help this? Are they coding a way to detect when the synthesizer stops speaking and convey this information to the Controller Client and your program?

I like the idea of different speech channels for different pieces of information. A Hero's Call does this well, so it would be interesting to see where this could go in future projects. My only complaint about screen reader support right now is that literally any keystroke will silence speech, which can be annoying in some situations. However, it's also a blessing for the speech interrupt reasons I discussed above. Would it be possible to implement SAPI, OneCore, and the screen reader in one project? Then you could have different synthesizers speaking different pieces of information.

While we're talking about screen readers, I have a couple more questions that could raise points that haven't been discussed yet. Is it possible to send information to Narrator like you can with NVDA? I assume not, since there isn't an API to my knowledge, but I could very well be wrong. I think Narrator output would be particularly cool, especially when Narrator is a built-in component of Windows and is getting better all the time.

My other question concerns Braille. As far as I know, none of the games that use screen readers can output to braille displays. Why is this? Is it a limitation of BGT, NVDA, the NVDA Controller Client, or a combination of these things? I am not a JAWS user, so I can't speak to JAWS, although I know it doesn't really play nice with games or any application that wants to use the keyboard. I know Braille output may not be practical in all games, but it's something that has interested me for a while.
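On the interrupt question: SAPI 5 itself can be interrupted if the program asks for it. Speaking asynchronously with the purge flag makes each new utterance cut off the previous one. The snippet below is a bare sketch of that standard SAPI usage, with no error handling; the strings are made-up examples.

// SAPI 5 interruptible speech: SPF_ASYNC returns immediately, and
// SPF_PURGEBEFORESPEAK cancels anything still speaking.
#include <windows.h>
#include <sapi.h>

int main()
{
    CoInitialize(nullptr);
    ISpVoice* voice = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_SpVoice, nullptr, CLSCTX_ALL,
                                   IID_ISpVoice, reinterpret_cast<void**>(&voice))))
    {
        voice->Speak(L"You step into a long corridor.",
                     SPF_ASYNC | SPF_PURGEBEFORESPEAK, nullptr);
        voice->Speak(L"A zombie lurches toward you!",
                     SPF_ASYNC | SPF_PURGEBEFORESPEAK, nullptr);  // cuts off the first line
        Sleep(3000);  // give the second utterance time to play in this toy example
        voice->Release();
    }
    CoUninitialize();
    return 0;
}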

URL: https://forum.audiogames.net/post/488912/#p488912




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : devinprater via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Yeah, OneCore and SAPI5 both are pretty laggy and pause way too much after clauses and sentences. It's so bad that the Narrator team now simply overrides the pauses to make them shorter, lol.

URL: https://forum.audiogames.net/post/488917/#p488917




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : devinprater via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

I prefer screen reader output, very much because of synthesizer usage. In Manamon, eSpeak seems to be able to pronounce Manamon names correctly, whereas Microsoft David cannot. Also, has anyone used the OneCore voices to read a good deal of text? Try it here. Do you hear the amount of time taken between sentences? I hate using it for that reason, and because David sounds even more robotic, intonation-wise, than eSpeak, and that's saying something. Microsoft may care a bit about accessibility, but for TTS, no. Windows OneCore and SAPI 5 make me /want/ to use eSpeak or Eloquence, so if I have to use one of those voices, the game becomes pretty boring to me.

URL: https://forum.audiogames.net/post/488911/#p488911




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : pitermach via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Most of the time, I prefer screen reader output, for the reasons already described: mainly voice customization and responsiveness. But also, regarding the translations, while it's great that you have a good localisation framework in place, you can't guarantee that you'll be able to support every language that your players might speak. Even if these add-ons introduce delay and require an internet connection, they can be the difference between someone being able to play the game, even if the translation is far from perfect, and not being able to play it at all.

Finally, another option that in my opinion could be worth exploring, and that no one seems to have brought up yet, is having multiple speech channels. I can think of only two games that do this, but it makes them much more enjoyable. In A Hero's Call, you can set the game to use a screen reader, but some messages that require precise timing, like damage output in battles, are always spoken with SAPI. This is great because for menuing, reading quests and navigation messages I can use my preferred speech settings, while the flow of battles is not interrupted by having to additionally press enter to advance the text. AHC also makes great use of SAPI by positioning the voices relative to where the characters are on screen, which makes the battles even easier to follow.

Another game which does this is a Chinese MMORPG called Dreamland. Again, you have two speech channels you can configure: one used for general messages and user interface navigation, the other used for reading damage output, system messages like experience gains and loot pickups, as well as chat from other players. This is great for pretty much the same reason: I can do all the menuing and select targets and skills with my preferred voice while SAPI keeps me updated on how a battle is going. This is particularly valuable here because in a game where you have one speech channel and use a screen reader, it's way too easy to interrupt an important system message or chat by pressing other keys to navigate the interface. Having a separate voice handling those not only makes this impossible, but also means they're much easier to hear.
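A small sketch of this two-channel arrangement: interface messages go to the player's screen reader and are interruptible, while timing-critical battle messages go to a dedicated SAPI channel so menu keystrokes never cut them off. SpeechOutput and its backends are assumed types, not a real library.

// Route messages by category to one of two speech channels.
#include <memory>
#include <string>

class SpeechOutput {
public:
    virtual ~SpeechOutput() = default;
    virtual void speak(const std::string& text, bool interrupt) = 0;
};

enum class Channel { Interface, Battle };

class SpeechRouter {
public:
    SpeechRouter(std::unique_ptr<SpeechOutput> screen_reader,
                 std::unique_ptr<SpeechOutput> sapi)
        : ui_(std::move(screen_reader)), battle_(std::move(sapi)) {}

    void say(Channel channel, const std::string& text) {
        if (channel == Channel::Interface)
            ui_->speak(text, /*interrupt=*/true);      // menus and navigation: cut off stale speech
        else
            battle_->speak(text, /*interrupt=*/false); // battle log: queue so nothing is lost
    }

private:
    std::unique_ptr<SpeechOutput> ui_;
    std::unique_ptr<SpeechOutput> battle_;
};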

URL: https://forum.audiogames.net/post/488909/#p488909




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Dark via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Philip, I use Vocalizer Daniel on Windows, and Oliver on iOS. Unfortunately, with Nuance's silly licensing, I cannot use Vocalizer for Sapi output despite having purchased it to use with NVDA. Whilst I probably could buy a version of Vocalizer to use with Sapi, firstly, as I said, I've noticed Windows 10 is far less friendly about using different Sapi voices than Windows 7 was, and secondly, the Vocalizer voices are far from cheap, and the amount I use Sapi these days has shrunk drastically compared to back in the Supernova days.

Good luck with the new project; I'll be looking forward to seeing what it turns out to be.

URL: https://forum.audiogames.net/post/488903/#p488903




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : RTT entertainment via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Yes! I know exactly what you’re doing! You are developing a secret mind control project along with the government that will give you access to our screen readers. After you have gained access, you will set our speech rates to 0 and set our punctuation levels to all. You will then install Space Attack and Q9 on our computers and make us play for hours on end for $0.50 an hour. LOL. Anyway, I’m glad that you have got something in the works.

URL: https://forum.audiogames.net/post/488902/#p488902




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : pulseman45 via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

I would say it depends on the game, or even what you are doing in said game. Manamon 2 seems to be a pretty good example: battles are smoother with a screen reader since you can interrupt more stuff than with SAPI, but the tutorials in the Music Hall are much harder to follow with a screen reader than with SAPI. Now, perhaps the game could have been coded so that both issues could have been avoided, but given that I'm hardly a coder myself, I won't insist too much on that. Still, here is an example of a game where the user has the choice between screen reader and SAPI, and having the choice is neat, for the reasons I mentioned.

On the other hand, I didn't try Entombed with a screen reader, but I guess the battles would sound a bit messy; just a supposition, though.

URL: https://forum.audiogames.net/post/488901/#p488901




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@18
Of course I do have my reasons for posting this topic, other than just idle curiosity. But what those reasons may be, I shall keep close to my chest for now. *Smiles*

Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/488899/#p488899




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@Dark
Thanks for your detailed post! Out of interest, what voice do you use? Is it available through Sapi, or only through your screen reader? One situation where I can definitely see screen reader output being useful for many people, as I mentioned in another post, is when the screen reader has access to voices that the game does not.

@17
This is actually very close to my own personal preferences as a gamer, rather than as a developer. When dealing with walls of text the screen reader is infinitely preferable, while for short pieces of text I tend to prefer Sapi for its responsiveness, especially with the silence trimming workaround I described earlier. Nothing beats prerecorded speech when it comes to character acting and atmosphere, but sometimes the game content is of such a dynamic nature that prerecording everything is far from practical.

Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/488898/#p488898




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : RTT entertainment via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Philip, I think you just made a huge mistake, lol. JK. Are you cooking up a secret invention in that wonderful code-filled mind of yours? Lol, sorry, just had to ask. I mean, why would you post a topic like this? Lol.

URL: https://forum.audiogames.net/post/488895/#p488895




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Dark via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@Philip, this is an interesting thing for me to consider specifically, since up until 2017 I used Supernova as my main screen reader and it was rarely supported. This meant I got used to using Sapi in a lot of games on Windows. Since 2017, however, as I've switched to NVDA, I've had screen reader support, and whilst I certainly don't mind playing games with Sapi now and again, as I might need to do if playing Jim Kitchen's games for example, I still would prefer screen reader support.

Apart from the issues of lag and customisation already mentioned, one major factor for me is neutrality. I'd argue personally that a screen reader, being the thing that most people use to read their emails, websites, tax returns, whatever, approximates printed text for a sighted person. That being said, when reading text, what you want is clarity: being able to quickly understand the content of what is being said rather than the sound of the voice itself. Therefore for me, it helps to have my synth voice as close in accent and timbre to my own as possible, that is a man with a roughly baritone speaking voice with an English accent (as opposed to an American, Australian or Scottish accent). This isn't to say I have a problem understanding female voices, or American accents etc. (I'm married to a lady with an American accent); it's just that when I hear a synth voice, there will always be a fraction of a second of mental sublimation in which I must process the content of the speech independently from the voice, a point when I focus on what is being said, not what is saying it. The less close the voice I'm reading with is to my own mental voice, the longer this initial gap; indeed, that is one reason I personally really do not like Eloquence, eSpeak or highly synthetic voices, and use Vocalizer Daniel as my voice of choice.

Obviously I can get used to using something else if I need to. As I said, I played games with Sapi for years, and indeed the first time I used Sapi I was stuck with Microsoft Sam, due to an error on XP that made no other voices workable (and we all remember how great Microsoft Sam was). I can't deny, though, that if I'm playing a self voicing game on iOS which by default uses an American female voice, for a while I will be listening to the accent first. I suspect this is true for everyone. Each person has their own preference for voice tone, gender and dialect or accent, possibly similar to their own natural voice, possibly not (my lady also much prefers listening to English male voices, possibly just a voice she's used so much it is familiar). I personally don't really get why so many people like Eloquence, but obviously everyone is entitled to their own preferences.

Whilst you could undoubtedly program a game to use Sapi or another speech engine and let the player customise factors such as tone, volume, pitch etc., you ultimately can't change the gender and style of the voice; indeed, one minor irritation I have in Windows 10 is that even if I try to change it, Sapi always seems to default to Microsoft Hazel.

Of course, this is a matter of bare preference only, and I personally would always try to get used to a game voice if I could (I remember playing the early versions of Sound RTS with eSpeak).

Also, obviously this is not counting situations where the synth voice is part of the game itself, such as when Tom Ward recorded a voice very much like the Enterprise computer for Star Trek Final Conflict, or when Vipgameszone recorded distorted synth samples to create robot characters; indeed, I've seen a bit of this recently with the Alexa voice selection in games like The Vortex.

Hope some of this makes sense.

URL: https://forum.audiogames.net/post/44/#p44




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : crashmaster via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

To be honest, it depends on the game. Some stuff works better with a screen reader, other stuff with Sapi. Sometimes a screen reader can interrupt something before the end of what it is saying. Now, if you need to massively interact with something like big text blocks, I agree a screen reader is good for that. However, with high quality Sapi being as it is on faster systems, in some cases where it's just a few strings, Eurofly etc., it's better with Sapi. In Beatstar, yeah, you can use NVDA, but there are key conflicts, and while you can change the keyboard layout, I find it works a lot better with Sapi. With the Code Factory Eloquence and Sapi synths being reasonably priced through A T Guys, there is no point in using a screen reader in, say, a simulation; Lone Wolf, interactive fiction and a few other text heavy games excluded. Now, the best form of self voicing is if it's voiced within the game itself with human speech: no delays, but a larger file size and more time to get the voices done, etc. Now, I have heard audiobooks cheaply done with a screen reader or synth voice; compare that with a human, well.

URL: https://forum.audiogames.net/post/46/#p46




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@14
I think these are all valid points. I had considered them, except for the interrupt key. I think that any professionally written game would have a way to interrupt the speech as necessary, and I can also picture a scenario where the control key is used to fire a gun, so that you end up killing someone every time you want the speech to be quiet, which would be an amusing side effect. But I can definitely see that the need to buy a Sapi version of the given voice is a very real concern for some people.

Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/45/#p45




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : burak via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Hello. Using a screen reader in audio games feels more comfortable to me. I can use the synthesizer I like, with the pitch and speed I want, without buying a Sapi version. There's also the ability to interrupt it whenever I want, which I don't have if the developer doesn't think to add it.

URL: https://forum.audiogames.net/post/43/#p43




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@11 I have considered speech dictionaries, and that definitely makes a case for screen reader output if the user has spent a lot of time customizing their dictionary. My question would be: would that speech dictionary generally be relevant for in-game text? My guess is that usually you would make changes for names, places and so on, which I don't think would be particularly likely to appear as part of game text.
As I mentioned before, I am not comparing a game that uses prerecorded speech versus a game that uses screen readers. I am only comparing screen reader output with the lower level alternatives where the game communicates directly with the speech engine. If we compare text to speech in general versus recorded speech, that's a completely different discussion. Still very much a relevant question, but not what I was getting at in this particular topic.
Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/488879/#p488879




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@7 You are absolutely right that a lot of Sapi voices suffer from lag; in fact this is also true for the OneCore voices. However, the game can fix this at runtime to a great extent. The reason why many speech engines appear to lag is that they actually insert silence, or audio that is very close to silence, at the beginning of the output. If the game can get access to the audio in memory, this silence can easily be trimmed, which results in a significant decrease in latency.
@8, @9 and @10 I don't think it is ridiculous. It is a preference, and there's nothing wrong with that. My objective with this topic is not to question anyone's individual preferences, but rather to get at the specific reasons that form the basis for that preference. If a game could be designed in such a way as to avoid the problems that users try to solve by having their output spoken by the screen reader, that would give us the best of both worlds. There would be enough customization options to make users happy, while still giving the developer the opportunity to design the game exactly as they wish. If games went out of their way to provide customization options for the speech, would it then be acceptable not to provide screen reader support? Of course, the screen reader could still be running while you play the game even if the game doesn't speak through it, so that is a separate issue altogether and one that does not really present a problem in most cases.
Regarding the new NVDA speech refactor, that is a huge step in the right direction and it will make NVDA support a viable alternative for many more types of games. However, in my case I want to be able to tailor the soundscape to such an extent that not having the ability to control the speech volume in relation to the game audio becomes a problem. To give a comparison, it's like mixing a song without being able to control the volume of the vocals. You can still enjoy the song and you can make it sound quite good, except for this one part that you don't control. A way around this would be to let the end user control the over-all volume of all the audio that the game outputs, including both sound effects and music. But this would be unnecessary if the game had control over the entire user experience in the first place, including the speech output.
Kind regards,
Philip Bennefall
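To make the trimming idea concrete, here is a minimal sketch in C of the kind of pass a game could run over a buffer of raw speech samples before playing them. It assumes 16-bit mono PCM and an arbitrary amplitude threshold; the function name and the threshold value are purely illustrative and are not taken from any existing library.

    #include <stdlib.h>
    #include <string.h>

    /* Drop near-silent samples from the start of a 16-bit PCM buffer.
       threshold is the absolute amplitude below which a sample counts as
       silence; a value around 300 to 400 is roughly -40 dBFS for 16-bit
       audio. Returns the number of samples remaining in the buffer. */
    size_t trim_leading_silence(short *samples, size_t count, short threshold)
    {
        size_t start = 0;
        while (start < count && abs(samples[start]) < threshold)
            start++;
        if (start > 0 && start < count)
            memmove(samples, samples + start, (count - start) * sizeof(short));
        return count - start;
    }

A game can only do this because the speech is rendered to memory first; once the audio has already gone to the sound card through a screen reader, the chance to trim it is lost.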

URL: https://forum.audiogames.net/post/488877/#p488877




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : CAE_Jones via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

The levels of customization allowed by a screen reader are generally not available in a self-voicing game unless the game effectively recreates a screen reader. Furthermore, some customizations, such as speech dictionaries, develop over time and are not easily replicated when setting up a game without just finding and referencing the screen reader's settings directly. It just seems simpler overall to output to a screen reader when possible, with some basic rate/pitch/volume settings available when defaulting to SAPI. Of course, I also like the idea of allowing games to use pitch, rate, volume and similar modifications to stand in for different characters, which is hard to do via a screen reader but easy with something lower level that interfaces with the TTS directly, or through a standardized interface such as SAPI. I'm not aware of anyone actually trying this, though, outside of some BNS games and a couple of my own attempts. Usually, people either don't bother, or go all the way with pre-recorded speech. I tend to lean toward maximum customization, I suppose. And that includes the option of just using a screen reader for everything.
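As a rough illustration of the per-character idea, a game could keep a small table of voice parameters and apply them before handing each line to whatever TTS layer it talks to. The struct, the example values and the speak_line placeholder below are hypothetical and only sketch the approach; they are not any particular engine's API.

    #include <stdio.h>

    /* Hypothetical per-character voice settings applied before each line. */
    typedef struct {
        const char *name;
        float rate;   /* 1.0 = engine default speaking rate */
        float pitch;  /* 1.0 = engine default pitch */
        float volume; /* 0.0 .. 1.0 */
    } character_voice;

    static const character_voice narrator = { "Narrator", 1.0f, 1.0f, 1.0f };
    static const character_voice robot    = { "Robot",    0.9f, 0.6f, 0.8f };

    /* Placeholder: a real game would forward these values to its TTS layer
       rather than printing them. */
    static void speak_line(const character_voice *v, const char *text)
    {
        printf("[%s rate=%.2f pitch=%.2f vol=%.2f] %s\n",
               v->name, v->rate, v->pitch, v->volume, text);
    }

    int main(void)
    {
        speak_line(&narrator, "You enter the engine room.");
        speak_line(&robot, "Warning. Core temperature rising.");
        return 0;
    }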

URL: https://forum.audiogames.net/post/488875/#p488875




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : RTT entertainment via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Screen reader support should be a must in all games that rely on text to speech. There are several reasons for this. First of all, I prefer having a screen reader running just in case the game crashes or my computer needs my attention. You also then have the opportunity to customise the voice and speed exactly as you like.

URL: https://forum.audiogames.net/post/488874/#p488874




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : ashleygrobler04 via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Ok, well my answer stays the same. I still prefer the screen reader because I can customize the speech.

URL: https://forum.audiogames.net/post/488866/#p488866




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : Slender via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

I prefer using a screen reader for output, since I can use my synthesizer of choice, and SAPI also seems to suffer from an inherent lag. That said, I see why developers don't like screen readers: they can't tell whether the synth is speaking, and so can't do things like only send the next string once the previous one is finished. NVDA in particular may be able to address this now that the speech refactor has officially landed in 2019.3.

URL: https://forum.audiogames.net/post/488864/#p488864




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : SkyLord via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@6, yep, sure. Copying output to a buffer with the speech history add-on for NVDA, for example. That could also be achieved by the game's own programming, sure. And I just prefer screen reader output because I feel better that way for some reason. That may be ridiculous, but that is how it is.

URL: https://forum.audiogames.net/post/488865/#p488865




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@4 In this case I am not comparing prerecorded speech with screen reader support. I am comparing screen reader output versus having the game render its text to speech directly through an engine, whether via Sapi, OneCore, Flite, ESpeak and so on. In that context you can also customize settings such as speech rate, pitch and volume, plus a lot more.
@5 While building my current framework I have spent a significant amount of time on localization support, that is to say, the ability to easily translate the game to other languages as long as you have a translator on hand. So the need for the screen reader to do that would disappear, particularly because a translation performed by the game itself has access to a lot more context sensitive information that is highly relevant for translation. If a hypothetical add-on simply connected to Google Translate, for example, it would introduce an unacceptable amount of latency as well as require an active Internet connection at all times.
Aside from translation, are there any other add-ons provided by the screen reader that you would consider beneficial?
Kind regards,
Philip Bennefall
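For what it is worth, the kind of in-game localization described here often comes down to a keyed string table per language, along these lines. The table layout, the tr helper and the Swedish example strings are only a sketch under that assumption; they are not drawn from Philip's actual framework.

    #include <stdio.h>
    #include <string.h>

    /* One entry in a per-language string table, keyed by a stable identifier. */
    typedef struct {
        const char *key;
        const char *text;
    } locale_entry;

    static const locale_entry swedish[] = {
        { "menu.start",  "Starta spel" },
        { "menu.quit",   "Avsluta" },
        { "msg.you_win", "Du vann!" },
    };

    /* Look up a key in the active table; fall back to the key itself so a
       missing translation is still readable during development. */
    static const char *tr(const locale_entry *table, size_t count, const char *key)
    {
        for (size_t i = 0; i < count; i++)
            if (strcmp(table[i].key, key) == 0)
                return table[i].text;
        return key;
    }

    int main(void)
    {
        size_t count = sizeof swedish / sizeof swedish[0];
        printf("%s\n", tr(swedish, count, "menu.start")); /* prints "Starta spel" */
        return 0;
    }

Because the game owns the table, it also knows which language every string it speaks is in before the text ever reaches the synthesizer.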

URL: https://forum.audiogames.net/post/488860/#p488860




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : SkyLord via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Because a screen reader has add-ons, for example translation, which Sapi does not support in most cases. I don't think it would be a problem to press Enter a few more times, as in Manamon/Manamon 2. At least it's not a problem for me.

URL: https://forum.audiogames.net/post/488858/#p488858




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : ashleygrobler04 via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

I would prefer a screen reader. The reason I would prefer this is that it makes the game smaller than having to ship a whole bunch of audio files for speech, and it is just a lot easier to implement in almost all games! The last reason is that I can customize the speaking rate of the speech, and the pitch.

URL: https://forum.audiogames.net/post/488856/#p488856




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

@2 Thanks for the feedback. First, to briefly address the film score themes question: I do not have any plans to make it free. It is still available from the same resellers as it always was, so nothing has changed except that the entry for royalty free music has been removed from the menu on the blastbay.com website.
To get back on topic, I can certainly see that screen reader support would be attractive because it is what you are used to. But I think it would be useful to break that down a little. I'm assuming that what this ultimately boils down to is wanting to use a specific voice at a specific speaking rate, and possibly with a specific pitch setting. Looking at Jaws and NVDA, the most common choices of synthesizer are probably Eloquence, ESpeak, the Microsoft OneCore voices and Vocalizer. As an example, what if the game provided support for the OneCore voices out of the box? This would be especially useful for games that are made available in multiple languages, since a language pack for Windows 10 usually also ships with a OneCore voice in that same language. To be clear, this does not appear to be true for Sapi 5, where you only get two US English voices (at least that is all I have on my machine), but OneCore offers a much wider array of options.
I would love for other people to also chime in, for or against. Thanks!
Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/488852/#p488852




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : manamon_player via Audiogames-reflector


  


Re: Screen Reader Support in Audio Games

Hello. I think yes, because the screen reader is the thing we are familiar with, and it can help us more in audio games. Sorry for the off-topic question, but could you please make a topic about the film score things and whether they are going to be free for all or not? Thanks.

URL: https://forum.audiogames.net/post/488840/#p488840




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Screen Reader Support in Audio Games

2019-12-28 Thread AudioGames . net Forum — General Game Discussion : philip_bennefall via Audiogames-reflector


  


Screen Reader Support in Audio Games

Hi all,
From reading this forum for a number of years, I believe there is a fairly strong consensus that screen reader support should be available in audio games that use text to speech for some or all of their output. If you agree with this, I would be curious to hear what reasons you have for considering it an important feature. Similarly, if you don't feel it is important, why not?
I do have my own views on this and will be glad to expand upon them, but first I would like to have an open discussion just to get a feeling for the arguments for and against.
Thanks!
Kind regards,
Philip Bennefall

URL: https://forum.audiogames.net/post/488837/#p488837




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector