On Thu, 17 Aug 2023 18:49:11 GMT, Phil Race <p...@openjdk.org> wrote:

>> Artem Semenov has updated the pull request incrementally with one additional 
>> commit since the last revision:
>> 
>>   update
>
> There's a real tree of JBS issues related to this topic, and I see a 
> process problem.
> 
> The bug for this PR is https://bugs.openjdk.org/browse/JDK-8302687
> 
> But the CSR in this PR is listed as 
> https://bugs.openjdk.org/browse/JDK-8304499
> but that is actually the CSR for https://bugs.openjdk.org/browse/JDK-8302352
> which is an umbrella bug.
> 
> So this is wrong.
> The CSR must be the CSR for the bug in the PR, i.e. for something you will 
> actually push!
> Otherwise Skara and everyone else will get confused.
> 
> Then there's the overall question as to how important, or appropriate, this 
> API is.
> It seems like it's not really an A11Y API; it's an arbitrary text-to-speech API.
> I think of the A11Y APIs as being tied to the UI and making it accessible, not
> providing some other way of communicating something which isn't even in the 
> UI.
> Put it this way: if I am NOT using an AT to provide speech, how are you 
> communicating whatever it is to the user? Something changes in the UI, right?
> So the A11Y API will already be able to report that, so why do you need 
> this?
> Show me some equivalent cases and uses in platform AT APIs.

@prrace
"There's a real tree of JBS issues related to this topic, and I see a process 
problem.
The bug for this PR is https://bugs.openjdk.org/browse/JDK-8302687
But the CSR in this PR is listed as https://bugs.openjdk.org/browse/JDK-8304499
but that is actually the CSR for https://bugs.openjdk.org/browse/JDK-8302352
which is an umbrella bug.
So this is wrong.
The CSR must be the CSR for the bug in the PR, i.e. for something you will 
actually push!
Otherwise Skara and everyone else will get confused."

Thank you for your comment. I can close this PR and open a new one with a 
different number.

"Then there's the overall question as to how important, or appropriate, this 
API is.
It seems like it's not really an A11Y API; it's an arbitrary text-to-speech API.
I think of the A11Y APIs as being tied to the UI and making it accessible, not
providing some other way of communicating something which isn't even in the UI.
Put it this way: if I am NOT using an AT to provide speech, how are you 
communicating whatever it is to the user? Something changes in the UI, right?
So the A11Y API will already be able to report that, so why do you need this?
Show me some equivalent cases and uses in platform AT APIs."

This functionality is part of the a11y API: for announcements we use the 
capabilities of the screen reader.
For example, NSAccessibility API capabilities for Mac: 
https://developer.apple.com/documentation/appkit/nsaccessibilitynotificationname?language=objc
Or NVDA controller client for Windows: 
https://github.com/nvaccess/nvda/tree/master/extras/controllerClient
Using the regular system TTS tools is not a good idea: the default TTS 
settings on the user's system can differ from those of the screen reader, 
down to the voice itself, which is very confusing, and there may also be 
differences in speed, volume, and pace of speech.
Also, the built-in system TTS can speak at the same time as the screen 
reader, so the user would hear two voices at once. And since the system TTS 
is separate from the screen reader, it would not be possible to implement 
interruption of the current output.
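To make the interruption point concrete, here is a minimal Java sketch of the 
priority semantics an announcer needs. All names here are invented for 
illustration; this is not the proposed JDK API or any screen-reader API. The 
stub implementation records speech in a transcript so the interrupt behavior 
can be observed without a real screen reader:

```java
// Hypothetical sketch: an announcer whose priority flag decides whether a new
// announcement interrupts (discards) speech that is already queued.
interface Announcer {
    int DONT_INTERRUPT = 0; // queue after current output
    int INTERRUPT = 1;      // cancel current output, then speak
    void announce(String text, int priority);
}

// Stub back end that records what would be spoken, for demonstration only.
final class LoggingAnnouncer implements Announcer {
    private final StringBuilder spoken = new StringBuilder();

    @Override
    public void announce(String text, int priority) {
        if (priority == INTERRUPT) {
            spoken.setLength(0); // drop the pending speech queue
        }
        spoken.append(text).append('\n');
    }

    String transcript() {
        return spoken.toString();
    }
}
```

With a real screen-reader back end (e.g. an NSAccessibility announcement on 
macOS or the NVDA controller client on Windows), the INTERRUPT priority would 
presumably map to whatever platform mechanism cancels the current speech 
before speaking; that mapping is exactly what a standalone system TTS cannot 
provide.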

-------------

PR Comment: https://git.openjdk.org/jdk/pull/13001#issuecomment-1687540189
