Hi,

Here's an experimental proposal I'd like to throw out there.  All types of
suggestions, criticism, and questions are welcome.

- Dominic
*Overview*
This experimental API exposes information about focused controls in the
native UI, such as dialog boxes and the location bar.  Specifically, it allows
an extension to determine the currently focused control and listen for events
such as focus changes, control selection, and text editing.  It can optionally
generate keyboard events, too.

Existing screenreaders on Mac, Windows, and Linux do a good job of exposing
simple dialog boxes, but a poor job of exposing complicated web pages.
 JavaScript-based screenreaders can often provide much better support for
browsing the web, but are poor at exposing the user interface of the
browser.  This solution empowers JavaScript-based screenreaders.

*Use cases*
1. Build a complete screenreader as a Chrome extension, similar to the
"FireVox" screenreader for Firefox (see http://www.firevox.clcworld.net/).
 This would enable developers to create custom accessibility solutions for
people with all sorts of special needs, using JavaScript, that will run on
Chrome on any platform.

2. Enable pure-JavaScript testing of browser user-interface elements, like
interacting with controls in a dialog box.  This could potentially simplify
some UI tests.
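For example, a test might tab through a dialog and activate a control purely
from JavaScript.  This is only a sketch: the `chrome.experimental.accessibility`
namespace, the `simulateKeyPress` call shape, and the `keyCode` descriptor are
assumptions for illustration, not a finalized signature.

```javascript
// Build the key-press sequence that tabs forward n times and then
// activates the focused control with Enter.
function tabAndActivate(n) {
  var keys = [];
  for (var i = 0; i < n; i++) {
    keys.push({keyCode: 'Tab'});
  }
  keys.push({keyCode: 'Enter'});
  return keys;
}

// Replay the sequence through the proposed API (guarded so the sketch
// is inert outside an extension context).
if (typeof chrome !== 'undefined' && chrome.experimental &&
    chrome.experimental.accessibility) {
  tabAndActivate(3).forEach(function(key) {
    chrome.experimental.accessibility.simulateKeyPress(key);
  });
}
```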

*Could this API be part of the web platform?*
No.

*Do you expect this API to be fairly stable?*
Yes.

*What UI does this API expose?*
It does not have a UI of its own.

*How could this API be abused?*
Exposing information about focused controls and listening to focus and text
editing events should not be risky; most of the information in these controls
is already exposed via other APIs.

Enabling automation of keyboard events is risky; a malicious extension could
control the user's browser.  This should only be enabled for trusted
extensions or via a flag.
*How would you implement your desired features if this API didn't exist?*
Providing accessibility or UI automation today requires writing
platform-specific, low-level code that is beyond the reach of most
developers.  This API could enable new innovation in accessibility.

*Are you willing and able to develop and maintain this API?*
Yes.

*Draft API spec*
This changelist contains the API spec and an implementation that exposes
information about most of the controls in the Options dialog for GTK:
http://codereview.chromium.org/402099
In a nutshell, the proposed API exposes two functions:
* getFocusedControl
* simulateKeyPress

and five events:
* onOpen (for a dialog or context menu, for example)
* onClose
* onFocus
* onSelect
* onText
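As a rough sketch of how an extension might consume these, here is a minimal
handler that announces the focused control.  The
`chrome.experimental.accessibility` namespace and the control's
`role`/`name`/`value` fields are assumptions for illustration, not part of the
draft spec linked above.

```javascript
// Format a focused control for output, e.g. to a text-to-speech engine.
// The {role, name, value} shape is an assumption for illustration.
function describeControl(control) {
  var parts = [control.role, control.name];
  if (control.value !== undefined && control.value !== '') {
    parts.push(control.value);
  }
  return parts.join(', ');
}

// Wire the helper to the proposed API (guarded so the sketch is inert
// outside an extension context).
if (typeof chrome !== 'undefined' && chrome.experimental &&
    chrome.experimental.accessibility) {
  var acc = chrome.experimental.accessibility;

  // Announce each control as it receives focus.
  acc.onFocus.addListener(function(control) {
    console.log(describeControl(control));
  });

  // Query whatever control is focused right now.
  acc.getFocusedControl(function(control) {
    if (control) {
      console.log(describeControl(control));
    }
  });
}
```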

Accessibility APIs such as MSAA on Windows are significantly more
complicated than this, as they need to support a huge range of possible
applications, including applications that were not designed with
accessibility in mind.  The main reason that this API can be much simpler is
that we assume that the entire user interface is already accessible via the
keyboard (or should be).  Instead of allowing tools to traverse the user
interface hierarchy and determine how it should be presented to a disabled
user, we can just expose information about the control the user has
navigated to using the existing keyboard commands.

--

You received this message because you are subscribed to the Google Groups 
"Chromium-extensions" group.