Emacs with BRLTTY and Emacspeak, in my opinion, offers a superior interface to anything that proprietary operating systems (including Microsoft's) currently offer. I wrote my Ph.D. thesis largely in the Emacs environment.

The key to this success is the extensibility that is built into Emacs, which T.V. Raman - the computer scientist who developed Emacspeak - could then take advantage of to create a highly effective interface. If we had more such extensible and open systems, perhaps there would be more such solutions.

BRLTTY offers excellent braille support in the Linux console (while also supporting graphical environments such as GNOME/Orca). It's another example of success.

Nonvisual access to graphical environments is a different kind of problem, requiring work across a wide variety of projects. It isn't something that one or two people can implement relatively independently, so it will require regulation or some other investment of resources to bring the quality up to a high standard.

On 30/5/21 8:57 pm, Bill Cox wrote:
The EU regulations are good news!  Still, regulations only force companies to make products accessible; making them delightful to use while blind is not required.  Gnome is already accessible, and every problem the OP ran into has a workaround.  That is generally all the law requires.

What makes Windows a11y nice is that Microsoft made it a priority at the CEO level, in a militaristic company.  Note that I use Linux exclusively, but I miss the speed of NVDA with Microsoft Word, for example.  Google Docs is slow, so I copy docs into gedit to read them quickly using Orca.  Note that Google is also a slime mold.  Google's a11y devs are awesome!  I know a few of them, and some of them are my heroes.  But a slime mold has trouble delivering delightful a11y.

Bill

On Sun, May 30, 2021, 7:01 AM Jason White <ja...@jasonjgw.net> wrote:

    I think the regulatory environment may change attitudes and
    resource allocation somewhat. For example, computers and operating
    systems are explicitly required to meet accessibility standards
    under the European Accessibility Act, which applies to products
    placed on the European common market. This isn't yet in force, but
    it will be later in the decade. There may be other regulatory
    changes elsewhere.

    This could have significant consequences in terms of legal
    liability for commercial distributors and for hardware vendors who
    supply pre-installed systems. I think it is in the interests of
    Linux distributors to move ahead of the regulations by
    coordinating through the GNOME Foundation, Linux Foundation and
    other projects to commit additional resources to accessibility
    efforts. At the moment, there's a community with great expertise
    which is doing excellent work, but there's a need for additional,
    ongoing commitments in order to improve the quality of
    implementation and to make it sustainable. We may be heading
    toward a point at which distributors can't integrate code until it
    satisfies accessibility criteria, for legal reasons, and that
    would place pressure (positively, by way of developer education
    and awareness, and negatively, by way of the risk of legal
    liability) on decentralized development processes.

    On 30/5/21 8:02 am, Bill Cox wrote:
    GNU-Linux's accessibility limitations compared to Windows are
    basically baked in, and I doubt they will change.  It is no one's
    fault.  It is simply a result of distributed development with no
    central leader.  However, you can use NVDA to access Linux
    through a terminal window, and most of Linux's goodness will be
    accessible this way.

    I will try to explain how I see the situation below.  It is
    certainly not the fault of anyone working on Orca, or any a11y
    developer at all.  It is just life in the land of distributed
    open source projects.

    TL;DR

    Janky a11y on Linux is not the fault of the various a11y
    developers, who genuinely care about the needs of blind folks,
    and IMO do a great job with very limited resources.  The problems
    are baked into GNU-Linux in various ways.  One way to look at
    this is that Microsoft is like the military, with a top commander
    issuing orders, which are followed by everyone, while Linux is
    more like a slime mold, with no central nervous system.  There is
    good and bad with both approaches, and unfortunately, a11y
    support will generally be better in a military-style organization,
    assuming that the top leaders have made a11y a priority.

    Bill Gates mandated that accessibility be a top priority, and
    attended accessibility meetings personally.  That is why Windows
    is as accessible as it is.  Ubuntu is an open source project, and
    it is simply not possible to force every developer to get on
    board.  IIUC, Steve Jobs did not care about a11y, which is why
    Apple had non-accessible products for so long, and IIUC, Tim Cook
    does care, and was able to force Apple to embrace a11y.  With
    Linux, we have various leaders who do care, and some who don't.
    The result is that a11y on Linux is janky and probably always
    will be.

    There are many examples I can point to.  For example, the main
    developer of PulseAudio cares about music, but not as much about
    screen reader users, which is why PulseAudio has broken a11y so
    many times.  Some devs of the low-level GTK widgets refuse to
    make pixmaps capable of carrying a text description, which is why
    the icons remain inaccessible in many desktop environments on
    Linux.  Gnome does better than any other Linux desktop
    environment, in my experience, but Gnome can't make
    non-accessible widgets magically accessible.  While in most
    cases the goals of free software advocates are in line with those
    of a11y advocates, these groups tend to differ on support for
    commercial closed-source software, such as text-to-speech engines,
    which is one reason we have limited options on Linux.  I use the
    Voxin voice, which is the same engine as Eloquence, and if I were
    not a programmer capable of hacking the speech stack, I doubt I
    could use it consistently.
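
    For concreteness, here is roughly what giving a widget an
    accessible name and description looks like today with GTK 3 and
    PyGObject.  This is just an illustrative sketch, not my actual
    patch, and it uses a plain GtkImage rather than the old pixmap
    class I mentioned:

        # Sketch: attach an accessible name/description to an image
        # widget so a screen reader such as Orca has something to say.
        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        image = Gtk.Image.new_from_icon_name("document-save",
                                             Gtk.IconSize.BUTTON)
        accessible = image.get_accessible()   # the AtkObject for this widget
        accessible.set_name("Save")           # short label spoken by Orca
        accessible.set_description("Save the current document")

    The hooks exist at the widget level today; the disagreement I ran
    into was over whether the low-level pixmap class itself should
    carry such a description.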

    A common reply to a11y advocates in the open-source community is
    that if you don't like the way it is, fix it yourself.  However,
    this is simply not realistic.  For example, I fixed the GTK
    pixmap class to add an accessible description, and attempted to
    merge this fix into the Vinux distribution.  I had to fork
    not just GTK, but all of Gnome to make this work.  I don't have
    the time to maintain a fork of the entire desktop just to make
    pixmaps talk.

    Another problem I've faced personally in the open-source
    community is dealing with folks' feelings.  For example, I have
    an entire alternate speech stack that can work with Orca, but
    this upset some of the speech-dispatcher devs who do very
    important a11y work.  I tried working with them, and to their
    credit, they did incorporate one of the most important changes I
    have in my stack: they moved the code to talk to the sound system
    into speech-dispatcher proper.  However, I keep most of my a11y
    code to myself, simply so as not to upset anyone.  Maybe if I
    understood people's feelings better, I could contribute more
    effectively, but from my point of view, I poke a random weak spot
    of the slime mold, and the whole thing freaks out.
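
    For anyone curious about the plumbing: a client (a screen reader,
    or an alternate stack like mine) normally talks to
    speech-dispatcher through its SSIP protocol, which the
    python3-speechd bindings wrap.  A minimal sketch, with an
    arbitrary client name:

        # Sketch: send text to speech-dispatcher via its Python bindings.
        import speechd

        client = speechd.SSIPClient("example-client")  # arbitrary name
        client.set_rate(50)     # -100..100; 0 is the default speaking rate
        client.speak("Hello from speech-dispatcher")
        client.close()

    The change I mentioned above means speech-dispatcher itself now
    owns the connection to the sound system, so a client like this
    never has to touch audio directly.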

    So, I hope that long-winded explanation helps you understand why
    Linux a11y is as janky as it is.

    Best regards,
    Bill






    On Sat, May 29, 2021 at 12:21 PM Jason White via
    gnome-accessibility-list <gnome-accessibility-list@gnome.org> wrote:


        On 29/5/21 4:29 am, Rynhardt Kruger via
        gnome-accessibility-list wrote:
        > I definitely think image recognition has improved a lot, both in
        > speed and accuracy. However, even a difference like 50
        > milliseconds may be noticeable by an experienced screen reader
        > user, especially if one uses speech at 400 words per minute or
        > more.

        A further difficulty is that any system relying on image
        recognition imposes the burden of errors on the user, whose
        ability to correct for them is limited.

        Image recognition might be useful, however, in automatically
        detecting errors in the implementation of accessibility APIs. I
        suppose that would be a research project.

        My understanding is that the GNOME Foundation has accessibility
        plans which include a new accessibility API in GTK 4, guidance
        for developers, and, possibly, better tools for automatically
        detecting implementation errors.
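
        As a very rough idea of what such a checking tool could look
        like, one can already walk the desktop's AT-SPI tree from
        Python and flag objects that expose no accessible name. This is
        only a sketch of the general approach, not the tooling GNOME
        has planned:

            # Sketch: walk the AT-SPI tree of running applications and
            # report objects of selected roles that lack a name.
            import pyatspi

            def check(acc, path):
                role = acc.getRoleName()
                if not acc.name and role in ("push button", "image", "icon"):
                    print("Missing accessible name: %s/%s" % (path, role))
                for child in acc:
                    if child is not None:
                        check(child, path + "/" + role)

            desktop = pyatspi.Registry.getDesktop(0)
            for app in desktop:
                if app is not None:
                    check(app, app.name or "unnamed application")

        A real tool would need to be far more careful about which roles
        genuinely require a name, which is presumably where the planned
        guidance for developers comes in.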

        I don't know whether GNOME developers also plan to fix the
        accessibility API and keyboard navigation of their own
        applications during the transition to GTK 4. Some proprietary
        operating system developers have been relatively successful in
        setting an accessibility policy for their software and
        implementing it reasonably consistently (e.g., Apple and
        Microsoft in recent years). So there are precedents that GNOME
        could surpass, given suitable project governance, developer
        education, and associated commitment of time and expertise. The
        GTK 4 initiative is an encouraging start.

