This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


Which screen reader(s) should we focus on for initial development of JupyterLab accessibility? #14

Closed
telamonian opened this issue Oct 12, 2020 · 8 comments
Labels
area: best practices ✨ A11y best practices area: workflows Testing and processes to audit accessibility type: question 🤔 Further information is requested

Comments

@telamonian

At the last meeting of the jlab accessibility workgroup, it became clear that collectively we know very little about screen readers. One of the working group's main goals is to get jlab to work well with at least the most-used screen readers, but I'm not sure which one we should start with for our initial investigation/development.

Does anyone in the Jupyter community have hands-on experience using a screen reader? So far the best guidance I've dug up online is this mostly up-to-date survey, which suggests that NVDA would be the way to go: https://webaim.org/projects/screenreadersurvey8/#primary

[screenshots: charts from the WebAIM screen reader survey showing primary screen reader usage]

Both JAWS and NVDA have about 40% market share, but JAWS has been declining for 10 years while NVDA has been rising for about as long. So I'm leaning towards starting with NVDA, but it would be great if I could get some input from someone who has actual experience with these screen readers.

@jasongrout
Member

I don't have actual experience, but IIRC at the accessibility workshop we were all installing and using NVDA.

@choldgraf
Contributor

Perhaps this is something that @sinabahram @clapierre or @zorkow have thoughts on?

@sinabahram

At the risk of saying something we all know, I think it is important that the development be carried out against ARIA and WCAG standards, not to make a specific screen reader work. The reason for this is that there may be techniques you can use that make a particular screen reader happy but won't work for all of them, so following best practices will take you further than any single screen reader.

For testing, NVDA would be a good one to target on Windows, with JAWS being the other offering of note. On Mac, there is VoiceOver. Given the limited resources, starting with NVDA is an advisable path.

For something as technical and nuanced as this undertaking, you need to have a mastery over exactly what your approach will be as many moving parts exist here. I've offered to hop on some calls about this in the past.

I hope the above helps. Happy to discuss more.

@telamonian
Author

telamonian commented Oct 13, 2020

> At the risk of saying something we all know,

@sinabahram I think it would be safest for all of us if you just assume that we don't know. For example, until you mentioned it I didn't realize that NVDA and JAWS were both Windows-only (which is unfortunate...).

> I think it is important that the development be carried out against ARIA and WCAG standards, not to make a specific screen reader work. The reason for this is that there may be techniques you can use that make a particular screen reader happy but won't work for all of them, so following best practices will take you further than any single screen reader.

It's interesting you say that. From what I've read so far, I had been starting to get the impression that the exact opposite approach (i.e. special handling for each popular tool) was considered best practice, at least for certain DOM elements/use cases.

For example, what's your take on the common advice (given on MDN and elsewhere) that a button should always be labeled via its text contents (and not by, e.g., aria-label)?

If we follow said label-by-text-contents approach, it then becomes very convoluted to do icon-only buttons (of which there are currently dozens in the jlab UI). But if that's what it takes to make the buttons in jlab accessible, that's what it takes. So I figured we'd at least have to test that approach out, maybe do some live A/B comparison using a few different screen readers.

If you're telling me that it's better if we just use aria-label in the first place, that honestly saves me quite a bit of trouble.
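For concreteness, the two labeling patterns under discussion can be sketched as markup strings (a sketch only; the class names here are hypothetical, not JupyterLab's actual ones):

```typescript
// Two ways to give an icon-only button an accessible name.
// All class names ("jp-SaveIcon", "sr-only") are hypothetical.

// 1. Label by text contents: visually hidden text inside the button,
//    hidden via a CSS "sr-only" class but still read by screen readers.
const textLabeledButton =
  '<button class="jp-SaveIcon"><span class="sr-only">Save notebook</span></button>';

// 2. Label via aria-label: no inner text; the attribute supplies
//    the accessible name directly.
const ariaLabeledButton =
  '<button class="jp-SaveIcon" aria-label="Save notebook"></button>';
```

Either way, the icon itself stays purely decorative; the difference is only where the accessible name lives.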

@manfromjupyter

manfromjupyter commented Oct 13, 2020

@sinabahram addressed the aria-label point here if I may save him a couple minutes here hahhaah: jupyterlab/frontends-team-compass#98 (comment)

In regards to the screen reader conversation, I have used JAWS quite a bit, and all of my coworkers and customers with accessibility requirements use and are told to use JAWS. Why not NVDA? The only reasoning I have is that it's just been that way for a long time and people have been using it comfortably for over a decade. Although it's declining in popularity, I admit, I agree with @sinabahram; best to just code for the standards of the internet. If it brings any comfort, I at least would be happy to test everything in JAWS in case some standard or nuance it should support is broken or experiencing a minor hiccup.

Agreed, we can't go making one work over another, but if it doesn't break others and would assist some users, I'm in favor. One classic example I can think of is something as minor as capitalizing the "P" in "python" (and then downcasing with CSS if it's important) so it reads as we would read it and not as "pith on", which is a legitimate issue with JAWS and its dictionary lol. Not sure how NVDA handles it.

@sinabahram

It would be good to know the mechanism that will be used for testing and validation. Code against standards. Test with assistive technologies.

Please note that CSS will not fix that caps problem for you. When you change case with CSS, the transformed text is what gets exposed to assistive technology, thereby affecting a screen reader's output. Having said that, "Python" and "python" announce the same over here, and that's not screen reader specific; that's text-to-speech (TTS) engine specific, which is, as you say, quite minor compared to the issues that first need to be resolved here.

@telamonian
Author

> It would be good to know the mechanism that will be used for testing and validation.

@sinabahram I agree. Opening this issue is my attempt to get the ball rolling on the process of the Jupyter community figuring this out.

Hypothetically, if we were to start by working on improving compatibility with NVDA, is there a particular dev/testing workflow that you would recommend?

@sinabahram

Sure, I've pasted my comment from the other issue here, just so we can all be on the same page.

A few things.

First, putting text inside the button could be thought of as preferred over using ARIA, and yes, the appropriate way to do that is with an sr-only class on the inner text. This follows the principle often quoted as "The first rule of ARIA is not to use ARIA", and you obviously do not "need" ARIA to label a button.

Having said the above, I'm unaware right now of any lack of support of aria-label amongst the popular screen reader/browser combinations, so this is a trivial issue, and please feel free to use aria-label to label the buttons.

Next, this touches on the point I was trying to make earlier. I think attacking low-hanging fruit like unlabeled buttons is wonderful. It will have real impact for sure, but it is a small fraction of a fraction of what is required to actually make significant progress towards an accessible experience. To that end, I suggest that you need a strategy for these various patterns, be it unlabeled controls, grouping techniques, container semantics, heading semantics, traversal patterns with Alt+Shift+F6 and Alt+F6 (you can't use F6 alone because that will conflict with the browser's native F6 implementation), image descriptions, when to have automatic announcements, and so forth.
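As one small illustration of the traversal-pattern point, a pane-cycling helper bound to Alt+F6 / Alt+Shift+F6 might look something like this (a sketch under assumptions; the function name and design are illustrative, not an agreed approach):

```typescript
// Hypothetical helper for cycling keyboard focus between panes with
// Alt+F6 (forward) and Alt+Shift+F6 (backward). Plain F6 is avoided
// because it conflicts with the browser's own pane-cycling shortcut.
function nextPaneIndex(current: number, total: number, backward: boolean): number {
  // Wrap around in either direction.
  return (current + (backward ? total - 1 : 1)) % total;
}

// e.g. with 3 panes: forward from pane 2 wraps to pane 0,
// backward from pane 0 wraps to pane 2.
```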

None of the above tackles the trickier issues around content editable, mode toggling, and code-related features, all of which need their own conversations of course.

Moving beyond the above tactical issues into matters more strategic around accessibility, there need to be several conversations around how these semantics are conveyed and reasoned about within the system. It's been a while since I've been in the code, but I remember a dedicated UI library happening via Phosphor or something like that, along with certain assumptions about functionality and accessibility mappings therein being made at various places up/down the stack. Some of these decisions will need to be revisited with an eye towards accessibility because the approaches/patterns I reference above, once agreed upon, can then be templatized in a way that doesn't require doing a ton of work every single time we want to create a button, add it to a toolbar, etc. Same goes for menu options and much more.
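The "templatize once, reuse everywhere" idea could look something like the following (a hypothetical sketch, not actual Phosphor/Lumino API; all names are illustrative):

```typescript
// Hypothetical sketch: a button factory that makes the accessible
// name a required field, so an unlabeled icon-only button cannot
// be constructed in the first place.
interface IconButtonOptions {
  iconClass: string;
  label: string; // required; emitted as aria-label
}

function renderIconButton(opts: IconButtonOptions): string {
  return `<button class="${opts.iconClass}" aria-label="${opts.label}"></button>`;
}
```

The point of the type-level requirement is that accessibility work happens once in the factory, rather than being repeated (or forgotten) at every call site.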

I hope this helps a bit, and I'm happy to find time to hop on a synchronous call to discuss further.

@trallard trallard added area: workflows Testing and processes to audit accessibility area: best practices ✨ A11y best practices type: question 🤔 Further information is requested labels May 20, 2022
@jupyter jupyter locked and limited conversation to collaborators May 20, 2022
@trallard trallard converted this issue into discussion #87 May 20, 2022

