__NOPUBLISH__

Welcome to a splinter etherpad sheet!
This pad is only reachable locally within the range of the shared portable server. 

To prevent this pad from appearing in the 'etherdump' list of pads, you can add the word __NOPUBLISH__ anywhere in the text.

This pad installation includes a system to mark pads with specific keywords, 'snowpoles'. 
When ++SNOWPOLES++ are used, pads will be tagged with a certain keyword, for future re-editing or publishing.
Worksession Drempel Drempel Drempel
Friday 15 - 03 - 2024 - Day 4

helloooooo

10:30: small round of presentations

11:00: Alyssa's contribution 

(impact beyond just blind/low vision audience)
in memory of Jess Curtis; artist who brought a lot to web accessibility practices 
www.jesscurtisgravity.org/access


 ++About-screenreaders++
\Background
working in field of blindness and low vision since 2015
trained dance & choreography 
working with neurological vision loss, deafblindness
Orientation and Mobility (O&M)
how to spatially orient in the city, navigating public transportation
Vision Rehabilitation Therapy (VRT)
feminist and disability justice-informed questions, link between contemporary dance and rehabilitation science

co-habilitation instead of re-habilitation
cohabilitation: an invented word to talk about the possibility of a reciprocal relationship between patient and therapist
+++++
\\What are screen readers?

content in speech through a synthesiser (text, images, menus, symbols)
modifier keys can trigger specific commands

some devices can output speech and braille simultaneously
+++++
Most popular:
    JAWS, (North America/Australia)
    NVDA, (Europe/Asia/MiddleEast & Africa) ... (tho nearly equal usage with JAWS)
    VoiceOver,
    ZoomText/Fusion

working with many different languages, 50+..
generally in combination with Chrome

WebAIM Survey (2021)
Chrome+JAWS, NVDA+Chrome, JAWS+Edge
JAWS: "Job access with Speech" => proprietary, product of a US company
NVDA: "Non-visual desktop access" => free open source program developed in Australia
JAWS is a product of Freedom Scientific

For Smartphones:
VoiceOver for iOS + TalkBack on Android

the history of rehabilitation science also came from creating job access, JAWS comes from these initiatives..

> listen to example of VoiceOver

very fast, very edgy :/

it works with containers, you navigate from a global view into the smaller menu, the slides, the preview, the layout area
and then enter the areas/layers of content
"you are currently in" is repeated at every step
it reads everything!!! 
different objects create a different efficiency of scrolling through
+++++

PDFs, if they are not OCR'd, are just unreadable..
an image is read in the editor as "google shape" (when it has no alt-text)

"Generate alt-text for me" button
creates a general description (through ai image recognition)
adding "automatically generated description"

through tab, shift+tab
through H (go to headings)
up and down arrows will jump through different tags (p, img, a ...)
if you are in a form you will go into focus mode.
a lot of time is spent learning your keyboard shortcuts because you are navigating without a mouse
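
A sketch (hypothetical page, not one of the sites discussed) of the kind of markup these keys move through: Tab jumps between links and form fields, H between headings, the arrows step through the tags, and a form field switches the reader into focus mode.

    <nav aria-label="Main menu">
        <a href="/program">Program</a>
        <a href="/access">Access</a>
    </nav>
    <main>
        <h1>Worksession notes</h1>            <!-- reached with H (heading navigation) -->
        <p>Some body text...</p>              <!-- stepped through with the arrow keys -->
        <img src="photo.jpg" alt="...">       <!-- announced as an image, with its alt-text -->
        <h2>Contact</h2>
        <form>
            <label for="email">Email</label>
            <input id="email" type="email">   <!-- entering this switches to focus mode -->
        </form>
    </main>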

on a Linux system there is Orca [not sure how widespread, how usable]

+++++
\\\What is magnification?
(software: ZoomText, MAGic and Dolphin SuperNova)
tool that enlarges content from a computer, tablet or smartphone
it is already included in different systems we use (Windows, Android, iOS)
for more features, and if you need more than 200% magnification, you will need to purchase third-party software for Mac/Windows..
* when reviewing websites > test at 2x magnification to check whether content is obscured, lost or hidden
Fusion refers to ZoomText used with JAWS
+++++

predictability and consistency in horizontal scanning can help orientation at these scales
(imagine navigating a window at 4x magnification)
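
A hedged sketch of markup that helps with this (class name made up): relative units and a narrow single column let content reflow instead of forcing long horizontal scans when magnified.

    <meta name="viewport" content="width=device-width, initial-scale=1">
    <style>
        /* relative units so text scales with zoom and user settings */
        body { font-size: 1rem; line-height: 1.5; }
        /* a narrow column reduces horizontal scanning at 2x-4x */
        .content { max-width: 40rem; margin: 0 auto; padding: 0 1rem; }
    </style>
    <div class="content">
        <p>Body text that wraps and reflows when magnified...</p>
    </div>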

+++++
\\\\Alt-text & image description
"gives access to a quick understanding of an image"
best 1000 characters
Image Description tends to go into more depth.
alt-text and image descriptions are different things, amount of characters, precision, details
(image descriptions are more detailed)
Misunderstanding of alt-text as "the description", but in fact it is a basic, quick description.
audio description = additional commentary (body language, expressions, movements)
"Puddle on the floor" (alt-text) vs. "puddle of orange juice on a white tiled kitchen floor" (image description)
Audio Description is really being discussed right now as yet another approach to commentary on what happens on screen.
Netflix and other companies are starting to use audio description
alt-text has a character limitation, that's also why one has to be more efficient
alt-text + description can work together (including cultural specific / interpretation)
+++++

"a person with arms raised above their head"
"a woman dancing"
when on kaai site... dance IS the context (often), so what is the function of the image on a specific page
why is the image there / what is the function of the image how do we address it correctly
is the function to hold the interest of the visitor... (imagine the same function being satisfied in alt-text form)

can one think through a description-first (or alt-text-first) image selection?
"it's not that the visual doesn't matter" (assumption of a blind reader exclusively using a screen reader misses the frequent case of combination of screen-reader + magnification)

\\\\\Case study: personal website of Alyssa
https://www.alyssagersony.com/
built on Squarespace, mostly following the WCAG guidelines
WCAG guidelines: https://w3c.github.io/wcag/guidelines/22/

she designed and implemented it individually and asked for comments and tests

What do you include in an image description?
Worked with a variety of inputs, mainly friends she asked, with different points of view...
There's no "right way", some have a preference for more detailed information about clothing, others are more interested in composition (her body is centered in the frame). There's of course a layer of subjectivity.

test public is one way of course, can be friends at a small scale, institutions can put resources into it..

What's at the top of the screen is read (by a screen reader) first.
Place to put a "disclaimer" of what is coming up on the page
menu / navigation
short description of what's coming in the page

horizontal lines can help separate areas
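
One possible shape for that page top (content made up, not Alyssa's actual markup): navigation first, then a short description of what's coming, then a horizontal line before the main content.

    <a href="#content" class="skip-link">Skip to main content</a>
    <nav aria-label="Main menu"> ... </nav>
    <p>
        This page contains a short biography, three captioned videos
        and a photo gallery with image descriptions.
    </p>
    <hr>   <!-- horizontal line separating the intro from the content -->
    <main id="content"> ... </main>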
the video has two versions next to each other:
    one with closed captions 
    another with audio description
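
One way this pairing could look in HTML (file names hypothetical): a captioned version next to a version whose soundtrack includes the audio description.

    <h3>Trailer (closed captions)</h3>
    <video controls src="trailer.mp4">
        <track kind="captions" src="trailer-captions.vtt" srclang="en" label="English captions">
    </video>

    <h3>Trailer (with audio description)</h3>
    <video controls src="trailer-audio-described.mp4"></video>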

artists often don't have time for this.
- how do we create spaces and resources to make their work accessible?

alt-text/image description
can they be combined? is it a shortcut, what are the limitations?

image descriptions give visibility to the choice of not being only-visual, and offer more space to elaborate and give richer descriptions, but you need to develop layouts and templates to accommodate and build around them.
alt-text is built into the HTML (attribute of an image tag)
image description needs to be added to the content of a page (or CMS).
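
A minimal sketch of how the two can sit together (id and filename made up, texts from the puddle example above): the alt attribute holds the quick version, while the longer image description lives in the page content and is linked with aria-describedby.

    <figure>
        <img src="puddle.jpg"
             alt="Puddle on the floor"
             aria-describedby="puddle-description">
        <figcaption id="puddle-description">
            Image description: a puddle of orange juice on a white tiled kitchen floor.
        </figcaption>
    </figure>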
+++++
importance of reconsidering design priorities!
think about the flow of a page, the experience of hearing an alt-text, then some "body" text of a page, then an image description.
+++++

are there plugins that add descriptions?

providing alt-text is labour intensive

"Reframing expectations around the visual excitement of artist websites and influencing other artists to reconsider their design priorities."

Multilingual?
alt-text is generally written in English.. what are alternative approaches?

\\\\\\Current issues: audio description

Audio description and performance is a critical topic today, and has been for the last 20 years.
COVID-19 brought more awareness (as performance was experienced more broadly online)
CC/SL Interpretation and AD protocols for audiences with disabilities

Audio description resources to consider:
\\\\\\\Practical implications

* Learning screen readers
* access tools and working in community
* hiring auditing services to test for you

can get in the way of JAWS and other software; if you use it, be mindful of how to integrate it..
who is it there for, what is it there to do 
Add-ons (AI-powered)
https://userway.org/

Question of who such modifications are for (they don't really include the audience that's already using screen readers)

Structural protocols to implement predictability so people know what to expect on your platform
* Standard practice for amassing descriptions, alt text, hiring practice for outsourcing CC for video trailers / descriptions.
What can you do now to integrate this information into your website plan?
Work with specific communities / local.
"Nothing about us without us" - James I. Charlton

Would writing alt-text be an interesting job for a dramaturg?
Or is this a bad idea, ... who should you hire (if you would hire)?

Depends on cultural context / specific necessity...
Start from a position of Disability Justice.
Some organisations (within disability justice) may have already integrated a sense of dramaturgy, for instance.
For others it might be new.


References & Resources
https://www.rnib.org.uk/living-with-sight-loss/assistive-aids-and-technology/computers/screen-reading-software/
https://webaim.org/projects/screenreadersurvey9/#proficiency
https://www.nvaccess.org/
https://support.apple.com/guide/voiceover-guide/welcome/web
https://www.w3.org/TR/WCAG20/
https://www.perkins.org/resource/how-write-alt-text-and-image-descriptions-visually-impaired

haptic tours would allow senses other than description to access some of the material of the performance before the performance > https://www.jesscurtisgravity.org/access
example of feeling certain dance combinations (that are hard to describe / experience)

//

software side
> Orca reader for Linux; NVDA is FLOSS but for Windows

can this software skip ads?
> might be that popups and other elements are skipped by the reader

iOS for the phone also has built-in VoiceOver

Voice customization: Choose voice, speed, and verbosity

tunings:
    > voice tone
    > voice speed
    > verbosity

https://circulations.constantvzw.org/2024/drempel/images/refreshable-braille-display.jpg

Lynx, a browser for the command line
https://en.wikipedia.org/wiki/Lynx_(web_browser)
wondering if this approach is also used by the community, would be good to get in touch and find out :)
(might be very relevant considering the constant publics!)

--
Lunch break
--
14:30: things that we can do in the afternoon

15:00: screenreading of the Constant and Kaaitheater websites through VoiceOver

15:00: 
    
    - 1 group: alt-text and/or audio description and/or animated text generation tools
    - 2 group: exploring free software screenreaders
    - 3 group: aria labels, further screenreading

+++++
++floss-screen-readers++
# An experience with Screen Readers

And select "laptop layout" which enabled CAPS LOCK as the orca shortcut.
( Learned from this tutorial: https://www.youtube.com/watch?v=UI76P-KPZec )

Now it's possible to follow the guide; for instance, there is a "learning mode" to understand what different keys do. It's turned on by using MODIFIER + H, or CAPSLOCK + H.
Then pressing a key lists its function.
You need to press ESC to leave the learning mode.
I noticed that Firefox (on my machine) doesn't seem to support accessibility features.
Chromium in contrast seemed to work with Orca.

It's possible to see a list of application specific shortcuts, using CAPSLOCK + F3.

In this view, for instance, I saw that you can use the "h" key to move through a document based on the page headers.

It's clear that there's quite a learning curve to understand how best to use a screen reader such as Orca, and that in order to properly test / evaluate a site design with a screen reader, you would ideally observe a user who is proficient with a particular screen reader.
+++++

take-home
the ARIA markup creates a parallel tree from the DOM that can be modified
the accessibility function in the browser gives access to this tree

Wikipedia, predictably, is a good example of ARIA usage
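
A small illustration of what that can look like (markup made up): ARIA roles, labels and states add or override entries in the accessibility tree that the browser exposes alongside the DOM, which is what the screen reader actually reads.

    <nav aria-label="Language selection">          <!-- exposed as a "navigation" landmark -->
        <a href="/nl/" lang="nl">Nederlands</a>
        <a href="/fr/" lang="fr">Français</a>
    </nav>

    <button aria-expanded="false" aria-controls="menu">Menu</button>
    <ul id="menu" hidden> ... </ul>

    <!-- the accessible name can differ from the visible text -->
    <a href="/archive" aria-label="Archive of previous worksessions">Archive</a>

    <!-- a live region announces changes without moving keyboard focus -->
    <div aria-live="polite" id="status"></div>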

+++++
++alt-text++
# An experience with Alt-text

Different tests with images from the Constant website in ChatGPT to create alt-text descriptions

would we ever want to use this?
would it be more ethical?
if it were open source it would not be so efficient, because ChatGPT is so extractive of data

there is something important in also mentioning when editorial work is outsourced to a model

and also another fundamental question about a company that has broken a lot of licenses and permissions to use materials..
without even entering into their business model / the ethical implications of a company that gave exclusive access to their technologies to the US government and the Pentagon for war-mongering?

the absurdity of gender-inclusive language being served back, having actually been copied from writers who are probably against this industry/company..
+++++

Stability AI is an open and free company doing image interpretation
https://stability.ai/