Industry News

Facebook can now automatically generate alternative text for images, producing descriptions that enable users who are blind or have low vision to envisage the content of a photo. The iOS app provides an audio breakdown of what is happening in the photo using object recognition technology.

Using its vast supply of user images, Facebook has trained a deep neural network that drives a computer vision system to recognize objects in images. As is standard under the WCAG 2.0 guidelines, the results are translated into “alt text,” which can be read by screen readers.

Anyone familiar with the WCAG 2.0 Guidelines knows how important alt text on an image is for people who are blind or have low vision. Twitter has recently improved its accessibility by adding an option that allows users to add descriptions to images, enabling screen readers and braille displays to announce and display the text.
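To illustrate what screen readers depend on here, the sketch below is a hypothetical audit helper built on Python’s standard-library html.parser; the AltTextChecker class name and the sample markup are invented for illustration. It flags img elements that lack an alt attribute, the gap that Facebook’s and Twitter’s new features aim to close:

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks an alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_names = {name for name, _ in attrs}
            if "alt" not in attr_names:
                # Record which image failed, falling back when src is absent
                self.missing_alt.append(dict(attrs).get("src", "(no src)"))


html = """
<img src="logo.png" alt="Company logo">
<img src="chart.png">
"""

checker = AltTextChecker()
checker.feed(html)
print(checker.missing_alt)  # -> ['chart.png']
```

A real audit would also catch empty or placeholder alt values, but even this minimal check surfaces the images a screen reader can only announce as “image.”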

The feature is enabled through the compose image descriptions option in the Twitter app’s accessibility settings and is available in Twitter’s iOS and Android applications. Descriptions can be up to 420 characters long.


The 31st Annual International Technology and Persons with Disabilities Conference, known to people in the industry as the 2016 CSUN Conference, is being held at the Manchester Grand Hyatt Hotel in San Diego, CA from March 21 to March 26. Through the International Conference on Assistive Technology for Persons with Disabilities, CSUN provides an inclusive setting and hosts many different groups.

W3C published updates to two supporting documents for Web Content Accessibility Guidelines (WCAG 2.0) today:

The WCAG 2.0 document is stable. The guidelines and success criteria are designed to be broadly applicable to both current and future technologies, including:

  • Dynamic applications
  • Mobile
  • Digital television
  • Other technologies

The supporting W3C Working Group Notes published today provide specific guidance, including code examples, resources, and tests. They are periodically updated to cover current practices for meeting the WCAG 2.0 guidelines.


A Texas A&M University biomedical engineering researcher is developing a wrist-worn device that translates sign language into text. The wearable tech uses motion sensors in conjunction with measurements of electrical activity in the muscles to interpret gestures. It can already recognize 40 American Sign Language (ASL) words with approximately 96 percent accuracy, which holds great promise that the device could bridge the communication gap between people who are deaf and those who don’t know ASL.

For more information, see the GAATES article.

On Tuesday, March 8, 2016, Boston Accessibility is hosting a roundtable at the IBM Innovation Center at 1 Rogers Street in Cambridge, Massachusetts. Jeremy Curry from Ai Squared will talk about the new ZoomText Fusion.

ZoomText Fusion is a magnifier/reader with a complete screen reader, designed for users with advanced or progressive vision loss. It is tailored for individuals who, over time, want a smooth and safe transition from magnification to full screen reading.

Crowdsourcing brings the knowledge of the masses to the needs of an individual. As a wheelchair user, Maayan Ziv had an individual need: to know whether places in her city were accessible before showing up. This was the inspiration for her new crowdsourcing app, AccessNow, which collects and shares accessibility information around the globe.

AccessNow is a web-based app that shows the accessibility status of hotels, restaurants, coffee shops, and tourist destinations, all gathered from globally crowdsourced information. The information is shown on an interactive map, giving users the benefit of that knowledge before they travel to a location.

The popular screen reader NVDA has released version 2016.1, adding new features and changes.

New features include:

  • Support for Baum VarioUltra and Pronto! braille displays when connected via USB
  • New braille translation tables: Polish 8 dot computer braille and Mongolian
  • The ability to turn off the braille cursor and change its shape, via the Show Cursor and Cursor Shape options in the Braille Settings dialog
  • Bluetooth connection to the HIMS Smart Beetle braille display
  • On Windows 8 and higher, the ability to lower the volume of other sounds, via the Audio Ducking Mode option in the Synthesizer dialog or by pressing NVDA+shift+d
  • Support for the APH Refreshabraille in HID mode
  • Support for HumanWare Brailliant BI/B braille displays when the protocol is set to OpenBraille

Changes:

  • Emphasis reporting is disabled by default
  • The shortcut for Formulas in the Elements List dialog in MS Excel has been changed to alt+r
  • Liblouis braille translator updated to 2.6.5
  • Text objects no longer announce “text” when they have focus.

The WCAG 2.0 guidelines help developers code accessibly and meet the requirements of the ADA.

The National Association of the Deaf (NAD) and Gogo LLC have reached an agreement for Gogo to make closed captioning available for all of the programming content sourced by Gogo and streamed on demand through its in-flight entertainment service, Gogo Vision. This marks the first agreement of this type with an in-flight entertainment company.

A new technology added by Gogo will enable customers to display closed captions on content that includes them. Gogo is also sourcing new content with closed captions where available.

Read more on in-flight Closed Captions

The Timed Text Working Group at the W3C invites implementation of the Candidate Recommendation of TTML Profiles for Internet Media Subtitles and Captions 1.0 (IMSC1).

The document specifies two profiles:

  • Text-only
  • Image-only

These profiles are meant to be used across subtitle and caption delivery applications worldwide, simplifying interoperability, consistent rendering, and conversion to other subtitling and captioning formats.
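As a rough sketch of what the text profile looks like in practice, the snippet below parses a minimal TTML-style fragment with Python’s standard-library xml.etree and extracts the timed cues. The sample markup and timings are invented for illustration; a conformant IMSC1 document carries additional required attributes and styling beyond this skeleton:

```python
import xml.etree.ElementTree as ET

# Minimal TTML-style subtitle document (illustrative only, not full IMSC1)
TTML = """<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.000">Hello, world.</p>
      <p begin="00:00:04.000" end="00:00:06.000">Captions matter.</p>
    </div>
  </body>
</tt>"""

# TTML elements live in the http://www.w3.org/ns/ttml namespace
NS = {"ttml": "http://www.w3.org/ns/ttml"}

root = ET.fromstring(TTML)
cues = [(p.get("begin"), p.get("end"), p.text)
        for p in root.iterfind(".//ttml:p", NS)]

for begin, end, text in cues:
    print(f"{begin} --> {end}: {text}")
```

Extracting cues this way also hints at why the W3C emphasizes conversion: once the begin/end/text triples are in hand, emitting another captioning format is largely a serialization exercise.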
