Cutting the Curb: The Power of Accessibility Research | by Yao Ding | Facebook Research | May 2021
“Usability and accessibility are twins who were separated at birth. The same goals, but like two brothers in a fable, they went different ways: Accessibility took the path of legal rights. That gave him strength, but not much love. Usability took the path of user exploration. That gave him deep insight, but not much strength.
What happens when these two meet? Can we get deep insights and great strength?”
– Whitney Quesenbery, “Better Access Requires User Research”
The answer to Whitney Quesenbery’s question is undoubtedly yes. At Facebook, accessibility user research brings deep insights together with great strength every day. We believe that user-centered, inclusive research and design can and must improve the product experience for all people. In keeping with the slogan “Nothing About Us Without Us,” accessibility solutions must be grounded in a deeper understanding of users and validated through user research, rather than simply following guidelines and ticking checkboxes.
Recent advances at Facebook show how effective this approach can be. This article focuses on how accessibility research helped shape one of these advances, automatic alt text, and offers some guidance on integrating accessibility research into standard UX practice.
Based on accessibility research, Facebook has made great strides in improving product experiences for users with disabilities. For example, since 2018 we’ve been continuously improving keyboard controls and navigation for users with physical disabilities who rely primarily on keyboards and alternative input devices (which mimic keyboards) to access Facebook. In September 2020 we launched automatically generated video captions on IGTV and Facebook Live for people with hearing loss. Our long-term goal is to provide auto-generated captions for most video content and audio/video calls.
But the advance we’ll dig into here is automatic alt text (AAT). In January 2021, we rolled out a major update to AAT, which uses AI and computer vision technology to automatically generate image descriptions that screen readers can read aloud to convey the content of an image to users with visual impairments. AAT can now identify over 1,200 objects and concepts – ten times more than when it first launched in 2016.
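To make this concrete, here is a minimal, purely illustrative sketch in Python of how machine-detected concepts could be turned into the kind of hedged “may contain” phrasing a screen reader announces. The labels, confidence scores, and threshold are invented for illustration and do not represent Facebook’s actual AAT pipeline.

```python
# Illustrative sketch only: a hypothetical renderer for AI-generated alt text.
# The concept labels, confidence scores, and threshold are made-up examples.

def render_alt_text(detections, threshold=0.8):
    """Turn (concept, confidence) pairs into a hedged, screen-reader-friendly string."""
    confident = [concept for concept, score in detections if score >= threshold]
    if not confident:
        return "No description available."
    # Hedged phrasing ("may contain") signals that the description is machine-generated.
    return "Image may contain: " + ", ".join(confident) + "."

# Example output a screen reader could announce for a travel photo.
detections = [("1 person", 0.97), ("standing", 0.91), ("mountain", 0.88), ("Machu Picchu", 0.83)]
print(render_alt_text(detections))
# -> Image may contain: 1 person, standing, mountain, Machu Picchu.
```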
Very few of the 2 billion photos shared daily across Facebook products include alt text. As social media becomes increasingly visual, we believe AI-powered AAT can fill this gap and allow users with visual impairments to enjoy their Facebook experience as much as anyone else.
Understanding pain points and user needs
Before launching AAT for the first time, we had to answer a few basic questions: How important are photos to blind users? How useful would they find AAT? A large-scale quantitative analysis of 50,000 blind users showed that blind users on Facebook are just as active and productive as non-disabled users – in fact, their posts receive more comments and likes on average. However, blind users upload, comment on, and like fewer photos, and receive fewer comments and likes on their own photos.
An in-depth interview study helped us get a clearer picture. Although blind users are interested in visual content, they often feel frustrated, and even left out or isolated, by not being able to fully participate in conversations that revolve around photos and videos.
Identifying what people value most
After AAT had been providing image descriptions for several years, we wanted to better understand how useful those descriptions were. What did people want to know about the pictures that we weren’t providing? How much information was too much, or not enough? We conducted a number of studies to answer these questions.
As the visual recognition engine grew more powerful, AAT could potentially recognize millions of objects and concepts that might describe a photo. We conducted a MaxDiff survey to rank common visual concepts (a simple scoring sketch follows below). The three concepts blind users seemed to value most were human interactions (hugging, kissing, etc.), landmarks (the Eiffel Tower, the Taj Mahal, etc.), and scenes (inside an elevator, a ranch, a train station, etc.). These results helped us prioritize high-value concepts in computer vision engine development and in AAT rendering.
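For readers unfamiliar with MaxDiff (best-worst scaling), here is a minimal sketch of the simplest count-based scoring: each respondent picks the most and least valuable concept from a small set, and a concept’s score is how often it was chosen as best minus how often it was chosen as worst. The responses and concept names below are invented; they are not Facebook’s survey data.

```python
from collections import Counter

# Hypothetical best/worst choices from a MaxDiff-style survey task.
responses = [
    {"best": "human interactions", "worst": "furniture"},
    {"best": "landmarks", "worst": "furniture"},
    {"best": "human interactions", "worst": "weather"},
    {"best": "scenes", "worst": "weather"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
concepts = set(best) | set(worst)

# Simple count-based MaxDiff score: times chosen best minus times chosen worst.
scores = {c: best[c] - worst[c] for c in concepts}
for concept, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{concept}: {score}")
```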
Evaluating UX to inform iterations
In our standard user-centered design process, we always evaluate a product with users with disabilities to inform product and feature iterations. A moderated remote usability study of AAT produced some surprising results: participants had differing opinions on the trade-off between more information and more accurate information.
This prompted us to consider a setting that lets users choose between higher accuracy with less detail and medium accuracy with more detail. We also found that blind users value positional information the most – more than size and people/objects. As a result, we changed our original design to organize the three main categories of information as position > size > objects. We also redesigned AAT’s entry point in response to user feedback that AAT wasn’t very discoverable for first-time users (see the sketch after the example image below).
Automatic alt text lets the user know that the photo may contain one person, standing, and Machu Picchu.
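As a rough illustration of how the redesigned ordering (position > size > objects) and the proposed accuracy/detail setting could combine into a description like the one above, here is a small Python sketch. The category names, confidence thresholds, and `detailed` flag are assumptions made for illustration, not AAT’s real implementation.

```python
# Illustrative sketch: order description parts by category priority and let a
# user preference trade detail against accuracy. All values are hypothetical.

def compose_description(concepts, detailed=False):
    """concepts: list of dicts with 'label', 'category', and 'confidence'."""
    threshold = 0.75 if detailed else 0.9   # more detail accepts lower-confidence concepts
    order = {"position": 0, "size": 1, "object": 2}
    kept = [c for c in concepts if c["confidence"] >= threshold]
    kept.sort(key=lambda c: order.get(c["category"], len(order)))
    return "May be " + ", ".join(c["label"] for c in kept) + "."

concepts = [
    {"label": "an image of Machu Picchu", "category": "object", "confidence": 0.85},
    {"label": "outdoors", "category": "position", "confidence": 0.95},
    {"label": "a close-up", "category": "size", "confidence": 0.92},
]
print(compose_description(concepts))                 # higher accuracy, less detail
print(compose_description(concepts, detailed=True))  # more detail, medium accuracy
```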