Facebook Launches Automatic Alt Tags to Describe Photos to Blind Users

By: Kate Van Druff

Standing in line at the store, stopped at a red light, at home on the couch, maybe even at work when you should be working or at dinner when you should be enjoying family time: just think of how much information you consume when you get lost in the world of Facebook. Recent statistics suggest that the average person spends between 20 and 40 minutes per day on Facebook, scrolling through the news feed and liking, sharing, and commenting on friends’ posts and photos.


The time generally flies by as we devour countless status updates, news links, photos and videos. After all, the scrolling motion tends to be both therapeutic and addictive when we need a little distraction in our lives, and the roughly 2 billion pictures shared across the Facebook, Instagram, Messenger, and WhatsApp social media platforms leave no shortage of fodder for our curious minds.


The ease with which most of us fly through the feed on Facebook is a far cry from the challenges faced by our blind and visually impaired Facebook friends. The visual content on Facebook and other apps tends to leave the blind community out of the conversation, and that level of accessibility is just what Facebook has set out to fix with the newly introduced automatic alternative (alt) text.


What It Is

Automatic alternative text rolled out on Facebook for iOS on April 5 this year. Leveraging object recognition technology, it generates a text description of a photo that a screen reader then delivers audibly. The feature draws on a neural network trained on a vast set of examples and offers a list of items likely featured in the photo to anyone using an iOS device and a screen reader to access Facebook. For example, this accessibility feature might announce, “Image may contain: two people, smiling, sunglasses, sky, outdoor, water,” or simply, “Image may contain: pizza, food.”
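Facebook has not published its implementation, but the behavior described above can be sketched roughly: the recognition model produces a set of candidate concepts with confidence scores, and only the confident ones are read aloud. The concept names, the confidence threshold, and the fallback wording below are all assumptions for illustration, not Facebook's actual code.

```python
# Illustrative sketch only; Facebook's real pipeline is not public.
# We assume the model returns (concept, confidence) pairs and that a
# minimum-confidence threshold decides which concepts get announced.

def build_alt_text(predictions, threshold=0.8):
    """Build an 'Image may contain' description from model predictions.

    predictions: list of (concept, confidence) tuples; only concepts
    at or above the threshold are kept in the spoken description.
    """
    kept = [concept for concept, confidence in predictions
            if confidence >= threshold]
    if not kept:
        # Fall back to a generic description when nothing is confident,
        # much like the pre-alt-text experience of hearing just "photo".
        return "Image may contain: photo"
    return "Image may contain: " + ", ".join(kept)

predictions = [("pizza", 0.97), ("food", 0.93), ("indoor", 0.42)]
print(build_alt_text(predictions))  # Image may contain: pizza, food
```

Filtering by confidence is one plausible reason the descriptions are hedged with "may contain": the model reports likely concepts rather than certainties.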


How It Works

Facebook’s object recognition technology serves as the driving force behind this new enhancement. When browsing Facebook, the device’s screen reader first announces the name of the Facebook user who posted the image, then the date, the time, and any text caption, if included. Next it shares the automatic alternative text, predicting what the photo may feature, followed by the number of likes, comments, and shares. You can view Facebook’s video illustrating how the new enhancement works to make Facebook more accessible to the visually impaired community, along with some actual user reactions.
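The reading order described above can be sketched as a simple string assembly. The function name, field formatting, and sample values here are illustrative assumptions, not Facebook's actual screen-reader output:

```python
def announce_post(author, date, time, caption, alt_text,
                  likes, comments, shares):
    """Assemble the text a screen reader would speak for one photo post,
    in the order described in the article: author, date, time, caption,
    the automatic alt text, then engagement counts.
    (Hypothetical sketch; real VoiceOver output differs in detail.)"""
    parts = [author, date, time]
    if caption:  # a caption is optional and skipped when absent
        parts.append(caption)
    parts.append(alt_text)
    parts.append(f"{likes} likes, {comments} comments, {shares} shares")
    return ". ".join(parts)

print(announce_post("Jane Doe", "April 5", "2:14 PM",
                    "Lunch with friends!",
                    "Image may contain: pizza, food",
                    12, 3, 1))
```

The key point the sketch captures is ordering: the alt-text description slots in after the post's metadata and caption, and before the engagement counts.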


Automatic alt text offers a vast improvement over the prior way photos were shared with visually impaired Facebook users. Those using screen readers before this feature became available would simply hear the name of the person sharing the photo, followed by “photo”—as you can imagine, a far less stellar experience. Driven by a neural network boasting billions of parameters and trained on millions of examples, automatic alt text should only grow more intelligent over time, allowing Facebook’s accessibility team to keep innovating toward more inclusive use across the board.


Who Can Use It

The automatic alt text enhancement is currently available for iOS devices using the English language in the United States, the U.K., Canada, Australia, and New Zealand. Facebook plans to expand the feature to work with other languages, the Android platform, and the Web in the near future.


Automatic alternative text joins closed captioning for videos and the option to increase the default font size as key contributions from Facebook’s five-year-old accessibility team. These insightful improvements offer a more inclusive experience for folks who, due to lack of sight, may have previously felt as though they were missing out.


Facebook says its mission is to make the world more open and connected for everyone, and automatic alt text is certainly working toward that goal. It’s a promising step in the effort to improve accessibility, showing how much is possible with good intentions and a little artificial intelligence. We can’t wait to see what happens next!
