Congratulations to all the sharp-eyed readers who spotted the lone real photo in our gallery of AI-generated images of prosthetic limbs (“Can AI Draw an X3? LOL,” on page 8 of the January 2024 print edition). More than a dozen readers guessed correctly; the winner, chosen at random, is Earl Fogler. He gets a $50 gift card and a one-year subscription to Amplitude. Our thanks to Earl for participating, and to everyone else who submitted a guess.
As we noted in the print article, AI’s laugh-out-loud inability to draw a realistic prosthesis has implications that aren’t funny at all. All the AI images in our contest—and dozens more, some of which we have published below—are available for download via popular stock-image sites such as Adobe Stock, Shutterstock, and iStock. These repositories serve thousands of print publications, websites, ad agencies, and other media that, in the aggregate, reach billions of eyeballs. Stock images have influence and reach. They’re part of a visual shorthand that affects how people see, understand, and communicate ideas.
And they’re often a godsend for editors and art directors on deadline who desperately need an illustration to provoke interest in their content and heighten its emotional impact. Amplitude and The O&P Edge license hundreds of stock images every year.
But we know a flagrant misrepresentation of limb loss when we see it. Most creative professionals don’t.
That’s why we’re more annoyed than amused that stock-image repositories contain so many flat-out inaccurate AI-generated images of amputees. These bogus pics don’t merely misinform viewers. They perpetuate obnoxious stereotypes about limb loss and disability, reinforcing comic-book caricatures that exaggerate and fetishize bionic enhancements. They lie about what mechanical limbs weigh, how they function, how they articulate with flesh-and-blood bodies, and how they’re meant to be used. Instead of normalizing limb loss, the images turn amputees into superhuman cyborgs, techno-enhanced idols, armored RoboCops—freaks, to use plain language.
Far from building a visual shorthand that helps viewers better understand and communicate about disability, these untrue-to-life images degrade the conversation and obstruct efforts to raise true awareness. They undercut amputees’ never-ending battle to be perceived and recognized as regular people.
And yet they’re being offered to media outlets all over the world. Type “amputee” or “prosthetic” into any of these sites’ search engines, and generative AI images routinely appear near the top of the results. Why?
We contacted our go-to image vendor, Adobe Stock, to find out. In addition to learning who’s making these images, who’s vetting them, and why bogus content is allowed to stay in the catalog, we wanted to start a discussion about how generative AI can be a force for good. Because, like it or not, this technology isn’t going away. Its impact is going to increase by orders of magnitude in the coming years, and there’s nothing anybody can do about it. So it’s in everyone’s interest for people with disabilities to get involved in making generative AI smarter, turning it into a constructive tool rather than a barrier to progress.
“We really see AI as being a further advancement of democratizing creativity,” says Sarah Casillas, senior director of content for Adobe Stock. “We saw that initially with the iPhone, and then with social media. Those were definitely ways of democratizing creativity. And I think for the disability community, this is a way to give access and production capabilities to people that maybe have had barriers to entry. I have a former business partner who was a photographer and also did video, and he became paraplegic. Being able to use generative AI has been very empowering for him.”
Adobe welcomes contributions from visual creators with disabilities, Casillas adds. “There’s so much value in making sure that the content that gets created is created by people that self-identify with the community they’re photographing. We’ve done quite a few different programs where we’ll commission people to provide content who self-identify from different communities. We will actively seek out people.”
Including amputees? “There’s definitely opportunity there,” Casillas says. “We always want to learn and grow, and this is a new space where we really see a lot of room for doing more.” And how can amputees get involved in upgrading Adobe Stock’s limb-loss content, whether via generative AI or via the old-fashioned, made-by-humans method? “I would say to email me,” says Casillas. “I would love to get a chance to see what we could figure out.”
We’re totally down with Adobe’s efforts to seek out authentic voices and elevate creators from all backgrounds. That’s in their own interest—it makes their image collection more diverse, more useful, and more broadly appealing—and it enriches the visual vocabulary that’s available to media storytellers. Those are all admirable goals.
So isn’t it self-defeating to leave junk AI images of amputees in the catalog? Who’s creating this crap, anyway? Surely not amputee creators?
“We truly have a million contributors from around the world,” Casillas explains. “So it could be anyone from a hobbyist who has a day job but is interested in photography, to a creative professional who is full-time submitting stock photography. They’re from all over the world, so you could have people with different understandings about what’s acceptable in different cultures.”
Why publish stuff from contributors who are submitting material about subjects they don’t understand? “There’s a difference, I think, between something being inaccurate and something causing harm and bias,” Casillas says. “As a first step in this journey with generative AI, we’re constantly working to avoid harm and bias. One thing we are doing is labeling the content. You can filter and say that you don’t want generative AI content. And there’s a little indicator on the image that says it is AI created.”
Has there ever been a case in which Adobe removed an AI image because someone said it was harmful? “There are definitely times when we’ll pull content,” Casillas says. “Sometimes that’s where somebody is infringing on the name of a person, or something is misleading where it’s a newsworthy event. That’s not permitted in our guidelines. And I have a group of 15 people who are curators, and they’re essentially almost like super users of the collection. So if somebody has a complaint, we can have some experts on the team look at it from a different lens and determine whether or not that content should be removed, or even to block the contributor.”
Suppose that we—white creators from a landlocked state—think there’s a market for images of a Makah whale hunt, so we create some AI-generated content and submit it to Adobe. And let’s say our picture depicts the wrong type of boat, unrealistic harpoons, inaccurate clothing, and a whale species that doesn’t live near the Makah homeland—or doesn’t exist at all. Would that image be allowed to stay in the catalog?
“It really goes back to that difference of accuracy versus harm and bias,” Casillas says. “If you have a female wearing a war headdress, that’s considered offensive in Native American culture. So when we see those types of images, we’ll pull them because there’s bias to that. But if a Native American male model is wearing a headdress, but maybe there’s not great accuracy at a surface level, that image would get accepted.”
How did Adobe learn that images of women in war headdresses might give offense to Native American communities? “It was just a case of somebody coming to us and having a conversation,” Casillas says, “and of us saying: ‘Help us do better. We definitely don’t want to have harm and bias in the collection, so can you give us some guidance for things we should avoid and that cross the line?’ And he shared quite a bit with us in writing. He was also an amazing photographer, and we said, ‘We really need more Native American contributors to submit to our collection and to represent those communities with the understanding of cultural nuance.’ So he’s submitted quite a bit of content to us.”
Casillas reiterated that Adobe is actively looking to engage with image-makers from the disability community. “We’re constantly learning and trying to build that knowledge,” she says. “So if anyone has any ability to help us to get access to accurate content that shows a range of disability, I would love to have that conversation.”
She repeated the invitation for interested parties to email her and help Adobe understand disability from a more authentic angle. Amplitude’s audience includes lots of people with creative talent. Here’s a chance to showcase your work, make a positive impact, banish bogus content from the stock-image marketplace, and replace it with content that tells the truth about limb loss and disability.