If you live under an internet rock or you simply don’t pay attention to the online trend du jour, you may not have taken any notice of the ImageNet Roulette craze that’s taking over Twitter.
All this week, hundreds of images have popped up on the social media site that look a little strange. They tend to have a green box around them as well as a description of a particular social role beneath them that’s not always positive. It’s weird and terrible and a ton of fun.
The premise is simple: you upload a photo of yourself to ImageNet Roulette and the site's neural network classifies which role it thinks you might play in society, based on what it finds in the image.
Created by Berlin-based developer Leif Ryge, the art project “uses a neural network trained on the ‘Person’ categories from the ImageNet dataset which has over 2,500 labels used to classify images of people.”
oh wow i got the rare “double skinhead” pic.twitter.com/z1rnmIYu6r
— Sam (@SamuelMoen) September 18, 2019
Ryge worked with artist and researcher Trevor Paglen and AI researcher Kate Crawford; the project is part of the Training Humans exhibition at the Osservatorio Fondazione Prada in Milan. What an online app has to do with a real-life art exhibition might seem confusing, but ImageNet Roulette is meant to be an extension of the exhibition, which would be great if it weren't so horrible.
fascinating tbh pic.twitter.com/qTD4RGlisp
— Edward Ongweso Jr (@bigblackjacobin) September 17, 2019
You see, there’s a major issue with the app: the way it classifies people is downright terrible. Many of the descriptions it spits out are blatantly racist or otherwise discriminatory, and when they’re not, they simply don’t make sense. You could take this as proof that artificial intelligence will never replace human judgment, since it lacks the ability to consciously think about and assess people subjectively (or even objectively, in this case), but reading it that way seems to undermine the whole point of Ryge’s creation.
Well, this feels quite literal and accurate. pic.twitter.com/l2BcxNcxyu
— Lydia Polgreen (@lpolgreen) September 17, 2019
As Crawford explained to BuzzFeed News in what seemed to be a defense of the app, “The labels come from WordNet, the images were scraped from search engines. The ‘Person’ category was rarely used or talked about. But it’s strange, fascinating, and often offensive.”
She continued: “It reveals the deep problems with classifying humans—be it race, gender, emotions or characteristics. It’s politics all the way down, and there’s no simple way to ‘debias’ it.”
Fascinating insight into the classification system and categories used by Stanford and Princeton, in the software that acts as the baseline for most image identification algorithms. pic.twitter.com/QWGvVhMcE4
— Stephen Bush (@stephenkb) September 16, 2019
While that may very well be true, if an app you created is so obviously non-functioning (or functioning extremely poorly, in any case), maybe don’t put it online?
tfw when you get a press release about an AI photo thing that you’ve seen lots of other tech reporters having fun with but then it’s actually not that fun pic.twitter.com/NMZNxlGNZW
— Julia Carrie Wong (@juliacarriew) September 17, 2019
Or maybe do, and let Twitter take it and run with it.
omggg the ImageNet Roulette thing is wild 😳 pic.twitter.com/eEO5Z2wEI2
— czar & friends (@FlagrantRevue) September 19, 2019
reply with an emoji and ill let imagenet roulette classify your icon pic.twitter.com/Pd8NRmfXdi
— 𝐜𝐚𝐢𝐭 (@diorfentys) September 19, 2019
image roulette got fuckin jokes tonight huh! pic.twitter.com/7EihRk16qQ
— bri (@biwinkle) September 19, 2019