
Everyone Is Uploading Selfies To The New ImageNet Roulette App—Here’s How It Works

If you live under an internet rock or you simply don’t pay attention to the online trend du jour, you may not have taken any notice of the ImageNet Roulette craze that’s taking over Twitter.

All this week, hundreds of images have popped up on the social media site that look a little strange. They tend to have a green box around them as well as a description of a particular social role beneath them that’s not always positive. It’s weird and terrible and a ton of fun.

The premise is simple: you upload a photo of yourself to ImageNet Roulette and the site's classifier guesses which role it thinks you might play in society, based on whatever patterns it picked up from its training data.

Created by Berlin-based developer Leif Ryge, the art project “uses a neural network trained on the ‘Person’ categories from the ImageNet dataset which has over 2,500 labels used to classify images of people.”
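To make the mechanics a little more concrete: a classifier like the one described here ends with the network producing one raw score (a "logit") per label; a softmax turns those scores into probabilities, and the top-scoring label is what ends up printed under the green box. The sketch below is purely illustrative, not the app's actual code, and the labels and scores in it are invented.

```python
# Hypothetical sketch of the final step of an image classifier of the kind
# ImageNet Roulette is described as using. The network's earlier layers are
# omitted; we start from invented per-label scores (logits).
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels):
    """Return the highest-probability label and its probability."""
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Toy stand-ins for a handful of ImageNet "Person" labels.
labels = ["newsreader", "sociologist", "pilot", "flutist"]
logits = [2.1, 0.3, -1.0, 0.8]  # invented network outputs

label, prob = classify(logits, labels)
print(label, round(prob, 3))
```

Note that the model always commits to *some* label from its fixed list, however dubious, which is part of why the app's output can be so jarring: there is no "none of the above".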

Ryge worked with fellow artist and researcher Trevor Paglen and AI researcher Kate Crawford; the app is part of the Training Humans exhibition at the Osservatorio Fondazione Prada in Milan. What an online app has to do with a real-life art exhibition might seem confusing, but ImageNet Roulette is meant to be an extension of the exhibition, which would be great if it wasn't so horrible.

You see, there’s a major issue with the app: the way it classifies people is downright terrible. Many of the labels it spits out are racist or otherwise discriminatory, and when they’re not, they often make no sense at all. You could take this as proof that artificial intelligence will never supplant human judgment, since it lacks the ability to consciously think about and judge people subjectively (or, in this case, even objectively), but reading it that way seems to undermine the whole point of Ryge’s creation.

As Crawford explained to BuzzFeed News in what seemed to be a defense of the app, “The labels come from WordNet, the images were scraped from search engines. The ‘Person’ category was rarely used or talked about. But it’s strange, fascinating, and often offensive.”

She continued: “It reveals the deep problems with classifying humans—be it race, gender, emotions or characteristics. It’s politics all the way down, and there’s no simple way to ‘debias’ it.”

While that may very well be true, if an app you created is so obviously non-functioning (or functioning extremely poorly, in any case), maybe don’t put it online?

Or maybe do, and let Twitter take it and run with it.