Friday, September 9, 2016

Can Algorithms Be Racist? (or, "Ku Klux Klomputer")


I know it's become a hackneyed thing to say, but it's still just as true as ever: nobody's perfect. (Although Batman comes pretty close; he is the world's greatest detective and has defeated Superman on multiple occasions with kryptonite and sheer wit.) Everyone has flaws, vices, and prejudices, and everyone screws up from time to time.

That being said, it would be preposterous for any of us humans to expect to create anything intelligent without the flaws we haven't managed to shake in the 200,000 years our species has been on the planet. Therefore, it should come as no surprise that we've seen some AIs exhibit some of our less admirable qualities.

Recently, an online beauty pageant was held that was the first to be judged entirely by an artificial intelligence. Beauty.AI launched about a year ago, and the idea was that people would upload photographs of themselves to the website, and the AI would select the ones it deemed most attractive.

There were a total of 44 winners selected, and the programmers who created the algorithm Beauty.AI used to judge noticed one common factor among them: all of them, barring one, were white.

Doesn't seem like that huge of an issue, right? It wouldn't be if there hadn't been thousands of entries from dark-skinned people, mostly from India and Africa. You'd think the AI would select more than one dark-skinned amateur model, but it showed an obvious preference for lighter-skinned people.

So, what's the deal? Why is this artificial intelligence seemingly racist? The answer's a lot simpler than you might think: In its "youth," the AI wasn't exposed to a plethora of minorities.

It all comes back to algorithms, explained Youth Laboratories, the group that created Beauty.AI. Of course, an AI wouldn't automatically know what humans tend to think of as beautiful, so they taught Beauty.AI what to look for in a contestant by creating an algorithm using thousands of portrait photos. These photos were, overwhelmingly, those of white people. Few people of color were included.

Alex Zhavoronkov, chief science officer of Beauty.AI, was shocked by the contest's winners. Still, he understood why so few people of color were included. "If you have not that many people of color within the dataset, then you might actually have biased results," he commented. "When you're training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased."
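To make the "biased dataset, biased results" point concrete, here's a minimal, purely illustrative sketch (not Beauty.AI's actual method, which isn't public). It assumes a made-up one-number "skin tone" feature and a toy model that learns a single prototype from its training photos; the 95/5 split is a hypothetical imbalance, not the contest's real numbers.

```python
import random

random.seed(42)

def make_face(tone):
    # One hypothetical feature: skin tone in [0, 1], plus a little noise.
    # Real systems work from pixel data; this is only for illustration.
    return max(0.0, min(1.0, tone + random.uniform(-0.05, 0.05)))

# A skewed training set: 95 light-skinned portraits, 5 dark-skinned ones,
# mirroring the kind of imbalance Zhavoronkov describes.
training = [make_face(0.85) for _ in range(95)] + \
           [make_face(0.25) for _ in range(5)]

# "Training": the model's notion of a typical face collapses to the
# mean of what it has seen, which the imbalance drags toward 0.85.
prototype = sum(training) / len(training)

def score(face):
    # Higher score = closer to the learned prototype.
    return 1.0 - abs(face - prototype)

light_score = score(make_face(0.85))
dark_score = score(make_face(0.25))
print(f"prototype={prototype:.2f} "
      f"light={light_score:.2f} dark={dark_score:.2f}")
```

The model never contains an explicit rule about race; the preference for lighter faces falls out of the arithmetic, because nearly everything it was shown during training sat at one end of the feature range.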

This is definitely a problem programmers must anticipate if they're going to be most effective. But even so, could a problem as abstract as making an artificial intelligence's sense of beauty culturally sensitive ever really be solved? Is it an issue even worth remedying? Just some food for thought, I suppose.

