Bias in artificial intelligence is a well-known problem, and Twitter is not immune to it. Through a competition, the microblogging service set out to uncover the flaws in its own image-cropping algorithm.

Artificial intelligence (AI) was created to mimic the way the human brain works. With training, AI systems can learn to answer questions, as Siri or Alexa do, or to make decisions on their own.

But this often leads to problems – for example, when an AI system disadvantages minorities or exhibits gender bias.

Twitter has struggled with this problem as well. At the end of 2020, critics pointed out that the algorithm that crops images in the feed exhibits bias: white faces, for example, were favored over Black faces in the cropping.

In response, Twitter first adjusted photo cropping in the newsfeed in May. Since then, portrait-format images have been cropped far less aggressively.

What are the weaknesses of Twitter’s algorithm?

But the company didn't just want to fix the cropping problems visible in the newsfeed. Changes were also planned behind the scenes.

To get to the bottom of its algorithm's problems, Twitter launched its first Algorithmic Bias Bounty Challenge at the end of July 2021. Such bounty challenges are well established in the hacker community, for example for finding security holes in the systems of large corporations.

The challenge took place as part of the AI Village at the online DEF CON conference. It offered prize money of $3,500 for first place, $1,000 for second place and $500 for third place.

The problem of the Twitter algorithm

According to Twitter, it is particularly problematic that "companies don't learn about unintentional ethical harm until after it's been published." Before release, bias in AI is especially difficult to detect, the company says.

With its own challenge, Twitter now hopes to build on the success of such hacker bounties.

We’re inspired by how the researcher and hacker community has helped the security field develop best practices for identifying and mitigating vulnerabilities to protect the public.

Twitter wants to help establish a similar community – but one that specializes in the ethical aspects of artificial intelligence and machine learning.

Twitter algorithm favors slim, young and fair-skinned faces

First place in the challenge went to Bogdan Kulynych, a researcher at EPFL, the Swiss Federal Institute of Technology in Lausanne. According to his results, Twitter's cropping algorithm favors faces that are slimmer, younger and lighter-skinned.

Kulynych summarizes the results as follows: "The target model is biased towards the depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits."

This bias could lead to the exclusion of minorities and perpetuate stereotypical beauty standards in thousands of images.

To arrive at his findings, the researcher fed the algorithm a photo of a human face alongside a series of AI-generated variants of it and compared the scores it assigned.

The algorithm gave higher scores to younger- and slimmer-looking faces. It also preferred lighter or warmer-looking skin, as well as higher-contrast images with more saturated colors.
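The experiment above can be sketched in a few lines. Twitter's actual saliency model is proprietary, so the `toy_saliency` function below is a purely hypothetical stand-in that rewards brightness and contrast (two of the properties the study found to be favored), and the pixel lists are placeholder data rather than real face images.

```python
# Sketch of Kulynych's ranking experiment under a stand-in scoring model.
# Assumption: toy_saliency is NOT Twitter's model, just an illustrative proxy
# that rewards brighter, higher-contrast inputs.

def toy_saliency(pixels):
    """Score a flat list of grayscale pixel values: brighter and
    higher-contrast inputs get higher scores."""
    mean_brightness = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return 0.7 * mean_brightness + 0.3 * contrast

def rank_variants(variants):
    """Score each image variant and return the names, best-scoring first."""
    scores = {name: toy_saliency(px) for name, px in variants.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical variants of one face: the original plus two AI-generated edits.
variants = {
    "original":      [90, 110, 100, 95],
    "brightened":    [140, 160, 150, 145],  # same face, lighter overall
    "high_contrast": [40, 200, 60, 180],    # same face, stronger contrast
}
print(rank_variants(variants))
```

Running the sketch ranks the edited variants above the original, which is the shape of result the study reported: systematic edits toward lighter, higher-contrast faces raise the cropping score.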

The distorted beauty ideal in social networks

The beauty craze on social networks is a well-known problem. So-called beauty filters, but also algorithms – such as Twitter's – reinforce this effect. It is therefore a welcome step that Twitter is now taking action against the problem.