Algorithms aren't just judging us, they're comparing us.

As issues around algorithmic bias become more visible, the common assumption is that the problem can be fixed: algorithms should simply be trained to be bias-aware. But is that a viable solution?

The problem is that most "AI" companies don't train their own "AI". They simply implement off-the-shelf algorithms.

Not only are people judged by algorithms; the algorithms themselves compete in a free market, which makes them subject to survival of the fittest too.

The algorithms that are the most generally applicable become the most popular. But generic algorithms are also the most biased.

The installation

The installation consists of a screen and a camera. As soon as it recognises two people, a number of off-the-shelf algorithms start judging the faces of both contestants. The goal: determine which of the two seems more employable.

Participants could play with masks and other props to try to get a better score and beat their opponent.
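The judging loop described above can be sketched in a few lines. This is a minimal illustration, not the installation's actual code: the stub "judges" below are hypothetical stand-ins for the off-the-shelf vision models, each mapping a face to a score between 0 and 1, with the averages compared to pick a winner.

```python
# Sketch of the installation's judging loop (assumed structure, not the
# real implementation). Each "judge" stands in for an off-the-shelf
# classifier and maps a face to a score in [0, 1].

from typing import Callable, Dict

Judge = Callable[[dict], float]

def judge_employability(face: dict, judges: Dict[str, Judge]) -> float:
    """Average the per-criterion scores for one contestant's face."""
    return sum(judge(face) for judge in judges.values()) / len(judges)

def pick_winner(face_a: dict, face_b: dict, judges: Dict[str, Judge]) -> str:
    """Compare two contestants and return the 'more employable' one."""
    score_a = judge_employability(face_a, judges)
    score_b = judge_employability(face_b, judges)
    return "A" if score_a >= score_b else "B"

# Dummy judges that just read precomputed values, for illustration:
judges: Dict[str, Judge] = {
    "smile":    lambda face: face["smile"],
    "symmetry": lambda face: face["symmetry"],
}
face_a = {"smile": 0.9, "symmetry": 0.4}   # averages to 0.65
face_b = {"smile": 0.3, "symmetry": 0.8}   # averages to 0.55
print(pick_winner(face_a, face_b, judges))  # → A
```

The point the sketch makes is the same one the piece makes: "employability" here is nothing more than an arbitrary average of whatever scores the off-the-shelf models happen to produce.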

Judging was done on a number of (ridiculous) criteria.

This is really happening

While this piece is a caricature, similar things do really happen. 

To find the best candidate for a job, companies like HireVue ask potential employees to record a video of their own face with their own smartphone.

This practice has been widely criticised: it's not just invasive, it's also highly unscientific.

In politics

The first version of the installation was shown at the Unfreezing Freedom conference in Rotterdam in 2020. This sold-out event about data-driven chilling effects was mostly visited by politicians.

If you'd like to have a version of this installation at your event:

Get in touch

In education

Currently, 60 students at the Hogeschool Rotterdam are using the installation as a starting point to create their own installations.

Learn more