“Training Humans”, conceived by Kate Crawford, AI researcher and professor, and Trevor Paglen, artist and researcher, is the first major photography exhibition devoted to training images: the collections of photos used by scientists to train artificial intelligence (AI) systems in how to “see” and categorize the world.
In this exhibition, Crawford and Paglen reveal the evolution of training image sets from the 1960s to today. As stated by Trevor Paglen, “when we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems. We weren’t interested in either the hyped, marketing version of AI or the tales of dystopian robot futures.” Kate Crawford observed, “We wanted to engage with the materiality of AI, and to take those everyday images seriously as a part of a rapidly evolving machinic visual culture. That required us to open up the black boxes and look at how these ‘engines of seeing’ currently operate.”
“Training Humans Symposium” took place on Saturday 26 October at 2:30 pm, in conjunction with the exhibition. The event involved Prof. Stephanie Dick (University of Pennsylvania), Prof. Eden Medina (MIT), and Prof. Jacob Gaboury (University of California, Berkeley), along with the project curators Kate Crawford and Trevor Paglen. Putting the ideas in the exhibition in conversation with their path-breaking work, the speakers examined questions such as: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?
“Training Humans” explores two fundamental issues in particular: how humans are represented, interpreted and codified through training datasets, and how technological systems harvest, label and use this material. As the classifications of humans by AI systems become more invasive and complex, their biases and politics become apparent. Within computer vision and AI systems, forms of measurement easily – but surreptitiously – turn into moral judgments.
Of particular interest to Crawford and Paglen are classificatory taxonomies related to human affect and emotions. Based on the heavily criticized theories of psychologist Paul Ekman, who claimed that the breadth of human feeling could be boiled down to six universal emotions, AI systems now measure people’s facial expressions to assess everything from a person’s mental health, to whether someone should be hired, to whether a person is going to commit a crime. Looking at the images in this collection, and seeing how people’s personal photographs have been labeled, raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?
As underlined by Crawford, “There is a stark power asymmetry at the heart of these tools. What we hope is that ‘Training Humans’ gives us at least a moment to start to look back at these systems, and understand, in a more forensic way, how they see and categorize us.”
The exhibition will be accompanied by an illustrated publication in the Quaderni series, published by Fondazione Prada, including a conversation between Kate Crawford and Trevor Paglen on the complex topics addressed in their project.