What if it doesn't go wrong when prisoners train AI?

David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0

Tuukka Lehtiniemi

Wired recently published the story "These prisoners are training AI" on data labor in Finnish prisons. The unconventional data labor arrangement is also one of REPAIR's ongoing case studies, and that's why I was interviewed for the story.

From a researcher's point of view, Wired does a good job of bringing together the different elements of the AI training project, including the usual data labor arrangements in the Global South, language issues related to typical data labor platforms, and the prison system's motivations for bringing in data labor.

Nevertheless, the story gives an overall impression of a looming threat. Surely, when AI and tech firms are involved, some ethical shenanigans must be going on. And if something is not yet wrong, it could be, and soon probably will be. This threat narrative is, of course, much more widespread than one magazine article. It's how we are accustomed to seeing AI projects – in terms of doom and gloom.

However, all the Finns quoted in the story – myself included – speak positively about the project, even though it's also clear that its overall effects remain ambiguous. AI training in the prison, as we have written elsewhere, works in the domain of the aspirational: it is associated with a curious form of techno-hope – a hope of rehabilitation, a hope that prisoners' lives might get a little better if they work on a computer. In the prison, as one of our informants put it, success comes in very small steps.

Since we Finns are apparently too close to, or too ignorant of, what's looming, Wired brings in a researcher from Human Rights Watch to comment on the case. They present a slippery slope – while data labor might be fine now, it will get worse:

“Even if the data being labeled in Finland is uncontroversial right now, we have to think about the precedent it sets […] What stops companies from outsourcing data labeling of traumatic and unsavory content to people in prison, especially if they see this as an untapped labor pool?” 

What stops companies from forcing prisoners to handle toxic materials with their bare hands, or, say, from subjecting prisoners to harmful medical experiments? The answer is that rules exist. Regulations that already protect prisoners' human rights continue to govern what happens around AI.

To me, the most interesting aspect of the AI training project is exactly this: The Finnish prison is a place that tech companies and AI projects cannot transform to their liking. It is a strictly structured and regulated environment with its own rules, policies, and values. AI training and data labor do not and cannot simply turn Finnish prisons into clickwork sweatshops. And while ChatGPT trainers in Kenya were exposed to psychologically harmful content, such content does not automatically and inevitably appear in the prison. The prison system’s rules would not allow it.

We should remain critical and take great care that such things do not happen in the future, which is why any AI-related developments in the prison should be closely monitored. To me, as a researcher and an observer, the Finnish prison's rules and policies appear robust and effective in protecting prisoners from abusive forms of data labor.

While AI developments are often problematic, so is the threat narrative. At worst, it makes us assume that nothing can be done – that AI developments simply cannot turn out any differently. If we assume that AI is only bad and will only get worse, we also assume that rules and regulations cannot have any effect. So far, in the Finnish prison, these assumptions would be mistaken. Ultimately, lumping everything AI-related together as dubious and problematic can mean passing up the opportunity to carefully examine outlier cases – such as the Finnish prison – to learn what could be done to make AI developments better.
