Film and television actors in the US came out on strike on July 14, causing Hollywood productions to shut down. The action has also had an impact on US films shooting in the UK: director Tim Burton’s Beetlejuice 2 has “paused” and the production of Deadpool 3, filming at Pinewood Studios with stars Ryan Reynolds and Hugh Jackman, has been stood down.
The dispute is about remuneration for actors, very few of whom enjoy the high income of Hollywood stars. But an additional argument between the union, SAG-AFTRA, and film producers is about the use of artificial intelligence (AI). Actors are fearful of the impact of AI on their careers.
When they perform on film sets, their image and voice are digitally recorded at extremely high resolution, providing producers with huge amounts of data. Actors are concerned that this data can be reused with AI. New processes based on machine learning – AI systems that improve as they are trained on more data – could turn an actor’s performance in one movie into a new character for another production, or for a video game.
Actors feel an urgent need to control how AI manipulates their image. The union president, Fran Drescher, says: “We are all in jeopardy of being replaced by machines.” But how realistic are these fears?
‘Synthetic media’
When we talk about the use of AI in film and television, there are multiple techniques under development. We categorise these as “synthetic media”. This covers processes such as deepfakes, voice cloning, visual effects (VFX) created using AI, and completely synthetic image and video generation.
I have written before about deepfakes for The Conversation, pointing out the benefits as well as dangers. For screen actors, deepfakes are one of the most vivid threats.
This is because, since machine learning took off, Hollywood stars including Scarlett Johansson and Gal Gadot have found their faces deepfaked into porn movies. This is a major gender-based issue: it’s almost always female actors whose images are manipulated and used in this way.
We tend to think of artificial intelligence as omnipotent. But my research has found that integrating deepfakes into the language of cinema and TV drama is difficult. Certain shot types are easy, such as front-on long shots, but asking the AI to produce a profile shot tests the algorithm to its limit.
Industry research and development (R&D) programmes, such as those at Disney Research, have invested a huge amount of effort into perfecting deepfake techniques. But no one has yet produced an easy way to swap an actor’s face into any shot size or angle the director chooses, with convincing, high-definition results.
Background actors
The actors’ union, SAG-AFTRA, is particularly concerned about background actors – or “extras” – being exploited by producers using AI manipulation. The union’s special agreement for background actors, which lists the additional payments they should receive, currently says nothing about the use of AI on recorded footage, so the arrival of the new technology necessitates a negotiated deal with the producers.
The Alliance of Motion Picture and Television Producers (AMPTP) claims to have made a “groundbreaking AI proposal which protects performers’ digital likenesses, including a requirement for (a) performer’s consent for the creation and use of digital replicas or for digital alterations of a performance”.
However, the actors’ union boss Duncan Crabtree-Ireland retorted: “They proposed that our background performers should be able to be scanned, get paid for one day’s pay, and their companies should own that scan – their image, their likeness – and should be able to use it for the rest of eternity in any project they want, with no consent and no compensation.”
Ethical dimension
This month, I convened a meeting at the University of Reading at which academics, stakeholders and creative producers came together to discuss the issues of AI in screen production. We have formed the Synthetic Media Research Network, a group that wants to see strong ethics built into the exciting new opportunities that AI brings to the screen industries.
Philosophers, lawyers, ethicists and trade unionists joined the discussion, because establishing a values-based system for how AI can change performers’ images and identities is a fundamental issue for the film and TV industries.
When I talked to Liam Budd, national officer for the UK actors union, Equity, he said: “If you’re going to exploit our members’ work using AI tech, you have to get consent from them and many members won’t want to.” Currently, there is no nationally agreed system regulating how performers give consent for the use of AI on their image.
Actors will want to be persuaded that the extra pay they receive makes it worthwhile – or they want the right to opt out on a job-by-job basis. The current situation is that actors feel obliged to sign away their rights “in all media” and “in perpetuity”.
Dr Mathilde Pavis, an expert on AI rights and intellectual property, says: “You can’t ask all of this from people without either remuneration or something in return, and at the moment that’s being added on to their contracts without more given in return.” The lack of agreed terms has led to Equity launching a campaign called Stop AI Stealing the Show.
Last week, the union also held rallies in Manchester and London in support of their striking counterparts in the US. When US actors began a dispute of similar scale in 1980, they stopped work for three months. Brian Cox, star of Succession, thinks that this strike may last until the end of the year.
Actors are angry that their system of payment has not caught up with the streaming era, with Netflix, Amazon and Disney repeatedly screening their work while paying little in the way of royalties.
But fear is the stronger emotion here: AI is a new technology that sparks deep and legitimate fears for screen actors. Will they be “replaced by machines” as the union president has said? Unless they can be reassured about their future, American actors will not be returning to the studios soon.