By Karl Quinn
In this screen grab from the Disney+ movie Prom Pact, digital extras can be seen in the crowd, directly behind the basketball team.Credit: Disney
Last weekend, a tiny scrap of footage from the Disney+ movie Prom Pact began circulating on social media. Soon after – and despite the fact it had first been noticed and reported upon months earlier – it was doing the rounds of mainstream news outlets too.
Lasting barely a second, the offending scene showed five students in the second row of the bleachers at a high-school basketball game. They were admirably multi-racial. They were also clearly fake – digital extras brought into being either by poorly handled CGI (computer-generated imagery, driven by a human VFX artist) or by the closely related AI (artificial intelligence, which drives itself).
Either way, they were terrible.
For the Hollywood actors whose negotiations with the studios broke down again last week, the fake extras offered a little comic relief. But they also served as a reminder of one of the key battlegrounds in the dispute: AI could soon take their jobs.
The original Gladiator (2000) was groundbreaking in its use of digital extras, but the sequel has tried to take it even further.Credit: AP
It’s unlikely anyone was laughing the previous week, though, when it was reported that extras on the set of Ridley Scott’s Gladiator sequel had been “pulled aside by production staff and asked to step into a camera-filled booth where their likeness was scanned”.
This was precisely the nightmare scenario the actors’ guild SAG-AFTRA had been warning about for months.
“We want to be able to scan a background performer’s image, pay them for a half a day’s labour, and then use an individual’s likeness for any purpose forever without their consent,” is how the guild sums up the position of the AMPTP (Alliance of Motion Picture and Television Producers), which represents the studios.
The key phrase in all this is “informed consent”, meaning the actor is told how the scanned image might be deployed, given the opportunity to agree or not, and offered adequate recompense for the process – and for the fact that this may be the only paying job in the industry they’ll ever get.
Charles Jazz Terrier, an Australian-French actor living in Los Angeles, has been an enthusiastic early adopter of AI tech in his own work as a podcaster. “We have been able to create visual representations of what our listeners are hearing, and create physical representations of our characters utilising our own likeness, as fully willing and consenting participants in the AI image-generation process,” he says.
Australian-French actor Charles Jazz Terrier, and his AI-generated avatar, Wyatt.
But there’s a big difference between choosing to use the tech and having that choice made for you.
“I wouldn’t feel comfortable consenting to having my likeness scanned and utilised if I was not aware or actively involved in the production, implementation and purpose of its use,” he says. “I feel for the many actors out there who may not have received proper counsel or full disclosure of the intended use of their likeness.”
And that, it seems, is what happened on the set of Gladiator 2 where, as the Times of Malta quoted one extra, “it didn’t really feel like we could say no”.
That performer had now “begun to worry that their likeness will be used in future productions without their knowledge and consent”, the paper claimed.
Whatever the ethics of that alleged approach, there is no surprise in Sir Ridley wanting to push the boundaries of what is possible when technology and humans are brought together.
For the director’s original Gladiator, which was shot in Malta in 2000, VFX company Mill Film filled the Colosseum with digital extras, using crowd replication software to turn 2000 actual humans into 33,000 spectators.
According to Robert Burgoyne’s book The Epic Film in World Culture, “thirty extras (photographed three times, straight on, or at 45 degrees from the side, or above) and also photographed in three different lighting setups (shade, partial shade, full sun) formed a database of individual crowd members, and later these could be placed anywhere they were needed in the Colosseum arena shots”.
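The logic of that database, as Burgoyne describes it, can be illustrated with a short sketch. This is purely hypothetical code, not anything Mill Film used; the names and structure are invented to show how a few hundred photographed variants can populate tens of thousands of seats.

```python
import random

# Hypothetical illustration of the crowd-replication idea described above.
EXTRAS = 30  # individual performers photographed
ANGLES = ["straight on", "45 degrees from the side", "above"]
LIGHTING = ["shade", "partial shade", "full sun"]

# The "database": every combination of performer, angle and lighting setup.
database = [(e, a, l) for e in range(EXTRAS) for a in ANGLES for l in LIGHTING]

def fill_stands(seats, rng=random):
    """Assign a randomly chosen database entry to each seat, so 270
    photographed variants can fill an arena of any size."""
    return [rng.choice(database) for _ in range(seats)]

crowd = fill_stands(33_000)
print(len(database), len(crowd))  # 270 variants fill 33,000 seats
```

With only 270 distinct images to draw on, repetition is inevitable, which is why the variants were placed where camera distance and angle would hide it.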
It was ground-breaking work at the time, and a big part of the reason the film won the Academy Award for best visual effects the following year.
Since then, it has become commonplace for actors to be photographed from multiple vantage points in order to build a dataset. Photogrammetry, as it is known, is used to build a 3D model (of an actor, say) from a set of 2D images, and to manipulate the resulting model within real or virtual screen environments.
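The core geometric step behind photogrammetry can be shown with a toy two-camera example. This is an illustrative sketch only, not a production pipeline: it uses the standard stereo relationship that a point's depth follows from how far it shifts between two parallel camera views.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Toy photogrammetry step: recover the depth (in metres) of a point
    seen by two parallel cameras, from how far the point appears to shift
    between the two images (the disparity, in pixels)."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# A point that shifts 50 px between cameras 0.1 m apart (focal length
# 1000 px) sits 2 metres away; thousands of such recovered points,
# from many vantage points, become the 3D model of the subject.
print(depth_from_disparity(1000, 0.1, 50))  # 2.0
```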
Closely related is motion capture, where an actor provides the movements for a character while wearing a body suit covered in motion sensors. But here only the movements are captured. These provide the framework – or digital skeleton – onto which the VFX artists add the character’s skin. Among the best known, and best, examples are Andy Serkis’s performances as Gollum in the Lord of the Rings and Hobbit movies, Kong in King Kong and Caesar, the chimpanzee general, in the rebooted Planet of the Apes movies.
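The key point, that motion capture records movement rather than appearance, can be sketched in a few lines. Everything here is hypothetical and drastically simplified: a real mocap frame holds rotations for a full skeleton, but the principle that one performer's motion data can drive a very different character rig is the same.

```python
# Hypothetical sketch: a mocap frame stores only joint positions, not what
# the performer looks like, so the same motion can drive any character rig.
frame = {"head": (0.0, 2.0, 0.0), "left_hand": (-0.4, 1.0, 0.2)}

def retarget(frame, scale):
    """Apply one performer's captured frame to a rig of a different size,
    e.g. scaling a human take up onto a much larger creature's skeleton."""
    return {joint: tuple(c * scale for c in pos) for joint, pos in frame.items()}

giant_frame = retarget(frame, 3.0)
print(giant_frame["head"])  # (0.0, 6.0, 0.0)
```

In practice the VFX artists then build the character's body and skin around this moving skeleton, which is how Serkis's physical performance becomes Gollum's or Kong's.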
For the relatively recent, and not always successful, practice of de-ageing (think Robert De Niro in The Irishman), a combination of photogrammetry and motion capture is used.
Speaking to Stephen Colbert on The Late Show earlier this year, Harrison Ford marvelled that the younger version of his character seen in the wartime sequences of the latest Indiana Jones movie “is my actual face at that age”.
“They have this artificial intelligence program,” he explained. “It can go through every foot of film Lucasfilm owns, because I did a bunch of movies for them. They have all this footage, including film that wasn’t printed, so they could mine it for where the light is coming from, the expression … then I put little dots on my face and say the words and they make my [mouth move]. It’s fantastic.”
The de-aged version of Indiana Jones was created using footage of Harrison Ford shot many years ago.Credit: Lucasfilm
Despite the input of AI, it still required three years and more than 100 VFX artists at Industrial Light & Magic to render the scenes of the younger Indy (and even then not every viewer was convinced).
But advances in the technology will inevitably mean the process gets more heavily automated, cheaper, faster and ostensibly better.
Scientific American recently reported that advances in AI had already made some photogrammetry scans unnecessary.
“A generative model can be trained on existing photographs and footage – even of someone no longer living,” the magazine reported. “Digital Domain, a VFX production company that worked on Avengers: Endgame, says it’s also possible to create fake digital performances by historical figures.”
For some Hollywood players, this is where the real faultline in the battle over AI lies.
“The generative AI video companies are way ahead of whatever the studios are thinking about,” Justine Bateman – writer, producer, former Family Ties star and the holder of a degree in computer science and digital data management – said recently on the Hollywood industry podcast The Town. “The fact the studios are not defending their copyrighted films from being fed into these generative AI models is just lunacy.”
Justine Bateman has outlined a vision of a future in which AI is used to churn and reconstitute old footage into new offerings, with no pay-off for the original creators.Credit: Steven Meiers Dominguez
In Bateman’s view, the technology will soon be advanced enough “for this AI blender to churn [source footage] up so that you can … put in seven seasons of an old TV show so you can get an eighth season out of it.”
It is highly likely, she added, that the real action in this space is happening out of the glare of the current showdown in Hollywood.
“Even if the studios aren’t doing it … all these third parties that are not distracted by these labour actions, not distracted by dealing with filmmakers at all, they’re making this stuff. It’s happening,” she said. “And I think we might get to a point where they get it to such a good place that they turn to the studios and say, ‘hey, we’d like to take Citizen Kane and make everybody [in it] Hispanic because we see from the viewing history that you have a lot of viewers that love Hispanic films’. And then they can start licensing that ability from the studios.”
The No Fakes Act – a bipartisan proposal introduced to the US Senate on October 12 – seeks to prevent that, or at least to prevent it happening without the informed consent and compensation of artists affected.
“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” said Democrat senator Chris Coons, one of the four sponsors of the proposed legislation, the full name of which is The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2023.
In a backgrounder, the four cited a couple of recent examples of high-profile transgressions, including the song Heart on My Sleeve, “which was created with AI-generated replicas of the voices of pop stars Drake and The Weeknd”, and an AI-generated version of Tom Hanks that “was used in advertisements for a dental plan that he never appeared in or otherwise endorsed”.
Kendall Jenner has reportedly licensed her image to Meta for two years.Credit: Kevin Mazur
At the other end of the spectrum, celebrities including Paris Hilton, Snoop Dogg, Kendall Jenner and tennis star Naomi Osaka have licensed their images to Meta (the owner of Facebook and Instagram) to allow for the creation of digital clones that will interact with fans. According to one report, “Meta paid one top creator (who goes unnamed) as much as $5 million over two years, for as little as six hours work”.
For most actors, a payday like that is not even worth fantasising about. All they want is to be dealt in some small way into the revolution that everyone knows is coming, and will be televised.
“It is our position that language in a performer’s contract which attempts to acquire [digital image] rights are void and unenforceable until terms have been negotiated with SAG-AFTRA. The rights have not been conveyed,” the guild’s lawyer, Jeffrey Bennett, wrote to the AMPTP in June. “We look forward to working with you to negotiate appropriate terms and conditions as we continue to move our industry forward with exciting new technologies.”