Artificial Intelligence could make soaps to rival the BBC in just FIVE YEARS, Slow Horses director reveals – as he tells MPs that British stories can be ‘enabled’ by technology

A director of Apple TV's Slow Horses has told MPs that artificial intelligence could create soaps to rival the BBC in less than five years (EastEnders' Jacqueline Jossa, pictured)

A director of Apple TV’s hit series Slow Horses has told MPs that artificial intelligence could create soaps to rival the BBC in less than five years.

James Hawes spoke to politicians during the UK Culture, Media and Sport Committee’s inquiry into British film and high-end TV and the impact of technology on the industry.


During the investigation, James shared the findings of a forum held following the news that the BBC’s long-running soap Doctors would be axed at the end of this year.

The soap has been plagued by declining ratings, and an attempt to move it to a primetime slot failed to attract new audiences.

James, the vice-chair of Directors UK, told MPs: ‘One of the members there started talking about AI, and he asked me to investigate how long it would take before a show like Doctors could be made entirely by generative AI. I conducted a survey among several VFX people.


‘Then I talked to some of the legal team that advised SAG-AFTRA and the Writers Guild of America last summer before they came here.

‘And the best guess is that in three to five years someone can say, “Create a scene in an emergency room where a doctor comes in, he’s having an affair with a woman, so they’re flirting, and someone dies on the table”, and it’s going to create it, and you’re going to build on that, and it’s going to be generative AI.

‘It may not be as polished as we’re used to, but that’s how close we’re getting, and I find that hard to take in, for all the creatives involved.

‘I believe the genie is out of the bottle; I believe we have to live with this. I also think it can be incredibly enabling.

‘I think all parts of storytelling, and British storytelling, can be enabled by this, but we have to protect the rights holders.’

James also told MPs that Slow Horses, a series he directed, was rejected by several British broadcasters before being picked up by Apple TV+.

‘When Apple picked it up,’ he said, ‘they wondered whether it was just too idiosyncratic and too British, and whether it would travel out into the world, even though we obviously have a reputation for the spy genre.

‘Gary Oldman’s following, and the show’s subsequent success, show that even “quirky Brits” can travel, and it is now the longest-running returning series on Apple.


‘It has shown that we can think beyond the parochially British, and that we can turn Britain’s smaller stories into stories that look outward and have universal themes.

‘I think that is very important. We have to be aware of the balance, the critical balance, between the benefits of foreign investment and having our own domestic industry.’

Developments in artificial intelligence, including the rise of ChatGPT, have threatened multiple industries in recent years.

Earlier this week, a “scary” new tool, Sora, which can produce hyper-realistic videos from text, prompted warnings from experts.

Sora, unveiled on Thursday by OpenAI, demonstrated powerful examples such as drone footage of Tokyo in the snow, waves crashing against the cliffs of Big Sur, and a grandmother enjoying a birthday party.

Experts have warned that the new artificial intelligence tool could wipe out entire industries such as film production and lead to a surge in deep fake videos in the run-up to the crucial US presidential election.

‘Generative AI tools are evolving so quickly, and combined with social networks that is creating an Achilles heel in our democracy, and it couldn’t have happened at a worse time,’ Oren Etzioni, founder of TruMedia.org, told CBS.

“As we try to resolve this, we are faced with one of the most consequential elections in history,” he added.

The quality of AI-generated images, audio and video has increased rapidly over the past year, with companies like OpenAI, Google, Meta and Stability AI racing to create more advanced and accessible tools.

“Sora can generate complex scenes with multiple characters, specific types of movements, and precise details of the subject and background,” OpenAI explains on its website.

“The model understands not only what the user is asking for in the prompt, but also how those things exist in the physical world.”

The tool is currently being tested and evaluated for potential security risks, but a public release date has not yet been announced.

The company has revealed examples that are unlikely to offend, but experts warn the new technology could unleash a new wave of extremely lifelike deepfakes.

“We’re trying to build this plane as we fly it, and it will land in November if not sooner, and we don’t have the Federal Aviation Administration, we don’t have the history and we don’t have the resources to do this,” Etzioni warned.

Sora ‘will make it even easier for malicious actors to generate high-quality video deepfakes, and give them more flexibility to create videos that can be used for offensive purposes’, Dr Andrew Newell, chief scientific officer of identity verification company iProov, told CBS.


“Voice actors or people who create short videos for video games, educational purposes or advertising will be most directly affected,” Newell warned.

Deepfake videos, including those of a sexual nature, are becoming an increasing problem, both for private individuals and for people with a public profile.

‘Look where we have come in a year of image generation. Where will we be in a year?’ Michael Gracey, a film director and visual effects expert, told the Washington Post.

Earlier this week, a 'scary' new tool, Sora, which can produce hyper-realistic videos from text, prompted warnings from experts


Another AI-generated video of Tokyo in the snow has shocked experts with its realism


“We will take several important security steps before making Sora available in OpenAI products,” the company wrote.

“We are working with red teamers – domain experts in areas such as disinformation, hateful content and bias – who will test the model in an adversarial manner.

The company added: ‘We’re also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora.’

Deepfake images gained extra attention earlier this year when AI-generated sexual images of Taylor Swift circulated on social media.

The images originated on the website Celeb Jihad and showed Swift performing a series of sexual acts while dressed in Kansas City Chiefs memorabilia and in the stadium.

The star was left ‘furious’ and considering legal action.

President Joe Biden has also spoken about the use of AI and revealed that he has fallen for deepfakes of his own voice.

‘It’s already happening. AI devices are being used to deceive people. Deepfakes use AI-generated audio and video to smear reputations,’ Biden said, ‘and spread fake news and commit fraud.’

‘With AI, fraudsters need just three seconds of your voice. I’ve listened to one of mine a few times and said, “When the hell did I say that?”’ Biden told a crowd of officials.

He then spoke about technology’s ability to fool people through scams. IT experts have also warned about the potential for misuse of AI technology in the political space.

On Friday, several major tech companies signed a pact to take “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok have pledged to take preventative measures.

‘Everyone recognizes that no technology company, no government, no civil society organization is capable of addressing the advent of this technology and its potentially nefarious uses on its own,’ said Nick Clegg, Meta’s president of global affairs, after the signing.

