Deepfakes are no longer a novelty. They are quickly becoming a systemic threat to business, society, and democracy. According to the European Parliament, around 8 million deepfakes will be shared in 2025, up from just 0.5 million in 2023. In the UK, two in five people claim to have come across at least one deepfake in the past six months. But where once they might have been relatively easy to spot, the increasing sophistication of publicly available AI models has made detection harder than ever.
Advances in generative adversarial networks (GANs) and diffusion models have been catalysts for the growth of advanced, hyper-realistic deepfakes. Both technologies have been instrumental in enabling seamless face-swapping and voice modulation in live video calls or streams. This has massively improved the user experience, with capabilities such as virtual avatars making gaming and meetings more personalised and immersive. But it has also opened the door to real-time impersonation scams.
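For readers curious about the mechanics, the sketch below shows the adversarial loop at the heart of GAN training in a few lines of PyTorch. It is a minimal illustration on random placeholder data; the layer sizes, learning rates, and step count are arbitrary assumptions, not taken from any real deepfake pipeline.

```python
# Minimal GAN training loop: a generator learns to produce samples that a
# discriminator cannot tell apart from "real" data. All dimensions and data
# here are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 64, 128, 32

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(batch, data_dim)            # stand-in for real face/voice features
    fake = generator(torch.randn(batch, latent_dim))

    # The discriminator learns to separate real samples from generated ones...
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ...while the generator learns to fool it, pushing its output toward realism.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

It is this arms race between the two networks that drives output quality ever higher, and the same dynamic is what makes detection a moving target.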
You might think that only the uninitiated would fail to recognise an impersonation of someone they know well and trust. But in May last year, scammers used deepfake video on a live conference call to pose as senior executives at the engineering firm Arup, convincing an employee in the finance department to transfer HK$200m to five local bank accounts. Similar attacks impersonating senior employees and CEOs have been launched against the likes of Ferrari, WPP, and Wiz within the past 12 months, undermining trust in digital communications.
Voice cloning has also surged alongside deepfakes. AI-driven voice synthesis is now capable of replicating the human voice with a startling degree of accuracy. Astonishingly, just a few seconds of audio are enough to create an almost perfect clone. That might be great for all sorts of creative uses, such as personalised audiobooks or dubbing, but it has the potential to cause immense harm.
In July of this year, a woman in Florida was duped into handing over US$15k in bail money after hearing what she believed was her daughter crying for help after a car accident. The call opened with an AI clone of her daughter's voice before being passed to a supposed attorney, who provided instructions for the transfer. The fact that these clones are built from mere snippets of people's voices, easily harvested from social media, highlights the potential for misuse.
On social media, the line between reality and fiction is blurring. AI-generated virtual influencers dominate the online marketing landscape, offering brands fully controllable personas. Audiences now have to navigate a world where legitimate and artificial personalities are virtually indistinguishable, raising questions about authenticity in the media. In Hollywood, deepfakes are used to de-age actors or recreate historical figures. While that gives production companies the ability to improve the quality of their content at relatively low cost, it also gives scammers the means to reproduce a convincing likeness of famous celebrities, whether to push fraudulent endorsements or to spark controversy.
But the stakes go far higher than celebrity misrepresentation. Deepfakes can be used to sow political division, by spreading false narratives or fabricating videos of political figures delivering fake speeches. The consequences can be profound, swaying public opinion, changing the course of national elections, and potentially poisoning global political discourse.
Faced with so many threats, governments worldwide are responding. In Europe, the AI Act requires that content generated or modified with the help of AI be labelled as such, so that users are aware of its origin. While the act stops short of banning deepfakes, it does ban the use of AI systems that manipulate people covertly in certain contexts. Some governments are also actively using or investing in detection technologies that can identify subtle changes in voices, faces, or images.
But regulation still lags behind the technology. Mandatory labelling, AI artifact detection algorithms, and audio forensics are an important part of the solution, but quelling the deepfake threat requires a much broader strategy. Robust regulation and ethical guidelines, together with investment in media literacy, have an equal, if not bigger, part to play in combating deepfake fraud and misinformation.
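To give a flavour of what artifact detection involves, the toy sketch below computes one simple forensic statistic: the share of an image's spectral energy sitting in high frequencies, which the upsampling layers of many generators are known to distort. The radius cut-off here is a hypothetical choice for illustration; real detectors are trained classifiers, not fixed rules like this.

```python
# Toy frequency-domain forensic check: generator upsampling often leaves
# periodic traces that shift an image's high-frequency energy away from
# what camera sensors typically produce. Illustrative only.
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core of a grayscale image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h // 2, x - w // 2)
    high = spectrum[radius > min(h, w) / 4].sum()   # hypothetical cut-off radius
    return float(high / spectrum.sum())

frame = np.random.rand(256, 256)   # stand-in for a decoded video frame
print(f"high-frequency energy ratio: {high_freq_energy_ratio(frame):.3f}")
```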
Regulation and ethical guidelines must become more proactive, with watermarking and mandatory disclosure standards becoming customary features of any deepfake strategy. Media literacy, meanwhile, must be treated as a priority: citizens need the critical thinking skills to question what they see and hear. Only through collaboration between regulators, the private sector, and civil society can we protect digital life and ensure the deepfake threat becomes a thing of the past.
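As a concrete, if deliberately simplified, picture of what watermarking means, the sketch below embeds and recovers a provenance bit string in the least significant bits of an image's pixels. Real disclosure standards, such as C2PA provenance metadata, are far more robust to compression and cropping; this only illustrates the basic idea of carrying an "AI-generated" signal inside the content itself.

```python
# Toy least-significant-bit watermark: hide a short provenance flag in an
# image's pixel values and read it back. Purely illustrative; production
# watermarks must survive re-encoding, which LSB embedding does not.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each watermark bit into the LSB of one pixel."""
    marked = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit   # clear the LSB, then set it to the bit
    return marked.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the watermark bits back out of the LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]             # e.g. a hypothetical "AI-generated" flag
marked = embed_watermark(image, payload)
assert extract_watermark(marked, len(payload)) == payload
```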