
M&C in the news - Deepfake regulation: A double-edged sword?

As generative AI continues to evolve, deepfake technology has rapidly become one of its most controversial and potentially dangerous applications. Initially popularised through humorous content, such as the memorable deepfake of the Pope wearing a Moncler jacket, deepfakes are now being created for more harmful purposes. 

While generative AI tools progress at a rapid pace, regulation struggles to keep up. New UK legislation is emerging, but gaps remain, leaving individuals and businesses vulnerable to both malicious use and unintentional legal pitfalls. Mike Shaw and Graeme Murray take a deep dive into the steps businesses need to take to protect themselves in this article for leading global technology publication TechRadar. 

Deepfake technology is rapidly emerging as AI’s latest ‘Pandora’s box’. No longer limited to producing parodic content of politicians (who could forget the Pope sporting Moncler?), generative AI is now being actively weaponized, from misleading political deepfakes and clickbait celebrity advertisements to school children creating explicit deepfake images of classmates. As the capabilities of AI tools race ahead of regulation, many are growing concerned about the very real threat the technology poses. New legislation is coming in, but much of it is too narrow or too vague to protect people comprehensively. And on the flip side, these new rules have implications that could easily catch out professionals trying to utilize generative AI in legitimate ways. So, what legal protection currently exists in the UK around deepfake technologies, and what behaviors are prohibited?


Tags

deepfakes, deepfake, newsroom, brands & trade marks, digital transformation, artificial intelligence