Refuge responds as offence criminalising the creation of intimate deepfakes comes into force amid news of a new deepfake detection framework

Emma Pickering, Head of the Tech-Facilitated Abuse and Economic Empowerment Team at Refuge, said: 

“Countless survivors have been harmed by horrific deepfake abuse while waiting for the law criminalising the creation, or requesting the creation, of intimate deepfakes without consent to come into force. The enactment of this offence represents a significant and long-overdue step towards protecting survivors from the rapidly increasing threat of AI-facilitated intimate image abuse. 

As the UK’s largest specialist domestic abuse charity, with a dedicated Technology-Facilitated Abuse and Economic Empowerment team, Refuge sees firsthand the devastating impact that intimate image abuse has on survivors. In 2025, the team saw a 62% rise in referrals, reflecting a huge increase in reported rates of tech-facilitated abuse of all kinds. 

The new offence is the result of years of tireless campaigning by survivors, academics, civil society and dedicated parliamentarians. The decision to make it a priority offence under the Online Safety Act reflects the seriousness of this degrading form of abuse, placing a welcome onus on tech companies to take proactive steps to prevent it from occurring on their platforms.  

Despite this, we know that conviction rates for all intimate image abuse offences are woefully low. For the new law to result in meaningful change, it must be backed by mandatory training on tech-facilitated abuse for police and prosecutors, to ensure that perpetrators can effectively be brought to justice. 

The Government’s commitment to developing and implementing a deepfake detection evaluation framework to assess detection tools and technologies is a positive step towards tackling AI-generated abuse. But, for this to deliver real protections for survivors, it must be underpinned by clear, enforceable requirements for online platforms to step up. 

It is essential that any framework to evaluate and set standards for deepfake detection technology is grounded in the needs of survivors impacted by this form of abuse. 

Survivors should not be expected to carry the burden of identifying and reporting deepfake abuse while tech companies profit from their own inaction. Online platforms must be required to play their part by proactively identifying and swiftly removing abusive content, and held accountable when they fail to do so. Change is overdue, and survivors cannot wait.”