Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment at Refuge, said:
“The disturbing rise in AI intimate image abuse, facilitated by platforms such as Grok, is not just a digital threat – it has dangerous consequences for women and girls. Generative AI has made it easier than ever for perpetrators to create fake images at the expense of women’s safety, and at Refuge, we see firsthand the long-term impact that all forms of intimate image abuse can have on a survivor’s mental health and wellbeing.
“Although technology itself is not to blame, tech companies must be held accountable for implementing effective safeguards and preventing perpetrators from causing harm. Legislation to criminalise creating, or requesting the creation of, non-consensual deepfake intimate images has progressed through Parliament, but we are still waiting for the law to come into effect. Meanwhile, the sharing of real and synthetic intimate images without consent is already illegal, but in practice this law is not being effectively enforced, with woefully low conviction rates.
“As technology evolves, women and girls’ safety depends on tighter regulation around image-based abuse, whether real or deepfake, as well as specialist training for prosecutors and police. Women have the right to use technology without fear of abuse, and when that right is violated, survivors must be able to access swift justice and robust protections.”
If you have been affected by intimate image abuse, including AI-generated images, you are not alone. There are some things you can do:
- If you are in an emergency, please dial 999
- Report the post and account to X
- Contact the Revenge Porn Helpline
- Contact the National Domestic Abuse Helpline
- You have the option to report to the police
- Consider legal (civil) action
If you are worried someone might be monitoring your devices, please take action from a safe device. You can learn more about keeping your technology safe on Refuge’s website.