Key Learnings from Refuge’s Tech Summit

By Meg Warren-Lister

Last month, Refuge held its second-ever Tech Safety Summit, the UK’s only conference dedicated to raising awareness of tech abuse. The two-day event featured major speakers including Adolescence’s Jack Thorne, Ofcom CEO Melanie Dawes, campaigner Adele Walton and Glamour editor Lucy Morgan, as well as experts from Refuge’s sector-leading Technology-Facilitated Abuse and Economic Empowerment Team, and survivors. 

Convened as a forum to spotlight emerging threats, this year’s summit took ‘innovative responses’ to tech-facilitated abuse as its theme, with panel sessions highlighting cutting-edge, survivor-informed responses to this growing form of Violence Against Women and Girls (VAWG).

With talks spanning everything from the dark web to sextortion, we’ve rounded up the biggest takeaways – so you know exactly what we’re up against, and how we’re fighting back. 

Keeping up with tech abuse 

A key theme raised by speakers, including Refuge Chair Hetti Barkworth-Nanton, was the need for a regulatory framework that takes a proactive approach to tech abuse. In a panel session on online safety, Baroness Charlotte Owen noted that technology is evolving ever more rapidly, and that parliaments and governments around the world are “struggling to keep pace.”

As a result, Baroness Owen said, we must start “horizon scanning” and put policies in place before new forms of tech abuse have a chance to proliferate and cause widespread harm. 

Speakers expressed particular concern about intimate image abuse, especially in light of the increasing availability of deepfake technologies. “This kind of abuse is like water, it always finds a way through,” Baroness Owen said, urging the Government to continue looking at global best practice. Another harm flagged by panellists was so-called “semen images” – manipulated images of women that depict semen on their faces. Baroness Owen explained that while these kinds of images are “clearly abuse,” they are not currently captured by existing laws. This gap highlights the wider need for an improved legal and policy response to harmful misogynistic content online.

Harmful but not illegal content  

In a similar vein to Baroness Owen, Ofcom’s CEO Melanie Dawes noted that while much online misogyny is not illegal, it is still “horrific” and spreading at scale. In response, she spoke extensively about the need for companies to moderate their algorithms so that young people are not exposed to this content with such intensity.

Glamour editor Lucy Morgan also addressed the harmful impacts of the normalisation of misogyny. “Much of my reporting focuses on X (formerly Twitter). ‘Grok’, which is X’s AI chatbot, rates women out of 10. This might not be directly considered abuse, but it’s contributing to a culture in which women are dehumanised and once we get used to that, we’re more likely to tolerate women being deepfaked or sexually assaulted.” 

Panellists including Ms Morgan and Baroness Owen also criticised the use of ‘freedom of speech’ arguments in opposition to legislation tackling VAWG. As Baroness Owen put it: “It’s always coming at it from the perspective of the abuser’s freedom of speech – not all the women who have been silenced, who are too afraid to have social media, who are too frightened to go out of their house because of what has happened to them.”

Emerging threats 

Medical abuse 

Among other issues, the summit highlighted emerging forms of tech abuse. Jayne Mirzan, a Tech Abuse Lead at Refuge, warned that the NHS’s shift towards a ‘digital by default’ model could have serious ramifications for survivors. Reduced face-to-face contact with GPs can limit opportunities for disclosures of abuse and the documentation of injuries, while perpetrators may exploit digital systems to exert further control.

The proliferation of home technology under the umbrella of “tech-enabled care” – such as telecare systems and personal alarms – is intended to support independent and safe living, but these tools can be manipulated by abusers to prevent survivors from seeking help. Additionally, health-monitoring technology, including devices that track glucose levels, can be interfered with. Even seemingly innocuous features like medication reminders and online appointment systems can be sabotaged.

Lindsay Balfour, a Lecturer in Communications at the University of Glasgow, also drew attention to the risks posed by femtech and cycle-tracking apps. If a perpetrator has or gains access to these apps, they can use the data as part of sexual abuse or to facilitate forced pregnancy. In some cases, information about sexual activity with new partners may be monitored even after a survivor has left the abusive relationship. 

Young people and VAWG 

In one AI-focused session, Refuge’s Youth Tech Abuse Lead, Larome Hyde, shared that the charity has seen a rise in young people talking about AI boyfriends and girlfriends. Panellists, including Chayn’s Head of Policy, Eva Blum-Dumontet, noted that AI relationships set a “completely inadequate expectation of what real relationships look like, and what their challenges are.” 

Against this backdrop, the growing trend of children turning to AI to experience romantic relationships is a “source of concern” for gender-based violence activists. For Jay Jones, a Youth Voice and Influence Officer at the National Youth Agency, the fact that some young people are using these AI bots to experiment with sex could lead to “warped perspectives and blurred lines around consent.” Discussing how regulation might address these concerns, Colette Collins-Walsh, Head of UK Affairs at 5Rights Foundation, called for a duty of care, arguing that companies should have a strict responsibility to ensure their products are safe for users.

Tackling tech abuse  

Policing 

For consultant Alan Owen, some of the biggest concerns from a policing perspective are non-consensual intimate image abuse and deepfake pornography, which can have a “devastating effect on survivors – both psychologically and reputationally.” Spoofs – where survivors receive fake videos of themselves that are then shared online – are also an emerging threat. 

Technology now enables the creation of harmful content far more quickly than it did even a few years ago. “Years ago, there was a need for perpetrators to have some technical skills, but AI has lowered the bar. It’s now much more accessible for everybody to create harmful material, flood online spaces with misogynistic content, and even scrape different websites and databases to pull together information about a survivor,” explains Emma Pickering, Head of Refuge’s Technology-Facilitated Abuse and Economic Empowerment Team.

As a result, there is an urgent need for this shift to be reflected in police responses. According to Alex Murray, the Director of Threats Leadership at the National Crime Agency and the UK’s Policing Lead for Artificial Intelligence, “we still see pushback with police in the sense that online harms can be treated as if they are of a lower risk and threat level compared to offline crimes – which is just not the case.” Mr Murray joined other experts at the summit in calling for targeted police training on the impact of image-based abuse and the legal framework available to support prosecution.

Challenging techno-optimism  

For Adele Walton, tackling tech abuse requires challenging the culture of techno-optimism – the belief that all technology is inherently good and unbiased. As Ms Walton put it, “tech companies have managed to evade their corporate responsibility for far too long,” and it is essential to establish a duty of care that requires them to prevent harm to their users.  

Fellow panellist Baroness Owen also urged parliamentarians to protect the Online Safety Act (OSA), which has recently come under attack. 

Enforcement 

The need for effective enforcement of the OSA was also cited repeatedly, with Baroness Owen arguing that Ofcom must become more agile. “There’s a huge concern for me around the agility of Ofcom to go after companies, especially small aggregator sites,” she explained, referring to the ease with which smaller companies can shut down after being penalised and re-establish themselves under a new name the next day.  

As one step towards addressing this, Dame Melanie Dawes announced that within 18 months of the upcoming VAWG guidance being published, Ofcom will publish a transparency report showing which companies are complying with it. She also said that the regulator has proposed adding an additional hash matching measure to its Illegal Content Codes, making it easier for tech companies to identify and remove intimate images that have been shared without consent.
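For readers unfamiliar with the technique, hash matching lets platforms detect known harmful images without the images themselves ever being shared: a unique digital ‘fingerprint’ (hash) of each reported image is stored, and new uploads are checked against that list. The sketch below is purely illustrative – the hash database and file paths are hypothetical, and it uses exact cryptographic hashes for simplicity, whereas real deployments typically use perceptual hashes so that resized or re-encoded copies of an image still match.

```python
# Illustrative sketch only: a hypothetical hash-matching check.
# Real systems use perceptual hashing (so altered copies still
# match) and industry-shared hash databases, not a local set.
import hashlib

# Hypothetical database of fingerprints of known non-consensually
# shared images. Only the hashes are stored, never the images.
KNOWN_HASHES = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block_upload(path: str) -> bool:
    """Flag an upload whose hash matches a known harmful image."""
    return sha256_of_file(path) in KNOWN_HASHES
```

Because only fingerprints are exchanged, survivors never have to hand over the images themselves – one reason this approach is favoured for tackling intimate image abuse.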

Safety-by-design 

Across sessions, one recurrent call to action stood out: the need to champion safety-by-design approaches to technology development. In the words of Adele Walton: “Rather than having a reactive approach to online harms, we need to actually prevent them in the first place.”

To this end, we urge companies to engage with survivors, campaigners and those with lived experience, starting from the design stage. As Glamour editor Lucy Morgan explained, consultation with survivors must be meaningful, and their recommendations actually implemented, to avoid the common trap of companies engaging in purely performative consultation. 

Melanie Dawes echoed the need for this approach, urging platforms to “think like perpetrators” when designing products, by anticipating how their systems could be exploited by VAWG perpetrators. Given the particular importance of privacy for survivors of domestic abuse, Dawes also called for built-in safety measures, such as an ‘I want to be private’ button.  

Currently, it is all too common for tech companies to prioritise profits over safety. This must change. The safety of women and girls cannot be an afterthought – it must be a non-negotiable foundation of every platform, every product, and every policy.