Blog

1 November 2023

Amidst the excitement about AI innovation, the government must not forget its responsibility to protect women and girls.

By Dr Michaela Bruckmayer, Research Lead

On 1 and 2 November 2023, the UK government welcomes country representatives, tech companies, and other relevant stakeholders from around the world for an Artificial Intelligence (AI) Safety Summit.¹ The first global event of its kind, the summit reflects the UK government’s commitment to “make Britain a global AI superpower.”² But protecting women and girls amidst these advances must be central to the government’s agenda.

Rishi Sunak’s first global AI Safety Summit will welcome world leaders and tech executives such as Kamala Harris and Elon Musk to Bletchley Park to discuss the future of AI and its potential security risks and impacts. The focus on the ‘safety’ and ‘risks’ of AI is welcome, considering the severity of the harms that can result when AI is abused. When it comes to technological advancements, excitement among experts, industry, and governments about the “enormous potential to power economic growth, drive scientific progress and wider public benefits” frequently drowns out concerns about the technology’s negative consequences.³ This seems to be particularly the case for the risks and dangers experienced by women and minoritised communities. As Bailey et al. (2021) point out, governments have a history of overlooking issues like hate speech and children’s exposure to pornography in favour of creating environments that foster innovation.⁴

In a White Paper published in March 2023, the UK government promised to take a ‘pro-innovation approach’ to AI regulation.⁵ Protecting women and girls from the harms AI can cause must be central to this agenda.

AI technology is used to create deepfakes. The most common deepfakes on the Internet are non-consensual sexual depictions of women.

Deepfake technology uses AI to “merge, combine, replace, and superimpose images and video clips onto another to create authentic-looking ones.”⁶ This can include, for example, superimposing an image of someone’s face onto another person’s body. In short, “deepfake technology can make it appear as though anyone is saying or doing anything” when, in reality, they are not.⁷ These tools are easy to use and widely accessible.⁸ And as the technology continues to advance, it is becoming harder and harder to distinguish between real and fake media.⁹

Much of the public discussion on deepfakes has focused on the “misuse of deepfakes to manipulate elections, perpetrate fraudulent business practices, alter public opinion, and threaten national security.”¹⁰ However, the most common deepfakes shared on the internet are non-consensual sexual depictions of women.¹¹ As with other forms of intimate image abuse, such as so-called ‘revenge porn’ and ‘sextortion’, “non-consensual sexual deepfakes can be used by abusive men to control, intimidate, isolate, shame, and micromanage victims, who are by far, mostly women.”¹²

Impacts of deepfakes on survivors are far-reaching and long-lasting, and perpetrators are seldom held to account.

Much like other forms of intimate image abuse, the impacts of deepfake and digitally altered image abuse on survivors can be severe and far-reaching. They can “damage a person’s reputation [and] employability.”¹³ They can harm a person’s health and wellbeing by causing psychological trauma and feelings of “humiliation, fear, embarrassment and shame.”¹⁴

Furthermore, survivors may find it difficult to be online following this type of abuse, as it can feel re-traumatising.¹⁵ Survivors might also withdraw from online connections, feeling that they cannot trust them anymore.¹⁶

In addition, survivors of non-consensual deepfakes or other forms of intimate image abuse can fear for their personal safety in ‘the real world’. It has been shown that they can “experience an increased risk of offline harassment, stalking and assault, especially where their contact details are shared alongside the imagery.”¹⁷ And because deepfakes are designed to be convincing, there have been cases where they were used to harass and intimidate women’s rights activists and campaigners.¹⁸

The Online Safety Act, which received Royal Assent on 26 October 2023, criminalises sharing, or threatening to share, photos or films “made or altered by computer graphics or in any other way.”¹⁹ However, more needs to be done. In January 2023, Refuge published an analysis of intimate image abuse.²⁰ It found that, although sharing or threatening to share intimate images is a criminal offence, only 4% of reported cases resulted in charges being pressed.²¹ Deepfakes can also be used to “perpetuate gendered and racialized stereotypes about women, reinforce men’s sexual entitlement to women’s bodies, and shame and degrade women for being featured in sexually explicit content.”²² In addition, deepfake technology can be used for deceptive purposes, including blackmail, bullying, or the fabrication of evidence.²³ Policy-makers must find ways to protect women from this type of abuse.

There are AI tools which aim to support and protect abuse survivors. However, they place too much responsibility on women and are often faulty.

While many argue that AI can be used to protect survivors of domestic abuse,²⁴ some are starting to challenge this view. Often, these applications and tools put too much responsibility on survivors, rather than on law enforcement, technology companies, or perpetrators.²⁵

In addition, as AI tools begin to be used by police and support services, concerns are being raised about algorithms being used to assess a survivor’s risk of domestic abuse.²⁶ There have been incidents where algorithms ranked a survivor’s risk as ‘too low’, which meant that they did not receive the support or protection they required.²⁷ The risks posed by these technologies must be weighed appropriately against any potential benefits.

In its quest to become the world leader in AI, the UK government must ensure that women and girls are adequately protected and perpetrators are held accountable. Refuge will continue to examine the impacts of AI on domestic abuse and monitor trends in AI development.

If you have been affected by the issues raised in this blog and are interested in becoming involved in Refuge’s public relations and advocacy work, please get in touch at policy@refuge.org.uk.

If you are experiencing abuse, please contact the National Domestic Abuse Helpline on 0808 2000 247 or visit www.nationaldahelpline.org.uk for support.

For information relating specifically to technology-facilitated abuse, please visit www.refugetechsafety.org.


References

[1] Department for Science, Innovation and Technology and The Rt Hon Michelle Donelan MP, Guidance: AI Safety Summit: Introduction, gov.uk, 25 September 2023. Last accessed on 26 October 2023. https://www.gov.uk/government/publications/ai-safety-summit-introduction

[2] HM Government (2021), National AI Strategy. Available here.

[3] About the AI Safety Summit, gov.uk, last accessed on 26 October 2023. https://www.gov.uk/government/topical-events/ai-safety-summit-2023/about

[4] Bailey, J. et al. (2021), “AI and Technology Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada. Available here.

[5] Policy paper: A pro-innovation approach to AI regulation, Presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty on 29 March 2023. Available here.

[6] Lucas, K. T. (2022) Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology, Victims & Offenders, 17:5, 647-659. Available here.

[7] Mulko, M. What is deepfake technology and how does it work? And is there any effective way to detect it? interestingengineering.com, 17 November 2022. Last accessed on 26 October 2023. https://interestingengineering.com/culture/deepfake-technology-how-work

[8] Ibid. See also: Bailey, J. et al. (2021), “AI and Technology Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada. Available here.

[9] Ibid. See also: Bailey, J. et al. (2021), “AI and Technology Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada. Available here.

[10] Lucas, K. T. (2022) Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology, Victims & Offenders, 17:5, 647-659. Available here.

[11] Ibid.

[12] Ibid.

[13] Flynn, A. et al. (2022) Deepfakes and digitally altered imagery abuse: A cross-country exploration of an emerging form of image-based sexual abuse. British Journal of Criminology. [Online] 62 (6), 1341–1358, citing Dodge and Johnson (2018).

[14] Flynn, A. et al. (2022) Deepfakes and digitally altered imagery abuse: A cross-country exploration of an emerging form of image-based sexual abuse. British Journal of Criminology. [Online] 62 (6), 1341–1358. Available here.

[15] Ibid.

[16] Ibid.

[17] Ibid.

[18] Bailey, J. et al. (2021), “AI and Technology Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada. Available here.

[19] Milmo, D. TechScape: How the UK’s online safety bill aims to clean up the internet. The Guardian, 24 October 2023. https://www.theguardian.com/technology/2023/oct/24/techscape-uk-online-safety-bill-clean-up-internet. See also: House of Lords and House of Commons (2023), Online Safety Bill, 188(5), last accessed on 26 October 2023. Available here.

[20] Bottomley, B. and Michaela Bruckmayer, Intimate image abuse – despite increased reports to the police, charging rates remain low. refuge.org.uk. 25 January 2023. Last accessed on 26 October 2023. https://refuge.org.uk/news/intimate-image-abuse-despite-increased-reports-to-the-police-charging-rates-remain-low/.

[21] Bottomley, B. and Michaela Bruckmayer, Intimate image abuse – despite increased reports to the police, charging rates remain low. refuge.org.uk. 25 January 2023. Last accessed on 26 October 2023. https://refuge.org.uk/news/intimate-image-abuse-despite-increased-reports-to-the-police-charging-rates-remain-low/.

[22] Ibid.

[23] Lucas, K. T. (2022) Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology, Victims & Offenders, 17:5, 647-659. Available here.

[24] See for example: Kumari, M. Harnessing the Power of AI in the Violence Against Women and Girls, hopetraining.co.uk, 23 May 2023, last accessed on 26 October 2023. https://hopetraining.co.uk/harnessing-the-power-of-ai-in-the-violence-against-women-and-girls/.

[25] Bellini, R. et al. (2020). Mechanisms of Moral Responsibility: Rethinking Technologies for Domestic Violence Prevention Work. Available here.

[26] Heikkilä, M. AI: Decoded: Spain’s flawed domestic abuse algorithm — Ban debate heats up — Holding the police accountable, POLITICO, 16 March 2022, last accessed on 26 October 2023. https://www.politico.eu/newsletter/ai-decoded/spains-flawed-domestic-abuse-algorithm-ban-debate-heats-up-holding-the-police-accountable-2/.

[27] Ibid.