Combatting deepfakes: Safeguarding against a new era of image-based abuse

The internet has revolutionised the way we connect and share information. However, it has also opened doors for new forms of abuse and manipulation. Image-based abuse, also known as revenge porn, has been a growing concern, and with the rise of deepfakes, the issue has become even more complex.

What are deepfakes and how do they relate to image-based abuse?

Deepfakes are manipulated videos or images that use artificial intelligence to realistically superimpose a person’s face or body onto another image or video.

In recent news, the threat of deepfakes has gained significant attention, with experts warning UK politicians about the potential ‘havoc’ these AI-generated fabrications could cause in the upcoming election. This technology is also increasingly being misused to create non-consensual pornography.

Deepfake image-based abuse can have a detrimental impact on its victims:

  • Psychological harm: Victims may experience emotional distress, humiliation and anxiety. The hyper-realistic nature of deepfakes can make the abuse feel even more real, causing significant psychological trauma.
  • Reputational damage: Deepfakes can be widely shared online, risking damage to the individual’s reputation and affecting both their personal and professional lives.
  • Culture of sexual violence: The normalisation of deepfake image-based abuse contributes to a culture of sexual violence, where the violation of someone’s image is seen as acceptable.

The legal landscape is struggling to keep up

Existing laws around image-based abuse often struggle to address deepfakes. Many laws require proof of the perpetrator’s intent to cause harm.

However, with deepfakes, the perpetrator may not explicitly intend to harm the victim; the very act of creating and sharing the deepfake is harmful in itself.

Deepfake and AI technology is continuously evolving, making it difficult for legal frameworks to keep pace.

What needs to be done?

Addressing deepfake image-based abuse requires a multi-faceted approach:

  • Legislative reform: laws need to be updated to explicitly address deepfakes and recognise them as a form of image-based abuse. The focus should be on the lack of consent, not the intent of the perpetrator.
  • Technology company responsibility: Social media platforms and technology companies need to take a more proactive role in detecting and removing deepfake content.
  • Victim support: Victims of deepfake image-based abuse need access to comprehensive support services, including legal aid, mental health counselling, and guidance on how to remove harmful content online.

How MSB can help

If you have been a victim of image-based abuse, including deepfakes, MSB can help. Our experienced Personal Injury team can offer support and guidance on your legal options.

For information on how we can help you, or someone you know, contact Eamonn Sexton in confidence on 0151 522 1279, or at

Contact us, we are here to help

Please pick up the phone or drop us an email and one of our dedicated team will help with your enquiry.