Noelle Martin (pictured) was horrified to discover in 2012 her face had been photo-shopped onto a series of pornographic websites

When Noelle Martin first googled herself in 2012, aged 17, she had no idea she would still be battling the horror of what she found more than ten years later.

The then schoolgirl from Perth, in Western Australia, discovered her face had been photo-shopped onto a series of pornographic images. 

These ‘deepfakes’, images and video that have been digitally created or altered with artificial intelligence or machine learning, looked exactly like her.


The 28-year-old is still battling to have the images removed, declaring: ‘You cannot win’

To this day, Ms Martin, now aged 28, says she does not know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. 

She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Ms Martin spent years contacting different websites in an effort to get the images taken down. Some didn’t respond. Others took the images down, only for her to soon find them back up again.

‘You cannot win,’ said Ms Martin. ‘This is something that is always going to be out there. It’s just like it’s forever ruined you.’

The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment – essentially blaming her for the images instead of the creators.

The Perth-based lawyer helped reform laws which criminalised the distribution of non-consensual images in 2018

Eventually, Ms Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies $555,000 if they do not comply with removal notices for such content from online safety regulators.

But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. 

Ms Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.

Image-based sexual abuse, as the criminal practice is now termed, is skyrocketing – and experts fear recent advancements in artificial intelligence (AI) will make it worse. 

‘The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,’ said Adam Dodge, the founder of EndTAB, a group that provides training on technology-enabled abuse.

‘And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.’

In the meantime, some AI models say they’re already curbing access to explicit images.

OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. 

The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. 

Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. 

Those changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.

Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. 

But it is possible for users to manipulate the software and generate what they want since the company releases its code to the public. 

Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. 

The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. 

The site featured phony images of fellow Twitch streamers.

Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content – even if it’s intended to express outrage – ‘will be removed and will result in an enforcement,’ the company wrote in a blog post. 

And intentionally promoting, creating or sharing the material is grounds for an instant ban.

Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women, and that the most targeted individuals were Western actresses, followed by South Korean K-pop singers.

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves and have them removed from the internet. The reporting site works for regular images as well as AI-generated content – which has become a growing concern for child safety groups.

‘When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,’ said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

‘We have not … been able to formulate a direct response yet to it,’ Portnoy said.
