Artificial Intelligence is a Criminal’s Best Friend

Experts are sounding the alarm about the risks of artificial intelligence ending up in the hands of criminals.

The most powerful new tools available to criminals arrived in the last few years: artificial intelligence products. A lack of security measures and safety guardrails has led to a boom in AI-driven scams, malware, and misinformation.

These images are fake, but they set off a chain reaction of reposts and genuine emotional responses before being debunked. Both were created with Midjourney, an artificial intelligence app that can generate any image from a text prompt. The photo of Pope Francis in a puffy jacket went mainstream within days, and people who genuinely believed it reposted it across their social media.

Close examination of both photos reveals the usual AI artifacts: skewed lines, deformed glasses, too many fingers, strange skin textures, and blurry photo edges. But given that the average web user consumes the internet by scrolling, such details are very easy to miss.
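Beyond eyeballing the pixels, metadata offers a quick first triage step: AI generators typically write no camera EXIF data at all. Below is a minimal Python sketch using Pillow; the filename is a hypothetical placeholder, and missing EXIF is only a weak signal, since screenshots and re-uploads strip it too.

```python
# Quick triage: AI generators typically write no camera EXIF data.
# Absence of EXIF proves nothing on its own (screenshots and reposts
# lose it too), but a genuine camera Make/Model is a useful counter-signal.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_triage(path: str) -> None:
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        print("No EXIF found - consistent with, but not proof of, AI generation.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

exif_triage("suspect_photo.jpg")  # hypothetical filename
```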

Experts point to security measures like watermarks and disclaimers, but these are not being deployed yet. Government regulation lags years behind current technical development. The creators of the images were banned from Reddit and Midjourney, but bans will not stop criminals who want to use these services.
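To make the watermark idea concrete, here is a toy sketch of how an invisible provenance mark could be embedded and read back, hiding a short tag in the least significant bits of an image's blue channel. This is purely illustrative and not any vendor's actual scheme: real proposals use statistical watermarks that survive compression and cropping, which this toy does not (it only survives lossless formats such as PNG). File names are hypothetical.

```python
# Toy illustration of an invisible watermark: hide a provenance tag in the
# least significant bit of the blue channel. Not a real vendor scheme;
# robust watermarks survive recompression, this one requires lossless PNG.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed(path_in: str, path_out: str) -> None:
    px = np.array(Image.open(path_in).convert("RGB"))
    bits = np.array(
        [int(b) for byte in TAG.encode() for b in f"{byte:08b}"], dtype=np.uint8
    )
    flat = px[:, :, 2].flatten()                            # blue channel
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs
    px[:, :, 2] = flat.reshape(px.shape[:2])
    Image.fromarray(px).save(path_out, "PNG")               # must stay lossless

def extract(path: str, n_chars: int = len(TAG)) -> str:
    px = np.array(Image.open(path).convert("RGB"))
    bits = px[:, :, 2].flatten()[: n_chars * 8] & 1
    return bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2)
        for i in range(0, bits.size, 8)
    ).decode()

embed("generated.png", "marked.png")  # hypothetical file names
print(extract("marked.png"))          # prints: AI-GENERATED
```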

Europol released a paper detailing how text-generating AIs can be used to write convincing scam and phishing emails and be put to work in disinformation, propaganda, cybercrime, and terrorism. It also noted that the safeguards put in place for ChatGPT are easily removed and have many workarounds. The paper recommends that law enforcement officers start developing the skills needed to work cases involving AI-generated assets and train their own AI tools to keep up with new risk trends.
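As a flavor of what "training their own AI tools" could mean in practice, here is a minimal sketch of a phishing-email classifier built with scikit-learn. The four training emails and their labels are hypothetical placeholders; a usable model would need thousands of labeled messages and far more careful evaluation.

```python
# Minimal sketch of a starter phishing classifier: TF-IDF features plus
# logistic regression. The training data below is a toy placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account or it will be suspended today",
    "Your invoice for last month's consulting services is attached",
    "You have won a prize, click here to claim your transfer",
    "Agenda for Thursday's project review meeting",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

test = "Please confirm your password immediately to avoid suspension"
print(model.predict_proba([test])[0][1])  # estimated probability of phishing
```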

Online Fraud

Voice deepfakes are gaining traction in scams. A UK-based energy company lost $243,000 in 2019 after its CEO received a call from an AI-cloned voice of the chief executive of its parent company urgently requesting a money transfer. Symantec, a cybersecurity company, reported three further cases in which an executive's voice was impersonated by AI.

ElevenLabs, a synthetic-speech start-up, saw its technology used to manipulate the voices of celebrities into making racist, transphobic, homophobic, and violent remarks. The company promises to synthesize a voice from as little as a minute of recorded audio.

Cybersecurity firms have found evidence that AI has been used to write malware code. Some of the people who posted that code had no programming experience, which shows how low the barrier now is for criminals. The same reporting showed that underground forums are used to discuss how to evade detection by AI safety teams and to sell ChatGPT prompts and code that help make money through hacking.

AI-Generated Images

Images produced by GANs (Generative Adversarial Networks) are turning up in a growing number of scams, fake documents, fake social media profiles, and other misrepresentations. Anyone can generate a GAN face using a website like this-person-does-not-exist.com.
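For investigators who want a reference set of synthetic faces to practice on, a few lines of Python can collect samples. This sketch assumes the site returns a fresh generated face at its root URL on each request; the URL (taken from the text above) and the output file names are illustrative, not verified.

```python
# Sketch: collect sample GAN faces for building a practice/detection corpus.
# Assumes the site serves a fresh synthetic face per GET request at its
# root URL; URL and file names are illustrative.
import time
import requests

URL = "https://this-person-does-not-exist.com"  # site referenced above
for i in range(5):
    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    with open(f"gan_face_{i}.jpg", "wb") as f:
        f.write(resp.content)
    time.sleep(2)  # be polite to the server
```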

Luckily for us, even as AI advances, there are still ways to recognize an AI-generated photo. I posted a guide on my blog on how to tell whether an image is a GAN, with examples taken from a real scam. Investigators should look for asymmetry in ears, eyes, hair, and accessories, examine clothes, colors, and shapes in the background, and run the image through reverse image search tools.
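One of those tells, left/right asymmetry, can be roughly quantified in code. The sketch below mirrors a portrait and measures average pixel disagreement; real faces are never perfectly symmetric either, and lighting and pose dominate the score, so treat this as a coarse triage signal rather than a detector. The filename is a hypothetical placeholder.

```python
# Sketch: coarse left/right asymmetry score for a portrait. Mirrors the
# image and averages the per-pixel difference; high values in an otherwise
# clean headshot are a flag to look closer, not proof of a GAN.
import numpy as np
from PIL import Image, ImageOps

def asymmetry_score(path: str) -> float:
    img = Image.open(path).convert("L").resize((512, 512))
    arr = np.asarray(img, dtype=np.float32)
    mirrored = np.asarray(ImageOps.mirror(img), dtype=np.float32)
    return float(np.abs(arr - mirrored).mean())  # 0 = perfectly symmetric

print(asymmetry_score("suspect_profile.jpg"))  # hypothetical filename
```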


Oxana Korzun

Oxana Korzun is the voice behind the Investigator blog. She is a Certified Fraud Examiner and a professional investigator with more than eight years of experience at companies including Meta, AIG, and Transparency International.
