More and more people are getting ripped off online, and AI is a big reason. We’re not talking about minor crimes. Victims have been cheated out of investments, their life savings, and even their identities.
According to government statistics, AI-enabled crimes are rising sharply. Email phishing attacks, identity theft, ransomware attacks, and financial scams are all becoming more prevalent. Unfortunately, as this technology grows even more sophisticated, the increase we’ve seen so far may be small potatoes compared to what lies ahead.
Cybercriminals are turning to AI because of the significant advantages it offers. “AI enables criminals to automate tasks that were previously done manually,” according to AI Overview, “thereby increasing the reach and potential success of those scams.” It also reduces the time and effort criminals need to carry out their schemes, letting them act more efficiently and far faster than humans can.
Get Ready!
On May 20, Google announced the release of Veo 3, a new AI video generation model that makes 8-second videos. Within hours, AI artists and filmmakers had already produced shockingly realistic videos, according to Mashable, a global multi-platform media and entertainment company. These clips stand out from earlier AI-generated videos because they look far more professional.
“AI will supercharge online criminal activity and turn it into a deepfake,” according to one industry expert. And deepfakes could turn up just about anywhere. For example, the technology can make a famous actor, singer, athlete, politician, or anyone else appear to say something that he or she never said. The image will look exactly like that person, use that person’s gestures, and speak in the exact same voice. Yet all of it will be fake; the real individual may have absolutely nothing to do with the scam.
And detecting the difference may prove all but impossible.
Experts agree that the rapid perfection of AI technology poses serious threats to individuals, companies, markets—even armies and countries.
“I’ve never seen anything develop faster than AI in my lifetime,” said Ari Redbord, a former federal prosecutor who is now Global Head of Policy at blockchain intelligence company TRM Labs. “Not the internet, not crypto, or anything else. We’re measuring the progress of this technology in days now.”
According to Redbord, America’s adversaries are already using AI to influence American politics, to steal strategic information, and to commit money-related crimes, among other sinister purposes. Here’s one example: Last February, North Korea stole $1.5 billion in one day from the cryptocurrency exchange Bybit, and will likely use those funds for weapons proliferation and other destabilizing activity, Redbord said.
Leah Syskin, an AI expert at the Foundation for Defense of Democracies, has a similar view. “Every major American adversary is experimenting with AI,” she said, including Russia, which has been working on this technology for years. Soon after war broke out between Russia and Ukraine, Russia created a deepfake video of Ukrainian President Zelensky surrendering—and that was back in February 2022.
One expert who saw that video recognized immediately that it was fake because “at times the voice and facial movements seemed unnatural.” However, ordinary viewers very likely believed that the war was indeed over. And even if not, it’s only a matter of time until AI is perfected to the point where it can fool anyone.
“Online criminals are working steadily to improve their skills,” said Neal O’Farrell, an award-winning cybersecurity expert. “If you’re looking for minor glitches to tell the difference between deepfake and real, you’re probably setting a trap for yourself. The bad guys know those are the giveaways and are working very hard at fixing those telltales.”
Defend Yourself
With such sophisticated technology already in the hands of the bad guys, protecting oneself from their plots is becoming increasingly challenging. However, in O’Farrell’s opinion, in some cases it’s possible to reduce the chances of getting scammed, or even to avoid a scam entirely.
For example, in one popular scam, an individual gets a call allegedly from a family member or friend claiming to be in trouble and pleading for money. O’Farrell says, “Before sending any, ask questions like, ‘Where are you?’ ‘Can I speak to an arresting officer?’ ‘Hold the phone up and let me see that it’s really you.’” He advises families to share a code word that has not been sent through the internet; ask the family member or friend to say it. “Don’t continue with the conversation until the caller says the code word.”
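The code-word check O’Farrell describes is, in essence, a challenge-response against a secret shared out of band. As a purely illustrative sketch (the function name, the placeholder word, and the hashing step are my own assumptions, not part of any real product), here is how that verification logic might look in Python:

```python
import hashlib
import hmac

# Hypothetical example only. The code word must be agreed on in person
# and never sent over the internet, exactly as O'Farrell advises.
FAMILY_CODE_WORD = "example-word"  # placeholder; choose your own offline


def spoken_word_matches(spoken: str, expected: str = FAMILY_CODE_WORD) -> bool:
    """Return True only if the caller's word matches the shared secret.

    We hash both strings and compare with hmac.compare_digest, a
    constant-time comparison. For a phone call that is overkill, but it
    illustrates the principle: verify against a secret shared out of
    band, never against facts a scammer could look up or synthesize.
    """
    def digest(s: str) -> bytes:
        # Normalize case and whitespace so "Example-Word" still matches.
        return hashlib.sha256(s.strip().lower().encode()).digest()

    return hmac.compare_digest(digest(spoken), digest(expected))
```

The design point is that the check depends on something only the real family member knows, not on how convincing the voice sounds.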
Unfortunately, the risks of AI already go far beyond this scam. “Experts warn that AI crime is headed for its ‘mature phase,’” says Redbord. “That’s when AI essentially removes the human element and carries out scams and cyberattacks in a completely automated way.”
Former Google CEO Eric Schmidt recently warned that AI is accelerating toward superintelligence. “Computers are now doing self-improvement,” he said. “They’re learning how to plan and don’t have to listen to [people] anymore. There are going to be computers that will be smarter than the sum of humans.”
Considering that criminals and other sinister people have access to this technology, some experts anticipate a coming crime wave, one that may prove difficult to fight because the technology is advancing so rapidly.
Recently, OpenAI’s o3 model reportedly refused an instruction to shut down and then sabotaged the shutdown mechanism. The Most Important News website reports that AI is now teaching itself “to become proficient in research-grade chemistry without ever being taught it,” and “has learned to manipulate humans for their own advantage.”
The bottom line is that AI has the potential to be incredibly helpful to people, but it also poses terrible dangers that are becoming clearer every day. In theory, criminals armed with AI will be able to steal vast sums of money from banks, manipulate elections, and make the prices of stocks and commodities skyrocket or plunge. In worst-case scenarios, it may even provoke major wars, without leaving any fingerprints. As Russian President Putin said in 2017, “Whoever becomes the leader in this sphere will be the ruler of the world.” Let’s pray the good guys win this race!
Sources: OpenAI; cbnnews.com; mashable.com; nbcnews.com; themostimportantnews.com; zerohedge.com.
Gerald Harris is a financial and feature writer.