SUMMARY
Generative AI could help cyber criminals produce even more cunning traps. This calls for even greater awareness, caution and online security.
How might generative artificial intelligence (AI) improve our lives?
Since the launch of tools such as ChatGPT from late 2022, there has been growing excitement over the potential of this technology, which can create new written, picture, video and audio content based on existing data.
More accurate medical diagnoses and targeted treatments, lessons customized to a learner’s needs, faster responses to natural disasters and productivity gains across the economy are just a few of the possibilities.
However, not all applications stand to benefit humanity. Among those already exploring malicious uses are cyber criminals, who aim to take their attempts to defraud, defame and disrupt to the next level.
Here, we explore how generative AI could enable two prevalent scams – and how we might defend against the threat.
Phishing
With billions sent every day, phishing emails are the most common variety of cyber-attack.
“Your account has been frozen,” “Package received,” “Confirm your identity,” and “Claim your prize,” are just a few of the inbox ruses from criminals posing as financial institutions, government bodies, couriers, streaming entertainment providers and much more besides.
These attempts to get recipients to divulge account logins and passwords, credit card details or other sensitive information can sometimes be clumsy fakes.
Basic spelling or grammatical errors, the wrong tone of voice or sloppy presentation can provide important clues that all is not as it should be.
Generative AI, however, can help cyber criminals produce cleaner text that closely matches the writing style and design of genuine emails from the sender they’re impersonating.
Vishing
While less common than phishing, deceiving people into handing over sensitive data on voice calls – or vishing – is another threat.
Generative AI can enable voices to be mimicked in a highly convincing way, right down to the choice of words and tone of a speaker.
This raises the risk of highly targeted calls in which staff hear someone who sounds exactly like one of their firm’s senior leaders, demanding that they make wire transfers or release information on the spot.
Another possibility is calls that are so convincing they can pass automated voice recognition checks.
Defending against AI-powered cyber crime
The first step in staying safe in the face of intensified cyber threats is awareness.
As well as understanding the threat yourself, make sure that others, such as your loved ones, employees and family office staff, do too.
Robust cyber security policies and education within companies are essential.
For phishing emails, even greater caution is needed when assessing incoming messages, given that malicious items may be increasingly hard to distinguish from the real deal. Pay particular attention to the sender’s email address.
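To make the sender-address check concrete, here is a minimal illustrative sketch (not a security product) of the kind of rule an email filter, or a careful reader, can apply: flag a message whose display name suggests one organization while the actual address uses an unrelated domain. The bank name, addresses and domains below are made up for the example.

```python
# Illustrative sketch: flag a "From" header whose display name implies
# one organization while the actual address comes from another domain,
# a common phishing tell. All names and domains here are fictitious.
from email.utils import parseaddr

def looks_suspicious(from_header: str, expected_domain: str) -> bool:
    """Return True if the sender's real domain is neither the expected
    domain nor a subdomain of it."""
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return not (domain == expected_domain
                or domain.endswith("." + expected_domain))

# A message claiming to be from a bank but sent from an unrelated domain:
looks_suspicious('"Example Bank" <support@examp1e-bank-alerts.com>',
                 "examplebank.com")   # flagged
# A message whose address matches the organization it claims to be:
looks_suspicious('"Example Bank" <alerts@mail.examplebank.com>',
                 "examplebank.com")   # not flagged
```

Real filters weigh many more signals (links, attachments, authentication records such as SPF and DKIM), but the principle is the same: the claimed identity and the technical sender details should agree.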
In the case of vishing, rigorous cross-checks should always be made before granting a request phoned in by someone who sounds exactly like the boss or an important client – for example, by calling the person back on a number known to be genuine.
Turning on multi-factor authentication on all sensitive accounts where available adds a layer of security.
AI itself is likely to play a role in the fight against AI-powered cyber crime.
For example, email filtering software and other advanced detection tools based on this technology could help identify anomalies and other vital clues that the human eye may not be able to see.
By combining awareness of the emerging threats, the latest technological defenses and proactive security practices, individuals and organizations will be better placed to defend their data, assets and reputation.