Artificial intelligence: authentic scams.

AI tools are being maliciously used to send "hyper-personalized emails" so sophisticated that victims can't tell they're fraudulent.

According to the Financial Times, AI bots are compiling information about unsuspecting email users by analyzing their “social media activity to determine what topics they may be most likely to respond to.”

Scam emails that appear to be written by family and friends are then sent to these users. Because the messages are so personal, recipients struggle to recognize that they're actually nefarious.

“This is getting worse and it’s getting very personal, and this is why we suspect AI is behind a lot of it,” Kristy Kelly, the chief information security officer at the insurance agency Beazley, told the outlet.

“We’re starting to see very targeted attacks that have scraped an immense amount of information about a person.” 

“AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they’re from trusted sources,” security company McAfee recently warned. “These types of attacks are expected to grow in sophistication and frequency.”

While many savvy internet users now know the telltale signs of traditional email scams, it’s much harder to tell when these new personalized messages are fraudulent.

Gmail, Outlook, and Apple Mail do not yet have adequate “defenses in place to stop this,” Forbes reports.

“Social engineering,” ESET cybersecurity advisor Jake Moore told Forbes, “has an impressive hold over people due to human interaction but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online.”

Bad actors can also use AI to write convincing phishing emails that mimic banks, online accounts and more. According to US Cybersecurity and Infrastructure Security Agency data cited by the Financial Times, over 90% of successful breaches start with a phishing message.

These highly sophisticated scams can bypass security measures, and the inbox filters meant to screen emails for scams may fail to catch them, Nadezda Demidova, a cybercrime security researcher at eBay, told the Financial Times.

“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” Demidova said.

In a recent blog post, McAfee warned that 2025 would usher in a wave of advanced AI used to “craft increasingly sophisticated and personalized cyber scams.”

Software company Check Point issued a similar prediction for the new year.

“In 2025, AI will drive both attacks and protections,” Dr. Dorit Dor, the company’s chief technology officer, said in a statement. “Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns.”

To protect themselves, users should never click on links within emails unless they can verify the legitimacy of the sender. Experts also recommend bolstering account security with two-factor authentication and strong passwords or passkeys.

“Ultimately,” Moore told Forbes, “whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information when requested — however believable the request may seem.”
