Q&A: How is AI enhancing scams? This UVA expert knows

By now, you know the email from a wealthy African prince is a fraud. But is that really a friend’s voice on the telephone saying they’re in trouble?

Lana Swartz, a media studies professor in the University of Virginia’s College of Arts & Sciences, studies the intersection of AI and fraud, and how people can protect themselves. She recently chatted with UVA Today.

Q. Are AI scams something new? Or is AI a tool that has helped fraudsters take common scams to a new level?

A. Both. AI automates the old mass scams – traditional phishing emails sent to millions – but it’s also democratizing what used to be elite techniques.

“Spear phishing” used to mean highly targeted, personalized attacks on high-value individuals like CEOs, requiring teams of researchers and social engineers. It’s now much less resource-intensive to run the same kind of operations on ordinary people. AI can also make scam communication more effective, through better language translation and more precise targeting of vulnerable groups, such as people with dementia or lower reading levels.

Swartz says sophisticated AI-enabled scam artists can easily create fake but convincing online personas to gain trust before fleecing their victims. (Photo by Lathan Goumas, University Communications)

AI is also enabling entirely new categories of scams, such as synthetic identity fraud, which uses AI-generated faces, backstories and social media histories to fabricate people who never existed but who have complete digital footprints capable of passing most verification systems.

Q. What are the most common AI-assisted scams and fraud?

A. Voice cloning for “grandparent scams” gets the headlines, and it’s important, but we’re also seeing fully AI-augmented “pig butchering” – where scammers use AI to maintain fake relationships for months. Initially, the victim sees some return on the investment, which encourages larger and larger cryptocurrency investments. Once the “pig” is fattened up, the scammer takes the money.

Even without a direct investment pitch, AI can be used to generate fake crypto tokens with fake celebrity endorsements and fake trading platforms. Sextortion is another big one, using AI-generated compromising images to blackmail victims. Every scam you can think of has an application for AI. There are AI-generated fake product reviews, job postings and customer service representatives.

Q. What should people know about AI scams? How can they protect themselves?

A. Be skeptical of urgent requests for money or personal information, especially those involving voice calls claiming to be from family members in distress. Verify identities through independent channels before taking action. Slow down when faced with high-pressure situations.

Be particularly cautious of investment opportunities promising quick returns, especially when they come from online romantic partners, friends and mentors. Don’t take conversations off established platforms. Remember that legitimate organizations won’t ask for sensitive information via unsolicited calls or messages. When in doubt, hang up and contact the organization directly using official contact information you find independently.

Q. If someone falls prey to an AI scam, what should they do next? 

A. Act immediately. Contact your bank or payment processor. Freeze your credit. Report the scam to the FBI’s Internet Crime Complaint Center, the Federal Trade Commission and the platform where it occurred. Document and take screenshots of everything. Credit cards offer much better protection than wire transfers, cryptocurrency or even debit cards.

Be aware that a victim who has developed a relationship with the scammer will likely face continued emotional manipulation. Unfortunately, cross-border cryptocurrency scams rarely result in recovery of the funds. The faster you act, the better your chances. It’s also important to reduce the shame and stigma around being scammed.

Q. You co-wrote a report about how AI has elevated fraud. With rapid AI advances and no shortage of scammers, what keeps you up at night?

A. The sheer complexity of what we’re dealing with. This isn’t just a technology problem or a regulatory problem or an education problem; it’s all of those simultaneously. It’s a problem with global reach. Many scams are perpetrated by human trafficking victims in places with failed state governance.

On the U.S. side, it’s all happening in a context of reduced trust in institutions and economic uncertainty. More and more people face precarity, turn to informal financial channels and make risky choices.

Q. What can be done?

A. The solution must be multi-pronged. We need government, industry and civil society collaboration across borders. We need cultural education that meets people where they are. Scams are already a huge part of popular culture, so let’s work with that instead of against it.

We need to invest in financial education that actually reflects what people are facing today, not outdated advice that fails to account for how complex the online financial landscape has become.

AI companies and tech platforms must accept greater responsibility for preventing AI-enabled scams by implementing robust detection systems, reporting mechanisms and policies for removing fraudulent content. Beyond self-regulation, governments need to establish comprehensive regulatory frameworks that hold these platforms legally accountable.
