AI scams are coming for dentistry

Your front-desk staff receives a call from a voice that sounds a lot like a top regional exec demanding they immediately share a password for a critical account. For most office staff, hanging up on a corporate VIP sounds like a bad career move. But if that voice is an AI clone and your staff complies, the call could cost you millions of dollars, months of downtime, and the trust of your patients.

What’s happening: Cybercriminals are using generative AI to crank out more believable scams at higher volume, and the dental industry has become a target. 

  • Attackers use AI tools to clone the voice and likeness of trusted people, which can then be used to run sophisticated phishing campaigns against businesses.

  • Healthcare is a magnet for attackers because of the sensitivity of patient data, which can be used for identity theft and fraud. The Department of Health and Human Services tracked more than 530 cyberattacks on the U.S. healthcare sector over a six-month period in 2023–24, nearly half ransomware-related. 

The state of play: The dental industry has already been targeted by relatively simple versions of these scams. Reports of data breaches at practices and DSOs are now a regular occurrence.

  • In one scheme, attackers pose as patients and email dental offices infected PDFs that install malware when opened. In another, the ADA warned that attackers were sending phishing emails threatening membership suspension unless the recipient clicked a link to a “payment advice document,” which then infected their systems.

The AI evolution: Generative AI is giving bad actors a copywriter, translator, researcher, and voice actor in one. AI tools can automate phishing attacks, clone voices, generate deepfake videos that mimic execs, and even spin up entire fake businesses that look like legit operations.

Why it matters: As DSOs adopt more digital systems, the surface area for cyberattacks is growing, and the consequences of a successful one are becoming more severe. 

  • A data breach in the healthcare space costs victims $7.42 million on average and takes 279 days to identify and contain, according to IBM’s Cost of a Data Breach report.

How to protect yourself: Responding to this threat isn’t complicated, but it does take a disciplined commitment to basic security hygiene. Here are some practical, dental-operator-friendly tactics you can implement now:

  • Update your training. Improve your staff’s cybersecurity training to account for AI-enabled attacks. Your team should assume phishing emails will look perfect and that even phone calls with a familiar, trusted voice could be scams. “Verify, verify, verify” should be everyone’s default: if you aren’t sure, confirm instructions in person or on a known phone line.

  • Harden identity. Require multifactor authentication (MFA) everywhere, and use phishing-resistant options, such as hardware security keys, for admins and finance roles where feasible.

  • Lock down password resets and MFA changes. Use callback verification and documented approval before resets, new device enrollments, or access changes.

  • Add friction to money movement. Require dual approval for ACH/wire changes, and verify bank detail changes out of band, using a separate channel such as a phone number already on file.

  • Treat attachments like biohazards. Train teams to slow down on emailed attachments claiming to be “forms,” “invoices,” and “payment advice documents,” especially those with urgency language.

  • Back up like you mean it. Keep encrypted, offsite copies and test your restores regularly; a minimal sketch of an automated restore check follows this list.
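
For the technically inclined (or to hand to your IT vendor), here’s a minimal Python sketch of what that restore check can look like: it archives a folder, writes an encrypted offsite copy, then immediately decrypts that copy and confirms it matches the original. The folder names and the in-script key handling are placeholder assumptions, not a prescription; in a real setup, the encryption key would be generated once and stored somewhere the backups themselves can’t reach.

```python
import hashlib
import io
import tarfile
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical paths -- substitute your own locations.
PRACTICE_DATA = Path("practice_data")             # folder you want to protect
OFFSITE_COPY = Path("offsite/backup.tar.gz.enc")  # where the encrypted copy lands


def make_encrypted_backup(key: bytes) -> str:
    """Archive the data folder, write an encrypted copy, return the archive's hash."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(str(PRACTICE_DATA), arcname=PRACTICE_DATA.name)
    archive = buf.getvalue()
    OFFSITE_COPY.parent.mkdir(parents=True, exist_ok=True)
    OFFSITE_COPY.write_bytes(Fernet(key).encrypt(archive))
    return hashlib.sha256(archive).hexdigest()


def test_restore(key: bytes, expected_hash: str) -> bool:
    """Decrypt the offsite copy and confirm it matches what was backed up."""
    restored = Fernet(key).decrypt(OFFSITE_COPY.read_bytes())
    return hashlib.sha256(restored).hexdigest() == expected_hash


key = Fernet.generate_key()  # in practice: generate once, store apart from the backups
digest = make_encrypted_backup(key)
print("restore check passed" if test_restore(key, digest) else "restore check FAILED")
```

The specific tool matters less than the habit: a backup routine that never proves it can decrypt and restore its own copies is a backup you’re hoping works, not one you know works.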

Bottom line: AI is upgrading the con artist’s toolkit with faster scams, cleaner language, and more believable impersonation. You don’t need to become a cybersecurity company, but you do need protocols that verify identity without relying on traditional signals of legitimacy, like a familiar voice or a polished email.

If you enjoyed this article, you should sign up for the Morning Grind, the fast and free bi-weekly newsletter that keeps DSO leaders in the loop, without spam! Sign up at www.themorninggrind.com