The Federal Communications Commission (FCC) has banned robocalls using realistic AI voices.
The decision follows an incident during the New Hampshire primary in which a cloned AI voice impersonating President Joe Biden urged voters by phone not to vote in the primary.
The FCC sees the new ban as an important step in protecting consumers from the growing threat of advanced communications technologies and deceptively real AI fakes, such as voice cloning. “The rise of these types of calls has escalated during the last few years,” the agency said.
The new measure adds AI-generated robocalls to the 1991 Telephone Consumer Protection Act (TCPA), which already regulates unsolicited calls without the recipient’s prior consent.
Of course, phone scams are illegal anyway, with or without an AI voice. Notable cases include a $5 million fine against political operatives who targeted Black voters with deceptive robocalls, and a record $300 million fine against a company behind mass unsolicited robocalls pitching car insurance.
The expansion, however, makes the use of AI voices in robocalls illegal outright and allows state attorneys general to take action against it. The rule took effect immediately.
FCC puts robocall scammers on notice
FCC Chairwoman Jessica Rosenworcel highlighted the growing danger posed by AI-generated voices and images that can deceive consumers and make them victims of fraud. The rule change is designed to protect ordinary citizens, as well as celebrities and politicians, from robocall scams.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said Rosenworcel.
The FCC’s challenge now is to strike a balance between protecting consumers and fostering technological innovation. The new ban signals a cautious approach, aiming to ensure that the benefits of AI advances are not overshadowed by their abuse.