State lawmakers eye promise, pitfalls of AI ahead of November elections
A Kentucky polling place welcomes voters in November 2023. Kentucky is among the states where lawmakers are exploring legislation to protect against the deceptive use of artificial intelligence in elections. (Photo by Matthew Mueller/Kentucky Lantern)

LOUISVILLE, Ky. — Inside a white-walled conference room, a speaker surveyed hundreds of state lawmakers and policy influencers, asking whether artificial intelligence poses a threat to the elections in their states.
The results were unambiguous: 80% of those who answered a live poll said yes. In a follow-up question, nearly 90% said their state laws weren’t adequate to deter those threats.
It was among the many exchanges on artificial intelligence that dominated sessions at last week’s meeting of the National Conference of State Legislatures, the largest annual gathering of lawmakers, in Louisville.
“It’s the topic du jour,” Kentucky state Sen. Whitney Westerfield, a Republican, told lawmakers as he kicked off one of many panels centering on AI. “There are a lot of discussions happening in all of our state legislatures across the country.”
While some experts and lawmakers celebrated the promise of AI to advance services in health care and education, others lamented its potential to disrupt the democratic process with just months to go before November’s elections. And lawmakers compared the many types of legislation they’re proposing to tackle the issue.
This presidential election cycle is the first since generative AI — a form of artificial intelligence that can create new images, audio and video — became widely available. That has raised alarms over deepfakes: remarkably convincing but fabricated videos or images that can portray anyone, including candidates, in situations that never occurred or saying things they never said.
“We need to do something to make sure the voters understand what they’re doing,” said Kentucky state Sen. Amanda Mays Bledsoe.
The Republican lawmaker, who chairs a special legislative task force on AI, co-sponsored a bipartisan bill this year aimed at limiting the use of deepfakes to influence elections. The bill would have allowed candidates whose appearance, action or speech was altered through “synthetic media” in an election communication to take its sponsor to court. The state Senate unanimously approved the proposal, but it stalled in the House.
While Bledsoe expects to bring the bill up again next session, she acknowledged how complex the issue is: Lawmakers are trying to balance the risks of the evolving technology against their desire to promote innovation and protect free speech.
“You don’t want to go too fast,” she said in an interview, “but you also don’t want to be too behind.”
Rhode Island state Sen. Dawn Euer, a Democrat, told Stateline she’s concerned about AI’s potential to amplify disinformation, particularly across social media.
“Election propaganda and disinformation has been part of the zeitgeist for the existence of humanity,” said Euer, who chairs the Senate Judiciary Committee. “Now, we have high-tech tools to do it.”
Connecticut state Sen. James Maroney, a Democrat, agreed that concerns about AI’s effects on elections are legitimate. But he emphasized that most deepfakes target women with digitally generated nonconsensual intimate images or revenge porn. Research firm Sensity AI has tracked online deepfake videos for years, finding 90% of them are nonconsensual porn, mostly targeting women.
Maroney sponsored legislation this year that would have regulated artificial intelligence and criminalized deepfake porn and false political messaging. That bill passed the state Senate, but not the House. Democratic Gov. Ned Lamont opposed the measure, saying it was premature and potentially harmful to the state’s technology industry.
While Maroney has concerns about AI, he said the upsides far outweigh the risks. For example, AI can help lawmakers communicate with constituents through chatbots or translate messaging into other languages.
Top election officials on AI
During one session in Louisville, New Hampshire Republican Secretary of State David Scanlan said AI could improve election administration by making it easier to organize election statistics or get official messaging out to the public.
Still, New Hampshire experienced firsthand some of the downsides of the new technology earlier this year, when voters received robocalls that used artificial intelligence to imitate President Joe Biden’s voice to discourage participation in a January primary.
Prosecutors charged the political operative who allegedly organized the fake calls with more than a dozen crimes, including voter suppression, and the Federal Communications Commission proposed a $6 million fine against him.
While the technology may be new, Scanlan said election officials have always had to keep a close eye on misinformation about elections and extreme tactics by candidates or their supporters and opponents.
“You might call them dirty tricks, but it has always been in candidates’ arsenals, and this really was a form of that as well,” he said. “It’s just more complex.”
The way state officials responded, by quickly identifying the calls as fake and investigating their origins, serves as a playbook for other states ahead of November’s elections, said Cait Conley, a senior adviser at the federal Cybersecurity and Infrastructure Security Agency focused on election security.
“What we saw New Hampshire do is best practice,” she said during the presentation. “They came out quickly and clearly and provided guidance, and they really just checked the disinformation that was out there.”
Kentucky Republican Secretary of State Michael Adams told Stateline that AI could prove challenging for swing states in the presidential election. But he said it may still be too new of a technology to cause widespread problems for most states.
“Of the 99 things that we chew our nails over, it’s not in the top 10 or 20,” he said in an interview. “I don’t know that it’s at a maturity level that it’ll be utilized everywhere.”
Adams this year received the John F. Kennedy Profile in Courage Award for championing the integrity of elections despite pushback from fellow Republicans. He said AI is yet another obstacle facing election officials who already must combat challenges including disinformation and foreign influence.
More bills coming
In the absence of congressional action, states have increasingly sought to regulate the quickly evolving world of AI on their own.
NCSL this year tracked AI bills in at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C.
The Iowa House this year approved a bill requiring disclosure of campaign materials created using AI and providing a path to injunctive relief for candidates whose image, likeness or speech was altered through the deceptive or fraudulent use of a deepfake. The Senate did not take up the legislation this year.
As states examine the issue, many are looking at Colorado, which this year became the first state to create a sweeping regulatory framework for artificial intelligence. Technology companies opposed the measure, worried it will stifle innovation in a new industry.
Colorado Senate Majority Leader Robert Rodriguez, a Democrat who sponsored the bill, said lawmakers modeled much of their language on European Union regulations to avoid creating mismatched rules for companies using AI. Still, the law will be examined by a legislative task force before going into effect in 2026.
“It’s a first-in-the-nation bill, and I’m under no illusion that it’s perfect and ready to go,” he said. “We’ve got two years.”
When Texas lawmakers reconvene next January, state Rep. Giovanni Capriglione expects to see many AI bills flying.
A Republican and co-chair of a state artificial intelligence advisory council, Capriglione said he’s worried about how generative AI may influence how people vote — or even if they vote — in both local and national elections.
“Without a doubt, artificial intelligence is being used to sow disinformation and misinformation,” he said, “and I think as we get closer to the election, we’ll see a lot more cases of it being used.”
This story was originally published by Stateline, which is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: [email protected]. Follow Stateline on Facebook and X.