As artificial intelligence starts to reshape society in ways predictable and not, some of Colorado’s highest-profile federal lawmakers are trying to establish guardrails without shutting down the technology altogether.
U.S. Rep. Ken Buck, a Windsor Republican, is cosponsoring legislation with California Democrat Ted Lieu to create a national commission focused on regulating the technology and another bill to keep AI from unilaterally launching nuclear weapons.
Sen. Michael Bennet, a Democrat, has publicly urged the leader of his caucus, Majority Leader Chuck Schumer, to carefully consider the path forward on regulating AI — while warning about the lessons learned from social media’s organic development. Sen. John Hickenlooper, also a Democrat, chaired a subcommittee hearing on the matter last September.
“We are intimately aware that even seemingly innocuous digital products can have deeply damaging effects on mental health, civic discourse, democratic legitimacy and Americans’ economic agency,” Bennet wrote in his letter to Schumer in late summer. “Repeating our oversight failure when it comes to more powerful technologies like AI would be disastrous.”
No proposed regulations have found footing in Congress yet — where almost nothing happens fast, even in more cooperative times — but the focus shows it may not be that way for long. Already, the European Union has reached a deal on how to regulate AI, and the United Nations has turned an eye toward the technology, too.
Inside Colorado, state lawmakers, with the backing of Secretary of State Jena Griswold, likewise have previewed legislation aimed at regulating AI’s use in election campaigning.
“Artificial intelligence represents a huge turning point in our history, and we must act to protect our elections,” Rep. Junie Joseph, a Boulder Democrat, said during a news conference Thursday in Denver.
Concerns about the future of AI run the gamut, from how it could be used to influence elections to potential economic disruptions to an apocalyptic nuclear weapons scenario.
But the technology also has significant potential upsides, including its usefulness in reading medical scans to identify irregularities, its value as an educational tool and other yet-to-come innovations.
“When the Industrial Revolution happened, there was a real change in our society,” Buck said in an interview. “When the internet came around, it was a real change in our society. And so this is something that is going to create a lot of difficulties and it’s hard to identify right now what some of those are going to be.”
Reining in Big Tech?
Buck said he turned his attention to AI as he looked into Big Tech and whether its major players needed to be reined in. Many of the companies behind everyday internet, including Facebook’s parent company, the social media platform X and Google’s parent company, are also developing AI tools.
He emphasized that he doesn’t want to stop innovation in the field, only to prevent potential harm to Americans, similar to how he doesn’t want to shut down search engines but does want to make it hard for people to access information like how to build bombs or harm themselves.
Daniel Weiner, director of the Brennan Center’s Elections and Government Program, warned in an essay that “the early 2020s will likely be remembered as the beginning of the deepfake era in elections,” using the term for audio and video that’s been manipulated with artificial intelligence.
He goes on to cite elections far and near where AI-generated fakes have made the rounds. In Slovakia, they may have contributed to a pro-Western political party’s narrow loss to a pro-Russia faction; in the United States, Florida Gov. Ron DeSantis’ presidential campaign last year released a fake video in which former President Donald Trump kissed former National Institute of Allergy and Infectious Diseases Director Anthony Fauci on the nose, according to Reuters.
And the New Hampshire attorney general’s office announced Monday that it was investigating a robocall ahead of its first-in-the-nation primary that apparently used artificial intelligence to mimic President Joe Biden’s voice and discourage residents from voting.
“Advances in generative AI have become a real force multiplier,” Weiner said in an interview, referring to the AI used to generate media. “Photoshop and low-tech manipulation have been with us forever. Those sorts of dirty tricks are nothing new. But the way AI and synthetic content can be generated and distributed. … The potential in a very fraught political time for this technology to sow chaos is huge.”
Simply banning that use of the technology is off the table, because some of it can be used for satirical purposes, or for political commentary that’s protected by the First Amendment, Weiner said. Other uses fall into a distinctly gray area, such as an AI-generated video in which a digital clone of Trump reads tweets that were legitimately sent from the former president’s social media accounts.
Weiner, for his part, thinks the most elegant solution is clear: prominent, mandated watermarks on AI-generated media. Bennet made a similar point in his letter to Schumer, and it’s the main thrust of a bill he introduced last spring.
Worries about stifling innovation
But that’s just one showy way that AI might disrupt society.
Weiner said he was just as worried about election officials using AI for things like routine verification of voter rolls without guardrails to keep the machines from removing voters inappropriately.
While Weiner’s specialty is election law — and he notes political campaigns are already pretty heavily regulated — he also acknowledged the desire for a “light regulatory touch” broadly to not stifle innovation or put U.S. industry at a disadvantage.
He said Buck and Bennet are “among the group of members taking a leadership role” on artificial intelligence broadly. His organization hasn’t taken an official position on their efforts to put together a commission to study the matter, but he called it “generally a good idea.”
He noted that, absent congressional action, Biden has also issued an executive order to establish safety and security protocols while protecting Americans’ privacy.
“It’s urgent to start developing solutions,” Weiner said. “It’s also not smart to think you’re going to come up with one round of policy solutions and be done.”
Buck, who announced last fall that he wouldn’t seek reelection in 2024, agreed with that sentiment. As AI progresses, so will Congress’ need to revisit it and its capabilities, he said. Beyond regulating the technology for security and health purposes, he said, the goal is also to keep America economically competitive, both internationally and at home.
“You’re going to have a whole lot of people in the have-not category, and some people in the have category,” Buck warned. “Some people who understand the technology, (who) have been trained and have used the technology, and a whole lot of people who aren’t. That is going to create a wealth disparity in this country and that really undermines one of the strengths of our society, our middle class.”