Lawmakers declare war on deepfakes that threaten to upend this year’s presidential election

State legislatures, rather than Congress, are taking the lead on controlling deepfakes used to influence elections.
Nicolas Economou—NurPhoto/Getty Images

Hello and welcome to Eye on AI. 

What do Taylor Swift and the 2024 election have in common? No, she’s not running for president. Rather, both were the focus of a deluge of deepfake-related headlines this week. 

The singer is the latest celebrity to have her likeness replicated using AI for use in faux ads on social media, in this case for Le Creuset cookware. At the same time, lawmakers introduced new bills aimed at reining in deepfakes, and companies announced new technologies to make it easier to detect them.

In the U.S., the states have so far taken the lead on tackling election risks around deepfakes, or media content, usually video, that’s been manipulated using AI to falsely depict a person or event. That push by the states continued this week when South Carolina lawmakers introduced legislation that would ban the distribution of deepfakes of candidates within 90 days of an election, joining Washington, Minnesota, and Michigan which passed similar election-targeted bills last year. (Seven other states introduced such bills in 2023 but failed to advance them.) 

These types of laws aren’t completely new; Texas, California, and Virginia were actually the first states to enact them back in 2019. But deepfakes have become far more convincing and far easier to create since then, and many questions remain, including how to actually enforce these laws, how the rapid pace of AI development might affect them, and what happens when a particularly damaging deepfake proliferates anyway. 

South Carolina’s 90-day window, which several other states match, means anything older is fair game. And while these laws would punish anyone who circulates a deepfake meant to influence an election with a fine and possible jail time, that punishment would come only after such videos are already widely shared, and potentially believed by millions of people. Disinformation spreads like wildfire on social media, and the platforms aren’t going to save the day. Facebook and Instagram parent Meta has said it will require political advertisers on its platforms to disclose whether they used AI, but this doesn’t apply to posts shared by everyday users. 

In Congress, several proposals to regulate the use of AI-created deepfakes in political campaigns have stalled, though the U.S. House yesterday unveiled the No AI FRAUD Act, aimed primarily at protecting musical artists and actors from AI deepfakes and voice clones, a move heralded by the screen actors’ union (and likely helpful to Swift). Federal agencies, however, are paying closer attention. FBI and CISA officials spoke about AI deepfakes and election integrity this week at a CNBC event, describing how their approach is to stop the bad actors, not the content. 

“We’re not the truth police. We don’t aspire to be,” said FBI Director Christopher Wray, who went on to stress the need to partner with private-sector companies to improve detection. 

AI detection tools have so far proven mostly unreliable, but there’s still hope for using AI to combat AI. McAfee this week announced a new technology that it says is 90% effective at detecting maliciously altered audio in videos. Fox, along with Polygon Labs, also unveiled this week a new blockchain protocol that lets media companies watermark their content as authentic, helping consumers know what’s fake and what’s real. Other organizations, from Intel to Truepic, have long been working on the problem as well, though there’s no telling if deepfake detection will ever truly be solved. It may become a cat-and-mouse game, similar to cybersecurity, where advances in technology keep malware detectors and the like only a step ahead of bad actors. 

Taken together, this is all still just the tip of the deepfake iceberg. AI deepfake technologies are also being used for everything from kidnapping scams to targeting women with non-consensual pornography and other disturbing and harmful content. But as far as elections go, the stakes are particularly high in 2024, as nearly half the global population, not just Americans, will be heading to the polls for various candidates and causes. 

And with that, here’s the rest of today’s AI news. 

Sage Lazzaro

sage.lazzaro@consultant.fortune.com

sagelazzaro.com

AI IN THE NEWS

OpenAI launches the GPT Store and ChatGPT Team. Initially rolling out to Plus, Enterprise, and Team users (the company’s new tier aimed at smaller businesses and teams), the GPT Store will allow users to find and share custom versions of ChatGPT created for niche uses. The store displays featured and trending GPTs, GPTs created by the ChatGPT team, and GPTs across categories such as writing, education, and lifestyle. It sounds like an app store, and it looks like one too. OpenAI said users have already created over three million GPTs since the company launched the feature in November, just before its leadership implosion. 

Microsoft uses AI to rapidly identify an alternative to lithium in batteries. Working with Pacific Northwest National Laboratory, the team uncovered a sodium-based material as a viable substitute that would enable the amount of lithium in batteries to be reduced by 70%. Sodium is cheap and abundant compared to lithium, the large-scale mining of which is harmful to the environment. "Something that could have taken years, we did in two weeks," Jason Zander, an executive vice president at Microsoft, told Reuters. The news is another promising look at AI’s potential in materials science and potentially a breakthrough in how the next era of devices is made. 

Volkswagen plans to add ChatGPT functionality into its cars. That’s according to The Verge. The carmaker will integrate OpenAI’s chatbot to augment its voice control capabilities across its entire line in Q2, starting in Europe and then potentially expanding to the U.S. Drivers will be able to tap ChatGPT for general knowledge questions and to control functions like air conditioning and heating.

Pennsylvania becomes the first state to adopt ChatGPT Enterprise. A small number of state employees will begin using ChatGPT to assist with their work in the coming weeks, such as writing job descriptions, updating policy language, and helping employees generate code, according to Gizmodo. No citizens will interact with the OpenAI program as part of the pilot, but it’s expected to roll out widely to the state government after the trial.

FORTUNE ON AI

The European bureaucrat who sends a chill through Big Tech just said she’s looking closely at Microsoft’s relationship with OpenAI —Paolo Confino

AI isn’t coming for your job, but it’s definitely going to be your new coworker —Jane Thier

Black workers could be shut out of AI wealth creation and lose out on more than $40 billion: ‘It can be the great leveler, but it can exacerbate the gap as well’ —Trey Williams

BRAINFOOD

OpenAI’s ‘de facto foreign minister.’ Sam Altman’s ascent to become the global face of AI wasn’t built just on ChatGPT’s success. It was also orchestrated by Anna Makanju.

That’s the takeaway from a new profile of Makanju, the company’s vice president of global affairs, published in the Washington Post.

Makanju is a seasoned national security and policy advisor who worked in both the Obama and Biden administrations before jumping into the tech industry with roles at Elon Musk’s Starlink, Facebook, and then OpenAI. The story is an interesting look at how she strategically positioned Altman, transforming him from “a start-up darling into the AI industry’s ambassador” and sending him out for well-publicized meetings with global leaders that had all the makings of official state visits. But in many ways it also goes deeper, illustrating how Makanju is driving the discussions about AI playing out at the highest levels of global government (activities she frames as “AI education,” not lobbying). It’s worth a read. 

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.