How a Kamala Harris presidency – or a second Trump term – could affect American AI policy

By Webb Wright, NY Reporter

July 30, 2024 | 9 min read

Throughout her four years as vice-president, Harris has pushed for greater federal oversight of the AI industry, along with closer ties with the private sector. Can we expect a continuation of these efforts if she’s elected in November?

Kamala Harris announced her bid for the presidency earlier this month. / Adobe Stock

When Kamala Harris announced her bid for the US presidency earlier this month after President Biden bowed out of the race, it was unlikely that many people immediately asked themselves, ‘How could this affect the future of artificial intelligence?’ And yet, the future of the technology in the US could very well hang in the balance.

Throughout her tenure as vice-president, Harris has been an active force in pushing for greater federal oversight of the AI industry. She was instrumental in the drafting of the Biden administration’s executive order on AI, signed in October 2023, which aimed to boost transparency around the private sector’s development and testing of advanced AI systems and mandated that individual agencies develop frameworks for grappling with some of the technology’s more immediate risks.

The following month, she led the US delegation to the UK’s AI Safety Summit, held at Bletchley Park.

And, at a time when many headlines have been dominated by fears of an ‘AI-pocalypse,’ Harris has been actively pushing to draw the world’s attention toward some of the more tangible, often far more subtle dangers presented by an increasingly algorithm-dominated society.

As she was careful to point out in her speech during the UK Summit last November, some of these risks should also be viewed as “existential” – at least to the people whom they directly impact. “Consider, for example, when a senior is kicked off his healthcare plan because of a faulty AI algorithm. Is that not existential for him?” she asked. “When a woman is threatened by an abusive partner with explicit, deepfake photographs, is that not existential for her?”

Harris has also been pushing for closer ties between the federal government and the private companies working to develop cutting-edge AI. In May last year, she met with the executives of OpenAI, Microsoft, Anthropic and Google in the White House to discuss AI safety.

Harris “has played a really important role in a lot of these conversations and in a lot of the movement that we’ve seen in the executive branch around AI,” says Valerie Wirtschafter, a fellow in the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative. This is significant, she adds, because almost all of the regulatory progress that has been made around AI in the US thus far “has been from the executive branch” rather than via Congress.

Harris, who was born in Oakland, California, north of Silicon Valley, and whose mother was a cancer researcher, has also underscored her view that the world must act together to mitigate the risks of AI.

“I believe history will show that this was the moment when we had the opportunity to lay the groundwork for the future of AI,” she said at the UK Summit. “And the urgency of this moment must then compel us to create a collective vision of what this future must be.”

Should she be elected in November, Harris’s administration would “likely maintain many of the policies of the previous administration, with the Biden executive order as a guiding document and a continued focus on risk mitigation and responsible development and investment,” Wirtschafter and a colleague wrote in a Brookings report published last week.

Former president Trump, on the other hand, has vowed to “cancel” Biden’s executive order on AI if he were to reclaim the White House in November. This stance, as the new Brookings report points out, broadly aligns with a view among many American conservatives that the executive order is an abuse of the Defense Production Act – a Korean War-era law that grants the president authority to mobilize industries in the face of a national defense crisis – and that it stifles domestic industry and innovation.

“The executive order, which is really the only major governing document around [AI] risk and transparency, is really hanging in the balance here,” Wirtschafter says. “If that goes away, we’re basically going to lose the only binding transparency mechanism for companies that are working on these frontier models … then we’re back to the drawing board.”

And while federal regulation of AI might face major hurdles under a second Trump presidency, the technology’s advancement in Silicon Valley certainly won’t. Almost daily, big tech companies – driven by the lucrative goal of being first to market with new products – are releasing new AI models and tools, opening up fresh technological possibilities along with new dangers.

Deepfakes are an illustrative example. On the one hand, generative AI has opened new creative doors – for a brand, say, that wants to recreate the likeness of a deceased celebrity for an ad campaign. But it’s also already proving to be the cause of great suffering, as at a number of US schools now grappling with explicit deepfake images depicting teenage students.

The private sector can’t be relied on to thread the needle between the benefits and risks of AI, according to Jerome Greco, supervising attorney at the Legal Aid Society, a nonprofit that provides legal assistance for criminal and civil cases in New York City. “These are not unforeseeable consequences,” he says. “It’s just that having to deal with those consequences first hinders [companies’] ability to make more money. That’s why we can’t leave it solely up to these companies to make these decisions.”

Greco says that should Harris win the White House, he hopes she would focus not just on AI companies, but also on the technology’s impact on the lives of everyday Americans. “That’s a concern I have with all politicians,” he says. “Sometimes it’s easy to lose sight of what the general public wants and what individuals want versus what companies want.”
