
Weekly AI Recap: US & UK partner on safety, Artifact acquired


By Webb Wright, NY Reporter

April 4, 2024 | 9 min read

Plus, more than 200 musicians sign an open letter warning about the encroachment of AI on the music industry.


The US and the UK have agreed to work together to ensure the safe development of new, advanced AI models. / Adobe Stock

US and UK sign AI safety deal

On Monday, the US and the UK signed a Memorandum of Understanding (MOU) through which the two countries have agreed to jointly oversee the safety evaluation of advanced AI models.

The deal, signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, takes effect immediately and aims to foster a collaborative effort between the two countries to develop new safety testing frameworks and methods.

“AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety that can keep pace with the technology’s emerging risks,” the US Department of Commerce wrote in a press release. “As the countries strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe.”

President Joe Biden signed an executive order in November requiring (among other things) that American AI companies submit the results of their safety tests to federal officials.

That same week, the British government released the Bletchley Declaration, a document signed by representatives from 28 countries that acknowledges that AI could potentially pose a serious threat to humanity – and that it, therefore, warrants a cooperative, international effort between governments to develop safeguards.

Yahoo acquires Artifact

Yahoo announced on Tuesday that it had acquired Artifact, an AI-powered news aggregation platform.

Launched in January of last year by Instagram co-founders Kevin Systrom and Mike Krieger, Artifact algorithmically suggests news articles to users to cut through irrelevant clutter and create a more personalized, streamlined news-reading experience.

Yahoo is acquiring only the company’s underlying technology; its recommendation engine will be integrated into Yahoo News, and the standalone Artifact app will no longer be offered. The acquisition “accelerates [Yahoo’s] vision to offer a more personalized experience for discovering news and information across platforms,” the media giant wrote in a press release.

Artists call upon tech and music industries to halt ‘predatory use of AI’

More than 200 musicians – including high-profile artists like Sheryl Crow, Billie Eilish, Elvis Costello and Katy Perry – signed an open letter earlier this week imploring “AI developers, technology companies, platforms and digital music services to cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists.”

The letter, issued by the advocacy group Artists Rights Alliance, calls the non-consensual use of artists’ materials for the training of AI models “an assault on human creativity” that could “destroy the music ecosystem.”

You can read the full letter here.

On The Daily Show, FTC Chair Lina Khan underscores her commitment to holding AI companies accountable

Federal Trade Commission (FTC) chair Lina Khan appeared on The Daily Show earlier this week and spoke with host Jon Stewart about the need for government oversight of the burgeoning AI industry.

“The first thing we need to do is be clear-eyed that there’s no ‘AI exemption’ from the laws on the books,” she said, referring to US anti-monopolization laws, which date back to the late nineteenth century.

Khan added that tech companies have in the past attempted to “dazzle” federal law enforcement officials by claiming that new technologies, being so different from anything the world has previously seen, should be exempt from existing regulations. “That’s basically what ended up happening with web 2.0 [ie social media] and now we’re reeling from the consequences,” she said.

At the end of the conversation, Stewart asked Khan if she was “optimistic that we will be able to catch up to this in time before something truly catastrophic happens through AI.”

“There’s no inevitable outcome here,” she responded. “We are the decision-makers, and so we need to use the policy tools and levers that we have to make sure that these technologies are proceeding on a trajectory that benefits Americans and we’re not subjected to all of the risks and harms.”

In January, the FTC launched an antitrust inquiry into five leading AI companies – Alphabet, Amazon, Anthropic, OpenAI and Microsoft – to “scrutinize corporate partnerships and investments,” according to a statement from the agency, and search for antitrust violations.

Google reportedly planning to add new AI-powered search features to premium services

According to a Wednesday report from the Financial Times, Google is considering adding new AI-powered search features to its premium services. Citing anonymous sources, the report also claimed that Google could eventually deploy “certain elements” of the new AI search features to the free version of its online search engine.

Subscribers to premium Google services already have access to Gemini, Google’s multimodal AI chatbot, through Gmail and Docs.

“We’re continuing to rapidly improve the product to serve new user needs ... As we’ve done many times before, we’ll continue to build new premium capabilities and services to enhance our subscription offerings across Google,” a Google spokesperson told The Drum. However, the spokesperson did not confirm or deny the reports, saying, “We don’t have anything to announce right now.”

OpenAI unveils Voice Engine

On March 29, OpenAI announced in a blog post that it had developed an AI model capable of reproducing human speech based on just 15 seconds of audio and a text prompt.

Dubbed Voice Engine, the new model has not yet been publicly released; OpenAI has deployed it on a small scale to study the risks that such a technology might pose.

Andrew Grotto, the William J. Perry International Security Fellow at Stanford University, told The Drum that he’s particularly concerned that Voice Engine might be used for the “impersonation of trusted figures for malicious ends – for example, hijacking the voice of an authority figure to spread falsehoods.”

OpenAI wrote in its blog post that it has banned the non-consensual use of people’s voices for Voice Engine. The company has also introduced a system for watermarking audio created by the model.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.
