Weekly AI recap: Feds eye OpenAI-Microsoft partnership, Google’s new AI marketing tool

By Webb Wright, NY Reporter

January 26, 2024 | 10 min read

Plus, the thorny issue of deepfakes is getting thornier by the day.

Microsoft first invested in OpenAI in 2019. / Adobe Stock

DOJ and FTC eye OpenAI-Microsoft partnership

The United States Department of Justice (DOJ) and Federal Trade Commission (FTC) are discussing a potential antitrust investigation into the multibillion-dollar partnership between Microsoft and OpenAI, but the talks have become mired in uncertainty surrounding which agency has the authority to initiate the investigation, according to a January 19 report from Politico.

Citing anonymous sources familiar with the discussions, Politico reports that both agencies – whose jurisdictional boundaries can sometimes blur – are vying to spearhead the federal inquiry into whether the OpenAI-Microsoft partnership gives the two companies an unfair advantage in the rapidly expanding AI market.

Microsoft invested $1bn in OpenAI in 2019, following the launch of OpenAI’s for-profit arm; the AI company was originally founded as a nonprofit, but its leadership soon realized it would need to raise capital and attract top talent in order to build the large language models and consumer products for which it would soon become famous. In January of last year, Microsoft reportedly invested another $10bn in OpenAI. Microsoft now has the rights to integrate OpenAI’s technology into its existing and future products, as it has already done with Microsoft 365 Copilot, which was made available to businesses of all sizes just last week.

A Microsoft spokesperson told The Drum that the company "does not own any portion of OpenAI and is simply entitled to [a] share of profit distributions."

Antitrust regulators in both the US and the UK have reportedly had their eyes on the OpenAI-Microsoft partnership at least since December, following a near-implosion within OpenAI that began with the board’s firing of CEO Sam Altman and culminated five days later in his return to the same position. After the dust began to settle, Microsoft was granted a non-voting seat on the newly reorganized OpenAI board.

The ongoing DOJ-FTC talks are specifically concerned with the OpenAI-Microsoft partnership and are not focused on settling the question of which agency might have the authority to oversee developments in the broader AI industry, according to the Politico report.

This wouldn’t be Microsoft’s first legal dispute with the FTC: After a lengthy antitrust challenge spearheaded by the agency, Microsoft was ultimately allowed last summer to move forward with its acquisition of the video game holding company Activision Blizzard.

Google launches new generative AI-powered tool for marketers

On Tuesday, Google introduced a new feature powered by Gemini – the company’s multimodal large language model, which was released last month – designed to help marketers develop online ad campaigns through the use of simple text prompts.

Google Ads’ new “conversational experience workflow is designed to help you build better search campaigns through a chat-based experience,” the company wrote in a blog post. “It combines your expertise with Google AI.”

According to the blog post, marketers need only enter their website URL, and the Gemini-powered feature will generate “relevant ad content, including creatives and keywords.” The company is using its proprietary watermarking technology, SynthID, to identify AI-generated images created in Google Ads.

A beta version of the new conversational experience in Google Ads is now available to English language advertisers in the US and the UK and will be rolled out to English language advertisers globally in the coming weeks, according to Google’s blog post.

The deepfake dilemma escalates

AI is rapidly becoming a scapegoat for politicians seeking to rid themselves of incriminating evidence, The Washington Post wrote in a report published January 22.

The sharing of deepfake images on social media has surged in recent months, fueled by the proliferation of AI-powered image-generation platforms such as Midjourney and DALL-E. AI-generated images of Pope Francis and former President Donald Trump, which many believed at the time to be real photographs, made headlines last year.

Now, according to the Post, some unscrupulous politicians – including former President Trump – seem to be taking advantage of the widespread uncertainty engendered by the rise of deepfakes to claim that embarrassing or damning images, video and audio were, in fact, generated by AI.

The problem is compounded by the fact that deepfakes are advancing and proliferating more quickly than the mechanisms designed to identify them. A small handful of companies, such as Meta and TikTok, have introduced labeling policies for AI-generated content. But as one source quoted in the Post report points out, given the current dynamics of social media algorithms – which tend to promote emotionally triggering content as a means of holding users’ attention – tech companies don’t have much of an incentive to calm the waters by making it easier for users to distinguish between authentic and AI-generated content.

The deepfake problem was also highlighted this past weekend when some New Hampshire voters received a phone call, apparently from President Biden but almost certainly featuring an AI-generated imitation of his voice, encouraging them not to vote in this week’s primary election. The call has reportedly prompted an investigation by state officials.

“The political deepfake moment is here,” Robert Weissman, president of the consumer advocacy nonprofit Public Citizen, said in a statement in response to the New Hampshire deepfake calls. “Policymakers must rush to put in place protections or we’re facing electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion and perpetuate fraud.”

Then, on Thursday morning, media reports began to flood in of sexually explicit deepfake images of Taylor Swift being shared widely on X and other social media platforms. A handful of states, including Texas and New York, have already banned non-consensual deepfake pornography of the kind that has now targeted Swift.

Microsoft hits historic $3tn valuation

On Wednesday, Microsoft became the second company in history (following Apple) to reach a valuation of $3tn; the company’s share price at the time of writing stands just shy of $405, its highest ever.

The soaring popularity and proliferation of AI over the past year have been an enormous boon to Microsoft. Under the leadership of CEO Satya Nadella, the company has been quick to position itself as a pioneering force in the burgeoning AI era; as discussed above, it has become the main financial backer of OpenAI and has already begun to integrate AI into its suite of office products, such as Word and Excel.

Analysts expect Microsoft to post record-high revenues when the tech giant publishes its earnings for the final quarter of 2023 next week. Elsewhere in the tech space, Netflix’s fourth-quarter earnings exceeded expectations, causing the streaming giant’s stock price to jump 7% earlier this week.

Bulletin of the Atomic Scientists points to AI as a major existential threat facing humanity

The famous Doomsday Clock, a symbolic representation of humanity’s proximity to apocalypse developed in the aftermath of the Second World War, is as close to midnight as it has ever been.

On Tuesday, the organization announced that the Clock would remain at 90 seconds to midnight for the second consecutive year. For context, the Clock stood at seven minutes to midnight during the Cuban Missile Crisis, which is widely regarded as the closest the world has ever come to a full nuclear exchange between two global superpowers.

One of the major contributing factors to this grim prognosis, along with the ongoing threat of nuclear war and impending ecological catastrophe, is generative AI. The Bulletin specifically highlighted the technology’s capacity to generate and spread misinformation and its rapid adoption by militaries as threats to humanity’s survival.

It also, however, gave a cautiously optimistic nod to international efforts that are currently underway to impose guardrails around the development and deployment of AI, including President Joe Biden’s recent executive order. “But these are only tiny steps,” the organization wrote in a blog post, “[and] much more must be done to institute effective rules and norms, despite the daunting challenges involved in governing artificial intelligence.”

Publicis Groupe unveils $326mn AI project

Publicis Groupe – one of the largest marketing companies in the world – announced a plan earlier this week to invest $326mn in an internal AI-powered system geared towards employee efficiency and organizational integration. “CoreAI,” as the system is being called, will be accessible via a single user interface (UI) across the full breadth of Publicis Groupe’s subsidiary agencies, which include Saatchi & Saatchi and Le Pub.

A beta version of CoreAI is expected to be rolled out to most Publicis Groupe employees sometime next year.

Through CoreAI, “each individual in the group will have access to everything we know at Publicis on every expertise and every geography,” the company’s chief executive officer Arthur Sadoun said during a one-hour, livestreamed video for shareholders and clients. “To cut a long story short, we are bringing the power of everyone to the power of one.”

Read the full report from The Drum senior reporter Sam Bradley here.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.
