AI concentrating more power in Big Tech's hands, NYU researchers warn

The rise of artificial intelligence is entrenching more economic and political power in the hands of Big Tech companies, according to researchers at New York University (NYU) who argue AI must undergo more scrutiny and regulation.

AI Now, a research institute at NYU, released a new report detailing how major tech companies wield significant control over AI, arguing that this influence must be addressed before it gets further out of hand.

"This report is written with this task in mind: we are drawing from our experiences inside and outside government to outline an agenda for how we — as a group of individuals, communities, and institutions deeply concerned about the impact of AI unfolding around us — can meaningfully confront the core problem that AI presents, and one of the most difficult challenges of our time: the concentration of economic and political power in the hands of the tech industry — Big Tech in particular," the document states.

The authors add that AI development has been "foundationally reliant" on resources controlled by Big Tech, including data and computing power. Plus, they write, Big Tech companies have gained geopolitical importance by playing a central role in the U.S.-China race for AI supremacy, thereby rendering "the continued dominance of Big Tech as synonymous with U.S. economic prowess, and [ensuring] the continued accrual of resources and political capital to these companies."

CONGRESS WEIGHS IN: SHOULD TECH COMPANIES PAUSE 'GIANT AI EXPERIMENTS' AS ELON MUSK AND OTHERS SUGGEST?

The report also argues that Big Tech has shaped much of the narrative surrounding AI, such as the idea that AI needs "unrestricted innovation," that AI development goes hand in hand with societal progress, and that regulation stifles such progress.

According to Sarah Myers West, managing director of AI Now and co-author of the report, reform is necessary to fix a broken system that is heading in a potentially dangerous direction with unrestricted AI development.

"AI as we know it today has fundamental dependencies on resources that are consolidated among a handful of big tech firms, including massive amounts of data and computational power," West told Fox News Digital. "Because of this, any meaningful reform of AI will need to tackle Big Tech's advantage in the market through strong competition and privacy regulations. This is crucial as AI systems are already being integrated into infrastructures around us — regulators need to learn from the past decade of tech-enabled crises and move swiftly towards action."

A key theme throughout the report is that the public, not the tech industry, should define the future of AI, and that regulation is a key vehicle to make that happen.

"It's time for regulators, and the public, to ensure that there is nothing about artificial intelligence (and the industry that powers it) that we need to accept as given," AI Now writes. "This watershed moment must also swiftly give way to action: to galvanize the considerable energy that has already accumulated over several years towards developing meaningful checks on the trajectory of AI technologies. This must start with confronting the concentration of power in the tech industry."

MUSK ON AI REGULATION: 'IT'S NOT FUN TO BE REGULATED' BUT ARTIFICIAL INTELLIGENCE MAY NEED IT

AI Now presents several recommendations to discourage further consolidation of power under Big Tech. One is ensuring these companies collect no more data than is necessary — a practice known as data minimization — since they normally hold the most data and have the computing power to best utilize it.

Another proposal is to reform and better enforce antitrust laws to foster competition, not just among Big Tech giants but also from smaller tech companies.

The report also suggests various regulations for ChatGPT, Bard, and other "large-scale general purpose AI models."

The argument that AI must undergo stricter regulation was echoed this week by none other than Twitter CEO and tech billionaire Elon Musk, who told Fox News host Tucker Carlson in a wide-ranging interview that AI is potentially too dangerous to be left unregulated.

"I think we should be cautious with AI, and I think there should be some government oversight because it is a danger to the public," said Musk. "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production … it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilization destruction."

REGULATORS SHOULD KEEP THEIR HANDS OFF AI AND FORGET MUSK-BACKED PAUSE: ECONOMIST

However, other experts counter that emerging AI tech could make society wealthier and more productive if regulators stay out of the way — and that there's a national security component to consider as well.

"I completely sympathize with those who are afraid of it. And I share their fears," economist Peter St. Onge of the Heritage Foundation told Fox News Digital earlier this week. "Fundamentally, the most important question to me in AI is: 'Who is going to get there first?' And the most likely candidates are Silicon Valley and the Chinese version of Silicon Valley, which has deep Chinese government influence."

St. Onge said that when it comes to the international AI race, he would prefer an outcome where U.S. "tech bros" are the authority on AI rather than the ruling Chinese Communist Party.

He also told Fox News Digital that regulations tend to kill jobs and effectively "attack" existing producers and companies.

"Tech is a river that makes us rich," said St. Onge. "The problem is everybody needs to get out of the way. We need to have a respect for the fundamental economic freedoms that we always had."
