How soon will machines outsmart humans? The biggest brains in AI disagree

A string of breakthroughs in artificial intelligence over the past year has raised expectations for the arrival of machines that can outperform expert humans across a range of intellectual tasks.

With progress in AI apparently accelerating, there is near consensus in Silicon Valley that capabilities that seemed far-fetched only a few years ago are now within reach.

But asking AI researchers exactly when they expect to hit this milestone will produce more arguments than precise dates.

Elon Musk, the Tesla and SpaceX billionaire who is investing increasing amounts of time and effort in AI, said this week that so-called artificial general intelligence could be achieved as soon as next year.

Often used as a shorthand for machines whose intelligence exceeds that of humans, AGI is an amorphous concept that is characterised by those trying to achieve it as a moving target rather than a fixed destination.

Musk’s pronouncement drew scorn from rival researchers including Yann LeCun, Meta’s chief AI scientist, who argues AGI is decades away.

As the race heats up to achieve AGI — and with it supremacy over a technology forecast to create trillions of dollars in value — here is how the top figures in the field are placing their bets on when and how it might arrive.

Musk has emerged as Silicon Valley’s biggest AI optimist. The past year’s improvements in large language models — the systems that sit behind apps such as ChatGPT — and the computing power available to fuel them led him to bring forward his AGI forecast, he said this week.

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year,” Musk said. By the end of this decade, the capabilities of AI will probably be greater than those of all of humanity combined, he added.

AI that goes far beyond the level of individual human experts is often categorised as “superintelligence”, rather than AGI.

“AI is the fastest advancing technology that I’ve ever seen of any kind, and I’ve seen a lot of technology,” Musk told Nicolai Tangen, chief executive of Norges Bank Investment Management, in an interview on his X site this week.

The South Africa-born billionaire has consistently been one of the most bullish voices in tech, with his predictions — such as for the capabilities of self-driving Teslas — often proving over-optimistic.

Some see a financial incentive behind his latest comments. Musk — a co-founder of OpenAI who left after boardroom clashes — is attempting to raise billions of dollars for his own AI start-up, xAI.

Others in the AI industry are doubtful about such an imminent breakthrough. But many of Musk’s closest rivals do not think AGI is too far away.

Anthropic’s co-founder Dario Amodei, a prominent figure in the space, has forecast that AI could match a “generally well-educated human” within two or three years. However, he refuses to be drawn into predicting when AI might exceed human capabilities.

Future developments are likely to hinge on factors that are hard to foresee, including breakthroughs in machine learning, continued increases in computing power and an accommodating regulatory environment.

Demis Hassabis, co-founder of Google’s DeepMind, recently speculated that AGI could be achieved by 2030. “When we started DeepMind back in 2010, we thought of it as a 20-year project and I think we’re kind of on track,” he said on a podcast hosted by Dwarkesh Patel in February. “I wouldn’t be surprised if we had AGI-like systems within the next decade.”

OpenAI’s chief executive Sam Altman has consistently resisted putting a precise date on AGI. He told an audience at the World Economic Forum in January that it could be achieved in the “reasonably close-ish future”.

His investors seem confident in the company’s prospects: eight-year-old OpenAI is now valued at more than $80bn.

But when asked to estimate when AI might outperform an individual across multiple domains, Altman said: “I don’t know how to put a precise timeline on it, or what AGI means any more.”

Altman, who in a blog post last year defined AGI as “AI systems that are generally smarter than humans”, may have his own reasons to demur. Elements of OpenAI’s commercial relationship with its biggest backer Microsoft will be dissolved once that threshold is reached.

Microsoft’s investment gives it licence to use OpenAI’s technology up until the start-up’s board determines AGI has been achieved, according to a lawsuit filed by Musk against the company.

While the founders of Silicon Valley’s hottest start-ups foresee AGI by the end of the decade, some of the industry’s most prominent researchers are more cautious.

“We hear a lot of people saying, ‘Oh my God, we’re going to get AGI within the next year’,” said LeCun of Meta, which has built its own highly powerful AI models. “Very prominent people say this and it is just not happening . . . It’s much harder than we think.”

LeCun, one of the most respected figures in AI, has consistently played down the idea that AGI will arrive imminently. He said last year that, far from threatening humans, the latest models were no better at learning than a cat.

“The emergence of intelligent machines or superintelligent machines is not going to be an event. Progress is going to be continuous over years,” he said. “It is going to take years, maybe decades . . . The history of AI is this obsession of people being overly optimistic and then realising that what they were trying to do was more difficult than they thought.”

Some experts are even more sceptical. Gary Marcus, a cognitive scientist who sold his AI start-up to Uber in 2016, bet Musk $10mn this week that “we won’t see human-superior AGI by the end of 2025”.

Marcus has previously written that AGI will be reached, “possibly before the end of this century”. But, he has said, today’s models are not remotely close to AGI and will not be until they can plug holes in “semantics, reasoning, common sense, theory of mind”.

Additional reporting by Cristina Criddle and Madhumita Murgia
