AI scientists are sceptical that modern models will lead to AGI

Many AI firms say their models are on the road to artificial general intelligence, but not everyone agrees

(Image: MANAURE QUINTERO/AFP via Getty Images)

Tech companies have long claimed that simply expanding their current AI models will lead to artificial general intelligence (AGI), which would match or surpass human capabilities. But as the performance of the most recent models has plateaued, many AI researchers now doubt that today’s technology will lead to superintelligent systems.

In a survey of 475 AI researchers, about 76 per cent of respondents said it was “unlikely” or “very unlikely” that scaling up current approaches will succeed in achieving AGI. The findings are part of a report by the Association for the Advancement of Artificial Intelligence, an international scientific society based in Washington DC.

This is a notable change in attitude from the “scaling is all you need” optimism that has spurred tech companies since the start of the generative AI boom in 2022. Most of the cutting-edge achievements since then have been based on systems called transformer models, which have improved in performance as they have been trained on increasing volumes of data. But they seem to have stagnated in the most recent releases, which showed only incremental changes in quality.

“The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced,” says Stuart Russell at the University of California, Berkeley, a member of the panel that organised the report. “I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued.”

Nonetheless, tech companies plan to collectively spend an estimated $1 trillion on data centres and chips in the next few years to support their AI ambitions.

The hype around AI technologies may explain why 80 per cent of survey respondents also said current perceptions of AI capabilities don’t match reality. “Systems proclaimed to be matching human performance – such as on coding problems or mathematics problems – still make bone-headed mistakes,” says Thomas Dietterich at Oregon State University, who contributed to the report. “These systems can be very useful as tools for assisting in research and coding, but they are not going to replace any human workers.”

AI companies have more recently focused on so-called inference-time scaling, which involves AI models using more computing power and taking longer to process queries before responding, says Arvind Narayanan at Princeton University. But he says this approach is “unlikely to be a silver bullet” for reaching AGI.

Although tech companies frequently describe AGI as their ultimate goal, the very definition of AGI is unsettled. Google DeepMind has described it as a system that can outperform all humans on a set of cognitive tests, while Huawei has suggested that reaching this milestone requires a body that lets AI interact with its environment. As for Microsoft and OpenAI, an internal report stated that the two companies will consider AGI achieved only when OpenAI has developed a model that can generate $100 billion in profit.

Topics:

  • artificial intelligence
  • computing