Researchers seek consensus on what constitutes Artificial General Intelligence


A team of researchers at DeepMind focusing on the next frontier of artificial intelligence—Artificial General Intelligence (AGI)—realized they needed to resolve one key issue first. What exactly, they asked, is AGI?

AGI is generally viewed as a type of artificial intelligence that can understand, learn and apply knowledge across a broad range of tasks, operating much like the human brain. Wikipedia broadens the scope by suggesting AGI is “a hypothetical type of intelligent agent (that) could learn to accomplish any intellectual task that human beings or animals can perform.”

OpenAI’s charter describes AGI as a set of “highly autonomous systems that outperform humans at most economically valuable work.”

AI expert and founder of Geometric Intelligence Gary Marcus defined it as “any intelligence that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.”

With so many variations in definitions, the DeepMind team embraced a simple notion voiced centuries ago by Voltaire: “If you wish to converse with me, define your terms.”

In a paper published on the preprint server arXiv, the researchers outlined what they termed “a framework for classifying the capabilities and behavior of AGI models.”

In doing so, they hope to establish a common language for researchers as they measure progress, compare approaches and assess risks.

“Achieving human-level ‘intelligence’ is an implicit or explicit north-star goal for many in our field,” said Shane Legg, who introduced the term AGI 20 years ago.

In an interview with MIT Technology Review, Legg explained, “I see so many discussions where people seem to be using the term to mean different things, and that leads to all sorts of confusion. Now that AGI is becoming such an important topic we need to sharpen up what we mean.”

In the arXiv paper, titled “Levels of AGI: Operationalizing Progress on the Path to AGI,” the team summarized several principles required of an AGI model. They include a focus on the capabilities of a system, not the process.

“Achieving AGI does not imply that systems ‘think’ or ‘understand’ (or) possess qualities such as consciousness or sentience,” the team emphasized.

An AGI system must also have the ability to learn new tasks, and know when to seek clarification or assistance from humans for a task.

Another parameter is a focus on potential, and not necessarily actual deployment of a program. “Requiring deployment as a condition of measuring AGI introduces non-technical hurdles such as legal and social considerations, as well as potential ethical and safety concerns,” the researchers explained.

The team then compiled a list of intelligence thresholds ranging from “Level 0, No AGI,” to “Level 5, Superhuman.” Levels 1–4 cover “Emerging,” “Competent,” “Expert” and “Virtuoso” levels of achievement.

Three programs met the threshold for the AGI label, but all three, the generative text models ChatGPT, Bard and Llama 2, reached only “Level 1, Emerging.” No other current AI programs met the criteria for AGI.

Other programs were classified as narrow AI, including SHRDLU, an early natural language understanding program developed at MIT, at “Level 1, Emerging.”

At “Level 2, Competent” are Siri, Alexa and Google Assistant. The grammar checker Grammarly ranks at “Level 3, Expert AI.”

Higher up the list, at “Level 4, Virtuoso,” are Deep Blue and AlphaGo. Topping the list, at “Level 5, Superhuman,” are DeepMind’s AlphaFold, which predicts a protein’s 3D structure from its amino acid sequence, and Stockfish, a powerful open-source chess program.
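
The taxonomy above is essentially a lookup from a performance level to example systems. The sketch below is a minimal illustration in Python of the levels and the examples the article names; the class, field and variable names are illustrative, not taken from the paper, and the paper's separate narrow-versus-general distinction is omitted for brevity.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Performance levels from DeepMind's 'Levels of AGI' framework."""
    NO_AGI = 0
    EMERGING = 1
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

# Example systems mentioned in the article, grouped by the level they were assigned.
EXAMPLES = {
    AGILevel.EMERGING: ["ChatGPT", "Bard", "Llama 2", "SHRDLU"],
    AGILevel.COMPETENT: ["Siri", "Alexa", "Google Assistant"],
    AGILevel.EXPERT: ["Grammarly"],
    AGILevel.VIRTUOSO: ["Deep Blue", "AlphaGo"],
    AGILevel.SUPERHUMAN: ["AlphaFold", "Stockfish"],
}

for level, systems in EXAMPLES.items():
    # e.g. "Level 1, Emerging: ChatGPT, Bard, Llama 2, SHRDLU"
    print(f"Level {level.value}, {level.name.title()}: {', '.join(systems)}")
```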

However, there is still no single agreed-upon definition of AGI, and the candidate definitions continue to evolve.

“As we gain more insights into these underlying processes, it may be important to revisit our definition of AGI,” says Meredith Ringel Morris, Google DeepMind’s principal scientist for human and AI interaction.

“It is impossible to enumerate the full set of tasks achievable by a sufficiently general intelligence,” the researchers said. “As such, an AGI benchmark should be a living benchmark. Such a benchmark should therefore include a framework for generating and agreeing upon new tasks.”
