A history of definitions of powerful AI

I made this timeline as supporting research for this essay: AGI (disambiguation). Excerpted:

This is a timeline of how visions of AGI and powerful AI have shifted over the past ~75 years, from the fathers of computing to today's frontier labs.

On net, definitions have inched toward the measurable, practical, and achievable, emphasizing an AI's performance—especially vis-à-vis human benchmarks, on human jobs—over internal "consciousness" or "agency," recursive self-improvement, or the ability to generalize to any new task or environment.

Ultimately, AGI is better understood through the lens of faith, field-building, and ingroup signaling than as a concrete technical milestone. AGI represents an ambition and an aspiration; a Schelling point, a shibboleth. The AGI-pilled share the belief that we will soon build machines more capable than ourselves—that humans won't retain our hegemony over intelligence for long. The specifics aren't important: it's a feeling of existential weight.

Please email jaswsunny at gmail dot com for suggestions or corrections.

1947
Automatic Computing Engine

A good working rule is that the ACE can be made to do any job that could be done by a human computer, and will do it in one ten-thousandth of the time. I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism, and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general.

"Lecture to the London Mathematical Society" from the Collected Works of Alan M. Turing, Volume 1: Mechanical Intelligence
This was the first known conception of an electronic computer intelligent enough to complete human-like tasks, learn from experience, etc. The lecture also discusses issues like labor revolts, alignment problems, and hallucination.
1955
artificial intelligence

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Proposal for the Dartmouth Summer Research Project on AI
This was the first published usage of "artificial intelligence" and is credited with founding the field of AI.
1980
strong AI

According to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.

Minds, brains, and programs
Searle argues against the strong AI hypothesis with the Chinese Room thought experiment.
1997
advanced artificial general intelligence

By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of ... operations where a human intelligence would otherwise be needed.

Nanotechnology and National Security
This was the first recorded usage of the term AGI, though it went mostly unnoticed by the AI community.
2002
AGI

What is meant by AGI is, loosely speaking, AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn't know about at the time of their creation. A marked distinction exists between practical AGI work and, on the other hand:

• Pragmatic but specialized "narrow AI" research which is aimed at creating programs carrying out specific tasks like playing chess, diagnosing diseases, driving cars and so forth (most contemporary AI work falls into this category.)
• Purely theoretical AI research, which is aimed at clarifying issues regarding the nature of intelligence and cognition, but doesn't involve technical details regarding actually realizing artificially intelligent software.

Artificial General Intelligence (2005)
Goertzel and Legg are credited with popularizing the term AGI, which became the title of Goertzel's 2005 anthology of AGI research papers.
2005
strong AI

Artificial intelligence permeates our economy. It's what I define as "narrow" AI: machine intelligence that equals or exceeds human intelligence for specific tasks… So what are the prospects for "strong" AI, which I describe as machine intelligence with the full range of human intelligence?

Long Live AI
2005
human-level intelligence

Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or "jobs" at which people are employed. I suggest we replace the Turing test by something I will call the "employment test."

To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform.

Human-Level Artificial Intelligence? Be Serious!
2007
universal intelligence

"Intelligence measures an agent's ability to achieve goals in a wide range of environments."

The universal intelligence of agent π:

Υ(π) := Σ_{μ∈E} 2^(−K(μ)) · V_μ^π

where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected value agent π achieves in μ.
Universal Intelligence: A Definition of Machine Intelligence
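The formula weights an agent's value in each environment by 2^(−K(μ)), so performance in simpler environments dominates the score. A toy sketch of this weighted sum (the environments, complexities, and values below are invented for illustration; true Kolmogorov complexity is uncomputable, so any real use needs a stand-in such as compressed description length):

```python
# Toy illustration of the Legg–Hutter universal intelligence measure:
# Upsilon(pi) = sum over environments mu of 2^(-K(mu)) * V(mu, pi).

def universal_intelligence(values_by_env, complexity_by_env):
    """values_by_env: expected value V_mu^pi the agent achieves in each environment.
    complexity_by_env: stand-in description length K(mu) in bits."""
    return sum(2 ** -complexity_by_env[mu] * v
               for mu, v in values_by_env.items())

# Hypothetical numbers: a low-complexity environment (K=2 bits) is weighted
# 2^-2 = 0.25, while a complex one (K=12 bits) is weighted only 2^-12.
V = {"coin_flip": 0.5, "gridworld": 0.9, "chess": 0.2}
K = {"coin_flip": 2, "gridworld": 6, "chess": 12}
print(universal_intelligence(V, K))
```

Note how the agent's mediocre coin-flip performance contributes far more to the total than its chess performance, purely because the environment is simpler.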
2011
AGI

"Artificial General Intelligence" (hereafter, AGI) is the emerging term of art used to denote "real" AI (see, e.g., the edited volume Goertzel and Pennachin 2006). As the name implies, the emerging consensus is that the missing characteristic is generality. Current AI algorithms with human‐equivalent or ‐superior performance are characterized by a deliberately‐programmed competence only in a single, restricted domain… It is a qualitatively different class of problem to handle an AGI operating across many novel contexts that cannot be predicted in advance.

The Ethics of Artificial Intelligence
2014
high-level machine intelligence

Define a 'high–level machine intelligence' (HLMI) as one that can carry out most human professions at least as well as a typical human. We need to avoid using terms that are already in circulation and would thus associate the questionnaire with certain groups or opinions, like "artificial intelligence", "singularity", "artificial general intelligence" or "cognitive system". For these reasons, we settled for a definition that a) is based on behavioral ability, b) avoids the notion of a general 'human–level' and c) uses a newly coined term.

Future Progress in Artificial Intelligence: A Survey of Expert Opinion
A 2014 survey of AI expert opinion; the authors coined "HLMI" specifically to avoid biasing respondents with loaded terms like "AGI" or "singularity".
2014
superintelligence

Superintelligence is defined as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."

Superintelligence (2014)
2016
transformative AI

Roughly and conceptually, transformative AI refers to potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution. I also provide (below) a more detailed definition. The concept of "transformative AI" has some overlap with concepts put forth by others, such as "superintelligence" and "artificial general intelligence." However, "transformative AI" is intended to be a more inclusive term, leaving open the possibility of AI systems that count as "transformative" despite lacking many abilities humans have.

Some Background on Our Views Regarding Advanced Artificial Intelligence
2016
high-level machine intelligence

"High-level machine intelligence" (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.

When Will AI Exceed Human Performance? Evidence from AI Experts
Grace et al.'s survey of AI experts on their views on AGI, first run in 2016.
2018
AGI

OpenAI's mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity.

OpenAI Charter
2019
general intelligence

The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty. "General intelligence" is not a binary property which a system either possesses or lacks. It is a spectrum, tied to 1) a scope of application, which may be more or less broad, and 2) the degree of efficiency with which the system translates its priors and experience into new skills over the scope considered, 3) the degree of generalization difficulty represented by different points in the scope considered. It is conceptually unsound to set "artificial general intelligence" in an absolute sense (i.e. "universal intelligence") as a goal. The consensus definition of AGI, "a system that can automate the majority of economically valuable work," while a useful goal, is an incorrect measure of intelligence.

On the Measure of Intelligence
Chollet has criticized other definitions of AGI, arguing that they measure memorization rather than true generality.
2022
human-level AI

I think the phrase AGI should be retired and replaced by "human-level AI". There is no such thing as AGI. Even human intelligence is very specialized. We do not realize that human intelligence is specialized because all the intelligent tasks we can think of are tasks that we can apprehend. But that is a tiny subset of all tasks. The overwhelming majority of tasks are completely out of reach of un-augmented human intelligence.

LinkedIn
2023
AGI

We use AGI to refer to systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.

Sparks of Artificial General Intelligence: Early experiments with GPT-4
2024
levels of AGI

Artificial General Intelligence (AGI) is an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks.

DeepMind's AGI levels table showing progression from No AI through Emerging, Competent, Expert, Virtuoso to Superhuman levels, across Narrow and General capabilities
Levels of AGI for Operationalizing Progress on the Path to AGI
This paper scores systems on both generality and performance, and adds a separate spectrum of autonomy levels associated with risk.
2024
stages of AGI

OpenAI has internally defined five "stages" of AGI

OpenAI's five levels of AGI: Level 1 (Chatbots), Level 2 (Reasoners), Level 3 (Agents), Level 4 (Innovators), Level 5 (Organizations)
Bloomberg: OpenAI Sets Levels to Track Progress Toward Superintelligent AI
(In a 2023 blog post, Sam Altman distinguishes between "initial AGI" and "successor systems.")
2024
powerful AI

I find AGI to be an imprecise term that has gathered a lot of sci-fi baggage and hype. I prefer "powerful AI" or "Expert-Level Science and Engineering" which get at what I mean without the hype. By powerful AI, I have in mind an AI model—likely similar to today's LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

• In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

• In addition to just being a "smart thing you talk to", it has all the "interfaces" available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.

• It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on.

• It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

• It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

• It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

• The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.

• Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

We could summarize this as a "country of geniuses in a datacenter."

Machines of Loving Grace
2025
AGI

Systems that start to point to AGI* are coming into view, and so we think it's important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

Three Observations
Sam Altman's blog post definition is notably vaguer and weaker than the OpenAI 2018 charter's definition, "highly autonomous systems that outperform humans at most economically valuable work."