See also Leopold Aschenbrenner's interview with Dwarkesh Patel.
A growing number of researchers and engineers at the forefront of AI development believe that artificial general intelligence (AGI) could arrive as early as 2027. This view is based not on hype, but on consistent patterns across model performance, scaling behavior, compute infrastructure, and capital flows. These experts are observing rapid advances in hardware availability, continual gains in algorithmic efficiency, and unprecedented focus from leading AI labs, all of which are accelerating faster than many anticipated.
Taken together, these developments point to a near-term horizon where machines could match or exceed expert-level performance across a wide range of tasks. This isn't a speculative scenario decades away; it's something potentially within reach in just a few years.
Leopold Aschenbrenner refers to this perspective as situational awareness: the capacity to recognize where the technology is actually heading based on empirical signals, and to understand that AGI may arrive well before most institutions are prepared. His argument is that AGI is not a theoretical future milestone; it's an approaching inflection point that will significantly impact economics, national security, and the structure of global power.
1. Why AGI by 2027 is plausible
Scaling laws and trendlines
- From 2019 (GPT-2) to 2023 (GPT-4), large language models progressed from generating basic, often error-prone text to performing at near-expert levels on complex academic and professional tasks, including standardized tests and coding benchmarks.
- These improvements were not the result of fundamental theoretical breakthroughs, but came from systematically increasing compute power, expanding model sizes, using more refined datasets, and optimizing training procedures.
- This pattern follows a set of "scaling laws," which show that model performance improves predictably as compute and data increase.
- Aschenbrenner argues that if these scaling trends persist, we can expect 2027-era models to perform at or above the level of top human experts across many domains, effectively meeting a functional definition of AGI. (A sketch of the power law behind this extrapolation follows below.)
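To make "predictably" concrete, here is a minimal sketch of the power-law form these scaling laws take. It is not from Aschenbrenner's essay: the functional form and fitted constants follow Hoffmann et al. (2022, the "Chinchilla" paper), and the numbers should be read as illustrative rather than predictive.

```python
# Chinchilla-style scaling law: loss falls as a power law in parameter count (N)
# and training tokens (D). Constants are Hoffmann et al.'s fitted values, used
# here purely for illustration.

def predicted_loss(N: float, D: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / N**alpha + B / D**beta

# Each 10x in scale predictably drives the loss down:
for params in (1e9, 1e10, 1e11):         # 1B -> 100B parameters
    tokens = 20 * params                  # ~20 tokens per parameter heuristic
    print(f"N={params:.0e}, D={tokens:.0e} -> loss {predicted_loss(params, tokens):.3f}")
```

The point is not the specific constants but the shape: each additional order of magnitude buys a predictable improvement, which is what makes extrapolating the trend to 2027 meaningful at all.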
Effective compute will 100x–1000x
- Training compute for frontier models has grown at a rate of ~0.5 OOMs (orders of magnitude) per year, roughly 3× annually, outpacing Moore's Law by a significant margin.
- Public and private investment in GPU clusters and datacenter infrastructure is accelerating. By 2027, models may benefit from 2–3 additional OOMs, representing 100–1,000× more effective compute.
Algorithmic efficiency
- Over the past decade, algorithmic improvements have significantly reduced the compute required to reach a given performance level, by several orders of magnitude in some cases. These gains have come from innovations in training methods, model architectures, data utilization, and optimization techniques.
- Aschenbrenner estimates that we could see another 1–3 OOMs (orders of magnitude) of efficiency gains by the end of the decade. These advances would not only enhance what current compute can deliver but also multiply the effects of hardware scaling; see the arithmetic sketch below. This means that even without major breakthroughs in compute infrastructure, smarter algorithms alone could push performance into AGI territory.
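Because OOMs are exponents, the "effective compute" argument reduces to addition: hardware OOMs and algorithmic-efficiency OOMs stack. A back-of-the-envelope sketch, with per-year rates held constant as an explicit assumption (the rates are illustrative midpoints of the figures quoted above):

```python
# Counting the OOMs: compute and algorithmic efficiency multiply, so their
# orders of magnitude add. All rates are assumptions, not measurements.

YEARS = 4                      # e.g., 2023 -> 2027
COMPUTE_OOMS_PER_YEAR = 0.5    # ~3x/year growth in training compute
ALGO_OOMS_PER_YEAR = 0.5       # midpoint of the 1-3 OOMs-by-2030 estimate

compute_ooms = COMPUTE_OOMS_PER_YEAR * YEARS   # 2 OOMs -> 100x
algo_ooms = ALGO_OOMS_PER_YEAR * YEARS         # 2 OOMs -> 100x
total_ooms = compute_ooms + algo_ooms          # 4 OOMs -> 10,000x

print(f"effective compute by 2027: 10^{total_ooms:.0f} = {10**total_ooms:,.0f}x")
```

Unhobbling gains (next subsection) then come on top of this multiplier rather than inside it.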
"Unhobbling" the models
- Many current models are artificially constrained by weak prompting, short context windows, and limited tools.
- New techniques, such as reinforcement learning from human feedback (RLHF), tool use, agentic scaffolding, and long-context reasoning, dramatically enhance practical capabilities without increasing base model size.
- By "unhobbling" models and deploying them as agents, the same base capabilities can yield much greater real-world utility; a minimal scaffold is sketched below.
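For concreteness, here is a minimal sketch of what agentic scaffolding means in practice. Nothing below is a real API: `call_model` is a stub standing in for any chat-completion endpoint, and the `TOOL:` protocol is invented for illustration. The point is that capability is added by the loop around the model, not by the model itself.

```python
# Minimal agent scaffold ("unhobbling"): the base model is unchanged; tools,
# a running transcript, and an iteration loop are wrapped around it.

def call_model(transcript: str) -> str:
    """Stub for a frontier-model API call. It demonstrates the control flow:
    request a tool first, then answer once a result is available."""
    if "RESULT:" not in transcript:
        return "TOOL: calculator 2+2"
    return "The answer is 4."

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
    "search": lambda query: f"(stub results for {query!r})",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = call_model(transcript)
        if reply.startswith("TOOL:"):                   # model requests a tool
            name, _, arg = reply[len("TOOL:"):].strip().partition(" ")
            transcript += f"{reply}\nRESULT: {TOOLS[name](arg)}\n"
        else:
            return reply                                # model answers directly
    return "(step limit reached)"

print(run_agent("What is 2+2?"))   # -> The answer is 4.
```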
Bottom line: on this view, a continuation of current scaling trends plus existing optimization methods would be sufficient to reach AGI by 2027.
2. The intelligence explosion
What happens after AGI?
- Once AI systems reach the level of top-tier researchers, they can begin contributing directly to AI research and development, acting as both collaborators and tools in accelerating innovation.
- These systems would be able to analyze experimental results, generate novel ideas, optimize architectures, and run large-scale simulations at unprecedented speed and scale.
- This sets up a feedback loop where advanced AI helps design even better AI, rapidly improving capabilities with each iteration.
- The recursive self-improvement dynamic could lead to an "intelligence explosion," in which AI systems quickly surpass human-level performance across a wide range of domains: scientific, strategic, and technical. (A toy model of the loop follows below.)
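A toy model makes the loop concrete. The proportionality assumption here is mine, not a forecast: the rate of capability improvement scales with current capability, because more capable systems do AI research faster.

```python
# Toy recursive-improvement model: dC/dt = r * C. Proportional returns give
# exponential growth; superlinear returns (dC/dt ~ C^p with p > 1) would give
# hyperbolic, finite-time blowup. All parameters are illustrative.

def simulate(c0: float = 1.0, r: float = 1.0, dt: float = 0.01,
             years: float = 3.0) -> float:
    c = c0
    for _ in range(int(years / dt)):
        c += r * c * dt          # better AI -> faster AI research -> better AI
    return c

print(f"capability after 3 years: {simulate():.1f}x")   # ~e^3, about 20x
```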
Millions of fast, parallel AI scientists
- In a post-AGI world, organizations may deploy millions of AGI instances in parallel, each operating at accelerated speed.
- This would constitute a research force hundreds of times larger and faster than any human scientific community in history.
- Aschenbrenner argues that this could compress a decade of algorithmic progress into less than a year; the back-of-the-envelope arithmetic is sketched below.
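The "decade into a year" claim is, at bottom, a capacity ratio. A rough sketch with loudly assumed inputs (the instance count comes from the text; the speedup and the size of the human field are placeholders of mine):

```python
# Rough arithmetic behind compressed research timelines. Parallel copies are
# not perfectly fungible with serial progress, so read this as an upper bound.

N_INSTANCES = 1_000_000    # "millions of AGI instances" (from the text)
SERIAL_SPEEDUP = 10        # each runs ~10x human researcher speed (assumed)
HUMAN_FIELD = 100_000      # rough size of the human AI research community (assumed)

pace = N_INSTANCES * SERIAL_SPEEDUP / HUMAN_FIELD
print(f"~{pace:,.0f}x the human field's pace")   # 100x: a decade in ~5 weeks
```

Even heavily discounted for coordination overhead and serial bottlenecks, a ratio like this is what drives the "less than a year" estimate.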
Superintelligence is not far behind
- A few additional OOMs of algorithmic and compute gains beyond AGI could result in systems that are vastly superhuman.
- Such systems could rapidly solve scientific challenges, optimize strategic planning, and develop entirely new technologies.
- The gap between human-level AI and superintelligence may not be decades; it could be measured in months or a few years.
If AGI arrives by 2027, superintelligence could plausibly follow before 2030.

3. The geopolitical stakes
A new arms race
- The implications of superintelligence extend beyond technological innovation; they may determine which nations gain long-term geopolitical control.
- A 1–2 year lead in deploying superintelligent systems could translate into decisive advantages across military operations, economic leverage, intelligence gathering, and cyber capabilities.
- Such an advantage may be difficult, if not impossible, for others to recover from in a competitive, high-stakes global environment.
Lessons from history
- Historical military conflicts demonstrate the importance of technological asymmetry: the Gulf War showed how a few decades' lead in defense tech can produce decisive victories.
- Similar dynamics may apply in the AI domain, but with the added risk of global-scale consequences.
The stakes for the free world
- If democratic nations lead the transition to AGI, they may be able to shape global norms for safe development, implement enforceable alignment standards, and build coalitions for international governance.
- In contrast, if authoritarian powers like the CCP take the lead, the likely outcome includes stronger global surveillance systems, centralized control over advanced AI capabilities, and a reduced chance of transparency or global coordination.
- Aschenbrenner argues that U.S. leadership is essential not only for strategic security but also for preserving the possibility of accountable, values-aligned AGI deployment on a global scale.
"Superintelligence is a matter of national security, and the United States must win."
4. What needs to happen
Security is not on track
- Current AI labs operate with minimal cybersecurity relative to the sensitivity of their work.
- Model weights, training data, and research artifacts, often worth hundreds of millions of dollars, are stored in environments that frequently lack rigorous security protocols.
- These assets remain vulnerable to theft by state actors, cybercriminals, or insider threats, raising concerns not only about intellectual property loss but also about the uncontrolled proliferation of powerful models.
- A shift toward classified, defense-grade information security is urgently needed, including hardened infrastructure, strict access controls, active monitoring, and collaboration with national security agencies to identify and mitigate emerging threats.
The coming "AGI Manhattan Project"
- As AGI approaches, the U.S. government is likely to assume a more direct role in overseeing frontier AI development.
- Aschenbrenner anticipates the creation of a centralized, national-scale AGI effort modeled on the original Manhattan Project.
- This would entail secure facilities, cross-agency collaboration, top-tier talent recruitment, and multi-trillion-dollar funding.
"No startup can handle superintelligence."
5. Situational awareness is rare
- Despite the evidence and strategic implications, most decision-makers remain unaware of how quickly AI is advancing.
- Even within tech, many leaders view AGI timelines as speculative or distant.
- Aschenbrenner's message is that situational awareness (recognizing the implications of the current trajectory) is essential for institutions that hope to remain relevant in the years ahead.
"If we're right about the next few years, we are in for a wild ride."
Situational Awareness is not merely a forecast; it is a framework for interpreting the present. The trends in compute, model performance, and institutional behavior suggest that AGI is not a multi-decade project. It's a near-term transition that will reshape society, the economy, and international relations.
Whether or not one agrees with every element of Aschenbrenner's timeline, his core message is difficult to dismiss: world-changing AI tools are arriving faster than expected, and preparation must begin now.
Quotes
"It is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesnât require believing in sci-fi; it just requires believing in straight lines on a graph."
â
"Look. The models, they just want to learn. You have to understand this. The models, they just want to learn." â ilya sutskever (circa 2015, via Dario Amodei)
â
"A decade earlier, models could barely identify simple images of cats and dogs; four years earlier, GPT-2 could barely string together semi-plausible sentences. Now we are rapidly saturating all the benchmarks we can come up with. And yet this dramatic progress has merely been the result of consistent trends in scaling up deep learning."
â
"We are racing through the OOMs extremely rapidly [...]. While the inference is simple, the implication is striking. Another jump like that very well could take us to AGI, to models as smart as PhDs or experts that can work beside us as coworkers. Perhaps most importantly, if these AI systems could automate AI research itself, that would set in motion intense feedback loops."
â
"If you keep being surprised by AI capabilities, just start counting the OOMs."
â
"It used to take decades to crack widely-used benchmarks; now it feels like mere months."
â
"If thereâs one lesson weâve learned from the past decade of AI, itâs that you should never bet against deep learning."
â
"With each OOM of effective compute, models predictably, reliably get better. If we can count the OOMs, we can (roughly, qualitatively) extrapolate capability improvements. Thatâs how a few prescient individuals saw GPT-4 coming."
â
"[...] The upshot is clear: we are rapidly racing through the OOMs. There are potential headwinds in the data wall [...]âbut overall, it seems likely that we should expect another GPT-2-to-GPT-4-sized jump, on top of GPT-4, by 2027."
â
"Thereâs a very real chance things stall out [...]. But I think itâs reasonable to guess that the labs will crack it, and that doing so will not just keep the scaling curves going, but possibly enable huge gains in model capabilities."
â
"Scaling up simple deep learning techniques has just worked, the models just want to learn, and weâre about to do another 100,000x+ by the end of 2027. It wonât be long before theyâre smarter than us."
â
"AI progress wonât stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems."
â
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an âintelligence explosion,â and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." â i. j. good (1965)
â
"After initially learning from the best human games, AlphaGo started playing against itselfâand it quickly became superhuman, playing extremely creative and complex moves that a human would never have come up with."
â
"The jump to superintelligence would be wild enough at the current rapid but continuous rate of AI progress [...]. But it could be much faster than that, if AGI automates AI research itself."
â
"Once we get AGI, we wonât just have one AGI. [...] Given inference GPU fleets by then, weâll likely be able to run many millions of them [...]."
â
"Get to AGI [...]âand AI systems will become superhumanâvastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler."
â
"Automated AI research could probably compress a human decade of algorithmic progress into less than a year (and that seems conservative)."
â
"Superintelligence would likely provide a decisive military advantage, and unfold untold powers of destruction. We will be faced with one of the most intense and volatile moments of human history."
â
"We could see economic growth rates of 30%/year and beyond, quite possibly multiple doublings a year. This follows fairly straightforwardly from economistsâ models of economic growth."
â
"Military power and technological progress has been tightly linked historically, and with extraordinarily rapid technological progress will come concomitant military revolutions."
â
"The barriers to even trillions of dollars of datacenter buildout in the US are entirely self-made. [...] The clusters can be built in the US, and we have to get our act together to make sure it happens in the US. American national security must come first [...]. If American business is unshackled, America can build like none other (at least in red states)."
â
"We must be prepared for our adversaries to âwake up to AGIâ in the next few years. AI will become the #1 priority of every intelligence agency in the world. In that situation, they would be willing to employ extraordinary means and pay any cost to infiltrate the AI labs."
â
"The German project had narrowed down on two possible moderator materials: graphite and heavy water. [...] Since Fermi had kept his result secret, the Germans did not have Fermiâs measurements to check against, and to correct the error. This was crucial: it left the German project to pursue heavy water insteadâa decisive wrong path that ultimately doomed the German nuclear weapons effort."
â
"[...] We will face a situation where, in less than a year, we will go from recognizable human-level systems [...] to much more alien, vastly superhuman systems that pose a qualitatively different, fundamentally novel technical alignment problem."
â
"[...] I expect that within a small number of years, these AI systems will be integrated in many critical systems, including military systems (failure to do so would mean complete dominance by adversaries). It sounds crazy, but remember when everyone was saying we wouldnât connect AI to the internet? The same will go for things like âweâll make sure a human is always in the loop!â"
â
"What makes this incredibly hair-raising is the possibility of an intelligence explosion: that we might make the transition from roughly human-level systems to vastly superhuman systems extremely rapidly, perhaps in less than a year."
â
"Superintelligence will be the most powerful technologyâ and most powerful weaponâmankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons."
â
"Our generation too easily takes for granted that we live in peace and freedom. And those who herald the age of AGI in SF too often ignore the elephant in the room: superintelligence is a matter of national security, and the United States must win."
â
"At stake in the AGI race will not just be the advantage in some far-flung proxy war, but whether freedom and democracy can survive for the next century and beyond."
â
"The free world must prevail over the authoritarian powers in this race."
â
"[...] A healthy lead gives us room to maneuver: the ability to âcash inâ parts of the lead, if necessary, to get safety right [...]."
â
"The national security state will get involved [...] by 27/28, weâll get some form of government AGI project."
â
"No startup can handle superintelligence."
â
"If the government project is inevitable, earlier seems better."
â
"Now it feels extremely visceral. I can see it. I can see how AGI will be built. [...] I can basically tell you the cluster AGI will be trained on and when it will be built, the rough combination of algorithms weâll use, the unsolved problems and the path to solving them, the list of people that will matter. I can see it."
â
"Will we tame superintelligence, or will it tame us? Will humanity skirt self-destruction once more? The stakes are no less."
â