There are a few things I believe about artificial intelligence:
I believe that over the past few years, AI systems have begun to outperform humans in a number of domains (mathematics, coding and medical diagnosis, to name just a few) and that they are getting better every day.
I believe that soon, probably in 2026 or 2027 but possibly as early as this year, one or more AI companies will claim they have created artificial general intelligence, or AGI, which is usually defined as something like "a general-purpose AI system that can do almost any cognitive task a human can do."
I believe that when AGI is announced, there will be debates over definitions and arguments about whether it counts as "real" AGI, but that these mostly won't matter, because the broader point will be true: we are losing our monopoly on human-level intelligence and transitioning to a world with very powerful AI systems in it.
I believe that over the next decade, powerful AI will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it.
I believe that most people and institutions are totally unprepared for the AI systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that hardened AI skeptics, who insist that the progress is all smoke and mirrors and dismiss AGI as a delusional fantasy, are not only wrong on the merits but are giving people a false sense of security.
I believe that whether you think AGI will be great or terrible for humanity (and honestly, it may be too early to say), its arrival raises important economic, political and technological questions to which we currently have no answers.
I believe the right time to start preparing for AGI is now.
This may all sound crazy. But I didn't arrive at these views as a starry-eyed futurist, an investor hyping my AI portfolio or a guy who took too many magic mushrooms and watched "Terminator 2."
I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful AI systems, the investors funding them and the researchers studying their effects. And I've come to believe that what's happening in AI right now is bigger than most people understand.
In San Francisco, where I'm based, the idea of AGI is neither fringe nor exotic. People here talk about "feeling the AGI," and building AI systems smarter than humans has become the explicit goal of some of Silicon Valley's biggest companies. Every week, I meet engineers and entrepreneurs working on AI who tell me that change, world-shaking change of a kind we've never seen before, is just around the corner.
"Over the past year or two, what used to be called 'short timelines' (thinking that AGI would probably be built this decade) has become a near-consensus," Miles Brundage, an independent AI policy researcher who left OpenAI last year, told me recently.
Outside the Bay Area, few people have even heard of AGI. And in my industry, journalists who take AI progress seriously risk being mocked as gullible dupes or industry shills.
Honestly, I get the reaction. Even though AI systems now contribute to Nobel Prize-winning breakthroughs, and even though 400 million people a week use ChatGPT, a lot of the AI that people encounter in their daily lives is a nuisance. I sympathize with anyone who sees AI slop plastered all over their Facebook feed, or has a clumsy interaction with a customer service chatbot, and thinks: This is what's going to take over the world?
I used to scoff at the idea, too. But I've come to believe I was wrong. A few things have persuaded me to take AI progress more seriously.
The insiders are worried.
The most disorienting thing about today's AI industry is that the people closest to the technology, the employees and executives of the leading AI labs, tend to be the most worried about how fast it is improving.
This is quite unusual. Back in 2010, when I was covering the rise of social media, nobody inside Twitter, Foursquare or Pinterest was warning that their apps could cause societal chaos. Mark Zuckerberg wasn't testing Facebook for evidence that it could be used to create novel bioweapons or carry out autonomous cyberattacks.
But today, the people with the best information about AI progress (the people building powerful AI, with access to systems more advanced than the public sees) are telling us that big change is near. The leading AI companies are actively preparing for AGI's arrival and are studying potentially scary properties of their models, such as whether they are capable of scheming and deception, in anticipation of their becoming more capable and autonomous.
Sam Altman, the chief executive of OpenAI, has said that "systems that start to point to AGI are coming into view."
Demis Hassabis, the chief executive of Google DeepMind, has said AGI is probably "three to five years away."
Dario Amodei, the chief executive of Anthropic (who doesn't like the term AGI but agrees with the general principle), said last month that he believed we were a year or two away from having AI systems that are "much smarter than almost every human being."
Maybe we should discount these predictions. After all, AI executives stand to profit from inflated AGI hype and may have an incentive to exaggerate.
But many independent experts are saying similar things, including Geoffrey Hinton and Yoshua Bengio, two of the world's most influential AI researchers, and Ben Buchanan, who was the top AI expert in the Biden administration. So are a host of other prominent economists, mathematicians and national security officials.
To be fair, some experts doubt that AGI is imminent. But even if you ignore everyone who works at an AI company or has a vested stake in the outcome, there are still plenty of credible, independent voices with short AGI timelines.
The AI models keep getting better.
For me, just as persuasive as what the experts say is the evidence that today's AI systems are improving quickly, in ways that are fairly obvious to anyone who uses them.
In 2022, when OpenAI released ChatGPT, the leading AI models struggled with basic arithmetic, frequently failed at complex reasoning problems and often "hallucinated," or made up nonexistent facts. Chatbots of that era could do impressive things with the right prompts, but you would never use one for anything critically important.
Today's AI models are far better. Specialized models now put up medalist-level scores on the International Mathematical Olympiad, and general-purpose models have gotten so good at complex problem solving that new, harder tests have had to be created to measure their capabilities. Hallucinations and factual mistakes still happen, but they are rarer in newer models. And many businesses now trust AI models enough to build them into core, customer-facing functions.
(The New York Times has sued OpenAI and its partner Microsoft, accusing them of copyright infringement involving news content related to AI systems. OpenAI and Microsoft have denied the claims.)
Some of that improvement is a function of scale. In AI, bigger models, trained on more data with more processing power, tend to produce better results, and today's leading models are significantly bigger than their predecessors.
But it also stems from breakthroughs that AI researchers have made in recent years, most notably the advent of "reasoning" models, which are built to take additional computational steps before giving a response.
Reasoning models, which include OpenAI's o1 and DeepSeek's R1, are trained to work through complex problems and are built using reinforcement learning, a technique that was also used to teach AI to play board games. They appear to succeed at things that tripped up previous models. (One example: GPT-4o, a standard model released by OpenAI, scored 9 percent on AIME 2024, a competition math exam; o1, a reasoning model that OpenAI released a few months later, scored 74 percent on the same test.)
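To make "additional computational steps" concrete, here is a minimal, purely illustrative Python sketch of one simple way to spend more compute at answer time (my own toy example, not how OpenAI or DeepSeek actually build or train their models): simulate a fallible solver, sample it many times and take a majority vote. The toy question, the 60 percent per-attempt accuracy and the candidate answers are all assumptions made up for illustration.

```python
import random
from collections import Counter

CORRECT = "7"           # hypothetical right answer to a toy arithmetic question (assumption)
PER_TRY_ACCURACY = 0.6  # assumed accuracy of a single attempt (assumption)

def one_attempt() -> str:
    """Simulate one fallible attempt at the problem."""
    if random.random() < PER_TRY_ACCURACY:
        return CORRECT
    return random.choice(["5", "6", "8"])  # plausible wrong answers (made up)

def majority_vote(n_attempts: int = 15) -> str:
    """Spend more compute per question: sample many attempts, return the most common answer."""
    votes = Counter(one_attempt() for _ in range(n_attempts))
    return votes.most_common(1)[0][0]

def accuracy(answer_fn, trials: int = 2000) -> float:
    """Estimate how often a given answering strategy returns the correct answer."""
    return sum(answer_fn() == CORRECT for _ in range(trials)) / trials

if __name__ == "__main__":
    print("single attempt:", accuracy(one_attempt))           # roughly 0.6
    print("vote over 15 attempts:", accuracy(majority_vote))  # noticeably higher
```

Real reasoning models do something far more sophisticated, generating and checking long chains of intermediate steps, but the underlying trade is the same: more computation per question in exchange for more reliable answers.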
As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT's Deep Research, a premium feature that produces complex analytical briefs, were "at least the median" of the human researchers he has worked with.
I have also found many uses for AI tools in my work. I don't use AI to write my columns, but I use it for lots of other things: preparing for interviews, summarizing research papers, building personalized apps to help with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they have hit a plateau.
If you really want to grasp how much better AI has gotten recently, talk to a programmer. A year or two ago, AI coding tools existed, but they were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that AI does most of the actual coding and that they increasingly feel their job is to supervise the AI systems.
Jared Friedman, a partner at the startup accelerator Y Combinator, recently said that a quarter of the accelerator's current batch of startups were using AI to write nearly all of their code.
"A year ago, they would have built their product from scratch, but now 95 percent of it is built by AI," he said.
Overpreparing is better than underpreparing.
In the spirit of epistemic humility, I should say that I, and many others, could be wrong about our timelines.
Maybe AI progress will hit a bottleneck we weren't expecting: an energy shortage that prevents AI companies from building bigger data centers, or limited access to the powerful chips used to train AI models. Maybe today's model architectures and training techniques can't take us all the way to AGI, and more breakthroughs are needed.
But even if AGI arrives a decade later than I expect, in 2036 rather than 2026, I believe we should start preparing for it now.
Most of the advice I've heard about how to prepare for AGI boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for AI-designed drugs, writing regulations to prevent the most serious AI harms, and teaching AI literacy in schools. These are all sensible ideas, with or without AGI.
Some in the tech industry worry that premature fears about AGI will cause us to regulate AI too aggressively. But the Trump administration has signaled that it wants to speed up AI development, not slow it down. And enough money is being spent to create the next generation of AI models (hundreds of billions of dollars, with more on the way) that it seems unlikely the leading AI companies will pump the brakes voluntarily.
I don't worry about people overpreparing for AGI, either. The bigger risk, I think, is that most people won't realize that powerful AI is here until it is staring them in the face: eliminating their jobs, ensnaring them in scams, harming them or their loved ones. This is, roughly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change.
That's why I believe in taking the possibility of AGI seriously, even if we don't know exactly when it will arrive or exactly what form it will take.
If we're in denial, or simply not paying attention, we could lose the chance to shape this technology when it matters most.