Sam Altman kicked off 2025 with a bold statement: OpenAI has figured out how to create artificial general intelligence (AGI), a term commonly understood as the point where AI systems can understand, learn, and perform any intellectual task a human can.
In a reflective blog post published over the weekend, Altman also said the first wave of AI agents could join the workforce this year, marking what he describes as a pivotal moment in technological history.
Altman painted a picture of OpenAI’s journey from a quiet research lab to a company that claims to be on the verge of creating AGI.
The timeline seems ambitious, perhaps too ambitious: ChatGPT celebrated its second birthday barely a month ago, yet Altman suggests that the next paradigm of AI models, ones capable of complex reasoning, is already here.
From there, it’s all about integrating near-human AI into society until AI beats us at everything.
Wen AGI, Wen ASI?
Altman has remained vague about what AGI actually implies, and his timeline predictions have raised eyebrows among AI researchers and industry veterans.
“We are now confident we know how to build AGI as we have traditionally understood it,” Altman wrote. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
Part of the vagueness stems from the fact that there is no standardized definition of AGI. As AI models have grown more powerful without becoming truly general, the bar for what counts as AGI has been raised higher and higher.
“When considering what Altman said about AGI-level AI agents, it’s important to focus on how the definition of AGI has evolved,” Humayun Sheikh, CEO of Fetch.ai and chairman of the ASI Alliance, told Decrypt.
“While these systems may already pass many of the traditional benchmarks associated with AGI, such as the Turing Test, that does not mean they are sentient,” Sheikh said. “AGI has not yet reached the level of true consciousness, and I don’t believe it will for some time.”
The gap between Altman’s optimism and expert consensus raises questions about what “AGI” means here. His framing of AI agents “joining the workforce” in 2025 sounds more like advanced automation than true artificial general intelligence.
“Superintelligent tools could massively accelerate scientific discovery and innovation far beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” he wrote.
But is Altman right when he says AGI or agent integration will be a thing in 2025? Not everyone is so sure.
“There are simply too many bugs and inconsistencies with existing AI models that need to be worked out first,” Charles Wayn, co-founder of the decentralized super app Galxe, told Decrypt. “That means it’s probably a matter of years rather than decades before we see AGI-level AI agents.”
Some experts question Altman’s bold predictions, suggesting they may serve another purpose.
OpenAI has, after all, been burning through cash at an astronomical rate, requiring massive investment to keep its AI development on track. Promising imminent breakthroughs could help keep investors interested despite the company’s significant operating costs, some argue.
“We’re now confident that we can spin nonsense at an unprecedented level and get away with it, so we’re now trying to aim beyond hype in the purest sense of the word. We love our products, but we’re here for the glorious next round of funding. With unlimited funding, we… https://t.co/cH9xN5oJxK
— Gary Marcus (@GaryMarcus) January 6, 2025
That’s quite an asterisk for someone who claims to be on the verge of one of humanity’s most significant technological breakthroughs.
Still, others support Altman’s claims.
“If Sam Altman says AGI is coming soon, then he probably has some data or business acumen to back that claim up,” Harrison Seletsky, director of business development at digital identity platform SPACE ID, told Decrypt.
If Altman’s claims are true and the technology keeps evolving at the same pace, Seletsky said, “generally intelligent” AI agents may be only a year or two away.
The OpenAI CEO has indicated that AGI is not enough for him, and that his company is aiming for ASI, or artificial superintelligence: a stage of AI development in which models exceed human capabilities across all tasks.
“We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else,” Altman wrote in the blog post.
While Altman did not give a time frame for ASI, some forecasters don’t expect machines to be able to replace all humans until the year 2116.
Altman previously said that ASI was only a matter of “several thousand days” away, yet experts from the Forecasting Institute put only a 50% probability on ASI being achieved by 2060.
Knowing how to achieve AGI is not the same as being able to achieve it.
Yann LeCun, Meta’s chief AI scientist, said humanity is still far from reaching such a milestone, citing limitations in both training techniques and the hardware needed to process such vast amounts of information.
I said that reaching human-level AI “will take years, if not decades.” Sam Altman says “several thousand days,” which is at least 2,000 days (6 years) or maybe 3,000 days (9 years). So we are not in disagreement. But I think the distribution has a long tail: it could take… https://t.co/EZmuuWyeWz
— Yann LeCun (@ylecun) October 16, 2024
Eliezer Yudkowsky, an influential AI researcher and philosopher, also argued that this may be a hype play that primarily benefits OpenAI in the short term.
OpenAI benefits both from short-term hype and from people later saying, “Ha ha, look at this field based on hype that didn’t deliver, it’s not dangerous, there’s no need to shut down OpenAI.” https://t.co/ybkh9DGUm5
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) January 5, 2025
Human workers vs AI agents
Unlike AGI or ASI, agentic behavior is already here, and the quality and versatility of AI agents are growing faster than many expect.
Frameworks such as CrewAI, AutoGen, and LangChain have made it possible to build AI agent systems with a wide range of capabilities, including the ability to work hand in hand with users.
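To make “agentic behavior” concrete, here is a minimal sketch of the core loop these frameworks implement: a model decides between calling a tool and giving a final answer, the runtime executes the tool, and the result is fed back until the task is done. This is plain illustrative Python, not CrewAI’s, AutoGen’s, or LangChain’s actual API; the `call_llm` stub and `TOOLS` registry are placeholders for a hosted model and real integrations.

```python
# Minimal sketch of the "agent loop" pattern agent frameworks implement.
# All names here (call_llm, TOOLS, run_agent) are illustrative placeholders;
# a real agent would replace the stub below with a hosted LLM call.

def calculator(expression: str) -> str:
    """A trivial 'tool' the agent can invoke."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def call_llm(history: list[dict]) -> dict:
    """Stub standing in for a real model call. A real implementation would
    send `history` to an LLM and parse its reply; here we fake one tool call
    followed by a final answer so the loop runs end to end."""
    if not any(m["role"] == "tool" for m in history):
        return {"action": "tool", "tool": "calculator", "input": "6 * 7"}
    return {"action": "final", "answer": history[-1]["content"]}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Ask the model what to do, run the chosen tool, feed the result
    back, and stop when the model produces a final answer."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # step cap so the agent can't loop forever
        decision = call_llm(history)
        if decision["action"] == "final":
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"

print(run_agent("What is 6 * 7?"))  # prints 42
```

Production frameworks layer planning, memory, and multi-agent orchestration on top, but this step-capped tool cycle is the common core, and the step cap hints at why human oversight still matters.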
What does this mean for the average Joe, and will it be a danger or a boon for ordinary workers?
Experts are not too worried.
“I don’t believe we’re going to see dramatic organizational changes overnight,” Fetch.ai’s Sheikh said. “While there may be some reduction in human capital, especially for repetitive tasks, these advancements could also tackle more sophisticated repetitive tasks that current automated systems cannot handle.”
Seletsky also thinks that agents are most likely to perform repetitive tasks instead of those that require some level of decision-making.
In other words, people are safe as long as they can use their creativity and expertise to their advantage, and take responsibility for the consequences of their actions.
“I don’t think decision-making will necessarily be driven by AI agents in the near future, because they can reason and analyze, but they don’t have that human ingenuity yet,” he told Decrypt.
And there seems to be some degree of consensus, at least in the short term.
“The key difference lies in the lack of ‘humanity’ in AGI’s approach. It’s an objective, data-driven approach to financial research and investing. This can help rather than hinder financial decisions because it removes some of the emotional biases that often lead to rash decisions,” said Galxe’s Wayn.
Experts are already aware of the possible social consequences of adopting AI agents.
Research from the City University of Hong Kong argues that generative AI and agents in general need to work with humans instead of replacing them if society is to achieve healthy, sustained growth.
“AI has created both challenges and opportunities in a variety of fields, including technology, business, education, healthcare, as well as the arts and humanities,” the research paper says. “Collaboration between AI and humans is key to solving the challenges and seizing the opportunities created by generative AI.”
Despite this push for human-AI collaboration, companies have begun replacing human workers with AI agents, with mixed results.
Generally speaking, companies still need a human in the loop to handle tasks that agents can’t, whether because of hallucinations, training limitations, or simply a lack of contextual understanding.
As of 2024, nearly 25% of CEOs were excited by the idea of having a farm of digitally enslaved agents doing the same work as humans without the labor costs.
Yet other experts argue that an AI agent could probably do almost 80% of what a CEO does, and do it better, so no one is really safe.
Edited by Sebastian Sinclair