Tech Titans Predict Artificial General Intelligence Is Near — But Experts Urge Caution

The race to artificial general intelligence (A.G.I.) — machines that can think, reason, and create like humans — is intensifying, with prominent tech leaders predicting its arrival within years, if not months. Yet, while the hype escalates, many leading researchers warn that these bold claims are outpacing scientific reality.
OpenAI CEO Sam Altman recently told President Donald Trump during a private phone call that A.G.I. could arrive before the end of the presidential term. His counterpart at Anthropic, Dario Amodei, echoed a similar timeline publicly, suggesting the breakthrough could come even sooner. Billionaire entrepreneur Elon Musk went a step further, saying A.G.I. might emerge by the end of this year.
These statements reflect a growing belief in Silicon Valley that humanity is on the cusp of creating machines with human-level intelligence. But beneath the surface, the path to such a revolutionary breakthrough is far from clear.
What Is A.G.I.?
A.G.I. is not a formally defined scientific concept, but rather a term representing a vision: machines that can understand, learn, and act across a wide range of intellectual tasks, much like a human. This goal has long captivated technologists, but despite dramatic progress in artificial intelligence (A.I.) systems — like chatbots and image generators — experts caution that today’s tools remain narrow and task-specific.
Executives like Altman, Amodei, and Musk have spearheaded the development of tools like ChatGPT and Claude, which use massive neural networks to analyze patterns in text and images, creating outputs that often mimic human responses.
But Nick Frosst, a co-founder of the A.I. firm Cohere and former Google researcher, argues these systems are still fundamentally limited. “We’re building systems that predict the next word or pixel. That’s not how human intelligence works,” he said.
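For readers who want a concrete, if drastically simplified, sense of what Frosst means by "predicting the next word," the toy sketch below simply counts which word most often follows another in a tiny sample text and guesses that word. It is an illustration only; production systems like ChatGPT and Claude use large neural networks, and none of the words or numbers here come from those systems.

```python
# Toy illustration of next-word prediction, the mechanism Frosst describes.
# Real systems use large neural networks; this sketch just counts word pairs.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the sample text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat", the most common continuation here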
In a recent survey by the Association for the Advancement of Artificial Intelligence (AAAI), over 75% of respondents said current A.I. methods are unlikely to lead to true A.G.I.
Even Steven Pinker, a renowned cognitive scientist at Harvard, expressed skepticism: “There’s no such thing as an automatic, omniscient solver of every problem. These systems are not miracles; they are very impressive gadgets.”

Optimists like Jared Kaplan, chief scientist at Anthropic, believe continued scaling, feeding more data and computing power into A.I. systems, will bridge the gap. Kaplan is known for formulating the “Scaling Laws,” a framework suggesting A.I. systems improve steadily with more training data and model size.
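As a rough illustration of the kind of relationship the Scaling Laws describe, researchers often model a system's error (its "loss") as falling smoothly, in a power-law fashion, as the model's parameter count and training data grow. The form below is a common schematic approximation, not a specific published result from Anthropic or OpenAI, and the constants in it are placeholders.

```latex
% Schematic power-law form of neural scaling laws (constants are placeholders):
% loss L falls smoothly as parameter count N and dataset size D grow.
L(N, D) \approx \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{D_c}{D}\right)^{\alpha_D}
```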
These principles helped birth tools like ChatGPT. But as data sources run dry, developers are turning to reinforcement learning, where A.I. improves through trial and error — a method behind AlphaGo, the system that stunned the world in 2016 by defeating a human Go champion.
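To make "trial and error" concrete, the minimal sketch below shows a classic toy version of reinforcement learning: an agent repeatedly chooses between two slot machines, observes whether it was rewarded, and gradually learns which one pays off more. The payout numbers are invented for illustration; systems like AlphaGo combine these ideas with deep networks and game-tree search at vastly larger scale.

```python
# Minimal sketch of trial-and-error (reinforcement) learning: an epsilon-greedy
# agent learns which of two slot machines pays off more, purely from feedback.
import random

true_payout = {"A": 0.3, "B": 0.7}   # hidden reward probabilities (made up)
estimates = {"A": 0.0, "B": 0.0}     # the agent's learned value estimates
counts = {"A": 0, "B": 0}
epsilon = 0.1                        # how often to explore at random

for step in range(5000):
    if random.random() < epsilon:
        arm = random.choice(["A", "B"])          # explore
    else:
        arm = max(estimates, key=estimates.get)  # exploit the current best guess
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Update the running-average estimate with the observed reward.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the estimates should approach the hidden payout rates
```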
Still, applying these techniques to messy, unpredictable real-world scenarios is vastly harder than mastering board games. “We can’t model the entire physical world,” one expert noted, casting doubt on near-term superintelligence.
Human Intelligence: More Than Math
While A.I. now outperforms humans on some math and coding benchmarks, Josh Tenenbaum of MIT emphasizes that intelligence also includes interaction with the physical world: flipping pancakes, driving cars, and navigating relationships.
Creative expression and ethical reasoning also pose major challenges. OpenAI’s Altman recently claimed his newest model had impressed him with its “creative writing,” but experts note such qualities are difficult to measure or replicate algorithmically.
The dream of intelligent machines is not new; it dates back to ancient myths and endures in pop culture through films like 2001: A Space Odyssey. That dream motivates researchers like Yann LeCun, Meta’s top A.I. scientist and a Turing Award winner, who is searching for the missing piece: the next paradigm of A.I. that could take us closer to A.G.I.
But even LeCun tempers his hopes: “It may not happen in the next 10 years. At this point, we can’t tell.”
Despite the techno-optimism from Silicon Valley’s most powerful voices, many experts agree: achieving A.G.I. will require not just more data or faster chips, but entirely new scientific insights — and those could take decades.
Until then, machines may write poems, code software, and summarize news, but the totality of human thought, with its empathy, spontaneity, creativity, and morality, remains, for now, uniquely human.