Big Bets On Which Of These Pathways Will Push Today’s AI To Become Prized AGI

By: bitcoin ethereum news | 2025/05/04 16:00:04
Laying out the pathways from today's conventional AI to the vaunted AGI.

In today's column, I examine the most likely pathways to get us from today's contemporary AI to the vaunted AGI (artificial general intelligence). This is a mighty big open question, and AI makers and humongous tech firms are all making bets on which path will be the winner-winner chicken dinner when it comes to attaining AGI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. A great deal of research is underway to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the outstretched possibility of artificial superintelligence (ASI). AGI is AI considered on par with human intellect, seemingly able to match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI could run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved only decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale given where we currently stand with conventional AI.

Strawman Dates On Attaining AGI

Since attaining AGI seems a greater near-term prospect than achieving ASI, let's put our minds to foreseeing how AGI might be reached.
I will use some strawman dates to help illuminate this murky matter. Recent surveys of AI specialists suggest a consensus guess that AGI will be accomplished by the year 2040. Numerous AI luminaries tout that we will arrive at AGI sooner, such as within the next 3 to 5 years, staking their brazen claims on the years 2028 to 2030. I find this doubtful. They are also using Jedi mind tricks to twist the definition of AGI into something far less than what AGI is really supposed to denote, which helps bolster their emboldened date forecasts. For my analysis of the various predicted dates and assorted definitions of AGI, see the link here.

The strawman we will use here is the year 2040. That gives us a runway of 15 years, and it is useful to put some thought into how those fifteen years are going to play out.

Timeline Considerations

As you well know, we are currently sitting just about midway through the year 2025. Trying to envision arriving at AGI in the year 2040 seems a daunting task, quite a long distance in time from our present-day AI status. No worries; we will take a divide-and-conquer approach to see what we can come up with.

One possibility is that advances in AI occur smoothly on a year-by-year basis, ultimately culminating in AGI. Assume that each year brings an incremental advancement of roughly the same size. In other words, if we improve AI by about 7% per year over roughly 15 years, AGI becomes a reality by 2040 (I'm using rounded numbers for this thought exercise).

Some AI prognosticators believe that simply incrementing AI each year is not the ticket to success. Their view is that the current methodologies and practices are not going to scale up.
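That incremental thought exercise is just simple arithmetic. A toy sketch, assuming progress is measured as the fraction of a remaining capability gap closed each year, with the rounded 7% and 15-year figures from above:

```python
# Toy "linear path" arithmetic: close roughly 7% of the remaining-to-AGI
# capability gap per year for 15 years (rounded strawman numbers).
YEARS = 15
ANNUAL_GAIN = 0.07  # assumed: fraction of the gap closed each year

total_progress = YEARS * ANNUAL_GAIN
print(f"Gap closed after {YEARS} years: {total_progress:.0%}")  # -> 105%
```

Fifteen equal steps of about 7% sum to a touch over 100% of the gap, which is all the linear view requires.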
The concern is that nearly everyone in AI is part of a massive one-size-fits-all mindset, blindly pursuing the same kinds of algorithms and approaches. Only by breaking free of this malaise and coming up with radically new ideas will AGI be attained. For more on this heated debate over AI progression, see my coverage at the link here.

The Bet On A Miracle

Here's what vocal critics of the incremental approach say could happen. Their hope is pinned on the idea that an enterprising AI developer will miraculously see beyond the bounds of existing AI and derive a groundbreaking new approach that no one has yet imagined. This breakthrough will be the Holy Grail that gets us to AGI. Shortly after this incredible innovation is invented or figured out, AGI will be right around the corner.

Consider how this gives a different perspective on the timeline. Maybe the incremental approach muddles along for a dozen years. Some progress is made, with ongoing self-congratulations, but AGI doesn't seem within view. Investors are getting perturbed and asking hard questions about when AGI will finally arrive. Boom, out of nowhere, an enterprising AI developer comes up with an incredible breakthrough, doing so around year 13 or 14. That breakthrough is then rapidly nurtured into becoming AGI.

In that scenario, a dozen years of modest incremental progress are suddenly punctuated by a new way of devising AI, after which the vaunted AGI is figured out in relatively short order. Variations on that timeline are roughly the same in the sense that, over the fifteen years, a sudden transformative eureka about AI puts AGI in the picture. Perhaps it happens in year 10 instead of year 13, or arises at the last moment in year 14.

A disconcerting problem with that timeline is that it bets on a kind of miracle occurring during the AGI pursuit.
You might have seen a popular cartoon of two scientists standing at a chalkboard filled with arcane equations, with a noticeable gap in the middle. One scientist asks the other what goes in that gap. The response: a miracle goes in that spot.

Seven Major Pathways

I've come up with seven major pathways by which today's AI could advance to become AGI. The first listed path is the incremental progression trail, which the AI industry tends to call the linear path; it is essentially slow and steady. The idea of a sudden miracle happening is usually dubbed the moonshot path. Besides those two avenues, there are five more. Here's my list of all seven major pathways for getting us from contemporary AI to the treasured AGI:

(1) Linear path (slow-and-steady): This AGI path captures the gradualist view, whereby AI advancement accumulates a step at a time via scaling, engineering, and iteration, ultimately arriving at AGI.

(2) S-curve path (plateau and resurgence): This AGI path reflects historical trends in the advancement of AI (e.g., early AI winters) and allows for leveling up via breakthroughs after stagnation.

(3) Hockey stick path (slow start, then rapid growth): This AGI path emphasizes the impact of a momentous inflection point that reimagines and redirects AI advancement, possibly arising via theorized emergent capabilities of AI.

(4) Rambling path (erratic fluctuations): This AGI path accounts for heightened uncertainty in advancing AI, including overhype-disillusionment cycles, and could be punctuated by externally impactful disruptions (technical, political, social).

(5) Moonshot path (sudden leap): This AGI path encompasses a radical and unanticipated discontinuity in the advancement of AI, such as the famed envisioned intelligence explosion or a similar grand convergence that spontaneously and nearly instantaneously arrives at AGI (for my in-depth discussion of the intelligence explosion, see the link here).
(6) Never-ending path (perpetual muddling): This AGI path represents the harshly skeptical view that AGI may be unreachable by humankind, though we keep trying anyway, plugging away with an enduring hope that AGI is around the next corner.

(7) Dead-end path (AGI can't be attained): This AGI path indicates the chance that humanity arrives at a dead end in the pursuit of AGI, which might be a temporary impasse or a permanent one such that AGI will never be attained no matter what we do.

You can apply those seven pathways to whatever timeline you favor. I used fifteen years, reaching AGI in 2040, as an illustrative example. It could be that 2050 is more likely, with the journey playing out over 25 years. If 2028 turns out to be the AGI arrival year, the pathway will be markedly compressed.

Placing Your Bets

How does a belief in one pathway over another shape the placing of your bets? If the linear path is where you are putting your poker chips, it would seem that all that needs to happen is to keep doing what is already being done. Keep the ship steady and presumably on course, and don't let anything distract from that direction. The sudden leap to AGI via the moonshot path, by contrast, would necessitate a maverick departure from current practice. Do whatever is feasible to think outside the prevailing box. Fund those wild and wide-eyed new ideas, nurture them along, and do not let the myopic pressures of others convince you otherwise. Similar strategies apply to each respective pathway.

I'm betting you are avidly curious as to which of the seven pathways is thought to be the most likely, and perhaps mildly interested in which is seen as the least likely. In talking with many of my fellow AI researchers, the casual and highly informal sense is that the S-curve is the most likely. This generally aligns with high-tech development curves.
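For intuition, the first three pathway shapes can be sketched as simple functions of time. This is a toy parameterization with all constants assumed; "progress" of 1.0 stands for AGI-level capability, pegged to year 15 of the strawman timeline:

```python
import math

def linear(t: float, years: float = 15) -> float:
    """Slow-and-steady: the same increment every year."""
    return t / years

def s_curve(t: float, years: float = 15, k: float = 0.8) -> float:
    """Plateau and resurgence: logistic growth around the midpoint."""
    return 1 / (1 + math.exp(-k * (t - years / 2)))

def hockey_stick(t: float, inflection: float = 10) -> float:
    """Near-flat start, then rapid growth after an inflection point."""
    return 0.02 * t if t < inflection else min(1.0, 0.2 + 0.4 * (t - inflection))

for name, f in [("linear", linear), ("s-curve", s_curve), ("hockey stick", hockey_stick)]:
    print(f"{name:>12}: year 5 = {f(5):.2f}, year 15 = {f(15):.2f}")
```

All three curves end in the same place by construction; what they disagree about is how much visible progress exists at year 5, which is exactly what the bets below hinge on.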
The S-curve view also abides by the belief that what we are doing now isn't going to scale up. During a plateau, some new change will nudge us forward and open the door to scaling up. It won't be a miracle breakthrough; instead, ingenuity and novelty will move the needle.

Which of the seven pathways suits your fancy?

As for the least likely pathway, the same ad hoc sampling of AI colleagues speculates that the moonshot won't be the rescuer that gets us to AGI. In their minds, the miracle cure gets worse odds than lightning striking you while a meteor lands on your head. Maybe this skepticism reflects a belief that what we know is what we know, and that there isn't something extraordinary we haven't yet devised.

I certainly don't want that sentiment to dampen AI innovators from stretching boundaries and trying outsized new ideas. Please keep your spirit strong, and do not let naysayers stop you from your heart's pursuit. As a famous American art historian remarked: "Miracles happen to those who believe in them." The same might happen with attaining AGI.

Source: https://www.forbes.com/sites/lanceeliot/2025/05/04/big-bets-on-which-of-these-pathways-will-push-todays-ai-to-become-prized-agi/

Debunking the AI Doomsday Myth: Why Establishment Inertia and the Software Wasteland Will Save Us

Original Title: Against Citrini7
Original Author: John Loeber, Researcher
Original Translation: Ismay, BlockBeats


Editor's Note: Citrini7's cyberpunk-themed AI doomsday prophecy has sparked widespread discussion across the internet. This article, however, offers a more pragmatic counter-perspective. Where Citrini envisions a digital tsunami instantly engulfing civilization, this author sees the resilient resistance of the human bureaucratic system, the profoundly flawed existing software ecosystem, and the long-overlooked cornerstone of heavy industry. It is a head-on clash between Silicon Valley fantasy and the iron law of reality, reminding us that the singularity may come, but it will never arrive overnight.


The following is the original content:


Renowned market commentator Citrini7 recently published a captivating and widely circulated AI doomsday novel. While he acknowledges that the probability of some scenes occurring is extremely low, as someone who has witnessed multiple economic collapse prophecies, I want to challenge his views and present a more deterministic and optimistic future.


Never Underestimate "Institutional Inertia"


In 2007, people thought that against the backdrop of "peak oil," the United States' geopolitical status had come to an end; in 2008, they believed the dollar system was on the brink of collapse; in 2014, everyone thought AMD and NVIDIA were done for. Then ChatGPT emerged, and people thought Google was toast... Yet every time, existing institutions with deep-rooted inertia have proven to be far more resilient than onlookers imagined.


When Citrini talks about the fear of institutional turnover and rapid workforce displacement, he writes, "Even in fields we think rely on interpersonal relationships, cracks are showing. Take the real estate industry, where buyers have tolerated 5%-6% commissions for decades due to the information asymmetry between brokers and consumers..."


Seeing this, I couldn't help but chuckle. People have been proclaiming the "death of the real estate agent" for 20 years now! It hardly requires superintelligence; Zillow, Redfin, or Opendoor would suffice. But this example proves the opposite of Citrini's point: although most people have long deemed this workforce obsolete, market inertia and regulatory capture have made real estate agents far more tenacious than anyone expected a decade ago.


I bought a house a few months ago. The transaction process mandated that we hire a real estate agent, with lofty justifications. My buyer's agent made about $50,000 on the transaction, while his actual work, filling out forms and coordinating between multiple parties, amounted to no more than 10 hours, something I could easily have handled myself. The market will eventually move toward efficiency and price labor fairly, but that will be a long process.


I deeply understand the ways of inertia and change management: I once founded and sold a company whose core business was driving insurance brokerages from "manual service" to "software-driven." The iron rule I learned is: human societies in the real world are extremely complex, and things always take longer than you imagine — even when you account for this rule. This doesn't mean that the world won't undergo drastic changes, but rather that change will be more gradual, allowing us time to respond and adapt.


The Software Industry Has "Infinite Demand" for Labor


Recently, the software sector has seen a downturn as investors worry that the backend systems of companies like Monday, Salesforce, and Asana lack moats, making them easily replicable. Citrini and others believe that AI programming heralds the end of SaaS companies: first, products become homogenized and profits go to zero; second, the jobs disappear.


But everyone overlooks one thing: the current state of these software products is simply terrible.


I'm qualified to say this because I've spent hundreds of thousands of dollars on Salesforce and Monday. Indeed, AI can enable competitors to replicate these products, but more importantly, AI can enable competitors to build better products. Stock price declines are not surprising: an industry relying on long-term lock-ins, lacking competitiveness, and filled with low-quality legacy incumbents is finally facing competition again.


From a broader perspective, almost all existing software is garbage, which is an undeniable fact. Every tool I've paid for is riddled with bugs; some software is so bad that I can't even pay for it (I've been unable to use Citibank's online transfer for the past three years); most web apps can't even get mobile and desktop responsiveness right; not a single product can fully deliver what you want. Silicon Valley darlings like Stripe and Linear only garner massive followings because they are not as disgustingly unusable as their competitors. If you ask a seasoned engineer, "Show me a truly perfect piece of software," all you'll get is prolonged silence and blank stares.


Here lies a profound truth: even as we approach a "software singularity," the human demand for software labor is nearly infinite. It's well known that the final few percentage points of perfection often require the most work. By this standard, almost every software product has at least a 100x improvement in complexity and features before reaching demand saturation.


I believe that most commentators who claim that the software industry is on the brink of extinction lack an intuitive understanding of software development. The software industry has been around for 50 years, and despite tremendous progress, it is always in a state of "not enough." As a programmer in 2020, my productivity matches that of hundreds of people in 1970, which is incredibly impressive leverage. However, there is still significant room for improvement. People underestimate the "Jevons Paradox": Efficiency improvements often lead to explosive growth in overall demand.
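The Jevons point can be illustrated with a constant-elasticity toy model. All numbers here are assumptions for illustration: a hypothetical 10x drop in the cost of a unit of software work, and a demand elasticity of 1.5. Whenever elasticity exceeds 1, cheaper software work means more total spending on software labor, not less:

```python
# Toy Jevons-paradox check with constant-elasticity demand:
# quantity demanded scales as cost^(-elasticity).
def quantity_demanded(cost_multiplier: float, elasticity: float) -> float:
    return cost_multiplier ** (-elasticity)

cost_after_ai = 0.1   # assumed: AI makes a unit of software work 10x cheaper
elasticity = 1.5      # assumed: demand for software is elastic (> 1)

quantity = quantity_demanded(cost_after_ai, elasticity)  # ~31.6x more built
total_spend = quantity * cost_after_ai                   # ~3.16x more spent
print(f"Software built: {quantity:.1f}x, total spend: {total_spend:.2f}x")
```

Under these assumed numbers, a 10x cost drop yields roughly 31.6x more software built and about 3.16x more total spending, which is the sense in which efficiency gains can grow, rather than shrink, the industry's labor absorption.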


This does not mean that software engineering is an invincible job, but the industry's ability to absorb labor and its inertia far exceed imagination. The saturation process will be very slow, giving us enough time to adapt.


Redemption of "Reindustrialization"


Of course, labor reallocation is inevitable, such as in the driving sector. As Citrini pointed out, many white-collar jobs will experience disruptions. For positions like real estate brokers that have long lost tangible value and rely solely on momentum for income, AI may be the final straw.


But our lifesaver lies in the fact that the United States has almost infinite potential and demand for reindustrialization. You may have heard of "reshoring," but it goes far beyond that. We have essentially lost the ability to manufacture the core building blocks of modern life: batteries, motors, small-scale semiconductors—the entire electricity supply chain is almost entirely dependent on overseas sources. What if there is a military conflict? What's even worse, did you know that China produces 90% of the world's synthetic ammonia? Once the supply is cut off, we can't even produce fertilizer and will face famine.


As long as you look to the physical world, you will find endless job opportunities that will benefit the country, create employment, and build essential infrastructure, all of which can receive bipartisan political support.


We have seen the economic and political winds shifting in this direction—discussions on reshoring, deep tech, and "American vitality." My prediction is that when AI impacts the white-collar sector, the path of least political resistance will be to fund large-scale reindustrialization, absorbing labor through a "giant employment project." Fortunately, the physical world does not have a "singularity"; it is constrained by friction.


We will rebuild bridges and roads. People will find that seeing tangible labor results is more fulfilling than spinning in the digital abstract world. The Salesforce senior product manager who lost a $180,000 salary may find a new job at the "California Seawater Desalination Plant" to end the 25-year drought. These facilities not only need to be built but also pursued with excellence and require long-term maintenance. As long as we are willing, the "Jevons Paradox" also applies to the physical world.


Towards Abundance


The goal of large-scale industrial engineering is abundance. The United States will once again achieve self-sufficiency, enabling large-scale, low-cost production. Moving beyond material scarcity is crucial: in the long run, if we do indeed lose a significant portion of white-collar jobs to AI, we must be able to maintain a high quality of life for the public. And as AI drives profit margins to zero, consumer goods will become extremely affordable, automatically fulfilling this objective.


My view is that different sectors of the economy will "take off" at different speeds, and the transformation in almost all areas will be slower than Citrini anticipates. To be clear, I am extremely bullish on AI and foresee a day when my own labor will be obsolete. But this will take time, and time gives us the opportunity to devise sound strategies.


At this point, preventing the kind of market collapse Citrini imagines is actually not difficult. The U.S. government's performance during the pandemic has demonstrated its proactive and decisive crisis response. If necessary, massive stimulus policies will quickly intervene. Although I am somewhat displeased by its inefficiency, that is not the focus. The focus is on safeguarding material prosperity in people's lives—a universal well-being that gives legitimacy to a nation and upholds the social contract, rather than stubbornly adhering to past accounting metrics or economic dogma.


If we can maintain sharpness and responsiveness in this slow but sure technological transformation, we will eventually emerge unscathed.


Source: Original Post Link

