5 Ways AGI Could Unfold — And How to Prepare Now

Are we about to achieve artificial general intelligence (AGI), or is it an impossible goal?

Define your perspective on artificial general intelligence

AGI doesn’t yet exist, but ask any group of experts to share their views on artificial general intelligence and you will likely hear strong, conflicting opinions. Some will say AGI is inevitable; others will claim it’s impossible.

This can be confusing for leaders. To help cut through the noise and give you ammunition to ask probing questions, we’ve laid out five perspectives on AGI: that it is imminent, impossible, unpredictable, irrelevant or a vision. Use them to navigate hype, address concerns and determine the right actions to take today.

5 potential futures for AGI

Artificial general intelligence (AGI) is the (currently hypothetical) capability of a machine to match or surpass human performance across all cognitive tasks. AGI would also be able to autonomously learn and adapt in pursuit of predetermined or novel goals across a wide range of physical and virtual environments.

Consider the following five perspectives on what’s next — and what each requires of your organization.

Perspective No. 1: AGI is coming

Those who subscribe to the idea of AGI as a goal humanity is working toward — and may be close to achieving — base that belief on the following real-world dynamics:

  • Progress: Creative outputs from GenAI models, coupled with rapid advances in multimodal models that combine language and vision, are driving fast progress in AI.

  • Investment: Investors have put billions of dollars into AI companies, even as smaller actors like DeepSeek achieve ambitious goals through more efficient methods.

  • Hope: AGI capabilities are considered critical for solving seemingly intractable problems, like climate change and water scarcity.

The benefit of viewing AGI as imminent lies in optimism about a big, aspirational goal. Organizations that adopt this perspective should look out for innovations and improvements that exceed expectations. Closely follow composite and neurosymbolic AI, as these could mark the path to AGI.

Perspective No. 2: AGI is impossible

Proponents of this perspective hold that real intelligence doesn’t exist in a vacuum, nor can it be simulated. Instead, it must be embodied in living beings and grounded in reality. Moreover, we shouldn’t underestimate the complexity of true intelligence; it most likely cannot be built.

The benefit of this belief is that it forces people to sharpen their understanding of intelligence and identify promising ways to overcome AI’s limitations, regardless of whether those innovations meet the definition of AGI. 

Those who adopt the “AGI is impossible” view should validate their perspective by looking for AI hype to die down, advances to slow and attention to shift to new approaches. At the same time, entertain the possibility that you are wrong and reassess your beliefs every so often.

Perspective No. 3: AGI is unpredictable

Those who see AGI as unpredictable argue that we do not know what it will look like because intelligence is a relative, subjective concept that is continually redefined and reshaped by context. As such, AGI could take one or several evolutionary paths that we cannot predict.

The appeal of this view lies in its openness to alternative, non-human forms of intelligence. Yet its agnosticism also makes it difficult to know when, or if, AGI has arrived; by that logic, it may already be here and we will only know in retrospect. That ambiguity makes this perspective hard to plan for or act on, so keep actively following discussion of the topic.

Perspective No. 4: AGI is irrelevant

Despite all the hype and investment surrounding AGI, some question whether achieving it is the right goal. In this view, AI systems that solve challenging problems within a single domain matter more than systems that promise cross-disciplinary capabilities.

Validation of this position will come if new AI issues beyond hallucinations and bias arise as a result of more cross-domain, continually evolving AGI-like systems. Similarly, an ever-lengthening timeline for achieving AGI could signal it’s the wrong path.

Perspective No. 5: AGI is a vision

Some artificial general intelligence enthusiasts argue that the point is not to make AGI a reality, but to treat it as a guiding ideal. Even if AGI is never reached, pursuing it could encourage investors and innovators to focus on developing solutions that address significant challenges and require generalist capabilities that exceed human capacity.

The benefit of this perspective is that it positions AGI as a progress accelerator that can solve problems beyond current human computational ability. Evidence that this position is taking hold will appear in the form of AGI’s inclusion in the mission statements of diverse companies and its integration into interdisciplinary projects.

Artificial general intelligence (AGI) FAQs

What is artificial general intelligence (AGI)?

Gartner defines artificial general intelligence (AGI) as the (currently hypothetical) capability of a machine to match or even surpass human performance across all cognitive tasks. In addition, artificial general intelligence will be able to autonomously learn and adapt to work toward goals in both physical and virtual environments.


When will we achieve AGI?

Leaders like OpenAI CEO Sam Altman say 2025; futurist Ray Kurzweil predicts it will be 2029. Others stress the number of breakthroughs that are still necessary to achieve and surpass human cognitive capabilities, and predict that achieving AGI is unlikely to occur before 2050 — and maybe not even by 2100.


What will AGI allow us to do?

AGI will allow us to solve problems that were previously unsolvable, particularly the complex and dynamic problems that we cannot address with conventional methods like advanced analytics and automation. Referred to as “wicked problems,” these challenges do not have clear boundaries, change according to context, lack a single solution and are often connected to deeply held beliefs or values. The new forms of intelligence that AGI reveals will be essential for tackling wicked problems.
