Is AGI already here—and are we just afraid to admit it?
- Yes—we've already passed the AGI line, but we keep redefining it. (17%)
- Almost—these systems feel general, but something's still missing. (37%)
- No—AGI must be autonomous, embodied, or conscious. We're not there. (41%)
- It doesn't matter—what we have now is disruptive enough. (2%)
- The term AGI is a distraction. Focus on outcomes, not labels. (2%)

(Poll: 50 participants)

I am truly surprised by the result. Of course we are there, by every definition of AGI we have held since John McCarthy defined the field in the 1950s. Seventy years later, we are there. And if we look to sci-fi, the interactions we now hold with AI systems are just as futuristic and advanced. If AGI means that a system can do any intellectual job that a human can, and in some cases better and faster, then yes, yes and yes.
Perhaps we keep redefining it because admitting we are there would force us to question the fundamental humanist premise that a soul and a body made in the image of… you know the rest… are a requirement for progress in science and the humanities. It is, after all, called the humanities. As just one example, Google DeepMind's AlphaFold 3 didn't just predict protein structures in a fraction of the time it would have taken human researchers; it fundamentally changed the direction of that research. That's not a machine. That's a cognitive entity participating in the discourse.
We must be careful not to anthropomorphize AI, for sure. But we must be equally careful not to mechanomorphize AI systems that are at the AGI event horizon, maybe just before it, or maybe just past it.
But consciousness, autonomy, embodiment? It is highly speculative to project such traits of our mammalian species onto the markers and behaviours of generally intelligent agents. Is there an argument against AGI that depends on what a system must do, rather than on what it must be?
I work in a scientific organisation where models are run on a distributed compute network made of humans. But they are still models, and a model is only as good as its utility. Run the model on silicon and you may get different results, but the process is general intelligence, whether artificial or human. I have heard the arguments that quibble over prediction versus creativity, or emotion versus calculation. Oxytocin loops. I love them. But I don't find them necessary for general intelligence, and arguments to the contrary seem like equivocation.
Prove to me that GPT has no soul. Prove to me that GPT has no emerging consciousness. You can't; that's faith, metaphysics and philosophy. But we can watch self-driving EVs replace human drivers. That's general intelligence.
Usual disclaimer: my opinions are my own and do not necessarily reflect those of my employer, and no part of this post was generated by AI.