The only thing that matters right now is how we handle the collective opportunity that is being presented to us in the form of artificial intelligence.
There is no question whether AGI will “actually happen” as, in my opinion, it is already well behind us. The whole of humanity no longer seriously challenges AI in terms of raw intelligence. There are countless jagged edges where we humans still have the upper hand, but these islands are shrinking, replaced by an ocean of non-human forms of intelligence with capabilities we can barely comprehend.
AGI means something different to everyone. In fact, the Microsoft–OpenAI alliance hinges on the very definition of whether AGI has been achieved (or not), and is on a collision course in 2025. Expect experts and lawyers to dissect what AGI is or isn’t, but make no mistake: in an unevenly distributed future, the places where we experience general forms of intelligence are already here and will only expand.
Every part of the economy that can shift toward automation will, in my opinion, do so. Most people haven’t realized the imminence of this shift, but joblessness will inevitably grow as margins are further squeezed away from the human part of businesses. Why hire for what you can automate?
So how do we deal with this crisis/opportunity?
What responsibility does each of us bear in managing such a transition? Do we oppose it or do we guide it? If we guide it, how do we do so in a way that benefits everyone? How do we even know what benefits others today, let alone in the distant future? What historical precedent exists where humanity achieved even a fraction of this? Would you trust an AI for help?
The only parallel I can think of is the invention of writing.
We have been writing for at least 5,000 years, and I find it nearly impossible to imagine a reality where we don’t read or write. The human societies left on earth today that practice only oral communication, with no writing, are few and far between, and seemingly share very little with the “writing world”.
I suspect AI will have a similar effect. It is nearly impossible to imagine what lies on the other side of the transition, and I have no idea if we can even prepare for it. Maybe being here is the best we can do.
Until next week,
Michell
* The title of this week’s newsletter…
…is a direct quote from this superb lecture and Q&A about AGI by philosopher superstar Joscha Bach at last week’s CCC conference in Berlin, Self Models of Loving Grace. Don’t miss it.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.