Happy Monday and welcome to your weekly countdown to foom & gloom.
Of all the issues we face in a post-AI society, the one standing out to me as most concerning is the generational skill gap we are setting ourselves up for.
Consider the complexity of all our built systems, and ask yourself what would happen if we could no longer maintain them. We all rely on tens of thousands of interdependent systems to keep our societies functioning. Some are local, some national, some international. From our electric grids to banking to the universities preparing the next generation, we rely on a staggering number of hyperobjects which together enable our collective affordances and expectations as a society.
Among these systems, one stands out not only for its complexity but for its unparalleled rate of change: AI. Unlike traditional infrastructures, which evolve incrementally, AI’s development is driven by exponential advances in computing power, investment, and societal expectations. The compounding effect of trillions of dollars and peak ambitions accelerates this transformation, reinforcing the narrative that AI will soon redefine the fabric of our world.
It’s easy to overlook what might happen if we lose the ability to advance or even maintain these systems. We share an implicit assumption that there will be an increasingly qualified workforce interested in and ready to pick up the research and development after we are gone. Assumptions can be misleading. Maybe I should not have re-watched Idiocracy before the elections, but there is a very real scenario where future generations are unprepared or even uninterested in carrying the torch of scientific progress, and instead fall into the trap of increased dependence without a proportional degree of involvement.
The AIs would seem as gods to them.
Future generations might inherit AI systems from us, perceiving them as mystical tools operated through incomprehensible means. The AIs would run our society according to their own will, leaving only enough wiggle room to keep us alive and entertained.
I am not convinced this is avoidable. I was born in 1982, and as a digital native I consider technology a second language. I was brought up with explainable systems which could be safely disassembled and reassembled. The enclosed systems kids today are growing up with don’t necessarily create the same sense of ownership. Instead of empowering us, our current tablets and smartphones set a completely different set of expectations: that technology is neat, contained and available off the shelf. I hear from professors, parents and IT professionals that Gen Z and Alpha are simply not tech savvy in a way that we would recognize. While digitally native, they struggle with complex systems and fundamentally don’t understand abstractions like file systems – arguably because these have been polished away by operating systems like iOS.
Maybe this is a ridiculous extrapolation on my part, and maybe I am missing the forest for the trees, and maybe we will all be impressed by the acumen and preparedness of the generations set to inherit our world. Let’s hope so.
Until next week,
MZ
Exploring the Future of AI with Sam Altman (20 min)
In this interview with Phil Edwards, Sam Altman goes deep into the transformative impact of AI on society, addressing both its benefits and its ethical considerations while emphasizing the importance of responsible AI development.
Keynote: AI's Impact On Industries (30 min)
In this recent keynote from Slush, Benedict Evans, a renowned technology analyst, examines AI’s profound influence on various sectors, highlighting its role in automating complex tasks and reshaping traditional business models. He emphasizes the necessity for companies to adapt to this rapidly evolving landscape to maintain competitiveness.
AI is not just another tool; it's a change in the fundamental structure of how industries operate.
The Future of AI in the Workplace (17 min)
Hosted by FT journalist Isabelle Baric, this discussion unpacks the collision of AI’s potential with its uneven adoption in the workplace. Leaders are racing to implement AI tools but are overlooking workforce preparedness: 43% of employees report no guidance on AI use. Salesforce research highlights stark divides—“maximalists” embrace AI eagerly, while “observers” take a wait-and-see stance. Companies investing in AI without addressing trust, clarity, and training risk inefficiencies. An interesting highlight: a Slack survey revealed that managers fostering trust double AI adoption rates among employees. This underscores that technology adoption isn’t merely technical but deeply human.
CEOs have bought Ferraris in AI systems, but haven’t given their staff any driving lessons.
AI Engineers Navigating High Demand (10 min)
In this CNBC International report, AI engineers discuss the complexities of their roles, highlighting the necessity for continuous learning and adaptability in a rapidly evolving field. They emphasize the importance of ethical considerations in AI development and the challenges of balancing innovation with responsible practices.
One report from Goldman Sachs in 2023 estimated 300 million full-time jobs could be lost to automation.
Future Risks and Opportunities (35 min)
In this interview, Eric Schmidt, former CEO of Google, discusses the dual-edged nature of AI’s rapid advancement. Schmidt underscores the importance of international collaboration to establish ethical standards and prevent misuse. He also touches on the societal implications of AI, including job displacement and the need for educational reforms to prepare the workforce for an AI-integrated future.
The challenge is to ensure that AI serves humanity’s best interests, which requires a combination of regulation, innovation, and ethical considerations.
AI Rap (3 min)
Inspired by this ChatGPT + Suno collab, here is my own rap: a song written by ChatGPT about AI & ML. Surprisingly listenable as far as content goes.
Inside NotebookLM (45 min)
Great podcast with the creators of NotebookLM, with insights into building an AI-native application.
Interestingness is controlled surprise.
Future of AI Coaching (30 min)
GPTCoach by Stanford: coaching methodology plus LLMs with loads of personal context. Super interesting from a product perspective.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.
More than enough humans are convinced that the pyramids of Giza were created by aliens or alien technology.
Why?
Because culturally we presume that human social intelligence is always compounding, never forgetting or losing anything. So we invent alien theories to assuage the cognitive dissonance between modern and ancient Egyptians.
Meanwhile, our human sense of direction and wayfinding is already eroding beneath our feet thanks to our dependence on GPS maps.