Welcome to 2025!
One of the things I noticed during the quiet days of the holidays was how the response time from my AIs shortened considerably. It's not immediately noticeable with language models, as they respond quickly all year round. But as a heavy user of diffusion models, for example creating the images adorning this weekly publication, I watched response times drop from 10+ seconds to something like 2-3. That might not sound like much, but the experience of rapid interaction felt profound. When responses are slow, you have time to gather your thoughts and get distracted by something else; when they approach immediacy, our interaction with the tool changes because of the tightening feedback loop. The faster the response time, the higher the usability, reducing our cognitive load and improving the sense of flow with the machine. Working with people is natural, and the way we interact with machines will determine our entire experience with them.
The probable reason for the speed-up: fewer people using shared GPUs over the holidays means more computation is available for those of us who are terminally online. What's interesting about this ebb and flow of AI use is how few people have started deeply internalizing the shift in skills necessary to exploit an almost boundless intelligence in their favor.
Readers like you are experimenting with AI in your personal and professional lives and quickly finding out how the world works "after" AI. You might delve into language models for writing or research, and dabble with images, code and other media. This places you ahead of around 99.5% of the global population in terms of sheer exposure to the field. Despite all the hype, we represent an incredibly tiny population, with more leverage than you imagine.
For me, this means spending an inordinate amount of time "programming" by instructing Claude to incrementally build literally anything I can think of. It started with web apps, but the past two weeks have been a deep dive into iOS app development, something I never dreamt possible. Yet thanks to hundreds and hundreds of individual requests, I have managed to build a functioning prototype mobile app for something we've been exploring for the past year and a half: using carefully tuned AI to help people navigate their inner sense of purpose. After our web-based experiments, we repeatedly felt the concept would work better as a mobile-first experience. Having never developed mobile apps before, I quickly realized the only things holding me back were ignorance (of how capable LLMs are for software development) and fear (of failure). I'll have a LOT more to show about this app in the coming weeks and months.
Until next week,
MZ
P.S. I will be in São Paulo later this week for a quick trip. Would you be interested in a meetup (in Portuguese) next Monday at the end of the day somewhere around Av. Paulista? We can coordinate in the Brasil WhatsApp group and I'll announce the time & place here next week. Tchau!
Sam Altman just published a blog post with Reflections on the second birthday of ChatGPT and where we're probably going next.
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
Simon Willison organized a wonderful overview of everything we learned about LLMs in 2024. Don’t miss it.
I like people who are skeptical of this stuff. The hype has been deafening for more than two years now, and there are enormous quantities of snake oil and misinformation out there. A lot of very bad decisions are being made based on that hype. Being critical is a virtue.
If we want people with decision-making authority to make good decisions about how to apply these tools we first need to acknowledge that there ARE good applications, and then help explain how to put those into practice while avoiding the many unintuitive traps.
I stumbled upon this Reddit thread where folks are sharing all the quirky and inventive ways they're using ChatGPT. You should check it out.
Great 60-min podcast by Alison Gopnik and Ted Chiang comparing developing AI to raising kids. They break down how AI systems need guidance, structure, and feedback to "grow" into something useful and responsible—basically parenting (allegedly).
Excellent list of real-world GenAI applications from Google Cloud—they gathered 101 ways companies are using generative AI right now. It’s like a cheat sheet of how big names in healthcare, retail, and tech are transforming their work with AI.
Melanie Mitchell’s article dives into OpenAI’s recent leaps in abstract reasoning—the kind of stuff that makes you go, "Wait, what?". OpenAI’s new model is tackling logic puzzles and tricky concepts in ways that feel a bit closer to human thinking, probably inching us closer to AGI.
AI, Intuition and What’s Next (52 min)
I have shared other Dive Club sessions before, so don’t miss this one: Pran, a designer at Vercel, dives into how tools like v0 are flipping the script on design.
At the end of the day, nothing can replace having good taste and having good design intuition, no matter how good your LLM is.
From Starcraft to Superintelligence (51 min)
Hannah Fry and Oriol Vinyals (the guy behind the StarCraft-beating AI) catch up on how AI’s been leveling up. They go from gaming bots to Gemini, DeepMind’s latest brainiac model, breaking down how imitation learning, reinforcement learning, and slick architectures are rewriting the AI playbook. It’s all about getting closer to AIs that can think, reason, and maybe even make life a little easier for us humans—without losing sight of the gnarly challenges like memory, scaling, and personalization.
There’s a limit to how these models scale... they don’t fit on a single chip, so you have a mesh of chips communicating, and at some point, the efficiency drops.
The Future of Intelligence and Agency (1h07)
Michael Levin discusses intelligence as a spectrum, from simple cells to collective societies. He challenges old-school ideas of control, calling for humility in understanding emergent minds. There’s a focus on building systems that care—balancing individual autonomy with collective purpose—while rethinking what it means to create and coexist with radically different kinds of intelligence.
The deep questions here are not about these AIs at all... This is a question about how we recognize and ethically relate to other beings that are radically not like us.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.