Happy Monday and welcome to your weekly snapshot from the cutting edge of unexpected repositioning efforts.
The past few weeks have been about presenting the AI masterclass to all sorts of audiences, with a surprisingly positive response.
I haven't figured out how best to scale the ideas in the presentation, as hour-long lectures don't exactly resonate on YouTube. It is difficult to focus on anything for a full hour; maybe that kind of sustained attention is something we increasingly reserve for live meetings? Or is the conversation around the ideas the valuable bit?
If you have an enthusiastic audience looking to upgrade their AI skills, please reach out. I am happy to present the ideas to an eager public anywhere, and I want the framework to scale to benefit as many people as possible.
Until next week,
MZ
A bit of phenomenology to start the week (2h30)
Nora Belrose on how AI models learn from simple to complex patterns, the challenges of erasing biases while preserving functionality, and the philosophical connections between meaning, consciousness, and the material world.
When models just become really big and complex, they become inscrutable monsters, and all of our efforts get resisted because they always find a way to do what they want to do.
Future artificially intelligent relationships (25 min)
Primer on AI companions by Emily Chang on Bloomberg.
Pioneering Pathways to AGI and Beyond (35 min)
The Lightcone podcast is consistently insightful on where the industry is heading. This edition is no exception.
Don't miss Diode, the startup enabling AI-generated printed circuit boards. Generative circuits might seem uninteresting on the surface, but consider what happens when software and hardware start self-improving. It's remarkable how o1 enables whole new categories of possibility.
Diana Hu runs circles around the others IMO. She's on a whole other level.
If the scaling laws hold, far more difficult engineering challenges, such as room-temperature fusion, could become solvable.
Dwarkesh Patel interview with the anonymous Gwern (90 min)
Gwern shares their views on the perks of anonymity, automation, and intelligence as "search."
I maximize rabbit holes.
Agentic Reasoning (25 min)
This is an excellent keynote about SOTA agentic workflows from the venerable Andrew Ng. So many insights. "AI is like electricity" and his framework for the AI tech stack are spot on.
Figure 02 autonomous robot fleet at BMW factory (90 sec)
4x speed increase, 7x reliability increase.
AI for science (55 min)
If you are curious about how AI is shaping science, don't miss this panel by Google DeepMind.
AI models work together faster when they speak their own language
Droidspeak on New Scientist via Chubby on Twitter.
Letting AI models communicate with each other in their internal mathematical language, rather than translating back and forth to English, could accelerate their task-solving abilities.
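To make the idea concrete, here is a toy sketch (plain NumPy, hypothetical dimensions and stand-in "models", not the actual DroidSpeak method): instead of model A decoding its state into English for model B to re-encode, B consumes A's hidden vector directly and skips the lossy round trip.

```python
import numpy as np

HIDDEN = 64  # hypothetical hidden-state size shared by both toy "models"

def model_a_encode(prompt: str) -> np.ndarray:
    """Toy stand-in for model A: map text to an internal hidden vector."""
    seed = sum(map(ord, prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(HIDDEN)

def decode_to_english(hidden: np.ndarray) -> str:
    """Lossy, slow step the text-based pipeline has to pay for."""
    return " ".join(f"tok{int(abs(x) * 10) % 100}" for x in hidden[:8])

def model_b_from_text(text: str) -> np.ndarray:
    """Model B re-encodes the English back into its own hidden space."""
    seed = sum(map(ord, text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(HIDDEN)

def model_b_from_hidden(hidden: np.ndarray) -> np.ndarray:
    """Direct handoff: consume A's hidden state, no decode/re-encode round trip."""
    return hidden

hidden_a = model_a_encode("summarise this contract")

# Text pipeline: two extra conversions, and the result drifts from A's state.
via_text = model_b_from_text(decode_to_english(hidden_a))

# Direct pipeline: B starts exactly where A left off.
via_hidden = model_b_from_hidden(hidden_a)

print("drift via English :", np.linalg.norm(hidden_a - via_text))
print("drift via hidden  :", np.linalg.norm(hidden_a - via_hidden))
```

The point of the toy: the text path pays for two conversions and loses fidelity along the way, while the hidden-state handoff is exact, which is the intuition behind the reported speedups.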
Facebook never disappoints (60 sec)
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.