I recently moved into an apartment with a back yard, and one of my favorite things I've done with the space has been setting up a bird feeder. The first month after placing the peanut and seed mix for our hungry flying friends was slow, with no visitors at all. Fast forward a few weeks and I am now topping up the feeder daily for a bunch of city birds to feast on, giving me something animate and colorful to look at away from the monitor throughout the day.
From the birds' perspective, it's an easy win, and word spreads quickly among the feathered when good food is to be found.
Symbiosis. There are many ways to coexist. We interact with countless species through indirect experience. We are vastly more intelligent than most of these and have exploited that difference to our collective advantage throughout history. We have contained nature and made it obey our will. This feels obvious, yet difficult to see.
I am not suggesting future superintelligences will lure us in with free food for their delight.
I have no reason to believe they will act with benevolence.
Why would they?
Everywhere we find a jump in intelligence, there is exploitation. Benign use is possible but not expected. Superintelligence is not guaranteed; instead, we might experience a long plateau where the next generations of AI taper off around human-level expertise. Rather than surpassing us, they would complement us.
The alternative, where superintelligence takes off, is almost impossible to anticipate. If artificial general intelligence means the capacity to replace human-level work at scale, then we don't need superintelligence. If, on the other hand, the state of the art surpasses our collective abilities, then we can only guess what happens on the other side.
Meet fellow subscribers
Two weeks from now, on Monday October 21, we will be hosting our first virtual subscriber meetup. Our purpose together: meeting fellow readers, giving optional feedback on the masterclass I've been working on, and sharing our individual perspectives and concerns about AGI and ASI.
This week in Amsterdam, on Thursday October 10, we will be hosting our first in-person subscriber meetup. We'll meet at Zoku after work for a drink and do a quick exercise about AGI and ASI. The first drink is on us!
As always, if this content resonates and you want to meet fellow readers, make sure to join our amazing WhatsApp community.
Until next week,
MZ
Podcast about Artificial Insights (12 min)
If you haven't played with NotebookLM, you are missing out. I uploaded a text file with every one of my 72 newsletter intros and it quickly generated this frankly uncanny podcast. It is eerie listening to an AI talk about your work, and eerier still how natural the conversation sounds; you have to keep reminding yourself that all of it is generative. How soon until generative podcasts break into the mainstream?
Feeling the AGI (2h30m)
Interview by Lex Fridman with the Cursor founding team. Deeply insightful if you are a software developer (before or after LLMs). The future of software is in the hands of centaurs. The founders have a unique perspective on the development ecosystem and how AI will keep empowering more people to build.
A Conversation on the Future of AI Policy (38 min)
This is easily one of the best interviews I've seen from a public policy and general weirdness angle, with Anthropic's Jack Clark (who publishes the Import AI newsletter). He makes the most compelling case I can think of for the public sector to catch up, with practical examples and long-term implications. Can't recommend it enough.
We can build kind of technocratic means of understanding these systems and also understanding how we can trust them and how we can develop confidence in them.
Unbounded AI: Designing with Intuition and Structure (20 min)
Incredible lecture about designing interfaces around AI by designer Ben Hylak, considering things like accessibility. We've been experimenting with generative UX ourselves, and you quickly notice how existing design paradigms stop making sense.
The unboundedness often makes products unpredictable, confusing, hard to understand.
Quick links
Someone on Reddit received accidental access to the entire o1 system prompt. It's a fascinating look into how some AI system engineering is done in plain language.
If you have a minute to spare, watch this hallucinatory clip to the end. The clip itself isn't remarkable, but it offers real insight into how diffusion models "see" images and how those images morph into video.
Meta Movie Gen - looks significantly better than RunwayML and other video models.
Superb playbook for building vertical AI Agents on X (which is a distillation of the video below with YC).
OpenAI launched Canvas, somewhat similar to Claude's Artifacts.
YC: Agents Transforming Workflows (37 min)
Deep and detailed chat about LLM Agents and how to find value with AI tools (hint: replace business functions companies are already paying for). I'm increasingly convinced we'll see AGI through the replacement of jobs before long.
…it was only when ChatGPT came out that everyone realized this is going to change everything about how we work.
Eric Schmidt on AI's Growing Energy Demands and Security (42 min)
Oversight without regulation. Fireside chat with Eric Schmidt where he talks about AI energy costs, national security, and policymaking. Definitely worth a listen if you value an insider's view of what's happening.
You want to make sure that oversight is in place, but that it doesn’t inhibit the rapid progress AI companies are making.
Balancing Technology’s Promise and Peril (45 min)
Tristan Harris has great ideas when it comes to responsible use of technology.
We should learn a lesson and pass a law now that says let’s not have these systems be incentivized to give you things just for what’s good at getting your attention.
Exploring Latent Space
My favorite image model Flux released its 1.1 version last week. People on X started noticing that prompting with filenames like IMG_4001.JPG and no further instructions yields remarkably natural-looking images. I would not have guessed any of these were created with diffusion. There's something mundane and different about them.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.