This newsletter began as an attempt to make sense of our transition toward AGI. I still believe ‘generality’ is next for AI, and that most of us will experience it in our lifetimes. Making sense of that transition involves learning, testing, and building things with AI.
Over time, I’m realizing that this weekly writing might become something else: a slow, structured attempt to document how I use AI. Not the tools themselves, but the mechanics of collaborating with different intelligences – how reasoning changes when shared, and what that reveals about our own thought.
Thinking with AI can be about outsourcing cognition, or about extending it. It’s a way of tracing how ideas form, fracture, and return clearer. By writing what I believe and watching it reflected back, I see my thinking as a living system: open, self-correcting, and occasionally surprising. What once stayed internal now becomes a shared exploration.
These observations are starting to form a sort of ‘field notes from a centaur’ – a collection of ways that AI is working itself into my life.
Each chapter explores a different dimension of that collaboration, starting with thinking, because everything else begins here: before creating, before deciding, before making meaning. These are not polished essays but field notes: records of actual interactions, co-created with AI, where AI helps me observe my own mind at work. They show what it’s like to reason with another intelligence, and what that process teaches about being human.
How I think with AI:
Concept Refinement: When a thought is still forming, I describe it roughly and let AI question, restate, and reframe. The back-and-forth peels away noise until the core idea stands on its own.
Strategic Thinking: Whenever I have a complex question to answer, I give AI as much contextual information as possible, outline my logic, and ask for counterpoints or blind spots. The friction of disagreement sharpens judgment and reduces assumptions.
Philosophical Correspondence: I often interrogate AI to better understand the open terrain of abstract questions — AI helps me map worldviews, contrast ideas, and debate paradoxes without forcing them into conclusions.
Comparative Analysis: By rephrasing a single decision through multiple perspectives, I can see how framing itself shapes outcome and bias. Sometimes a single added word, like “assume” or “doubt,” can tilt the whole conclusion, which helps me see bias as a movable part of reasoning rather than a fixed flaw.
Temporal Thinking: I feed observations about present signals and noted patterns into AI and ask how they might evolve. The process turns foresight into a discipline of language, using words to model change.
Each of these field notes is co-written with the same system that observes how I use it. I literally ask ChatGPT to analyse my queries and my newsletter posts – and reveal the patterns behind my reasoning. It’s a kind of mirrored cognition: half human, half machine, thinking together in public.
Until next week,
MZ
Talking to Trees (37 min)
Artist Manuel Axel Strain at Vancouver AI blends Indigenous knowledge & machine learning to decode plant communication, dream with flowers, & regenerate forests through song. Wild? Maybe. Necessary? Definitely.
AI is asking us to go to places we haven’t been. Indigenous knowledge is asking us to remember where we come from.
Your Only Moat Is Speed (45 min)
Everyone’s stressing about moats in AI—what if OpenAI just clones your startup tomorrow? The answer: move faster than anyone else. Moats come after you’ve built something people want. Until then? Speed is the only moat that matters.
A moat is inherently a defensive thing—and you have to have something to defend.
Boring Biz, Big Bucks (27 min)
AI + Google Maps = local cash printer. The Boring Marketer drops a playbook for finding high-ticket, low-competition local niches—then automating media + lead gen with AI. Forget HVAC. Think koi ponds, smart homes, exotic car wraps. Quietly print money.
Most people chase competitive niches with sharks. We hunt quiet corners and use AI to scale.
Build Evals, Not Just Vibes (1h45)
Everyone’s building AI agents—but no one knows how to test them. Hamel Husain & Shreya Shankar say evals are the core skill of modern AI product builders. Less vibes, more signal. They break down how to debug, test, and trust your LLM workflows.
The new stack is: prompt → eval → iterate. Testing is the product now.
Sora 2: The Internet’s New Imagination Engine (20 min)
Sora 2 brings realistic motion, native audio, and the wild new “Cameo” feature. OpenAI launched a full social app where your friends can cast you into any world. Think: TikTok meets Pixar meets AGI.
On the path to AGI, the gains aren’t just about productivity. They’re also about creativity and joy.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.