If thinking refines ideas, creating brings them into form.
I’ve been interrogating my own use of AI, working toward a document or framework that might help others benefit as much as I have from applying it across as many aspects of life as possible.
I’m a technology maximalist – I believe more technology is the solution to most problems – and with AI, my approach has been to embrace it in every dimension I can. This newsletter has often been the outlet for my ongoing experiments and a growing list of interesting use cases.
I’ve been documenting this process by asking the very models I interact with – models that retain memory of our exchanges – to help me summarize how they perceive my use of AI.
This has worked surprisingly well, giving me a baseline of observations to edit and refine. What follows is the second of seven short chapters based on those exchanges. I expect to consolidate these into a book of sorts later, using the weekly newsletter cadence as an excuse to keep the drafts flowing while building toward something hopefully useful.
What I’ve learned is that AI excels at one thing: reflecting intent. When I describe what I want to make, it gives me back exactly what I said, not what I meant.
That gap between thought and expression is where most of my work happens. The model’s literalness forces clarity. When I can’t explain an idea cleanly, it usually means I don’t understand it yet.
These field notes document how I actually work with AI day to day: writing, designing, testing, and refining in conversation.
How I create with AI:
Writing refinement: Most of my writing starts as rough, fast drafts—notes dumped into ChatGPT. Then I ask the model to highlight weak transitions, redundant phrasing, or broken logic. I don’t let it rewrite; I use its feedback to rewrite myself. Over time this became a rhythm: I write → it critiques → I tighten. My voice stays intact, but sharper.
Idea scaffolding: When I’m developing frameworks or tools, I use GPT to simulate reasoning. I’ll outline the structure, then ask the model to find gaps, stress-test assumptions, or suggest what’s missing. It’s like working with a brutally honest co-editor who never gets tired.
Prompt engineering as writing: For image generation or structured outputs, I treat prompts as tiny pieces of creative writing. I describe intent, tone, atmosphere, and constraints before I mention style or detail. GPT helps me rewrite prompts until they express exactly what I want, not just what I imagine.
Constraint as method: I often tell GPT to respond under limits: “explain this in 80 words,” “argue against your previous point,” “reframe this for a newsletter intro.” These constraints aren’t gimmicks; they sharpen the signal. They make thinking visible.
Pattern recognition: When working across projects, I feed GPT fragments of old notes or past outputs to see what patterns emerge: recurring themes, phrases, or blind spots. It’s surprisingly good at showing the shape of my own thinking, like a mirror for intellectual habits.
Creation becomes dialogue when you treat the system as a collaborator in precision.
AI doesn’t imagine for you, but it holds you accountable to what you’re trying to say. That accountability turns making into a discipline of language, where every choice must be articulated to be realized.
Until next week,
MZ
Let’s get political (12 min)
Bernie Sanders on AI-induced unemployment.
The Limits of AI (20 min)
How data, information, knowledge, and wisdom differ when it comes to AI, from IBM’s excellent research channel.
What’s your AGI strategy? (40 min)
YC talk by Jordan Fisher from Anthropic.
High Stakes (50 min)
Sam Altman is in PR mode and has joined a couple of podcasts. I like these for getting a better sense of the public personas behind the labs.
The Last Question (30 min)
If you haven’t read Asimov’s The Last Question, the short story is highly recommended. Either way, here’s a brilliant video explaining the story and how it connects to the current moment we are experiencing around AI.
Hallucinations can be expensive (7 min)
Deloitte AU and unchecked hallucinations. We are living in the era of AI slop.
What is intelligence? (70 min)
If the Antikythera Institute and Long Now Foundation mean anything to you, don’t miss Blaise Agüera y Arcas from Google Research on intelligence.
What could go wrong? (100 min)
Excellent interview with Geoffrey Hinton by Jon Stewart. Incredibly technical for a general audience.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.