Happy Monday — and kudos for investing time in understanding what’s next.
One of the most effective ways to grow your AI skills is by “using AI to build with AI”. This sounds recursive, but it’s exactly how I developed our new Signals tool.
Instead of prompting for answers, I began by articulating what I wanted: a tool that scans signals of change across various AI models based on a few inputs. Then I worked with ChatGPT and Claude to prototype, code, and refine it, step by step. The most surprising part of the process was how many versions of the tool it took to reach something stable. Your first, second, and third iterations are unlikely to be definitive unless you have an incredibly clear picture of what you want to achieve.
Signals is a lightweight web-based tool that generates a radar of emerging signals tailored to you. You provide two inputs: an organization and a region. These are used to run a horizon scan across multiple AI models, creating a comprehensive overview of the signals likely to shape the future of your selected organization.
The next step automatically identifies duplicate signals from the different models and engages a reasoning model to surface potentially missing insights. The results are assessed, summarized, and presented as an interactive data visualization.
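For those curious about the mechanics, here is a minimal sketch of how such a fan-out, dedupe, and gap-check pipeline can be wired together. Everything in it (the function names, the Signal fields, the naive dedup) is an illustrative assumption, not the actual Signals code:

```python
# Minimal sketch of a multi-model horizon-scan pipeline (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    title: str
    description: str
    category: str
    impact: int          # impact score assigned by the model, e.g. 1-5
    source_model: str

# A scanner wraps one model provider: (organization, region) -> list of signals.
ScanFn = Callable[[str, str], list[Signal]]

def deduplicate(signals: list[Signal]) -> list[Signal]:
    """Collapse signals whose normalized titles match (naive dedup)."""
    seen: dict[str, Signal] = {}
    for s in signals:
        seen.setdefault(s.title.lower().strip(), s)
    return list(seen.values())

def horizon_scan(org: str, region: str,
                 scanners: list[ScanFn],
                 gap_check: Callable[[list[Signal]], list[Signal]]) -> list[Signal]:
    """Fan out to each model, merge and dedupe the results, then ask a
    reasoning model (gap_check) for signals the first pass may have missed."""
    merged: list[Signal] = []
    for scan in scanners:
        merged.extend(scan(org, region))
    unique = deduplicate(merged)
    return unique + gap_check(unique)
```

In practice, each scanner would wrap a call to a different model provider, and gap_check would prompt a reasoning model with the merged list.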
Below is a real-time recording of our tool, applied here to IKEA (2 min).
We are experimenting with different types of scans, all of which lean on language models, web search and our own proprietary research on emerging tech.
The output is a comprehensive overview of the factors the AI models see on the horizon for the given organization. This structured data is presented as an interactive map, with categories and impact scores from each model. To emphasize: our workflow generated, evaluated, and summarized these insights from just two inputs, in seconds.
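To make the idea of structured data behind an interactive map concrete, here is a hypothetical payload shape. The field names and values are invented for illustration and do not reflect the actual Signals schema:

```python
import json

# Hypothetical shape of the structured output feeding the interactive map.
# Every field name and score below is invented for illustration.
radar_payload = {
    "organization": "Example Org",
    "region": "Northern Europe",
    "signals": [
        {
            "title": "Example signal",
            "category": "Example category",
            "impact_scores": {"model_a": 4, "model_b": 5},  # per-model assessments
            "summary": "One-line summary produced by the reasoning model.",
        },
    ],
}

print(json.dumps(radar_payload, indent=2))
```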
While all of the above is experimental, I am looking for foresight practitioners who want to take Signals for a spin.
If you feel this could be useful in your workflow, please schedule a quick chat with me to get a better sense of where our capabilities and your needs overlap!
Until next week,
MZ
Vibe Coding 101 (90 min)
Short course by Replit on Deeplearning.ai.
Build and share two applications—a website performance analyzer and a voting app—while using an AI coding agent to debug, customize, and strengthen your coding skills.
Learn the principles of agentic code development and skills to effectively build, host, and share your apps with Replit coding agents and assistants.
Use product requirement documents, wireframes, and good prompting practices to prototype, debug, and iterate your applications.
Panel on AI strategy (30 min)
Superb session on how to think about AI strategy in your organization, with practical and theoretical considerations. Maria Axente, Lara Burns and Lexy Prodromou discuss effective approaches for organizations to scale AI successfully, emphasizing aligning AI initiatives with business objectives, fostering a culture of innovation, and ensuring robust data governance.
If you iterate too quickly on strategy, that means it’s not strategic.
Why Superhuman Coding is About to Arrive (90 min)
Eiso Kant, CTO of Poolside AI, discusses the company’s approach to AI software development. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months.
Trickster Jumps Sides (75 min)
Not strictly about AI, but easily the most insightful podcast I’ve listened to recently. It explains much of our collective predicament and how we deal with differences. A bit U.S.-centric, but it should resonate everywhere.
2027 Intelligence Explosion (3h)
A deep, dense interview with Scott Alexander and Daniel Kokotajlo about their essay on our impending move to AGI (AI 2027). Don’t miss it.
Accelerating Scientific Discovery with AI (60 min)
Sir Demis Hassabis always gets my attention. Here is a one-hour lecture on AI applied to discovering new science.
How AI Models Steal Creative Work (15 min)
Ed Newton-Rex proposes licensing as an approach to protect the creative work language models depend on to succeed. Recorded at TEDAI in San Francisco.
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🦄 Sharing the newsletter on your socials.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.