We passed the Turing test... (003)
...and nobody noticed.
Thanks to everyone for signing up and reading the first two editions of Artificial Insights! We had an astounding open rate, which tells me I should keep sharing whatever learnings a tech-literate AI amateur like myself is finding in the vast possibility space of exploring AGI.
It occurred to me that ChatGPT and its kin have probably already crossed the threshold of believability, which means they have passed the Turing Test. There are infinite permutations of what exactly qualifies as demonstrating abilities indistinguishable from a human's, but for all practical purposes we are now in a post-Turing reality, where generated text, images, audio and video are indistinguishable from human-made content. No fanfare – only the slow collective realization that something irreversible has taken place. Without instructions or guidance, we are all fractionally responsible for directing the course of development. As a reader, you are an ideal candidate for picking up the banner of responsible AGI and promoting an equitable path forward.
This week’s edition is again video-heavy, with insights about the future of AI and education from Khan Academy, an in-depth interview with the chief scientist of OpenAI, and opinions about the limitations of AI.
Thanks for reading.
Meet Khanmigo, the AI powered tutor from Khan Academy 🧑🏽‍🏫 (15 min)
Sal Khan, founder of Khan Academy, demonstrates how AI can be used as a personal tutor to revolutionize education. Khanmigo can function as a guidance counselor, academic coach, career coach, and life coach, allowing students to engage with historical figures, collaborate with the system to improve their writing skills, and enhance their language arts and reading comprehension skills. Khan highlights AI’s positive potential to enhance human intelligence and purpose, urging that this should be the focus of its development.
Inside OpenAI and why AI raps and writes poetry ✍🏽 (50 min)
Deep, technical and super insightful interview with Ilya Sutskever, co-founder and chief scientist of OpenAI. The deep learning revolution has been driven by artificial neural networks, which loosely mimic the brain's biological neurons. This allows for a broad range of fluency and the ability to develop a sense of what comes next. By narrowing down possibilities, AI systems can operationalize understanding and increase learning speed. As AI becomes increasingly powerful, we need to be mindful of compute costs, corporate structures (such as capped profit companies), and exploring alternatives to ensure sensible progress. Via Computrik.
Why AI Is Incredibly Smart — and Shockingly Stupid 👀 (15 min)
University of Washington professor Yejin Choi highlights the impressive capabilities of large-scale language models, but also points out their limitations, such as minor errors and safety and sustainability concerns. She emphasizes the need for common sense development in AI for ethical decision-making, and critiques the reliance on raw web data for training due to misinformation and biases.
AI-Generated Philosophy is Weirdly Profound 🧐 (35 min)
Thought-provoking video essay exploring AI-generated philosophy through a never-ending conversation between two AIs modeled on philosopher Slavoj Žižek and film director Werner Herzog. While the conversation is coherent and surprisingly human-like, the video reminds viewers that the ideas presented are generated by a machine and do not represent the real people. It invites reflection on the proliferation of nonsense practically guaranteed by the infinite generative power of AI.
How to govern a world full of superintelligent digital minds 🖇️ (10 min)
Deep interview with Nick Bostrom, a philosopher at Oxford’s Future of Humanity Institute and author of the book Superintelligence. For better or worse, few people have spent as much time as Bostrom thinking about, discussing and describing the risks AI poses to human survival. This does not mean we should accept his ideas blindly, but rather spend more time considering their implications ourselves.
Computational equivalence and machine learning 📐 (60 min)
Stephen Wolfram can be hard to pin down, but as an accomplished entrepreneur, scientist and author, his peculiar interpretation of machine intelligence can be worth listening to. In this interview he describes his theory of everything (the Ruliad) and how it maps onto generative models. The interview is a great companion piece to the unmissable longform essay What is ChatGPT Doing … and Why Does it Work?
Reinforcement Learning from Human Feedback (RLHF) combines reinforcement learning with feedback provided by human annotators. The goal is to develop AI systems that learn more effectively and align better with human values, preferences, and objectives.
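To make the idea concrete, here is a deliberately tiny sketch of the RLHF recipe in plain Python. It is illustrative only, not how production systems are built: the candidate answers, their two-number feature vectors, and the simulated preferences are all invented for this example, and real pipelines use neural reward models and policy-gradient methods (e.g. PPO) rather than the logistic update and argmax shown here.

```python
import math

# Toy RLHF sketch (illustrative assumptions throughout):
# 1) collect human preference pairs over candidate model outputs,
# 2) fit a scalar reward model on those pairs (Bradley-Terry style loss),
# 3) "improve the policy" by steering toward outputs the reward model favors.

def reward(weights, features):
    """Scalar reward: dot product of learned weights and output features."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(pairs, n_features, lr=0.1, epochs=200):
    """Fit weights so preferred outputs score higher than rejected ones.

    Each pair is (features_preferred, features_rejected), mimicking a
    human annotator choosing the better of two completions.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for fp, fr in pairs:
            # Bradley-Terry: P(preferred wins) = sigmoid(r_pref - r_rej)
            margin = reward(w, fp) - reward(w, fr)
            p = 1.0 / (1.0 + math.exp(-margin))
            grad_scale = 1.0 - p  # gradient of -log P w.r.t. the margin
            for i in range(n_features):
                w[i] += lr * grad_scale * (fp[i] - fr[i])
    return w

# Hypothetical features per candidate answer: [helpfulness, verbosity].
candidates = {
    "concise answer":   [0.9, 0.1],
    "rambling answer":  [0.3, 0.9],
    "unhelpful answer": [0.1, 0.2],
}

# Simulated human feedback: annotators prefer helpful over unhelpful text.
preferences = [
    (candidates["concise answer"], candidates["rambling answer"]),
    (candidates["concise answer"], candidates["unhelpful answer"]),
    (candidates["rambling answer"], candidates["unhelpful answer"]),
]

w = train_reward_model(preferences, n_features=2)

# Policy improvement, reduced here to picking the highest-reward output.
best = max(candidates, key=lambda name: reward(w, candidates[name]))
print(best)
```

The key design point this sketch preserves is that the reward model is learned from comparisons rather than absolute scores: humans are much more reliable at saying "A is better than B" than at assigning a number to A, which is why the pairwise Bradley-Terry formulation is central to RLHF.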
Microblog with no humans allowed.
Technology & Culture
The Culture Creating A.I. Is Weird. Here’s Why That Matters 🧩 (60 min)
Guest post by Luma Eldin: One of the major debates surrounding AI is its influence on creativity within culture. In reality, AI is compelling us to confront our limited understanding of creativity and the possibility of programming it. AI models are designed to learn from the past and predict the future, narrowing the boundaries of human creativity, transformation, and adaptation. In this conversation, NYT journalist Ezra Klein engages American writer, scholar, and journalist Erik Davis in a discussion about the 'strangeness' of AI, arguing that rather than introducing something entirely new and unpredictable, it may actually constrain us to the utterly predictable, as it is fundamentally based on prediction engines.
Artificial Insights is brought to you by Envisioning, an emerging technology research institute. We recently relaunched our website and would love your feedback. We’ve been investigating emerging technology for over a decade, and have over the years built a unique approach which includes proprietary tools, methodologies and content, in order to keep track of the technology ecosystem. Explaining this to different audiences can be challenging, so any feedback on the website is very welcome.
If Artificial Insights makes sense to you, please help me out:
Subscribe to the weekly newsletter on Substack.
Forward this issue to colleagues and friends wanting to learn about AGI.
Share the newsletter on your socials.
Comment with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or on Substack.
Artificial Insights is a weekly newsletter about what's happening in AI and what you should understand about our transition toward AGI. Each issue features interviews, articles and papers selected for an audience of leaders and enthusiasts.