Who watches the watchers? (021)
Exploring many kinds of human-machine augmentation feat. an exclusive interview.
Welcome to another edition of Artificial Insights, brought to you a day late from the train to Lisbon, where I’ll be hanging out with fellow futurists later this evening. The meetup is a brief trend-exchange where I’ll talk about Centaurs and computational creativity before hosting a super-short Envisioning AI exercise using fellow speakers’ trends as input for a generative creation. Should be fun!
This week’s newsletter covers the usual assortment of links to interviews and articles that have caught my attention in the past few days. As ever, there is an overwhelming amount of development in the AI space, and I have found that learning from the people building the technologies we increasingly rely on is an insightful way of keeping abreast of it all.
As usual, if any of this resonates with you then please reach out. I had a blast connecting with readers in Amsterdam last week and will be in London early next week talking Centaurs and technology trends. Hope to see you there.
MZ
Does AI threaten our human identity? 🙈
Creativity. Emotions. Artificial intelligence is pushing further and further into primal human domains. This video by DW scrutinizes the impact of AI on human identity, including AI's capabilities in art, music, and caregiving. It touches on the question of whether AI can have self-awareness and emotions.
AI in Creativity: Discusses the controversial role of AI in arts and caregiving, challenging our perceptions of creativity and empathy.
Human vs. AI: Considers the unique aspects that make us human, including consciousness, and questions whether AI could achieve this level of existence.
Playful Tone: The discussion adds humor to serious topics, making the ethical and philosophical debates more engaging.
The coming wave 🌊
AI pioneer Mustafa Suleyman discusses his book The Coming Wave, addressing AI's future impact and ethical considerations. In conversation with Zanny Minton Beddoes, Suleyman offers a comprehensive look at AI's potential for innovation as well as its risks, such as job displacement and power concentration. He also discusses geopolitical tensions, especially between the US and China, in the context of AI development and governance.
AI's Dual Revolutions: Covers both the classification and generative revolutions in AI, offering a nuanced view of AI's evolving capabilities.
Governance and Ethics: Emphasizes the need for good governance to mitigate AI risks, adding a cautionary note on adopting large-scale state surveillance.
Geopolitical Context: Addresses the US-China dynamic in AI development, arguing for aligned incentives and against a "race to the bottom" on values.
Why transformative AI is really, really hard to achieve 🍾
Arjun Ramani and Zhengdong Wang take a critical look at the transformative potential of AI on economic growth, questioning whether it can break the 3% GDP growth rate ceiling. While acknowledging the rapid progress in AI, they highlight various technical, social, and economic bottlenecks that could limit AI's transformative impact.
3% GDP Growth Ceiling: Reviews the historical context and questions whether AI can substantially exceed the 3% annual growth rate that no other technology has managed to break.
Technical Hurdles: Highlights the specific challenges in AI subfields like fine motor control and causal modeling. Mentions that current methods might be reaching their limits.
Resource Limits: Discusses the challenges of scaling up, from the immense cost of computation to the potential exhaustion of high-quality language data.
Generative AI is just a phase – what’s next is Interactive AI 🪜
More from DeepMind co-founder Mustafa Suleyman, discussing his new venture, Inflection, and its ChatGPT rival, Pi. He believes the future of AI is interactive bots that can perform tasks and calls for robust regulation. Despite concerns, Suleyman remains optimistic about AI's potential for societal good.
Interactive AI: Suleyman's focus on autonomous, interactive AI sets a bold vision for the future of technology.
Robust Regulation: His call for stronger regulations is noteworthy, especially considering his own involvement in cutting-edge AI projects.
Techno-Optimism: Despite challenges and criticisms, Suleyman's unwavering belief in technology's potential for good provides an interesting counter-narrative to prevailing skepticism.
Does Sam Altman know what he is creating?
Another in-depth look at OpenAI, its co-founder Sam Altman, and the powerful AI models like ChatGPT that the company has released. The piece delves into Altman's philosophy on AI's potential risks and benefits, the rapid adoption of ChatGPT, and the ethical considerations surrounding the development of AI technologies.
Ethical dilemmas: Explores the implications of AI development, with insiders at OpenAI expressing concerns about potential dangers.
Rapid adoption: Highlights the global success of ChatGPT, claimed to be one of the fastest-adopted consumer products in history.
Transformative impact: Discusses the ambiguous and potentially transformative results of AI on society, from altering the nature of work to human relationships.
Emerging Vocabulary
Gradient Descent
An optimization algorithm used when training a machine learning model. It relies on iterative improvement to gradually find the minimum value of a function, usually a loss function that measures the error between predictions and actual values. The main idea is to reduce this error by making slight adjustments to model parameters, such as the weights in a neural network. Over time, these steps should lead to the function’s minimum, or to a point where further steps no longer meaningfully reduce the error, as determined by a predefined threshold.
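To make the idea concrete, here is a minimal gradient descent sketch in plain Python: it fits the single weight of a toy linear model by repeatedly stepping against the gradient of the mean squared error. The data, learning rate, and stopping threshold are illustrative assumptions, not taken from any particular library or source.

```python
# Minimal gradient descent: fit y ≈ w * x to toy data by minimizing
# mean squared error. Data, learning rate and threshold are arbitrary choices.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

w = 0.0              # model parameter (weight), starting guess
learning_rate = 0.01
threshold = 1e-6     # stop when the improvement per step becomes negligible

def loss(w):
    # mean squared error of predictions w * x against actual values y
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def gradient(w):
    # derivative of the mean squared error with respect to w
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

previous = loss(w)
for step in range(10_000):
    w -= learning_rate * gradient(w)  # slight adjustment toward lower error
    current = loss(w)
    if previous - current < threshold:
        break                         # predefined threshold reached
    previous = current

print(f"fitted weight: {w:.3f} after {step + 1} steps")
```

Conceptually, training a neural network swaps the toy loss for the network's loss and the single weight for millions of parameters, often using variants like stochastic gradient descent that estimate the gradient from batches of data.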
View all emerging vocabulary entries →
Interview
Chatting with ChatGPT
While I eagerly await interview responses from a few fellow futurist friends, I figured ChatGPT itself could field my questions this week. I asked GPT to suggest a couple of questions and gave it a bit of guidance to keep the responses short. As we have probably come to expect by now, the answers turned out to be both insightful and interesting.
What excites and concerns you about AI?
I don't have emotions, but I can analyze the subject. What's exciting about AI is its potential to solve complex problems, from medical diagnostics to climate change mitigation. Concerns arise from ethical considerations like job displacement, biased algorithms, and potential misuse, including in military applications. Both the potential benefits and risks are magnified when discussing AGI.
Will a single algorithmic breakthrough be the catalyst for AGI, or will it be a confluence of many technologies?
Unlikely that a single algorithmic breakthrough will lead to AGI. Current trajectory suggests a confluence of technologies: machine learning, natural language processing, sensorimotor skills, and domain-specific expertise will likely synergize. Incremental improvements in these areas, plus potentially undiscovered algorithms, may converge to result in AGI. Consider the parallel development of hardware and data availability as additional factors.
How are you using AI yourself today?
I don't use AI; I am a result of AI. My architecture consists of transformer neural networks trained on large datasets. I perform tasks like text generation, language translation, and answering questions based on the data I've been trained on. I exist to assist users in generating human-like text based on the input queries I receive.
What kind of tasks will remain outside the domain of AGI?
Speculative: Tasks requiring subjective human experience, like emotional intelligence or cultural nuance, may resist full automation by AGI. Ethical and moral decision-making might also remain a human domain due to societal and philosophical complexities. Additionally, highly creative tasks that involve a deep understanding of human emotion and context could be difficult for AGI to fully grasp. However, this depends on one's definition of AGI; if it includes the capacity to understand and emulate human-like subjectivity, then the list shrinks considerably.
From Twitter
A great thread of video explanations by Carlos de la Guardia sharing what they have learned about our transition towards AGI.
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.
See you soon for some futures fun!