Happy Monday and welcome to your weekly download of AI insights, commentary and memes.
To celebrate fifty newsletter editions, I took the opportunity to review the 250+ links shared over the past year and retroactively identify the content that I think retains the most value today. Having absorbed countless hours of interviews, lectures and research papers, I found that some of these links have only become more important over time, especially those outlining the shape of AGI and our changing relationship to work. They are featured below.
Reflecting on the experience of covering such a fast-moving field closely, I am more determined than ever to keep this publication going. Many of you have followed me since the very first issue, and others are just finding out about our ways of seeing AI. What we all share is a desire to understand what is going on, and how it will affect us and those around us.
Future issues will take a more practical approach to working with AI. We will absolutely keep featuring high-level insights from the industry, but I want to start putting as many of these ideas into practice as possible: better prompting, a clearer understanding of the kinds of AI we are talking about (not everything is generative), and so on.
My biggest ask right now is: find the others. Share the publication (or its content) with those around you capable of doing something with this information, and help me reach more AI experts who care.
Until next time,
MZ
From AI is having a moment (001)
Sébastien Bubeck speaks about Sparks of AGI
This lecture by a Microsoft AI researcher sparked my desire to start this newsletter. Early experiments with GPT-4 show a limited ability to generalize its problem-solving approach. In other words, GPT-4 is starting to demonstrate the capacity to broaden its abilities and act out unexpected types of intelligence and behavior. Highly recommended.
From The opposite of a trap is a garden (006)
Work after AGI (006)
Another insightful interview with Sam Altman, this time at the Technical University of Munich, where he specifically addresses how AI might change the nature of work, careers and collective growth. I especially liked the discussion about the limitations and benefits of open-source models, and the importance of attention to detail and rigorous testing in creating AI products.
From Which kinds of intelligence make sense? (017)
Decoding AGI: Eugenics, Transhumanism, and Bias
Timnit Gebru explores the intersections of eugenics and transhumanism in AGI development. She warns against AGI perpetuating societal biases and advocates for a focus on preventing harm. Gebru's work in deep learning and her project Gender Shades demonstrate her commitment to responsible AI.
From Aesthetic Futures (035)
A brief and highly recommended interview of Sam Altman by Bill Gates about his take on our near future. Don’t miss it.
From What remains (026)
Ethics at the Helm
DeepMind's Shane Legg reflects on the journey towards AGI, underscoring the imperative of ethical alignment in powerful machine learning systems. He acknowledges existing architectural deficiencies, particularly in handling episodic memory and misinformation, and contemplates the prospect of realizing AGI by 2028.
From Amalgamated Intelligences, Inc. (019)
New links from the WhatsApp group
AI: Beyond Tools to Digital Species
Mustafa Suleyman is an excellent author, co-founder of DeepMind and Inflection AI, and now CEO of Microsoft AI:
What is it that we are actually creating? What does it mean to make something totally new, fundamentally different to any invention that we have known before?
Decoding Minds: Bridging AI, Philosophy, and Cognitive Science
Joscha Bach is among my favorite thinkers in the space, especially his focus on the philosophy behind the technology. Recent talk:
If you are super smart and very good at managing things, you now have an army of interns that are pretty autistic but that are doing exactly what you tell them, and you have as many as you want for $20 a month.
Envisioning.io
Don’t miss the new signal database on Envisioning.io. We have lots of upcoming features planned, but right now it’s a great starting point to learn more about our research. Check it out!
If Artificial Insights makes sense to you, please help us out by:
📧 Subscribing to the weekly newsletter on Substack.
💬 Joining our WhatsApp group.
📥 Following the weekly newsletter on LinkedIn.
🏅 Forwarding this issue to colleagues and friends.
🦄 Sharing the newsletter on your socials.
🎯 Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io or Substack.