Increasing the Temperature (008)
Your weekly review of our transition toward AGI and computational creativity.
Welcome to this week’s edition of Artificial Insights – your guide to navigating our slow but inevitable transition toward Artificial General Intelligence by means of a weekly selection of articles, interviews and personal commentary.
We launched on Substack two months ago, and last week saw the introduction of a LinkedIn edition as well. I was pleasantly surprised by the number of new subscribers with whom the subject matter resonated, and hope the newsletter will live up to and exceed your expectations moving forward.
Keeping up with AI can be a lot, so I hope this publication will make sense to everyone who is looking to stay informed and ahead of the curve. In particular, I am trying to explore the intersection of general intelligence and computational creativity, and believe one might actually lead to the other. Creativity is irreducible, fuzzy and difficult to pin down – but maybe the way we create new ideas together with machines holds the key to better intelligence. Let’s find out.
If you are new to Artificial Insights and craving more, make sure to check out our previous editions – and send your feedback and questions my way.
MZ
Hundreds attend AI church service in Germany 🛐
Still not sure if this is legit, but apparently an AI-generated church service was attended by hundreds of German Protestants a couple of weeks ago in Nuremberg. The 40-minute service included prayers, music, sermons and blessings led by ChatGPT and created by theologian and philosopher Jonas Simmerlein from the University of Vienna. The entire service was “led” by four different avatars on screen: two young women and two young men. One of the participants said she was excited and curious when the service started but found it increasingly off-putting as it went along. Exciting times.
Social, political and ecological implications of AI 📍
Writer and academic Kate Crawford is a principal researcher at Microsoft Research and recently published The Atlas of AI, which explores the hidden costs of AI, including natural resources, energy, labor and data. In this presentation she discusses how AI is a political activity by nature and why we should consider the externalities of our AI systems before using them, such as their environmental footprint and the vast amounts of dispersed human labor involved. Her emphasis on the need for stronger, shared legislation especially resonated.
Quantum AI and non-linear futures 👾
Geordie Rose, founder of the Canadian quantum computer company D-Wave, presents a unique perspective on what the many-worlds interpretation of quantum physics might mean for computation – how it effectively opens doors to other realities and, in a sense, jumps between dimensions when performing calculations. His predictions turned out mostly wrong, but the talk builds an interesting rationale about what it means to be “real” and how it might shape AI in the long term. Interestingly, Rose left the company and now works with embodied AI robots. If this resonates, make sure to also check out a more recent interview with Rose, where he discusses his approach to building intelligence and creative problem-solving by first giving robots an immersive internal model of the world to reason through.
Responsible AI and collective decision-making 🌳
Insightful interview with OpenAI CTO Mira Murati covering some of the particular challenges faced by AI companies and the importance of collaboration between humans and machines as AI continues to integrate into the workforce. Touches on responsible innovation, social decision-making and the need for a trusted authority to audit AI systems against agreed-upon principles, guiding AI development while staying mindful of potential risks.
People-focused futures 🌚
Long interview with Meta CEO Mark Zuckerberg by Lex Fridman discussing a range of topics around the future of AI. One bit that stood out to me is the creation of generative AI versions of influencers. Meta’s avatar game is strong, and the use case of influencers with lots of training data and limited time to answer fan questions might become an impactful application of such tools.
People and Computers Thinking Together 🔱
I am utterly fascinated by the different kinds of symbiosis that are possible when combining human and machine intelligence. In this video (and related book) MIT professor Thomas Malone discusses the concept of superminds, where groups of people and computers work together to achieve more intelligent outcomes than either could alone. Malone outlines five basic cognitive processes any intelligent entity requires and introduces five types of superminds for decision-making, with several rich examples in business, democracy and forecasting. I am confident we will see an increasing number (and kinds) of Centaurs in the future, and Malone presents a solid framework for how to conceive of these. Lots more to cover in future editions of this newsletter.
Long Read
Emerging Vocabulary
Temperature
In the context of AI language models, temperature is a parameter you can adjust when generating text. A higher temperature like 0.7 or 1.0 produces more varied, surprising output, while a lower temperature like 0.2 or 0.3 makes the output more focused and deterministic, sticking more closely to the most likely next word at each step. The parameter works by rescaling the model’s probability distribution over the next token before sampling, so it controls the balance between creative exploration (randomness) and predictability (determinism).
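To make this concrete, here is a minimal sketch in Python of how temperature reshapes a next-token distribution before sampling. The logits and four-token vocabulary are made up for illustration and not tied to any particular model’s API:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index from raw logits after temperature scaling."""
    # Dividing by the temperature sharpens (<1) or flattens (>1) the distribution.
    scaled = logits / temperature
    # Softmax: subtract the max for numerical stability, then normalize.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

# Hypothetical scores for a four-token vocabulary.
logits = np.array([2.0, 1.0, 0.2, -1.0])
for t in (0.2, 0.7, 1.5):
    samples = [sample_next_token(logits, t) for _ in range(1000)]
    print(t, np.bincount(samples, minlength=4) / 1000)
```

At temperature 0.2 nearly every sample is the top-scoring token; at 1.5 the other tokens appear far more often, which is the randomness you feel as “creativity” in generated text.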
View all emerging vocabulary entries →
Project Showcase
Designsparks.io
A web tool that combines artificial intelligence with creative thinking techniques for people engaged in design projects. Via Laly Akemi.
Spotted
Technology & Culture
Machine Learning, Language, and Risk
By Luma Eldin: Many of the conversations we have these days treat ChatGPT as a personified subject and quickly unravel into parodies of the impending future of AI. Naïve questioning of “Do you use it??” and bold, broad declarations like “Mo Gawdat is scared” at first enthralled me with wonder and curious engagement. After months of following the news and indulging in debates, I find myself questioning my own participation, which has me pondering an overarching cultural question:
Is the real risk with AI in our perception of it as an ‘intelligence’?
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends wanting to learn about AGI.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.