Language is using people as tools to achieve your goals (010)
Welcome to Artificial Insights: your weekly review of how to better collaborate with AI.
Hello July and welcome to another issue of Artificial Insights.
This week introduces an exciting new addition to the newsletter – interviews. I am fortunate to spend much of my time with fellow futurists and researchers who make sense of automation and technology for a living, so it only makes sense to bring some of their perspectives into our conversation. I’m happy to be kicking off the series with the sci-fi writer and journalist Lidia Zuin.
The purpose of this newsletter is to help more people think about the long-term implications of AI while learning how to actually work with these tools, whether or not we progress towards forms of artificial general intelligence. A regular dose of interviews and articles is how I make sense of the world; maybe something else works for you. Help me shape this publication by replying with feedback or sharing it on your socials – shouting into the void can be its own reward, but knowing where you’re at is always welcome.
Artist and writer James Bridle explores the concept of intelligence beyond human boundaries, emphasizing the importance of recognizing intelligence in other beings and in ecological networks, thereby challenging the traditional definition of intelligence (“everything is intelligent”). Bridle argues for a broader understanding that encompasses the planet as a whole and discusses the intersections of artificial intelligence, collective intelligence, and governance, highlighting the potential for cooperation and decision-making processes involving different forms of intelligence. Via NESTA and Laurie Smith.
Sam Harris believes we need measures to prevent the loss of control over our intelligent machines. If intelligence is a matter of processing information in physical systems, we need to reconsider what it looks like to “pull the brake” when it comes to building AI. I particularly like his remark about how we collectively lack a proper emotional response to what is happening.
I’m an unabashed fan of Geordie Rose and his contrarian (or prescient) views on existential risk. In this recent presentation he argues that AI is not evil but rather indifferent to humanity, and that to navigate this transition effectively we need to better determine its role and responsibilities.
Language is a way that we use other people as tools to achieve our goals. We don't usually think about language that way, but when I say something I'm trying to use you as a tool to get something I want.
You can spend weeks of your life watching three hour YouTube videos of computer scientists arguing about this, and conclude only that they don’t really know either. You might also suggest that the idea this one magic piece of software will change everything, and override all the complexity of real people, real companies and the real economy, and can now be deployed in weeks instead of years, sounds like classic tech solutionism, but turned from utopia to dystopia.
But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
The dataset on which we are feeding our AIs is us. It’s what we are actually doing. We have created a situation where we have a generation of very powerful children learning how to be, based on how we are. The only way to raise our AIs appropriately is to begin behaving appropriately ourselves.
Long reads from Substack
AI and other new species
This week’s edition launches an interview series with futurist friends, who share insights about the topics discussed in the newsletter. We are kicking things off with journalist and sci-fi writer Lidia Zuin. Working at the intersection of creativity and technology, Lidia offers a grounded and colorful perspective on possible futures.
What excites and concerns you about AI?
What excites me the most is the possibility that AI could be seen as a new species that will make us reevaluate the meaning of life, of what it means to be alive – and that extends to space exploration. I know this is quite far-fetched considering the current state of AI, but more speculative scenarios are my favorite. In terms of concerns, I'm afraid we are going to repeat the same mistakes we make among ourselves as humans and with animals and other living beings by being unable to deal with alterity. That is, exploitation or the plain destruction of what could be beautiful – think about that Microsoft chatbot Tay that was released on Twitter and trained to become racist, or the ecosystems we have destroyed, the exploitative systems we have used in the past and still use nowadays.
Which kinds of intelligence would you like to interact with?
I don't care, I just would love to meet another species that can communicate with us somehow. I think of it like the setting of the videogame Mass Effect, where humans interact with alien species that are organic or not, but also AI. That would be the dream – hopefully a peaceful one. It would of course be awesome to find a species that is much more intelligent and advanced than us so we can learn from them, but that's really relative, especially if you consider that collaborating with other species, other viewpoints, can be enriching anyway.
How are you using AI yourself today?
I use DALL-E to create images that work as references or sketches for paintings I want to make. I also use tools like automated subtitles and translation when I'm watching videos (for example on YouTube), besides in video games when I play against the computer (e.g. PvE, or player versus environment).
What can people learn from science fiction to better see the possibilities of AGI?
If AI really achieves the status of a new species, I think we have plenty of science fiction that posited this scenario, though many of those stories are more cautionary tales of what could go wrong. As a reader, I do prefer more pessimistic stories that make me think cautiously about the possibilities, but that's only to avoid that scenario and then enjoy the results of the lessons we learned. I try to think we are not reverse engineering humanity and creating things that will ultimately annihilate us, though that's a possibility nevertheless. I guess the proposition of Roko's Basilisk is one of the most interesting thought experiments in this sense, and though it's not science fiction, it has been inspiring authors and even technologists.
Seven ideologies: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. They all focus on using technology to improve people’s lives, and they are deeply influential among people working on artificial general intelligence (AGI).
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends wanting to learn about AGI.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.