Greetings from a surprisingly sunny Lisbon where AI is the main subject on the agenda of WebSummit. I’m here to hang out with the Envisioning team and other friends and collaborators while keeping an eye on opinions and developments around generative AI.
Last week, like so many others, saw an array of developments from the AI companies we have come to know. Humane launched a wearable AI pin, and OpenAI launched GPT-4 Turbo and GPTs. This isn’t the place to discuss particular corporate announcements; our aim is to track the big picture and direction of change. Still, it’s noteworthy when some of the fastest-growing tech companies are launching products that are both better and more affordable year over year.
As in previous weeks, I have been experimenting a ton, especially with ChatGPT. This includes launching a very experimental GPTarot.ai 🎴 where you can ask GPT to pull three cards and maybe give you a sense of direction. Check it out! The entire web app was developed alongside ChatGPT, including the card designs, technical assistance and troubleshooting. As a non-programmer, I find it remarkable how access to these tools has changed my expectations of what is possible, and there is no reversal in sight.
Below is a selection of links and lectures that struck my fancy in the last week along with summaries of what they contain. I hope some of them resonate with you!
MZ
From Twitter
https://twitter.com/daveg/status/1723673147021357443
https://twitter.com/deliprao/status/1724163062830153814
Liv Boeree explores the negative aspects of competition in AI, especially in social and news media. She highlights the detrimental effects of AI beauty filters on body image and the destructive race to the bottom in media, fueled by clickbait and polarization. Boeree stresses that misaligned incentives in various industries encourage harmful strategies that defer costs to the future. She acknowledges recent positive steps by AI labs but urges a shift in competitive focus towards creating robust security criteria and prioritizing alignment research, to counteract the destructive force she refers to as "Moloch."
Competition's Downside: AI's competitive landscape, especially in media, has harmful societal impacts.
Misaligned Incentives: Current industry incentives promote short-term gains over long-term well-being.
Constructive Shift: A call for reorienting competition towards security and alignment in AI development.
Sasha Luccioni addresses the multifaceted impacts of AI. She emphasizes AI's significant environmental footprint, highlighting the energy-intensive nature of large model training and its consequent carbon emissions. Luccioni also tackles AI's potential for bias, especially in law enforcement, underscoring the need for tools to measure and correct these biases. Crucially, she argues that AI's negative effects aren't inevitable, advocating for a collective, value-aligned approach to AI development to harness its potential for societal and environmental benefit.
Environmental Impact: AI's energy use and carbon emissions are significant, demanding sustainable development strategies.
Bias and Equity: AI models can encode biases, affecting law enforcement and society; mitigation tools are essential.
Direction of AI: The negative trajectory of AI isn't fixed; collective action and value alignment can steer its positive use.
Barack Obama discusses the impact and regulation of AI. He emphasizes AI's transformation of the economy and daily life, advocating for responsible handling to avoid potential harm. Obama stresses the importance of governmental awareness and intentional rule-setting in AI development. He highlights the role social media played during his presidency and the failure to regulate it, underlining the need for a smart framework to address these challenges and citing the Biden Administration's executive order as a crucial step.
Regulation Significance: Obama advocates for intentional AI regulation, emphasizing its transformative impact on society.
Governmental Role: Stresses government's crucial role in awareness and rule-setting for responsible AI development.
Framework Necessity: Highlights the need for a smart framework to address AI and social media challenges, referencing Biden's executive order.
Holly Herndon, an experimental electronic musician and artist, explores AI's impact on creativity and autonomy. Herndon, with collaborator Mathew Dryhurst, experiments with AI-generated art, delving into concepts like embedding personal identity in AI models. Their work, including the "classified" series, investigates how AI interprets Herndon's identity. This extends to her music, where AI plays a significant role, reflecting on platform capitalism and AI's cultural impact. Herndon's approach is unique in its blend of art, technology, and commentary on AI's role in the creative process and society.
Innovative Artistry: Herndon's fusion of AI and personal identity challenges traditional notions of creativity.
Cultural Commentary: Her art reflects on AI's societal impact, especially on artists' rights and autonomy.
Technological Intersection: Her work lies at the crossroads of AI technology, music, and visual arts, pioneering new forms of expression.
Envisioning your Purpose
Maybe you have been trying to figure out your purpose in life. At Envisioning we have been working on an AI-augmented approach for designing interactive workshops (Sandbox), and are looking for different applications of this model. One of the applications we’ve built explores the intersection of what you love, what you are good at, what you can be paid for, and what the world needs. We do this by infusing a little bit of GPT magic along the way to help you identify and relate to ideas you might not have thought about before. If this is something you want to explore, reply to this email and we’ll send you an access link to the application.
Emerging Vocabulary
Mechanistic Interpretability
Understanding how an AI system's internal mechanisms work, particularly in complex models like deep neural networks. It involves dissecting and analyzing the inner workings of the model to comprehend how it processes inputs and makes decisions or predictions. This is challenging due to the complexity and often opaque nature of advanced AI systems, where layers of processing and numerous parameters contribute to the final output. Mechanistic interpretability aims to make AI decision-making processes transparent, aiding in trust, debugging, and ethical considerations.
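To make the idea concrete, here is a deliberately tiny sketch in Python: a hand-built two-neuron network whose hidden activations we probe one input feature at a time, to see which internal "neuron" responds to which feature. This is purely illustrative (real mechanistic interpretability work targets large trained models with learned weights); all weights and names here are invented for the example.

```python
# Toy sketch of mechanistic interpretability: probe the hidden
# activations of a tiny hand-built network to see which internal
# "neuron" responds to which input feature. Illustrative only;
# real work analyzes large trained models, not hand-set weights.

def relu(x):
    return max(0.0, x)

# A 2-input, 2-hidden-neuron network with hand-set weights:
# hidden neuron 0 detects input feature 0, neuron 1 detects feature 1.
W_HIDDEN = [[1.0, 0.0],   # weights into hidden neuron 0
            [0.0, 1.0]]   # weights into hidden neuron 1
W_OUT = [0.5, 0.5]        # output mixes both hidden neurons

def forward(inputs):
    """Run the network; return (output, hidden activations)."""
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)))
              for row in W_HIDDEN]
    output = sum(w * h for w, h in zip(W_OUT, hidden))
    return output, hidden

if __name__ == "__main__":
    # "Probe" the model: feed one feature at a time and record
    # which hidden neurons activate for it.
    for probe in ([1.0, 0.0], [0.0, 1.0]):
        output, hidden = forward(probe)
        active = [i for i, h in enumerate(hidden) if h > 0]
        print(f"input={probe} -> hidden={hidden}, active neurons={active}")
```

Feeding `[1.0, 0.0]` activates only hidden neuron 0, so we can attribute that neuron's role to feature 0 — a miniature version of the attribution work interpretability researchers do on real models.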
View all emerging vocabulary entries →
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.