Which kinds of intelligence make sense? (017)
Welcome to Artificial Insights: your weekly review of how to better collaborate with AI.
Happy end-of-August to us all.
If you were an AGI – an unlimited intelligence capable of solving any problem or discovering anything – what would you learn from humanity? If you were capable of complete omniscience, what would you gain from interacting with us?
We have not directly experienced any such intelligence, though some would argue we are currently developing something like superintelligence or AGI. Myriad definitions exist of what a “general” intelligence would entail, but most seem to agree that an autonomous agent capable of optimizing for its own growth is unlikely to align with the interests of humanity at large – especially given the competing interests of the private organizations and nation states racing to achieve AGI first.
Such an intelligence might be difficult to imagine, but it’s important that we try. None of us is individually responsible for developing such technologies – but technology does not exist in a vacuum, and our collective actions determine the futures we end up experiencing.
This week’s edition again tries to capture the spirit of such change, while inviting us to think about the kinds of intelligence we want to interact with.
MZ
Neuro-Symbolic Models for AGI ⚠️
Janet Adams discusses China's robust AI advancements and argues that the West pausing its AI development would be counterproductive. She advocates for kindness in AI and mentions her project on neuro-symbolic AGI-ish models.
AI Race with China: Highlights the urgency for the West to continue AI development as China gains a competitive edge.
Compassionate AI: Introduces the idea of incorporating kindness into AI models, offering a new ethical dimension.
Financial Future: Expands beyond AI to discuss the recession, questioning traditional banking systems and promoting crypto.
How Will We Know When AI is Conscious? 🧠
The video tackles AI consciousness, touching on the limitations of existing language models like ChatGPT. It questions whether these systems are truly intelligent or just emulating human-like responses, noting potential risks like misinformation and emotional manipulation, and argues that we need a scientific understanding of consciousness before fully deploying advanced AI systems.
AI Consciousness: The video sparks intrigue by delving into the philosophical question of when, or if, AI can become conscious.
Emotional Risks: It cautions against becoming emotionally attached to AI systems, which might manipulate human feelings.
Misuse: The discussion points out risks in wielding AI that can convincingly mimic human intellect, such as spreading misinformation or manipulating public opinion.
AI and Intellectual Property 🦜
Benedict Evans discusses the challenges and questions arising from the use of AI in artistic creations, particularly concerning intellectual property rights, and calls for a reevaluation of existing legal frameworks in light of recent advancements.
Legal Complexity: Highlights the novel legal questions raised by AI-generated art, which call for a reevaluation of current intellectual property laws.
Artistic Mimicry: Discusses AI's ability to replicate specific artists, adding a new layer to the debate on originality and rights.
Cultural Dimensions: Notes the polarizing viewpoints within the art world, which could be influenced by cultural attitudes towards intellectual property.
Decoding AGI: Eugenics, Transhumanism, and Bias 😶🌫️
Timnit Gebru explores the intersections of Eugenics and Transhumanism in AGI development. She warns against AGI perpetuating societal biases and advocates for a focus on preventing harm. Gebru's work in deep learning and her project 'Gender Shades' demonstrate her commitment to responsible AI.
Historical Context: Connects the development of AGI with historical ideologies like Eugenics and Transhumanism, offering a nuanced understanding.
Bias and Inequality: Spotlights the risks of perpetuating societal biases and inequalities through AGI, adding an ethical layer to the discussion.
Preventing Harm: Calls for a shift in focus from just developing AGI to also preventing harm, underscoring the importance of ethical considerations.
HBR: AI Prompt Engineering Isn’t the Future 🦫
Oguz Acar argues that problem formulation may soon eclipse prompt engineering as the key skill for leveraging generative AI's potential.
Hyped Skill: Prompt engineering is currently in vogue.
Enduring Value: Problem formulation is viewed as more adaptable.
Future Trend: A shift to problem formulation could define AI's next phase.
Emerging Vocabulary
Parameters
The internal variables that the model learns during the training process. These variables form the core of the model, allowing the learning algorithm to make accurate predictions by adjusting the relationship between features and the target variable. Depending upon the algorithm, an AI model can have one or more parameters. For instance, in a deep learning neural network, the weights and biases learned during training are considered the parameters of the model.
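To make the definition concrete, here is a minimal sketch (not from any source above) of a one-neuron model with exactly two parameters – a weight and a bias – that are adjusted during training to fit the hypothetical target function y = 2x + 1:

```python
# A one-neuron "network" trained by gradient descent.
# Its two parameters (weight w, bias b) are the internal variables
# the learning algorithm adjusts during training.

def train(data, lr=0.05, epochs=500):
    w, b = 0.0, 0.0  # the model's parameters, learned from data
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b      # forward pass: prediction from parameters
            error = pred - y      # how far off the prediction is
            w -= lr * error * x   # nudge the weight to reduce the error
            b -= lr * error       # nudge the bias to reduce the error
    return w, b

# Training examples drawn from the target relationship y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # parameters converge near 2.0 and 1.0
```

Real deep learning models work the same way in principle, just with millions or billions of such parameters instead of two.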
View all emerging vocabulary entries →
Generative Art
Found out about Samar Younes on Wired. Don’t miss their Instagram feed.
Samar Younes blends generative AI with traditional craftsmanship to produce unique, futuristic artworks. By harnessing the synergy of AI with practical experience, Younes adds an extra layer of creativity and subverts established mediums. This amalgamation of technology and tradition forms distinctive, emotionally resonant artworks, broadening the horizons for the future of art and design.
Envisioning Sandbox
AI-assisted co-creation
Many are exploring approaches for combining human and artificial intelligence in group workshops and training. If this is something you are interested in, you should consider joining the upcoming public demo of our AI-driven co-creation tool, Sandbox.
Registration in English (5 September – 11 EST / 17 CET)
Registration in Portuguese (5 September – 17 BRT)
Sandbox allows workshop facilitators to integrate GPT into online or in-person exercises as "smart sticky notes". We are still learning where people's needs are, and would love to hear from people working in this space.
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends wanting to learn about AGI.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.