What do you make of the chaos playing out at OpenAI in the past 72 hours? Anyone following the drama is left with an interminable list of questions of how the hottest company in tech can be subject to such volatility. For those who haven’t been glued to Twitter over the weekend, Sam Altman the CEO of OpenAI was suddenly fired due to an apparent disagreement with the nonprofit board overseeing the company. After a couple of attepts at consolidating the different parties’ interest over the weekend, Satya Nadella of Microsoft intervened and has apparently offered to hire most of the OpenAI team into a new company led by Altman.
This newsletter avoids commenting on the news as I prefer spending time covering the big-picture thinking shaping the arc of development instead. OpenAI is an exceptional company with an outsized effect on the industry. Whether they were right in accelerating their approach for getting the technology into the hands of users or whether they should exercise caution is a discussion for another time. There is little point in speculating how the reorganization will unfold, and I am certain this weekend will be considered a watershed moment moving forward, regardless of what happens next.
To make sense of the individuals involved, I have selected interviews and presentations with each of the five key players, which might help you understand their various incentives and worldviews moving forward.
I started this newsletter 29 weeks ago with the intent of tracking our (inevitable) transition toward AGI. According to how you read the Twitter tea leaves, the boiling point between the OpenAI board and leadership were about the misaligned risks and incentives around the impending development of something that looks a lot like AGI. Exciting times!
MZ
PS. To everyone who requested access to Envisioning Purpose – we are working on an improved version of the interface and will share it later this week.
Meme Galore
Ilya Sutskever, OpenAI's Co-Founder and Chief Scientist, delves into the company's evolution, emphasizing their focused research on neural networks for AI applications. He outlines OpenAI's blend of top-down and bottom-up research approaches and its commitment to specific, scalable research directions. Key topics include the reliability of models, the role of open source in AI, and the capabilities and boundaries of transformer architectures. Sutskever also explores the concepts of AI autonomy, super alignment, the driving factors behind rapid AI advancement, and its societal impacts.
Strategic Direction: Emphasis on centralization and targeted research areas, ensuring efficient progress in AI.
Technological Impact: Exploration of transformer architectures and their potential to shape future AI developments.
Societal Relevance: Insight into the accelerating AI progress and its implications for society at large.
Mira Murati of OpenAI outlines her AI journey, emphasizing the role of physics and math in AI, decision-making in AI projects, and the future trajectory of AI models. She underscores the importance of aligning AI with human values and maintaining human oversight. Murati foresees AI models incorporating diverse modalities and moving towards more comprehensive models that mimic human understanding. Addressing concerns about AI misalignment, she highlights OpenAI's commitment to super alignment, reflecting a methodical approach to AI's evolution.
Human-Centric AI: Alignment with human values crucial for autonomous AI systems.
Expansive Vision: AI to encompass multiple modalities, aiming for human-like comprehension.
Alignment Challenge: OpenAI prioritizes super alignment in AI's evolution.
Sam Altman, (ex) CEO of OpenAI, in his address as the 2023 Hawking Fellow at Cambridge Union, delves into the progress and challenges of artificial intelligence (AI), focusing on the development of highly capable AI (HCAI) and artificial general intelligence (AGI). Altman discusses OpenAI's achievements with GPT models, emphasizing the balance between innovation and safety. He explores social media regulation, the need for advanced computing hardware, and integrating human rights into AI systems. Altman highlights the impact of AI on society, stressing the need for responsible development, regulatory frameworks, and aligning AI with human values.
Innovation vs. Safety: Balancing AI advancement with ensuring safety and ethical considerations is critical.
Impact on Society: AI's potential to enhance or disrupt society necessitates responsible development and regulation.
Human-Centric AI: Aligning AI with human values and rights is crucial for beneficial and equitable outcomes.
In a dialogue with Greg Brockman, co-inventor of ChatGPT, he shares insights on his path to AI, underlining the foundation of OpenAI and its societal aims. Brockman's narrative spans his early interests in math and gaming to spearheading OpenAI's development of general-purpose AI. He stresses the importance of responsibly harnessing AI in fields like education, medicine, and arts. Brockman also delves into AI’s risks and the criticality of careful progress. The conversation touches on AI's role in complementing human creativity, indicating a synergy between technological advancement and creative human endeavors.
Visionary Leadership: Brockman's journey from math enthusiast to AI pioneer underlines the diverse roots of innovation in AI.
Ethical Integration: Emphasizes responsible AI integration, balancing advancement with societal implications and risks.
Creative Synergy: Explores the intersection of AI and human creativity, suggesting a collaborative future in creative fields.
Satya Nadella, CEO of Microsoft, discusses Microsoft's transformation under his leadership, focusing on strategy rethinking in the digital era. He emphasizes the importance of organic growth, partnerships, and adapting to AI advancements in technology and business. Nadella advises a paradigm shift in application interfaces towards natural user interfaces and advocates for continued experimentation with emerging technologies. His narrative underscores Microsoft's commitment to leveraging these technologies and partnerships to bring value to the market, highlighting the company's evolving role in the technology sector.
Strategic Transformation: Nadella leads Microsoft's shift to prioritize organic growth and beneficial partnerships in the technology sector.
AI at the Forefront: Emphasizes the need for companies to innovate and adapt to the rapidly evolving role of AI in business and technology.
Interface Innovation: Advocates for natural user interfaces and continuous exploration of emerging technologies to stay ahead in the market.
Emerging Vocabulary
Adversarial Instructions
Inputs or commands deliberately designed to manipulate or exploit the weaknesses in an AI system. These can be used to test, confuse, or cause the AI to produce incorrect or unintended outcomes. The concept is akin to adversarial examples in machine learning, where slightly altered inputs cause misclassification in models. Adversarial instructions often aim to probe the limitations of AI understanding, response generation, or ethical boundaries, and can be a tool for assessing the robustness and security of AI systems.
View all emerging vocabulary entries →
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.