Unleashing a symphony of synthetic minds (011)
Welcome to Artificial Insights: your weekly review of how to better collaborate with AI.
While the southern hemisphere braces for winter, in the north it seems everyone is (or wishes they were) on summer break. This week’s edition keeps it brief and inspiring, featuring insights and links to help shape your thinking across the growing field of artificial intelligence.
Every week I spend as much time as possible watching and reading the voices of those creating and describing these fields: founders, researchers, and journalists. I am intentionally inclusive in how I scope this news, and yet I feel the walls of embedded bias closing in when, week after week, the selection of names comes up predominantly white and male. You can help diversify this publication further by sharing links to thinkers we should all be paying attention to. I am deliberately breaking out of the filter bubble wherever possible and need your help to keep pushing against the edges.
MZ
Will AI change our memories? 💭
Short video by Evan Puschak (the Nerdwriter) exploring the impact of AI tools like Generative Fill and Magic Editor on our photos and memories. While these tools give us more control of our images, they also raise questions about the accuracy and authenticity of the past. Highly recommended.
Is AI racist and antidemocratic? 🗳️
Computer scientist Timnit Gebru is one of the most critical voices against the unethical use of AI, and in this conversation she emphasizes the need for regulation to protect individuals’ privacy and ensure the safety of AI products and data collection (such as biometrics and facial recognition). Gebru also calls for a reassessment of funding structures and research processes in order to build AI tools that benefit more people.
Shoggoth, Basilisks, Paperclips & Grimes 📎
Science fiction author Bruce Sterling offers a primer on AI risk for Newsweek: the fervor around the field is producing folklore and mythical stories about the technology in pop culture. These stories symbolize the complex nature of AI, yet sometimes hide the unpredictable and uncontrollable aspects of the tools. Sterling suggests that the excitement around AI is a phase and predicts an eventual "trough of disillusionment" when the technology's weaknesses and challenges become more apparent.
Folk stories are never facts. Often they're so weird that they're not even wrong. But when people are struck to the heart—even highly technical people—they're driven to grasp at dreams of monsters.
Unreasonable AI 🧞
Blogger and entrepreneur Anil Dash writes about the escalating hype around today’s AI systems and raises concerns about their unpredictability. Dash argues that these systems contradict a fundamental principle of technology: reasonability, the ability to understand and predict a system’s outcomes consistently. These inconsistencies lead to increased risk and the potential misuse of these technologies by those who would exploit them.
The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging.
What will GPT-2030 look like? 🔮
Jacob Steinhardt makes predictions about where GPT might be heading next: specific capabilities (imagine GPT programming, calculating or processing information), increased inference speed (quicker responses), parallel copies (multiple agents working together) and knowledge sharing (different approaches for collaborating in groups and between AIs).
How can we be less surprised by developments in machine learning? Our brains often implicitly make a zeroth-order forecast: looking at the current state of the art, and adding on improvements that “feel reasonable”. But what “seems reasonable” is prone to cognitive bias, and will underestimate progress in a fast-moving field like ML.
Long reads from Substack
How people are using AI 👀
Long read on The Verge by Jacob Kastrenakes and James Vincent.
Interview
Improving our questions to AI with analytical thinking
This week’s edition features an interview with data scientist and researcher Ricardo Cappra. His work and writing focus on the use of data for decision-making and on helping organizations use data more intelligently.
What excites and concerns you about AI?
What excites me the most about the advancement of artificial intelligence is envisioning an intelligent assistant that knows my characteristics and takes on repetitive tasks in my life and work. It would be like a brain supplement capable of supporting and performing tasks with storage, processing, and speeds far superior to my biological limits. I find it fascinating to think about reproducing behavioral traits that maintain my personality in task execution, even if I'm not directly involved.
What scares me the most is the opposite effect of what fascinates me. When I die, this artificial intelligence could potentially remain alive, with my characteristics existing in digital environments, replicating my behavior as if I were still alive. This topic deeply impresses me, to the extent that I have even written an essay about it.
How can analytical thinking make us ask better questions to our autonomous systems?
Analytical thinking is a logical approach to information processing, primarily supported by the schools of critical thinking and computational thinking. Autonomous systems, when built to truly be autonomous, rely fundamentally on premises to understand what they will execute, learn, and repeat. These premises are typically structured through analytical exercises. This analytical activity requires an understanding of which variables should be considered, how the system functions, and the architecture of questions to enable incremental intelligence. Here lies the answer to your question: developed analytical capacity will work in a more structured manner, enhancing the architecture of questions and allowing for greater extraction of value from automated system executions. The best questions are often composite questions, resulting from a sequence of logical inquiries that integrate together, creating a knowledge tree on a specific subject.
How are you using AI yourself today?
Currently, I harness the power of artificial intelligence in the co-creation process. When I'm crafting a text, for instance, I rely on ChatGPT to kickstart the writing process, explore different approaches to a given topic, or find improved ways to structure the text. AI proves to be an invaluable tool for such tasks, given its ability to calculate possibilities and simulate hypotheses instantly, thereby conserving my energy in completing the task at hand. Another way I leverage AI is when studying scientific articles, particularly for analysis and synthesis. ChatPDF, a tool I use, facilitates a "dialogue with the document" through a question-and-answer system. The AI highlights parts of the text where it finds references, enabling a more targeted exploration. This resource streamlines the process of locating specific elements within extensive studies, optimizing efficiency. I believe the true contribution of AI lies in optimizing tasks that would otherwise require significant effort and repetition, thereby freeing up human capacity to engage in critical and intellectual work.
What should people know about data science to better understand AI?
Data science plays a crucial role throughout the entire process of conceiving artificial intelligence, encompassing data collection, preprocessing, processing, modeling, analysis, and data utilization. When these data are set into motion through automation, they become artificial intelligence systems. There are two ways in which data can be set into motion: through technical impulses or through rules. Technical impulses are mathematically incremental, generated by predefined models that are systematically computed and updated. Rule-based impulses, on the other hand, are determined by humans, representing the logic for the system to function. This logic is typically embedded within both the learning process and the execution of automated tasks, enabling the AI system to exhibit responsive behavior based on the established rules. Even the artificial augmentation of intelligence, as seen in generative artificial intelligence, relies on a sequence of human-made rules to determine the data and modeling to be utilized. In essence, any AI technique fundamentally depends on predetermined logic to support what we currently know as artificial intelligence. In summary, data science is the fusion of techniques, logic, and mathematics that form the behavioral codes of artificial intelligence.
Learn more about Ricardo Cappra and his recommended synthwave soundtrack.
Emerging Vocabulary
Fine-Tuning
The process that takes a pre-trained model and adapts it to a slightly different task. First, an AI model is trained on a large dataset. This model learns a lot of features and patterns from this data, which can be a very time-consuming and computationally expensive process. This model is usually trained to perform a certain task, but the features it learns can be more generally applicable. After pre-training, the model can be adapted or "fine-tuned" to perform a different task. The main idea is to leverage the features that the model has already learned during pre-training and apply them to the new task.
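The idea above can be illustrated with a minimal NumPy sketch (not any particular framework's API, and the "pre-trained" weights here are just stand-ins for an expensive earlier training run): a frozen feature extractor is reused as-is, and only a small new output head is trained on the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained layer: in practice these weights would come
# from expensive training on a large dataset. They stay frozen below.
W_pretrained = rng.normal(size=(4, 8)) * 0.3

def features(x):
    # Frozen feature extractor: reuses what was "learned" in pre-training.
    return np.tanh(x @ W_pretrained)

# A new, smaller task: classify whether the inputs sum to a positive value.
X_new = rng.normal(size=(64, 4))
y_new = (X_new.sum(axis=1) > 0).astype(float)

# Fine-tuning step: train only a fresh output head on top of the features.
head = np.zeros(8)
lr = 1.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(features(X_new) @ head)))    # sigmoid output
    grad = features(X_new).T @ (p - y_new) / len(y_new)
    head -= lr * grad                                   # update head only

preds = (features(X_new) @ head > 0).astype(float)
accuracy = (preds == y_new).mean()
print(f"fine-tuned head training accuracy: {accuracy:.2f}")
```

Because the pre-trained layer is never updated, the adaptation is cheap: only the small head is optimized, which is the core economy that fine-tuning buys in real systems (where the frozen part is a large language or vision model rather than one random layer).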
View all emerging vocabulary entries →
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends wanting to learn about AGI.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.