Welcome to another week of Artificial Insights, where we try to make sense of the tools shaping the future of… everything? Last week’s edition seems to have struck a nerve with many readers, some of whom reached out to comment on it. I’m a firm believer that the advent of AI (whether we call it AGI or ASI or simply stop calling it anything) is not going away. We can somewhat opt out of using these tools on a personal level, but collectively our transition into this reality is like a ratchet: with every new step of technological capacity, our expectations are squeezed towards greater reliance on the tool. There is no natural reversal of this course. We will keep moving towards increased dependence on AI. Only those who prepare by integrating AI tools into their workflows today, in order to fundamentally rethink their work moving forward, are likely to succeed.
In other words, it is not enough to automate your existing work in order to save time. Rather, you should rethink the very role and purpose of your efforts from the perspective of a future where AI delivers some, or perhaps most, of the value you currently provide to the market as a professional. What is left for you to do?
MZ
Soulful machines
What happens if we endow machines with a 'soul' through an open-source cognitive layer atop GPT? Via Gust Nogueira and Ross Dawson.
Innovative Interface: Open-source layer over GPT facilitates human-like AI interactions.
Empathy Angle: Advocates for an empathetic approach to AI, referencing cultural misinterpretations in Westworld.
Demonstration Driven: Utilizes a mock Zoom call to illustrate the technology's potential for realistic AI interactions.
Automated augmentation
This presentation and panel discussion delineate the impact of AI across various sectors, emphasizing the balance between human input and AI automation. They underscore AI's productivity boost, concerns about job displacement, and the significance of continuous reassessment as AI evolves.
Rapid Evolution: AI's accelerated development is critically reshaping productivity and job structures.
Generative Leap: Generative AI has the potential to revolutionize content production and business operations.
Human-AI Symbiosis: Continuous reassessment is needed to achieve harmonious human-AI interaction across sectors.
Brain atlas breakthrough
Joseph Ecker and Patrick Hof, leading the charge in the US-led BRAIN Initiative, unveil comprehensive brain atlases, shedding light on cellular operations and diseases like Alzheimer's. This monumental stride, akin to the Human Genome Project, builds on 21 papers, heralding a fusion of anatomy with cell functionality and marking a collaborative milestone in neuroscience with a shared vision of deciphering brain diseases.
Unveiling Intricacies: New atlases provide a granular view of brain operations at a cellular level.
Disease Discernment: Enhanced understanding of brain diseases, propelling targeted treatment approaches.
Collective Conquest: Exemplifies successful global scientific collaboration, reminiscent of the Human Genome Project.
Fine-tuning the Economy of Scale: OpenAI's Cost Edge
Vikram Sreekanti and Joseph E. Gonzalez argue that OpenAI's cost-effectiveness, stemming from economies of scale in infrastructure and service quality, outcompetes other LLM providers. The cost disparity is evident when comparing fine-tuning costs on OpenAI and AWS, showcasing the fiscal impracticality of self-hosted LLM deployments. OpenAI's stronghold potentially diminishes the viability of open-source models, despite their distinct relevance.
Cost Comparison: OpenAI's fine-tuning is 8-20x cheaper than AWS, underscoring its economic advantage.
Infrastructure Efficiency: Scalable infrastructure and service quality create a cost-effective moat for major LLM providers like OpenAI.
Open-Source Dilemma: The fiscal challenge posed to open-source models by OpenAI's cost structure demands a strategic rethink for sustainable competitiveness.
Emerging Vocabulary
Quantization
A technique for compressing data by representing it with a reduced range of values: continuous variables are converted into a finite set of discrete values, typically at uniform intervals. In deep learning, quantization reduces the memory requirements and computational cost of storing and processing models, which is crucial for running them efficiently on resource-constrained hardware such as mobile devices and embedded systems. By reducing the precision of the weights, activations, and gradients in a neural network, quantization achieves lower memory usage and faster processing.
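For a concrete feel for how this works, here is a minimal sketch of uniform 8-bit quantization in Python. It assumes NumPy is available; the function names and the scale/zero-point bookkeeping are illustrative, not any particular framework's API:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Map float32 values onto 256 evenly spaced int8 levels (affine quantization).
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / 255.0               # width of one quantization step
    zero_point = np.round(-w_min / scale) - 128   # int8 code representing 0.0
    q = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    # Recover an approximation of the original float32 values.
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(1000).astype(np.float32)  # stand-in for a weight tensor
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)

print(f"memory: {weights.nbytes} -> {q.nbytes} bytes (4x smaller)")
print(f"max rounding error: {np.abs(weights - restored).max():.5f}")
```

The worst-case rounding error is half a quantization step (scale / 2), usually negligible relative to the spread of the weights, which is why int8 quantization often preserves model accuracy while cutting memory fourfold.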
Generative Art
GPTarot
In my side job as an information designer for arcane & occult subjects, I have also been experimenting with diffusion models like DALL-E 3, and in only a couple of hours ended up creating a 22-card major arcana Tarot that embraces the AI aesthetic without losing symbolic accuracy. Would love your feedback!
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.