Happy 2024 and welcome to the second year of Artificial Insights.
The holidays are, for me, the best time for focused work, and I spent the last few weeks immersed in my favorite code editor typing around the clock. The primary output was a new personal website – it’s always been important for me to keep an online presence, and I’ve blogged on different platforms since forever. Moving between platforms over the years meant there was no cohesive place to keep track of my ideas, so I figured a new blog with posts spanning nearly 20 years would be the ultimate holiday gift for myself (and my readers).
In terms of AI news, little was announced in the quiet weeks. I spent a lot of time thinking about what this newsletter should be about. What do I enjoy learning, and what kind of writing is currently underserved? It feels like everything that can be said about AI is being said, and I often stop to question whether it even makes sense to keep this format going in such a crowded space.
Yet here we are, with another edition and renewed interest in keeping the flow. After a brief hiatus, welcome back to Artificial Insights, and thanks for reading!
MZ
Can Artificial Intelligence become conscious?
Spectacular lecture by philosopher and cognitive scientist Joscha Bach at the Chaos Communication Congress about synthetic sentience. Almost impossible to summarize, and I highly recommend watching the whole thing when you have time.
What does it take to create a mind?
Stuff we figured out about AI in 2023
2023 was a breakthrough year for LLMs, and Simon Willison has been doing an incredible job keeping track of developments on his blog. Here is an excellent overview of what we learned about AI in the last year.
On the one hand, we keep on finding new things that LLMs can do that we didn’t expect—and that the people who trained the models didn’t expect either. That’s usually really fun! But on the other hand, the things you sometimes have to do to get the models to behave are often incredibly dumb.
Turquoise Automated Driving Lights
This caught my eye: Mercedes got approval for turquoise automated driving lights when the Level 3 driving system is active. The car's lights will glow turquoise so other drivers know you're allowed to be distracted. Glimpses of futures.
What's the coolest non-standard application of LLMs you've seen?
I love these – here is a Hacker News thread about non-standard uses of AI in people’s workflows. Lots of automation and learning.
Interview with Yann LeCun
I’ve been enjoying interviews with Yann LeCun lately and recommend this conversation with Steven Levy in last month’s WIRED, full of insights into how Meta is using AI.
How do you define AGI?
I don't like the term AGI because there is no such thing as general intelligence. Intelligence is not a linear thing that you can measure. Different types of intelligent entities have different sets of skills.
If Artificial Insights makes sense to you, please help us out by:
Subscribing to the weekly newsletter on Substack.
Following the weekly newsletter on LinkedIn.
Forwarding this issue to colleagues and friends.
Sharing the newsletter on your socials.
Commenting with your favorite talks and thinkers.
Artificial Insights is written by Michell Zappa, CEO and founder of Envisioning, a technology research institute.
You are receiving this newsletter because you signed up on envisioning.io.
That Hacker News thread is an interesting one. It reflects a lot of the misconceptions I've seen about LLMs and GAI in general, which result in expensive, bloated, over-engineered solutions for deterministic problems that would be much more efficiently (and accurately) solved with structured programming. But because the authors can't really program that well, they turn to an LLM to fill the skill gaps.
That's fine for a demo or something quick and dirty. But it is fishing with dynamite.
Right now LLM business models don't exist much beyond "sell more cloud services" from infrastructure providers. To the extent LLM use veers away from that subsidization, users and startups are going to bleed out of pocket for these inefficient solutions.
Another case in point: I found a fully remote company seeking an executive to lead their GAI/LLM-based education product/service. They turned to a third-party cloud-based recruiting application-management and pre-filtering provider (Crossover). Out of curiosity, I stepped through their GAI skill-badging tests to gauge my own GAI and prompt-engineering skills.
Instead I was dismayed by the construction of the tests. Applicants were asked to use ChatGPT to do what were essentially text transformations/ETLs on text-based information and to construct basic if-then-else logic flows from problem instructions. All things that could have been done easily and efficiently with existing programming languages. Applying LLMs to them is a gross waste and misuse of the technology, not to mention a less deterministic one when you need an output that is deterministic.
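To make the point concrete, here is a hypothetical sketch (my own example, not one of Crossover's actual test tasks) of the kind of text transform and if-then-else filtering involved, done in a few lines of deterministic Python:

```python
import csv
import io

# Hypothetical example: normalize names and filter rows in CSV text.
# A deterministic transform like this needs no LLM: same input, same
# output, every time, at essentially zero cost.
raw = "name,score\nada lovelace,92\ngrace hopper,88\nalan turing,73\n"

reader = csv.DictReader(io.StringIO(raw))
passed = [
    {"name": row["name"].title(), "score": int(row["score"])}
    for row in reader
    if int(row["score"]) >= 80  # the "if-then-else logic flow", in code
]

for person in passed:
    print(f"{person['name']}: {person['score']}")
```

Same input, same output, no token costs, and trivially testable – exactly what you want when the output has to be right.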
There's a maturity stage more of us will need to reach, where we recognize that throwing every problem at an LLM can be a really bad, buggy, and expensive idea. Right now, Maslow's law of the instrument is a red flag that someone doesn't know what's under the covers and hasn't considered the tradeoffs. As a result, the tests reflected an organization that seemed clueless about how to usefully apply GAI to their problem space... which is not the impression they likely wanted.