That HackerNews thread is an interesting one. It reflects a lot of the misconceptions I've seen about LLMs and GAI in general, resulting in a lot of expensive, bloated, over-engineered solutions to deterministic problems that would be much more efficiently (and accurately) solved with structured programming. But because the authors can't really program that well, they turn to an LLM to fill the skill gaps.

That's fine for a demo or something quick and dirty. But it is fishing with dynamite.

Right now, LLM business models don't extend much beyond "sell more cloud services" for the infrastructure providers. To the extent LLM use veers away from that subsidization, users and startups are going to bleed out of pocket for these inefficient solutions.

Another case in point: I found a fully remote company seeking an executive to lead their GAI/LLM-based education product/service. They used a third-party cloud-based recruiting application management and pre-filtering provider (Crossover). Out of curiosity, I stepped through their GAI skill badging tests to gauge my own skills at prompt engineering.

Instead I was dismayed by the construction of the tests. Applicants were asked to use ChatGPT engines to do what were essentially text transformations/ETLs on text-based information and to construct basic if-then-else logic flows from problem instructions -- all things that could have been done easily and efficiently with existing programming languages. Applying LLMs to those problems is a gross waste and misuse of the technology -- not to mention a less deterministic one, when what you need is a deterministic output.
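
To make that concrete, here's a minimal sketch of the kind of task involved -- a deterministic text transform plus if/then/else routing -- written in plain Python rather than prompted through an LLM. The record format, thresholds, and names are hypothetical, not taken from the actual test:

```python
# Hypothetical example: a deterministic text transformation (ETL-style) plus
# basic if/then/else routing, the sort of task better handled by code than an LLM.

def normalize_record(line: str) -> dict:
    """Split a 'name,score' line and coerce the score to an int."""
    name, score = (field.strip() for field in line.split(","))
    return {"name": name.title(), "score": int(score)}

def classify(record: dict) -> str:
    """Route the transformed record through simple if/then/else logic."""
    if record["score"] >= 90:
        return f"{record['name']}: pass with distinction"
    elif record["score"] >= 60:
        return f"{record['name']}: pass"
    else:
        return f"{record['name']}: fail"

raw_lines = ["ada lovelace, 95", "charles babbage, 72", "joe bloggs, 40"]
for line in raw_lines:
    print(classify(normalize_record(line)))
```

This runs the same way every time, costs nothing per call, and is trivially testable -- none of which holds when the same logic is pushed through a generative model.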

There's a maturity stage more of us will need to reach, where we recognize that throwing every problem at an LLM can be a really bad, buggy, and expensive idea. Right now, Maslow's law of the instrument is a red flag that someone doesn't really know what's under the covers and hasn't considered the tradeoffs. As a result, the tests reflected an organization that seemed clueless about how to usefully apply GAI to their problem space... which is not the impression they likely wanted to give.
