Discussion about this post

Tommy Leung 🇺🇸:

Great read!

KBall:

100% hard agree on the premise - knowing what AI can't do is more useful than knowing what it can.

It turns out the Turing test is the wrong test - as humans, we naturally anthropomorphize everything. We see ChatGPT reproduce a logical argument when prompted and jump to the conclusion that it holds a logical model the way we do, rather than that it is reproducing a structure of words that shows up all over its training material. The difference is in generalization and validation: a true logical model can be used to infer novel ideas that are internally consistent and to validate proposed ideas against the model. A statistical prediction of text can do neither.
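
To make that concrete, here's a minimal sketch (in Python; the propositions and variable names are made up for illustration) of what "validate proposed ideas against that model" means for a logical model: a brute-force propositional entailment check. A statistical text predictor has no equivalent operation to run; it can only emit text that looks like a valid argument.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found: the proposed idea is not entailed
    return True

# "If it rains, the ground is wet" plus "it rains" should entail "the ground is wet".
premises = [lambda e: (not e["rain"]) or e["wet"],  # rain -> wet
            lambda e: e["rain"]]                    # rain
print(entails(premises, lambda e: e["wet"], ["rain", "wet"]))               # True: valid inference
print(entails([lambda e: e["wet"]], lambda e: e["rain"], ["rain", "wet"]))  # False: wet ground doesn't imply rain
```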

The same problem applies to physical models, models of complex systems, and relationships.

This doesn't devalue LLMs, but rather puts them in context. We have created a step-function increase in our ability to process natural language, but that processing is still fundamentally "flat" in that it does not truly encode dimensions other than language. It is a very, very useful tool for parsing, generating, translating, and recombining "content" of various forms, but it is not even close to AGI.

Where it does get very, very interesting, though, is identifying the classes of problems that can either be reframed as "content problems" (which may well include a lot of the mechanics of writing code) *or* for which "content" is a better user interface into some other structured system that can itself model something other than language.
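
As a rough sketch of that second pattern (natural language as the interface, a structured system behind it), here's what it might look like in Python. The call_llm function, the schema, and the action names are all assumptions for illustration, not a real API: the model only translates a request into a structured command, and a schema check, not the model, decides whether it runs.

```python
import json

# Hypothetical schema for the structured system; names are illustrative only.
SCHEMA = {"create_invoice": {"customer_id": int, "amount_cents": int}}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; assume it returns a JSON command.
    return '{"action": "create_invoice", "args": {"customer_id": 42, "amount_cents": 1999}}'

def execute(request: str) -> str:
    cmd = json.loads(call_llm(request))
    action, args = cmd["action"], cmd["args"]
    expected = SCHEMA.get(action)
    if expected is None:
        raise ValueError(f"unknown action: {action}")
    for name, typ in expected.items():
        if not isinstance(args.get(name), typ):
            raise ValueError(f"bad or missing argument: {name}")
    # The structured system, not the model, decides what actually happens here.
    return f"OK: {action}({args})"

print(execute("Bill customer 42 for $19.99"))
```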
