2 Comments

Great article!

Great article! But to what extent is it not possible to make the argument in reverse? The fact that LLMs suck at language (as you illustrate with the arcane prompting), but can still create valuable results in the right context, could also point to immense potential value in each small improvement in language ability, right? MJv4 didn't have a fundamental improvement in language ability, just fine-tuning on user results. But it still led to a significantly better model.
