7 Comments

Great read!


100% hard agree on the premise - knowing what AI can't do is more useful than knowing what it can.

It turns out the Turing test is the wrong test - as humans, we naturally anthropomorphize everything. We see ChatGPT reproduce a logical argument when prompted and jump to the conclusion that it has a logical model similar to ours, rather than that it is reproducing a structure of words that shows up all over its training material. The difference is in generalization and validation. A true logical model can be used to infer novel ideas that are internally consistent, and to validate proposed ideas against that model. A statistical prediction of text can do neither.
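
To make that distinction concrete, here is a toy sketch (entirely hypothetical, and nothing like how a real LLM works internally): a bigram "language model" can only continue familiar word sequences, while even a tiny hand-built logical model can validate a novel claim against its facts and rules.

```python
from collections import defaultdict, Counter

# Statistical side: count word pairs and predict whichever word usually follows.
corpus = "socrates is a man all men are mortal therefore socrates is mortal".split()
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict_next(word):
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

# Logical side: explicit facts and rules; validation means checking entailment.
facts = {("man", "socrates")}
rules = [("man", "mortal")]  # anything that is a man is mortal

def entails(predicate, subject):
    if (predicate, subject) in facts:
        return True
    return any(post == predicate and entails(pre, subject) for pre, post in rules)

print(predict_next("socrates"))       # 'is'   -- plausible text, nothing validated
print(entails("mortal", "socrates"))  # True   -- follows from the model
print(entails("mortal", "plato"))     # False  -- no supporting facts, so rejected
```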

The same problem applies to physical models, models of complex systems, and relationships.

This doesn't devalue LLMs, but rather puts them in context. We have created a step-function increase in our ability to process natural language, but that processing is still fundamentally "flat" in that it does not truly encode dimensions other than language. It is a very, very useful tool for parsing, generating, translating, and recombining "content" of various forms, but it is not even close to AGI.

Where it does get very interesting, though, is in identifying the classes of problems that can either be reframed as "content problems" (which may well include a lot of the mechanics of writing code) *or* where "content" is a better user interface into some other structured system that may itself be able to model something other than language.
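
A minimal sketch of that second pattern, assuming a hypothetical translate_to_expression() stand-in for the language model: the "content" layer only translates the request, while a structured system (SymPy here) actually does the math and can check the answer.

```python
import sympy as sp

def translate_to_expression(question: str) -> str:
    # Hypothetical stand-in for an LLM call that turns natural language into a
    # SymPy-parsable expression; hard-coded here purely for illustration.
    return "x**2"

question = "What is the integral of x squared with respect to x?"
x = sp.symbols("x")
expr = sp.sympify(translate_to_expression(question))

answer = sp.integrate(expr, x)                      # the structured system does the math
print(answer)                                       # x**3/3
print(sp.simplify(sp.diff(answer, x) - expr) == 0)  # ...and can validate the result: True
```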

Jan 30, 2023 · edited Jan 30, 2023

I disagree that the "first step fallacy" is a fallacy. I think narrow and general intelligence are on a continuum. AlphaZero, which could learn a wide range of board games, is in some sense a bit more general than Deep Blue, which only played chess.


With regard to the Lex Tesla sensor clip, I still think a single forward-looking radar adds a superpower that normal cameras can't match: seeing through fog or through the vehicle in front would be a safety benefit. Anyway, an interesting read.


Source for the girl selling paintings art:

https://www.pixiv.net/en/artworks/100774081


If I had described GPT-3 to AI researchers three years ago, over 50% of them would have agreed that it constituted AGI.

There are a lot of details to figure out, but the progress continues to accelerate.


> we don't have AGI yet, and so the class of problems that are "AGI hard" should be assumed impossible/not worth working on without also having a plan for solving AGI.

I don't think many paradigm-shifting breakthroughs happen with the person having a concrete plan beforehand. What happens is people try, fail, learn from failure, and iterate... I recommend this article on the topic:

https://www.lesswrong.com/posts/nvP28s5oydv8RjF9E/mats-models

P vs NP and NP-hardness in complexity theory have mathematical grounding. What is "AGI hard"? Why?
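
For contrast, here is (roughly) the standard definition that gives NP-hardness its grounding; nothing comparable pins down "AGI hard":

```latex
\[
H \text{ is NP-hard} \iff \forall L \in \mathrm{NP}:\; L \le_p H
\]
```

where $L \le_p H$ means every instance of $L$ can be transformed into an instance of $H$ in polynomial time.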

> lol I asked a few friends and they seem to agree... trust our intuitions bro

He seems to have defined the alignment problem as "empathy":

> It would take empathy for an AI language tutor to determine what kind of learner I am, and adjust accordingly.

He seems not to consider the possibility that an AI can be superintelligent and dangerous to all of us without having empathy. I am confused... he clearly says that it is fine for the AI to be superintelligent as long as it is "aligned"/has "empathy". What about the converse?

I like that he made falsifiable claims about what he thinks AI can't do... but I think he has not found the least impressive thing that would make him worried. By the time AI is able to independently invent new useful math like calculus from first principles, it might be too close to the final danger.

As I recently tweeted... sure, AGI might be harder than some people who buy into the hype think. Great... but if they are wrong, it's all fine. What if you are wrong and AGI is easier than the detractors think? Are we prepared for that possibility? Or are you so sure of your model of what general intelligence is, given the evidence we have, that you know we are nowhere close...
