6 Comments
Archimedes

You are on target again. Model labs are the brain; agent labs are the arms and legs that move AI. They need each other but serve completely different functions. Keep up your great work!

David Stark

This article is not one of your best. It lacks organization and you throw in a lot of extraneous references that only confuse the reader. (I offer this as constructive feedback, not as an insult to your abilities.)

Hashim Warren

I stopped reading and went to the comments to see if anyone was thinking the same thing.

Not a slam to you swyx, you're great at iterating your thinking in public. I think you may want to revisit this one

Latent.Space

appreciate the reading and support anyway - i do try to always "leave it all out on the field" and this one was no exception. could def have used some more revision time, but was juggling AIE alongside this (although I realize of course that every piece of work should stand on its own as good as it can be).

Austin King

Model labs finetune with RL using some harness (in the context of coding). If you switch out that harness at inference time, it seems reasonable to assume on first principles that you'd see worse results, and as RL scaling increases, you'd expect this effect to grow.

Does this dynamic imply Agent Labs are structurally designed to fail? Curious to hear your thoughts.

Latent.Space

they could indeed very well fail, but there are certainly a bunch of companies now trying, and also some people who believe they may end up better businesses in the medium term than model labs!
