Finally. This was overdue. I wrote about benchmark contamination recently, and the 60% failure rate on the remaining problems is exactly the kind of data that should make everyone rethink how they evaluate coding tools.
When I compared Claude Code to Codex on a real codebase, benchmark scores were irrelevant. What mattered was: does it understand my project structure? Does it make reasonable architecture decisions? Can it handle multi-file changes without breaking things? https://thoughts.jock.pl/p/claude-code-vs-codex-real-comparison-2026
SWE-Bench Pro sounds promising. Real-world tasks with longer completion times are the right direction.