Latent Space
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

RWKV: Reinventing RNNs for the Transformer Era — with Eugene Cheah of UIlicious

The international, uncredentialed community pursuing the "room temperature superconductor" of LLMs - the scalability of Transformers, without the quadratic cost - all fully open source!

The AI Engineer Summit Expo has been announced, presented by AutoGPT (and future guest Toran Bruce-Richards!). Stay tuned for more updates on the Summit livestream and Latent Space University.

This post was on HN for 10 hours.


What comes after the Transformer? This is one of the Top 10 Open Challenges in LLM Research that has been the talk of the AI community this month. Jon Frankle (friend of the show!) has an ongoing bet with Sasha Rush on whether Attention is All You Need, and the most significant challenger to emerge this year has been RWKV - Receptance Weighted Key Value models, which revive the RNN¹ for GPT-class LLMs, inspired by a 2021 paper on Attention Free Transformers from Apple (surprise!).

What this means practically is that RWKV models tend to scale much better in all directions (both training and inference) than Transformer-based open source models:

from the RWKV paper
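For intuition, here is a minimal sketch of the scaling difference. This is our own simplification, not the official RWKV implementation: the function names, the single scalar decay `w`, and the omission of RWKV's per-token "bonus" term and numerical-stability tricks are all assumptions made for brevity.

```python
import numpy as np

def causal_attention(q, k, v):
    """Standard causal self-attention: every token attends to all earlier
    tokens, so compute and memory grow as O(T^2) in sequence length T."""
    T, d = q.shape
    scores = (q @ k.T) / np.sqrt(d)
    mask = np.tril(np.ones((T, T), dtype=bool))
    scores = np.where(mask, scores, -np.inf)          # hide future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def rwkv_style_mix(k, v, w=0.5):
    """RWKV-style linear recurrence (simplified: one scalar decay rate `w`,
    no per-token bonus, no stability tricks). Each step only updates a
    fixed-size running state, so a T-token sequence costs O(T)."""
    T, d = k.shape
    num = np.zeros(d)            # decayed sum of exp(k_i) * v_i
    den = np.zeros(d)            # decayed sum of exp(k_i)
    out = np.zeros((T, d))
    decay = np.exp(-w)
    for t in range(T):
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
        out[t] = num / den       # decayed weighted average of past values
    return out

# toy usage: same shapes in, same shapes out
T, d = 8, 4
q, k, v = (np.random.randn(T, d) for _ in range(3))
print(causal_attention(q, k, v).shape, rwkv_style_mix(k, v).shape)
```

The point of the recurrence is that generating each new token only touches the fixed-size running state rather than re-reading the whole context, which is exactly the linear-vs-quadratic gap in the chart above.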

While remaining competitive on standard reasoning benchmarks:

swyx was recently in Singapore for meetings with AI government and industry folks², and grabbed 2 hours with RWKV committee member Eugene Cheah for a deep dive, the full recording of which is now up on Latent Space TV:

Today we release both the 2hr video and an edited 1hr audio version, to cater to different audiences and provide “ablation opportunities” for gauging RWKV interest.

The Eleuther Mafia?

The RWKV project is notable not merely because it poses a credible challenge to Transformer dominance. It is also a distributed, international, mostly uncredentialed community reminiscent of early-2020s EleutherAI:

  • A primarily Discord-based, pseudonymous, GPU-poor volunteer community that somehow coordinates well enough to train >10B-parameter, OPT/BLOOM-competitive models

  • Driven by the needs of its community, it is extremely polyglot (e.g. English, Chinese, Japanese, Arabic), not because it needs to beat some benchmark, but because its users want a model that works in their own languages.

  • “Open Source” in both the good and the bad way - properly Apache 2.0 licensed (not “open but restricted”), yet trained on data taken from commercially compromised sources like the Pile (where Shawn Presser’s Books3 dataset was recently taken down) and Alpaca (which draws on Steven Tey’s ShareGPT, technically against OpenAI’s TOS)

The threadboi class has loved tracking the diffusion of Transformers paper authors out into the industry:

Jeremy Howard on X: "Ouch. Google failed to keep a single one of the authors of the Transformers paper."
This is now out of date; Vaswani and Parmar left Adept, and Jones finally left Google to join David Ha to work on “nature-inspired intelligence”

But perhaps the underdog version of this is tracking the emerging EleutherAI mafia:

It will be fascinating to see how both EleutherAI and its alumni fare as they build out the future of LLMs and open source AI.

Audio Version Timestamps

Assisted by smol-podcaster. Note: timestamps differ from the 2hr YouTube version.

  • [00:05:35] Eugene's path into AI at UIlicious

  • [00:07:33] Tokenizer penalty and data efficiency of Transformers

  • [00:08:02] Using Salesforce CodeGen

  • [00:10:17] The limitations of Transformers for handling large context sizes

  • [00:13:17] RWKV compute costs compared to Transformers

  • [00:16:06] How Eugene found RWKV early

  • [00:18:52] RWKV's focus on supporting many languages, not just English

  • [00:21:24] Using the RWKV model for fine-tuning for specific languages

  • [00:24:45] What is RWKV?

  • [00:33:46] Overview of the different RWKV models like World, Raven, Novel

  • [00:41:34] Background of Blink, the creator of RWKV

  • [00:49:55] The linear vs quadratic scaling of RWKV vs Transformers

  • [00:53:29] RWKV matching Transformer performance on reasoning tasks

  • [00:54:31] The community's lack of marketing for RWKV

  • [00:57:00] The English-language bias in AI models

  • [01:00:33] Plans to improve RWKV's memory and context handling

  • [01:03:10] Advice for AI engineers wanting to get more technical knowledge

Show Notes

Companies/Organizations:

People:

Models/Papers:

Misc Notes

RWKV is not without known weaknesses: Transformers do well at reasoning because they are expressive in the forward pass, whereas the RWKV docs already note that RWKV is sensitive to prompt formatting and poor at lookback tasks. We also asked pointed questions about RWKV’s challenges in the full podcast.

¹ The Unreasonably Effective architecture strikes back!

² Let us know if a dedicated essay/podcast on AI industrial policy would be of interest, and who you’d like to hear on the topic. We don’t have policy experience, but as citizens of smaller countries we do want to offer whatever help we can.
