Maybe he is right that LLMs are a dead end, maybe even the transformer architecture as a whole, but the attention mechanism isn't. It's the crucial, ingenious part of today's AI, and it could well be reused to build world models that don't just talk about things but actually experience them.
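For context, the mechanism in question is scaled dot-product attention: each query scores every key, and the output is a softmax-weighted mixture of the values. A minimal NumPy sketch, separate from the rest of the transformer stack (all shapes and names here are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # one distribution over keys per query
    return weights @ V                  # weighted mixture of the values

# Toy usage: 3 queries attending over 4 key/value pairs.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 8)), rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
print(attention(Q, K, V).shape)  # (3, 16)
```

Nothing in this lookup is specific to next-token prediction, which is why the same mechanism can in principle attend over states of a learned world model rather than over text tokens.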
Very little content on the actual thing he says will replace LLMs: world models.
“If you are a Ph.D. student in AI, you should absolutely not work on LLMs.”