Universal Reasoning Model (53.8% pass 1 ARC1 and 16.0% ARC 2) (arxiv.org)
87 points by marojejian 11 hours ago | 11 comments
  • amluto an hour ago

    This design implicitly does something similar to something that I sometimes think conventional transformers should try: allowing later layers to query the KV data from earlier layers. As far as I can tell, with a conventional transformer, if a later (and presumably higher-level-thinking) layer wants to take input from earlier tokens from something lower down, it needs to get it from the output and “remember” it by itself instead of just reading it directly.

    But suppose an extra attention head were added that queried the KV data from lower layers. At the very least, I imagine this might cleanly solve the STRAWBERRY problem: whatever layer has figured out that the prompt wants to count instances of R could attend to lower layers that actually perceive those Rs.
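
    A minimal sketch of that idea, purely as an illustration (this module, its names, and its shapes are made up, not from the paper): a single extra head in a late layer whose queries come from the current hidden state but whose keys/values come from a cached earlier layer's activations.

      # Hypothetical cross-layer "read" head -- illustrative only.
      import torch
      import torch.nn.functional as F
      from torch import nn

      class CrossLayerReadHead(nn.Module):
          def __init__(self, d_model: int, d_head: int = 64):
              super().__init__()
              self.q = nn.Linear(d_model, d_head, bias=False)  # queries from the late layer
              self.k = nn.Linear(d_model, d_head, bias=False)  # keys from the cached early layer
              self.v = nn.Linear(d_model, d_head, bias=False)  # values from the cached early layer
              self.o = nn.Linear(d_head, d_model, bias=False)

          def forward(self, h_late, h_early, causal_mask=None):
              # h_late, h_early: (batch, seq, d_model); h_early is the cached
              # activation of a lower layer (e.g. the one that "perceives" the Rs).
              q, k, v = self.q(h_late), self.k(h_early), self.v(h_early)
              scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
              if causal_mask is not None:
                  scores = scores.masked_fill(causal_mask, float("-inf"))
              attn = F.softmax(scores, dim=-1)
              return h_late + self.o(attn @ v)  # residual add back into the late layer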

    • krackers 10 minutes ago | parent

      > extra attention head were added that queried the KV data from lower layers

      Isn't this sort of similar to latent looping? E.g. [1]. But actually, as [2] argues, even that wasn't a good experiment, because it used the very last hidden state, which is too close to the logits and loses most of the rich embedding structure. Perhaps you don't even need access to the state of anything except the penultimate hidden layer, since, based on my vague reading of [3], the residual stream doesn't "lose information" as it passes deeper through the attention layers; each block may just manipulate a different subspace of the residual stream.

      [1] https://arxiv.org/abs/2412.06769

      [2] https://snimu.github.io/2025/03/30/multi-layer-language-head...

      [3] https://news.ycombinator.com/item?id=45758093
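
      A rough, hedged sketch of the latent-looping mechanism being discussed (in the spirit of [1], heavily simplified; the function name, the HuggingFace-style interface, and the choice of read layer are assumptions on my part):

        # Feed a hidden state back as the next input embedding instead of
        # sampling and re-embedding a token. Which layer to read from
        # (last vs. penultimate, per the point from [2]) is a knob here.
        import torch
        from torch import nn

        def latent_loop(model: nn.Module, embeds: torch.Tensor,
                        n_latent_steps: int, read_layer: int = -2) -> torch.Tensor:
            # embeds: (batch, seq, d_model); `model` is assumed to accept
            # inputs_embeds and return per-layer hidden states, as
            # HuggingFace causal LMs do with output_hidden_states=True.
            for _ in range(n_latent_steps):
                out = model(inputs_embeds=embeds, output_hidden_states=True)
                latent = out.hidden_states[read_layer][:, -1:, :]  # (B, 1, d_model)
                embeds = torch.cat([embeds, latent], dim=1)        # append as the next "token"
            return embeds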

  • marojejian 11 hours ago

    Sounds like a further improvement in the spirit of HRM & TRM models.

    Decent comment via x: https://x.com/r0ck3t23/status/2002383378566303745

    I continue to be fascinated by these architectures that:

    - Build recurrence / inference scaling into transformers more natively.
    - Don't use full recurrent gradient traces, and succeed not just despite that, but because of it.

  • Moosdijk 8 hours ago

    Interesting. Instead of running the model once (flash) or multiple times (thinking/pro) in its entirety, this approach seems to apply the same principle within one run, looping back internally.

    Instead of big models that “brute force” the right answer by knowing a lot of possible outcomes, this model seems to come to results with less knowledge but more wisdom.

    Kind of like having a database of most possible frames in a video game and blending between them instead of rendering the scene.

    • omneity 6 hours ago | parent

      Isn’t this in a sense an RNN built out of a slice of an LLM? If true, that means it might have the same drawbacks, namely slowness to train, but also the same benefits, such as an endless context window (in theory).

      • ctoa 5 hours ago | parent

        It's sort of an RNN, but it's also basically a transformer with shared layer weights. Each step is equivalent to one transformer layer, so the computation for n steps is the same as the computation for a transformer with n layers.

        The notion of a context window applies to the sequence, and the recurrence doesn't really affect that: each iteration sees and attends over the whole sequence.
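
        As a toy illustration of that equivalence (not the paper's code; the shapes and step count are arbitrary): applying one block n times with shared weights costs the same as an n-layer transformer, and every step attends over the full sequence.

          import torch
          from torch import nn

          d_model, n_heads, n_steps = 256, 4, 8
          block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

          x = torch.randn(2, 128, d_model)  # (batch, seq, d_model)
          h = x
          for _ in range(n_steps):          # an "RNN over depth": same weights every step
              h = block(h)                  # full self-attention over all 128 positions each time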

    • nl 4 hours ago | parent

      > Instead of running the model once (flash) or multiple times (thinking/pro) in its entirety

      I'm not sure what you mean here, but there isn't a difference in the number of times a model runs during inference.

  • mysterEFrank 7 hours ago

    I'm surprised more attention isn't paid to this research direction, and that nobody has tried to generalize it for example by combining the recurrence concept with next token prediction. That said, despite the considerable gains, this seems to be just some hyperparameter tweaking rather than a foundational improvement.

    • in-silico 2 hours ago | parent

      > nobody has tried to generalize it for example by combining the recurrence concept with next token prediction

      Here you go: https://arxiv.org/abs/2502.05171

    • whiplash451 6 hours ago | parent

      Not just hyperparameter tweaking. Not foundational research either. Rather, engineering improvements that compound with each other (conswiglu layers, Muon optimizer).

  • E-Reverance 2 hours ago

    It should be noted that these are NOT the official scores on the private evaluation set.