Show HN: TetrisBench – Gemini Flash reaches 66% win rate on Tetris against Opus (tetrisbench.com)
91 points by ykhli 14 hours ago | 36 comments
  • bubblesorting 13 hours ago

    Very cool! I am a good Tetris player (in the top 10% of players) and wanted to give brick yeeting against an LLM a spin.

    Some feedback:

    - Knowing the scoring system is helpful when going 1v1 on high score

    - Use a different randomization system; I kept getting starved for pieces like the I. True random is fine; throwing a copy of every piece into a bag and then drawing them one by one is better (7-bag; minimal sketch at the end of this list); nearly-random with some lookbehind to prevent getting a string of ZSZS is solid too (TGM randomizer)

    - Piece rotation feels left-biased, and keeps making me mis-drop, like the T pieces shift to the left if you spin 4 times. Check out https://tetris.wiki/images/thumb/3/3d/SRS-pieces.png/300px-S... or https://tetris.wiki/images/b/b5/Tgm_basic_ars_description.pn... for examples of how other games are doing it.

    - Clockwise and counter-clockwise rotation are important for human players; we can only hit so many keys per second

    - Re-mappable keys are also appreciated
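
    A minimal sketch of the 7-bag idea (plain Python, just to show the shape of it):

      import random

      def seven_bag():
          """Refill a bag with one copy of each tetromino, shuffle it, and drain it."""
          pieces = ["I", "O", "T", "S", "Z", "J", "L"]
          while True:
              bag = pieces[:]
              random.shuffle(bag)
              yield from bag

      # With this you can never go more than 12 draws without seeing any given piece.
      queue = seven_bag()
      preview = [next(queue) for _ in range(14)]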

    Nice work, I'm going to keep watching.

    • vunderba 12 hours ago | parent

      I actually grew up playing the Spectrum HoloByte version of Tetris for PC, which only lets you rotate in one direction. As a result, I ended up playing NES Tetris for years as a kid before realizing it lets you rotate clockwise / counterclockwise!

      https://en.wikipedia.org/wiki/Tetris_(Spectrum_HoloByte)

    • Keyframe 7 hours ago | parent

      Cool to hear about more randomizers. I'm not a great Tetris player, but I absolutely love the game. I put NES, 7-bag, and my own take on a randomizer (crap) in my online Tetris-like experiment https://www.susmel.com/stacky/ (you can press c for more controls or h to see shortcuts)

      One of my dream goals was to make a licensed low-lag competitive game kind of like TGM, but I heard licensing is extremely cost-prohibitive, so I kind of gave up on that goal. I remember telling someone I was ready to pony up a few tens of thousands for a license + cut, but reportedly it starts an order of magnitude higher.

    • qsort 12 hours ago | parent

      The worst thing is that the delayed auto shift is slightly off and it messes up my finesse. (I used to play competitive Tetris as well, but between getting older (worse reflexes) and vision problems I can't really play anymore. Weirdly, the finesse muscle memory still works.)

      I don't think the goal is to make a PvP simulator, it would be too easy to cheese or do weird strategies. It's mostly for LLMs to play.

      • bubblesorting 12 hours ago | parent

        Hello fellow Tetris nerd with a -sort username :)

        On the topic of reflexes decaying (I'm getting there, in my late 30s): Have you played Stackflow? It's a number go up roguelite disguised as an arcade brick stacking game, but the gravity is low enough that it is effectively turn based. More about 'deck' building, less about chaining PCs and C-Spins.

        • kinduff 7 hours ago | parent

          Stackflow looks nice! I'm a Balatro fan and I didn't know about this variant.

          By the way, kudos on your feedback. If I were OP, I would've been honored to get that kind of fine-tuning comment.

  • ykhli 11 hours ago

    Thanks for all the questions! More details on how this works:

    - Each model starts with an initial optimization function for evaluating Tetris moves.

    - As the game progresses, the model sees the current board state and updates its algorithm—adapting its strategy based on how the game is evolving.

    - The model continuously refines its optimizer. It decides when it needs to re-evaluate and when it should implement the next optimization function.

    - The model generates updated code, executes it to score all placements, and picks the best move (rough sketch of this loop below).

    - The reason I reframed this as a coding problem is that Tetris is an optimization game by nature. At first I did try asking LLMs where to place each piece at every turn, but models are just terrible at visual reasoning. What LLMs are great at, though, is coding.
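
    In simplified Python (helper names here are illustrative; the real evaluators are written and rewritten by the model itself):

      from typing import Callable, List

      Board = List[List[int]]  # 0 = empty, 1 = filled; one board per candidate placement

      def pick_move(candidates: List[Board], evaluator: Callable[[Board], float]) -> Board:
          # Run the model-written scoring function over every reachable resulting
          # board for the current piece and keep the best one.
          return max(candidates, key=evaluator)

      # The evaluator is the part the model keeps rewriting as the game evolves,
      # e.g. something as crude as "penalize filled cells the higher they sit":
      def example_evaluator(board: Board) -> float:
          rows = len(board)
          return -sum(cell * (rows - r) for r, row in enumerate(board) for cell in row)

    (candidates here would be every board reachable by dropping the current piece at each column/rotation.)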

    • dakom 4 hours ago | parent

      How does it deal with latency? Afaict remote LLMs need seconds to process, but Tetris can move much faster..

  • bityard 10 hours ago

    Looks fun, but I'm not willing to give out my email address just to play a game.

    Also, if the creator is reading this, you should know that Tetris Holdings is extremely aggressive with their trademark enforcement.

  • vunderba 12 hours ago

    Interesting but frustratingly vague on details. How exactly are the models playing? Is it using some kind of PGN equivalent for Tetris that represents an ongoing game, passing an ASCII representation, encoding the state as a JSON structure, or just directly sending screenshots of the game to the various LLMs?

    • storystarling 11 hours ago | parent

      It has to be turn-based. Even with Flash's speed, the inference latency would kill you in a real-time loop. They're likely pausing the game state after every tick to wait for the API response before resuming.

    • ykhli 11 hours ago | parent

      Answered this in a comment above! It's not turn-based or visual-layout-based, since LLMs are not trained that way. The representation is a JSON structure, and the LLMs plug in algorithms and keep optimizing them as the game state evolves.
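
      Roughly something like this (field names are illustrative, not the exact schema):

        import json

        # Illustrative only; the actual schema TetrisBench uses may differ.
        state = {
            "board": [[0] * 10 for _ in range(20)],  # 20x10 grid, 0 = empty, 1 = filled
            "current_piece": "T",
            "next_pieces": ["I", "S", "Z"],
            "lines_cleared": 3,
            "score": 1200,
        }
        print(json.dumps(state))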

      • vunderba 10 hours ago | parent

        Thanks for the clarification! Kind of reminds me of Brian Moore's AI clocks, which use several LLMs to continuously generate HTML/CSS analog clocks for comparison.

        https://clocks.brianmoore.com

        • ykhli 9 hours ago | parent

          Wow this is incredible!!

      • mhh__ 9 hours ago | parent

        I suppose you could argue about whether it's an LLM at that point, but vision is a huge part of frontier models now, no?

  • OGEnthusiast 13 hours ago

    Gemini 3 Flash is at a very nice point along the price-performance curve. It's a good workhorse model, and you can supplement it with Opus 4.5 / Gemini 3 Pro for more complex tasks.

  • augusteo 6 hours ago

    LLMs playing Tetris feels like testing a calculator's ability to write poetry. Interesting as a curiosity, but the results don't transfer to the tasks where these models actually excel.

    Curious what the latency looks like per move. That seems like the actual bottleneck here.

  • burkaman 13 hours ago

    It's actually 80% against Opus, 66% average against the 5 models it's tested with.

  • p0w3n3d 11 hours ago

    Guys, I don't know how to tell you this, but... Tetris can be solved without an LLM...

    • brookman64k 2 hours ago | parent

      Why run something in a few CPU cycles on a 40-year-old home computer when you can do the same (but worse) on a billion-dollar GPU cluster?

  • esafak 12 hours ago

    I imagine this is because Tetris is visual and the Gemini models are strong visually.

    • bogtog 12 hours ago | parent

      I figure OP would try to give the models pure text forms of the game?

      .....
      l....
      l....
      l.ttt
      l..t.

  • arendtio 13 hours ago

    There are some concepts clashing here.

    I mean, if you let the LLM build a Tetris bot, it would be 1000x better than what the LLMs are doing. So yes, it is fun to win against an AI, but to be fair, against such processing power you should not be able to win. It is only possible because LLMs are not built for such tasks.

    • i_cannot_hack 11 hours ago | parent

      Fun fact: Humans were not built for playing Tetris either!

    • westurner 11 hours ago | parent

      Task: play tetris

      Task: write and optimize a tetris bot

      Task: write and safely online optimize a tetris bot with consideration for cost to converge

      openai/baselines (7 years ago) was leading on RL, and then came AlphaZero and Self-Attention Transformer networks.

      LLMs are trained with RL, but aren't general purpose game theoretic RL agents?

  • tiahura 11 hours ago

    I'd like to see a nethackbench.

  • segmondy 10 hours ago

    ... and what does this prove? Based on this tetrisbench, what would you decide to use one LLM over another for, besides playing Tetris?

  • indigodaddy 10 hours ago

    Is there a tl;dr on why this is? Does it just make faster decisions?

    • ykhli 3 hours ago | parent

      My unvalidated theory is that this comes down to the coding model's training objective: Tetris is fundamentally an optimization problem with delayed rewards. Some models seem to aggressively over-optimize toward near-term wins (clearing lines quickly), which looks good early but leads to brittle states and catastrophic failures later. Others appear to learn more stable heuristics like board smoothness, height control, and long-term survivability, even if that sacrifices short-term score.

      That difference in objective bias shows up very clearly in Tetris, but is much harder to notice in typical coding benchmarks. Just a theory, though, based on reviewing game results and logs.
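
      By "stable heuristics" I mean the classic hand-rolled Tetris board features, something like this (illustrative sketch, not the benchmark's actual evaluators):

        # Classic Tetris board features (aggregate height, holes, bumpiness).
        # Illustrative only; each model writes and rewrites its own version of this.
        def board_features(board):
            """board: list of rows, top row first; 0 = empty, 1 = filled."""
            rows, cols = len(board), len(board[0])
            heights = [next((rows - r for r in range(rows) if board[r][c]), 0) for c in range(cols)]
            holes = sum(
                1
                for c in range(cols)
                for r in range(rows - heights[c] + 1, rows)  # cells below the column's top block
                if board[r][c] == 0
            )
            bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
            return {"aggregate_height": sum(heights), "holes": holes, "bumpiness": bumpiness}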

  • purplecats 11 hours ago

    watch link?

  • akomtu 13 hours ago

    It would be more interesting to make it build a chess engine and compare it against Stockfish. The chess engine should be a standalone no-dependencies C/C++ program that fits in NNN lines of code.

    • vunderba 11 hours ago | parent

      My back-of-the-envelope guess would be that 99% of LLMs given the task to build a chess engine would probably just end up implementing a flavor of negamax and calling it a day.

      https://en.wikipedia.org/wiki/Negamax
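
      For anyone unfamiliar, the core of it is tiny (sketch only; move generation and evaluation are game-specific stand-ins):

        import math

        def negamax(state, depth, color, moves, apply_move, evaluate):
            """Bare-bones negamax; moves(), apply_move(), and evaluate() are stand-ins."""
            legal = moves(state)
            if depth == 0 or not legal:
                return color * evaluate(state)  # evaluate() scores from the first player's view
            return max(
                -negamax(apply_move(state, m), depth - 1, -color, moves, apply_move, evaluate)
                for m in legal
            )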

    • gpm 12 hours ago | parent

      Comparing against Stockfish isn't fair. That's comparing against enormous amounts of compute spent experimenting with strategies, training neural nets, etc.

      It will lose so badly there will be no point in the comparison.

      Besides, you could compare models (and harnesses) directly against each other.

      • akomtu 6 hours ago | parent

        Stockfish is a good reference point, an objective measure of how far LLMs have advanced.

        • mikkupikku 2 hours ago | parent

          It's not. Maybe if you used old versions of Stockfish that predate the neural-net methods in current versions; otherwise you'd be comparing hand-rolled (by an LLM) position evaluation functions against an NNUE, and the result of that is a foregone conclusion: Stockfish will stomp it every time.

          Maybe that's the result you want for some sort of rhetorical reason, but it would nonetheless not be an informative test.

    • ykhli 9 hours ago | parent

      oh that is super interesting. ty for the idea!