Advancing AI Benchmarking with Game Arena (blog.google)
90 points by salkahfi 6 hours ago | 44 comments
  • ofirpress 5 hours ago

    This is a good way to benchmark models. We [the SWE-bench team] took the meta-version of this and implemented it as a new benchmark called CodeClash:

    We have agents implement agents that play games against each other, so Claude isn't playing against GPT; rather, an agent written by Claude plays poker against an agent written by GPT. This really tough task leads to very interesting findings on AI for coding.

    https://codeclash.ai/
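
    A toy sketch of the setup (not CodeClash's actual harness; both bot strategies are made up for illustration): two independently authored bots play iterated rock-paper-scissors, and the idea is to score the authoring models by how their bots fare.

      import random

      BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

      def bot_a(opp_history):
          """Hypothetical bot 'written by model A': counter the opponent's last move."""
          if not opp_history:
              return random.choice(list(BEATS))
          return next(m for m, beaten in BEATS.items() if beaten == opp_history[-1])

      def bot_b(opp_history):
          """Hypothetical bot 'written by model B': always plays rock."""
          return "rock"

      def match(n_rounds=1000):
          score, hist_a, hist_b = 0, [], []
          for _ in range(n_rounds):
              a, b = bot_a(hist_b), bot_b(hist_a)
              hist_a.append(a)
              hist_b.append(b)
              score += (BEATS[a] == b) - (BEATS[b] == a)  # +1 when A wins the round, -1 when it loses
          return score

      print(match())  # strongly positive: model A's bot exploits model B's bot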

    • 63stack 4 hours ago | parent

      >this really tough task leads to very interesting findings on AI for coding

      Are you going to share those with the class or?

    • RobRivera 2 hours ago | parent

      https://ai.meta.com/research/publications/gaia-a-benchmark-f...

      ?

    • Instantnoodl 5 hours ago | parent

      Cool to see Core War! I feel it's mostly forgotten by now. My dad still plays it to this day, though, and even attends tournaments.

    • riku_iki 5 hours ago | parent

      Leaderboard looks very outdated...

  • kenforthewin an hour ago

    Let's add NetHack to the mix!

    https://kenforthewin.github.io/blog/posts/nethack-agent/

  • ZeroCool2u 4 hours ago

    I'd really like to see them add a complex, open-world, fully physicalized game like Star Citizen (assuming the game itself is stable), with a single primary goal like accumulating currency. That would serve as a measure of general autonomy and a proxy for how the model might behave in the real world if given access to a bipedal robot.

  • iNic 2 hours ago

    I feel uneasy about Werewolf being included here. I don't want AI labs to actively try to make their LLMs deceptive!

  • cv5005 5 hours ago

    My personal threshold for AGI is when an AI can 'sit down' (it doesn't need robotic hands, but it must use only visual and audio inputs to make its moves) and complete a modern single-player RPG or FPS that it hasn't pre-trained on (it can train on older games).

    • anematode an hour ago | parent

      Isn't this a bit too visual-centric? By this criterion Helen Keller, author of 14 books, would not be generally intelligent.

      Ultimately I think it's impossible to define AGI. Maybe "I know it when I see it"—except everyone sees it at a different point (evidently).

      • jamilton 25 minutes ago | parent

        It could have hands that feel but no vision. I think they were getting at the idea that embodiment, and playing games in the same modality as humans without thousands of hours of play to reach competency, would be an important milestone.

    • bob1029 4 hours ago | parent

      https://arxiv.org/abs/2507.03793

  • 10xDev 5 hours ago

    If AI can program, why does it matter whether it can play chess using CoT when it can program a chess engine instead? This applies to other domains as well.

    • RivieraKid 3 hours ago | parent

      It can write a chess engine because it has read the code of a thousand chess engines. This benchmark measures a different aspect of intelligence.

      And as a poker player, I can say that poker is much more challenging for computers than chess; writing a program that plays poker really well and efficiently is still an unsolved problem.

      • 10xDev 2 hours ago | parent

        The program doesn't need to be a solver. It can be anything that helps it.

        It doesn't even need to be one tool; it can be a series of tools.

    • NitpickLawyer 4 hours ago | parent

      > If AI can program, why does it matter whether it can play chess using CoT when it can program a chess engine instead?

      Heh, we really did come full circle on this! When ChatGPT launched in Dec '22, one of the first things people noticed was that it sucked at math. Basic arithmetic like 12 + 35 would trip it up. Then people "discovered" tool use and added a calculator. And everyone was like "well, that's cheating, of course it can use a calculator, but look, it can't do the simple addition logic"... And now here we are :)
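
      A minimal sketch of that calculator-tool pattern (the CALC(...) convention and the code are made up for illustration, not any particular lab's API): instead of trusting the model's arithmetic, route the expressions it emits to a real evaluator.

        import ast
        import operator as op
        import re

        # Supported binary operators for the toy calculator.
        OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

        def safe_eval(expr):
            """Evaluate +, -, *, / arithmetic without calling eval()."""
            def walk(node):
                if isinstance(node, ast.Expression):
                    return walk(node.body)
                if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                    return OPS[type(node.op)](walk(node.left), walk(node.right))
                if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                    return node.value
                raise ValueError("unsupported expression")
            return walk(ast.parse(expr, mode="eval"))

        # Pretend the model emits a tool call instead of guessing digits.
        model_output = "The total is CALC(12 + 35)."
        final = re.sub(r"CALC\((.*?)\)", lambda m: str(safe_eval(m.group(1))), model_output)
        print(final)  # The total is 47.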

      • paxys 3 hours ago | parent

        IMO there's an expectation of baseline intelligence. I don't expect an "AGI" model to beat Magnus Carlsen out of the box, but it should be able to do basic grade-school arithmetic and play chess at a complete beginner level without resorting to external tools.

    • 10xDev 2 hours ago | parent

      I'm not going to respond to everything, but the key to my comment was "This applies to other domains as well." People are limiting their imagination to the chess engine example given for chess. The tool or program (or even other available neural networks) can be literally anything for any task... Use your imagination.

      Maybe we should just get rid of tedious benchmarks like chess altogether at this point. They lead people to think about how to limit AI in order to keep the benchmark relevant, rather than about expanding on what is already there.

    • Davidzheng 5 hours ago | parent

      They should be allowed to! In fact, I think a better benchmark would be to invent new games and test a model's ability, under compute constraints, to allocate compute to minimax/AlphaZero-style search on those new games.
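
      As a toy illustration (the game and code here are hypothetical): the kind of compact searcher you'd hope a model can produce for a game it has never seen, here a made-up Nim variant, via memoized negamax.

        from functools import lru_cache

        # Made-up game: players alternately take 1-3 stones; whoever
        # takes the last stone wins.
        def legal_moves(stones):
            return [n for n in (1, 2, 3) if n <= stones]

        @lru_cache(maxsize=None)
        def negamax(stones):
            """+1 if the player to move wins with perfect play, else -1."""
            if stones == 0:
                return -1  # previous player took the last stone and won
            return max(-negamax(stones - n) for n in legal_moves(stones))

        print(negamax(12))  # -1: multiples of 4 are losing positions
        print(negamax(10))  # +1: the player to move can force a win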

    • simianwords 4 hours ago | parent

      It's the same reason we're asked to write exams without calculators even though the real world has them.

      How you perform without a calculator is a proxy for real-world competency.

      • 10xDev 4 hours ago | parent

        Funny, you picked probably the most useless form of benchmarking we apply to people as your example of measuring "competency" in the real world.

        • doctorpangloss 4 hours ago | parent

          A lot of the insight in math comes from knowing how to do things efficiently. That's why the tests are timed. I don't know, this is pretty basic pedagogy that you are choosing to grief.

        • simianwords 4 hours ago | parent

          Are you in favour of children using calculators in exams?

          • 10xDev 4 hours ago | parent

            It is a program. I need it to get task X done and I don't care how, whether it is strictly through CoT or with tools. There is no such thing as cheating in real work and no reason to handicap it. Just test the limits of what it can do with whatever means possible.

            Trying to solve everything with CoT alone without utilising tools seems futile.

            • simianwords 4 hours ago | parent

              You're not understanding: it's a proxy for how well it does other things.

              • 10xDev 2 hours ago | parent

                A good proxy is knowing which tools to use to solve the problem, not trying to emulate how a human would play chess. That is pointless...

    • CooCooCaCha 3 hours ago | parent

      CoT is upstream of building a chess engine.

      Chess engines don't grow on trees; they're built by intelligent systems that can think, namely human brains.

      Supposedly we want to build machines that can also think, not just regurgitate things created by human brains. That’s why testing CoT is important.

      It’s not actually about chess, it’s about thinking and intelligence.

  • mclau153 2 hours ago

    Claude plays Pokemon Red

  • tiahura 6 hours ago

    How about NetHack?

    • tux3 2 hours ago | parent

      For reference for anyone who missed it, the 2021 NetHack challenge results: https://nethackchallenge.com/report.html

      That was a whole half a decade ago, but back then deep learning AIs were defeated very badly by handcrafted scripts. Even the best bot in the neural net category was actually a symbolic script/neural net hybrid.

  • eamag 6 hours ago

    Curious why they decided to curate poker hands instead of a normal poker game.

    • qsort 6 hours ago | parent

      Poker has very high variance; you'd need several hundred thousand hands to confidently say who's better. Also, you probably want to precompute the GTO-optimal play for benchmarking purposes.
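
      A rough back-of-envelope, with assumed numbers (a typical online no-limit hold'em standard deviation of about 90 bb per 100 hands, and a 5 bb/100 edge to detect):

        # How many hands to distinguish a 5 bb/100 winner from a
        # break-even player at ~95% confidence? Simple z-test sizing.
        z = 1.96       # two-sided 95% confidence
        sigma = 90.0   # assumed std dev, bb per 100 hands
        edge = 5.0     # winrate to detect, bb per 100 hands

        units_of_100 = (z * sigma / edge) ** 2
        print(round(units_of_100 * 100))  # about 124,000 hands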

      • johndhi 5 hours ago | parent

        But can't computers play several hundred thousand poker hands easily in a couple of hours?

      • eamag 5 hours ago | parent

        But now, because the curated hands are so strong, we don't see any folds.

  • simianwords 5 hours ago

    Gemini tops all benchmarks, but when it comes to real-world usage it is genuinely unusable.

    • CuriouslyC 3 hours ago | parent

      It's legit good at visual stuff. It's just not a great agent, and it does some weird stuff sometimes.

    • goniszewski 4 hours ago | parent

      It’s not that bad. I’ve been using 3 Pro for some time now and I’m quite happy with how it works. Best paired with Opus and Codex, like most models, but it’s solid as a full-stack buddy.

  • bennyfreshness 4 hours ago

    Wow. I'm generally in the AI maximalist camp, but adding Werewolf feels dangerous to me. Anyone who's played knows that lying, deceit, and manipulation are often key to winning. Do we really want models climbing this benchmark?

    • rustyhancock 3 hours ago | parent

      Oddly, in the highlighted game I watched, the werewolf simply gives up in the last round and says "I'm the werewolf, well done... vote me."

      Bizarre.

      • minihat 25 minutes ago | parent

        This is a legitimate strategy for the werewolf, no?

    • bilekas 4 hours ago | parent

      Good question, but who's going to stop them?

      AI already has a very creative imagination for role play, so this just adds to their arsenal.

    • PunchyHamster 4 hours ago | parent

      Confidently and charismatically lying to clueless users has been one of the foundations of AI adoption.

  • chaostheory 5 hours ago

    Anecdotal data point, but recently I've found Gemini performs better than ChatGPT when it comes to intent analysis.

  • PunchyHamster 4 hours ago

    Making models target a benchmark about being good at lying and getting away with it (Werewolf) is certainly an interesting choice.