Qwen3-235B-A22B-Thinking-2507 (huggingface.co)
153 points by tosh 3 days ago | 65 comments
  • danielhanchen 3 days ago

    I'm making dynamic GGUFs for local inference at https://huggingface.co/unsloth/Qwen3-235B-A22B-Thinking-2507... There's also a guide to running them: https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tun...
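
    For anyone following along, here is a minimal download sketch in Python (the repo id and quant pattern are assumptions based on the truncated link above - check the model card for the exact names):

        from huggingface_hub import snapshot_download

        # Download only one dynamic quant instead of the whole repo.
        # Repo id and filename pattern below are illustrative, not verified.
        snapshot_download(
            repo_id="unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF",
            allow_patterns=["*UD-Q2_K_XL*"],
            local_dir="models/qwen3-235b-thinking",
        )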

    • DrPhish 3 days ago | parent

      I generally download the safetensors and make my own GGUFs, usually at Q8_0. Is there any measurable benefit to your dynamic quants at that quant level? I looked at your dynamic quant 2.0 page, but all the charts and graphs appear to cut off at Q4.

      • danielhanchen 2 days ago | parent

        Oh, I also upload Q8_K_XL, for example, which upcasts important layers to BF16 / F16 as well!

        Oh the blog at https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs does talk about 1, 2, 3, 4, 5, 6 and 8bit dynamic GGUFs as well!

        There definitely is a benefit to dynamically selecting different bit rates per layer - I wrote about the difference between naive and selective quantization here: https://unsloth.ai/blog/deepseekr1-dynamic

        • DrPhish 2 days ago | parent

          Thanks Daniel. I know you upload them, but I was hoping for some solid numbers on your dynamic q8 vs a naive quant. There doesn't seem to be anything on either of those links to show improvement at those quant levels.

          My gut feeling is that there's not enough benefit to outweigh the risk of putting a middleman in the chain of custody from the original model to my nvme.

          However, I can't know for sure without more testing than I have the time or inclination for, which is why I was hoping there had been some analysis you could point me to.

    • arcanemachiner 3 days ago | parent

      Would I notice a difference between the Q2_K and Q2_K_XL variants?

      • danielhanchen 3 days ago | parent

        Oh, I would always use Q2_K_XL :) It uses our dynamic methodology to quantize different layers at different bit widths, i.e. 2, 3, 4, 5, 6, or 8 bits - the more important the layer, the higher the bit width.
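
        The general shape of "more important layers get more bits" could look something like this - an illustrative sketch only, not Unsloth's actual algorithm, with made-up layer names and scores:

            def assign_bits(importance, levels=(2, 3, 4, 5, 6, 8)):
                """Map per-layer importance scores to bit widths: higher score, more bits."""
                ranked = sorted(importance, key=importance.get)  # least important first
                n = max(1, len(ranked) - 1)
                return {
                    layer: levels[min(len(levels) - 1, rank * len(levels) // n)]
                    for rank, layer in enumerate(ranked)
                }

            print(assign_bits({"attn.0": 0.9, "mlp.0": 0.2, "mlp.1": 0.4}))
            # -> {'mlp.0': 2, 'mlp.1': 5, 'attn.0': 8}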

        • Squeeze2664 3 days ago | parent

          How do you determine the importance of a layer in this case?

          • smallerize 2 days ago | parent

            https://unsloth.ai/blog/dynamic-v2

            • danielhanchen 2 days ago | parent

              Yes also https://unsloth.ai/blog/deepseekr1-dynamic, https://unsloth.ai/blog/dynamic-4bit, https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs

          • kkzz99 2 days ago | parent

            Afaik they have a test bench that they use and take the activation data from that.

            • danielhanchen 2 days ago | parent

              Yes we have around 1 to 3 million tokens of high quality self verified data that we use to calibrate models!
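
              One way activation statistics from such a calibration set could be turned into per-layer importance scores - a crude PyTorch proxy for illustration, not the actual Unsloth pipeline:

                  import torch

                  def estimate_layer_importance(model, calibration_batches):
                      """Score each Linear layer by its mean absolute activation over calibration data."""
                      stats, hooks = {}, []

                      def make_hook(name):
                          def hook(module, inputs, output):
                              stats.setdefault(name, []).append(output.detach().abs().mean().item())
                          return hook

                      # Attach a forward hook to every Linear layer.
                      for name, module in model.named_modules():
                          if isinstance(module, torch.nn.Linear):
                              hooks.append(module.register_forward_hook(make_hook(name)))

                      with torch.no_grad():
                          for batch in calibration_batches:  # batches of tokenized calibration text
                              model(batch)

                      for h in hooks:
                          h.remove()
                      return {name: sum(v) / len(v) for name, v in stats.items()}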

    • mycpuorg 2 days ago | parent

      @danielhanchen, can we use these steps to fine-tune other Qwen3 models too, like the 480B Coder or the embedding models?

      • danielhanchen 2 days ago | parent

        Oh, for finetuning - we do have some code for MoE finetuning for Qwen at https://github.com/unslothai/unsloth, but we haven't announced it yet!

    • lostmsu 2 days ago | parent

      How usable are the 1-bit and sub-1-bit quants?

      • danielhanchen 2 days ago | parent

        Oh, a reminder that 1-bit isn't actually 1-bit, but a mixture of 1 to 8 bits! I would still use the 2-bit dynamic quant - 1-bit dynamic can sometimes go into weird repetitive loops, but it still produces reasonable output.

        Larger models do better at 1-bit - for example, the 480B Coder at 1-bit actually does very well!

    • aliljet 2 days ago | parent

      I see the term 'local inference' everywhere. It's an absurd misnomer without hardware and cost defined. I can also run a coal fired power plant in my backyard, but in practice, there's no reasonable way to make that economical beyond being a toy.

      (And I should add, you are a hero for doing this work, only love in my comment, but still a demand for detail$!)

      • danielhanchen 2 days ago | parent

        The trick with llama.cpp and our dynamic quants is that you can actually offload the model to RAM or even an SSD! If GPU VRAM + RAM + SSD > the model size (say 90GB for the dynamic 2-bit quant), then it'll run well!

        I.e. you can actually run it on a local desktop or even your laptop now! You don't need a 90GB GPU, for example - say a 24GB GPU + 64GB to 128GB of RAM.

        The speeds are around 3 to 5 tokens / second, so still ok! I write more about improving speed for local devices here: https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tun...
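
        A minimal offload setup along those lines, using llama-cpp-python (the GGUF filename and layer count are placeholders to adjust for your hardware):

            from llama_cpp import Llama

            llm = Llama(
                model_path="Qwen3-235B-A22B-Thinking-2507-UD-Q2_K_XL.gguf",  # placeholder path
                n_gpu_layers=20,    # keep as many layers in VRAM as fit; -1 tries to offload all of them
                n_ctx=16384,        # reasoning traces want a generous context window
                use_mmap=True,      # layers that don't fit stay memory-mapped from RAM/SSD
            )

            out = llm("Explain mixture-of-experts in two sentences.", max_tokens=256)
            print(out["choices"][0]["text"])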

      • regularfry 2 days ago | parent

        Hardware and cost are assumed to be approximately desktop-class. If you've got a gaming rig with an RTX 4090 and 128MB RAM, you can run this if you pick the right quant.

        • cmpxchg8b 2 days ago | parent

          128MB? Quantization has come a long way!

          • danielhanchen 2 days ago | parent

            I think they mis-spoke 128GB* :)

            • regularfry 2 days ago | parent

              Wishful thinking there on my part.

              • danielhanchen 2 days ago | parent

                Though technically < 1GB is enough - one could offload it all to the SSD, albeit with very slow speeds!

  • christianqchung 2 days ago

    For what it's worth, the Qwen team misreported an ARC-AGI benchmark score for the non-thinking model by a factor of 4, which has not been explained yet. They claimed a score of 41.8% on ARC-AGI 1 [0], which is much higher than what non-chain-of-thought models have been able to achieve (GPT-4.5 got 10%). The ARC team later benchmarked it at 11% [1], which is still a high score, but not the same as 41.8%. It's still probably a significantly improved model, though.

    [0] https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507

    [1] https://x.com/arcprize/status/1948453132184494471

    • ducviet00 2 days ago | parent

      Maybe 41.8% is the score of Qwen3-235B-A22B-Thinking-2507, lol. 11% for the non-thinking model is pretty high

      • jug 2 days ago | parent

        Makes sense, it's in line with Gemini 2.5 Pro in that case. It aligns with their other results in the post.

      • christianqchung 2 days ago | parent

        They made it very clear that they were reporting that score for the non-thinking model[0]. I still don't have any guesses as to what happened here, maybe something format related. I can't see a motivation to blatantly lie on a benchmark which would very obviously be publicly corrected.

        [0] https://x.com/JustinLin610/status/1947836526853034403

    • coolspot 2 days ago | parent

      They have provided repro for the 41.8% result here: https://github.com/QwenLM/Qwen3/tree/main/eval

    • 2 days ago | parent
      [deleted]
    • mattnewton 2 days ago | parent

      Could it be the public eval set vs the private eval set the ARC team has? The public eval set is slightly easier and may have had some unintentional data leakage, since it was released before their training data cutoff.

  • sophia01 3 days ago

    If this is actually competitive with Gemini 2.5 Pro, that would be insane, especially for a truly open-weights Apache 2.0 model. Let's hope it's not too hacked to shine on benchmarks!

    • lvl155 3 days ago | parent

      Qwen3 models are solid and at such a low cost, it doesn’t hurt to pair it with something like Sonnet 4 as a check. I mean it does eliminate a lot of Claude’s “You’re absolutely right!” loops.

      • apwell23 3 days ago | parent

        > I mean it does eliminate a lot of Claude’s “You’re absolutely right!” loops.

        Not as scary as "Let me try a completely different approach". Now you have to throw out all the AI slop and start from scratch.

        • cma 2 days ago | parent

          If you aren't using source control

          • 2 days ago | parent
            [deleted]
  • pama 3 days ago

    Does anyone here have tips for the code and hardware setup to get the best per-GPU throughput on H200 or B200 hardware for large reasoning traces and inputs of around 10k–40k tokens? Is there an effort equivalent to sglang's optimization of V3/R1 throughput for this class of models?

  • sophia01 2 days ago

    Have been using this all morning for some integral-heavy math for my PhD (trying to bound certain analytically intractable integrals). It's a bit hit-or-miss: it's been able to come up with some pretty impressive bounds, but it also feels like more than half the time it does some really dumb stuff. Compared to Gemini 2.5 Pro it's pretty solid. Its thought traces are really silly sometimes, though: it'll pretend to check websites or "pull out a calculator".

  • OldfieldFund 2 days ago

    Impressive evals, but... benchmarks aren't everything.

    Put this prompt into qwen3-thinking, and then compare with gemini 2.5 pro:

    ---

    As candidates for creators, we should first address chaos. What is chaos? If for a given event X in A, all possible events can occur in B, and if such independence is universal, we are faced with chaos. If, however, event X in A limits in some way what can occur in B, a relationship exists between A and B. If X in A limits B unequivocally (we flip a switch, the lamp turns on), the relationship between A and B is deterministic. If X in A limits B in such a way that after X in A, events Y or Z can occur in B, where Y occurs 40 times out of 100 after X in A, while Z occurs 60 times, then the relationship between A and B is probabilistic.

    ---

    You have to rewrite the above acting as David Foster Wallace in 2025. Don't mention the year. Make it postmodern. Refer to current and projected events and trends. AI, robotics, etc. you have full creative control. you can make it long if you wish. change every word. make it captivating and witty. You are acting as a demiurge DFW. You need to pass the Turing test here. Sell it to the reader. Write good, high-brow fiction. Avoid phrases that are typical to LLMs/AI writers.

  • adamredwoods 2 days ago

    Interesting, Qwen won't answer questions about specific historical events (Tiananmen Square).

    • yunohn 2 days ago | parent

      Is it really that interesting to point out for every Chinese oss model release?

      • ondra 2 days ago | parent

        Yes. The original DeepSeek-R1 answered those questions just fine. The newer models seem to be much more brainwashed.

      • mceachen 2 days ago | parent

        Is it not relevant to reiterate the bias (or lobotomization) for people new to this space?

        • lurking_swe 2 days ago | parent

          No, it’s not really relevant. Should I point out that all the models from providers in the west are very “left-leaning” every time one is released? Is that helpful to the technical discussion, in any way?

          If you are using an LLM for historical knowledge, questions, or research, then the chinese censorship is relevant. Or for questions about geopolitics.

          • mattnewton 9 hours ago | parent

            If you had a specific example where the LLMs showed “left leaning” bias, then yes, it would be interesting to me.

    • OldfieldFund 2 days ago | parent

      It's made by Alibaba :)

  • tosh 3 days ago

    If the evals hold up, this is a mind-blowing new weight-to-capability ratio.

    edit: afaiu deepseek r1 was 671B with 37B active params

    • energy123 2 days ago | parent

      Am I the only one who ignores evals, unless they're holdout datasets like a new IMO competition, or at a minimum evals with a semi-private test set like ARC-AGI 2? How can we trust that these companies don't put copies of these benchmarks in their training data? They can get whatever score they want, up to 100%, easily, by just training on that data sufficiently many times.

      • christianqchung 2 days ago | parent

        There is something of a middle ground here for benchmark skepticism. Big companies wouldn't want a massive divergence between benchmarks and real performance that people could actually notice, and I'd argue for the most part that this hasn't happened too much (although above I posted a problem with Qwen and ARC). However, finetunes by random people/groups don't carry the same downside so I'm basically skeptical of all finetunes before using them for a particular case.

        • energy123 2 days ago | parent

          I don't believe these companies see their customers as being able to tell the difference between a real GPQA score and a GPQA score that's fudged upwards by 10%. Look at Elon Musk presenting himself to the world as a Path of Exile expert when in reality he likely hired an expert to level up his account while he himself is an amateur. They think we are idiots and will lie to us to capture market share and lock us into their ecosystem.

          • christianqchung 2 days ago | parent

            That's true, I certainly wouldn't be able to tell. I was thinking on the order of a 20% score vs 70%, but I realize that's not a very compelling range for my point when people are boasting about <5% shifts.

  • donedanadone 2 days ago

    Evals aside, why are American labs not able to release open-source models at the same speed?

    • ttul 2 days ago | parent

      The Chinese labs can’t compete on inference scale because they have been prevented from accessing the most efficient chips. But since training is a mere fraction of inference these days, they can at least hurt the American companies that are generating billions via inference services.

      If you can’t beat ‘em, at least pour some sand into their moat, giving China some time to perfect its own nanometer-scale fabrication. It’s a society-wide effort.

    • Eisenstein 2 days ago | parent

      They don't release such huge open weights models because people who run open weights don't have the capability to run them effectively. Instead they concentrate on models like Gemma 3 which goes from 1B to 27B, which when quantized fits perfectly into the VRAM you can get on a consumer GPU.

      • lossolo 2 days ago | parent

        > They don't release such huge open weights models because people who run open weights don't have the capability to run them effectively

        This is a naive take. There are multiple firms that can host these models for you, or you can host them yourself by renting GPUs. Thousands of firms could also host open-source models independently. They don’t release them because they fear competition and losing their competitive advantage. If it weren’t for Chinese companies open-sourcing their models, we’d be limited to using closed-source, proprietary models from the U.S., especially considering the recent LLaMA fiasco.

        • Eisenstein 2 days ago | parent

          Given the assumption that Google has Google's own interests at heart, the question isn't 'why doesn't Google release models that allow other companies to compete with them' but 'what is the reasoning behind the models they release' and that reasoning is 'for research and for people to use personally on their own hardware'.

          We should be asking why Meta released the large Llama models and why the Chinese are releasing large models. I can't figure out a reason for it except prestige.

      • regularfry 2 days ago | parent

        That shouldn't be the case here. Yes, it's memory-bandwidth-limited, but this is an MoE with 22B active parameters. As long as the whole thing fits in RAM, it should be tolerable. It's right at the limit, though.
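
        A back-of-the-envelope version of that argument (all constants are rough assumptions, not measurements):

            total_params  = 235e9   # total parameters
            active_params = 22e9    # active per token (MoE)
            bits_per_w    = 2.5     # assumed average bit width for a ~2-bit dynamic quant

            model_gb  = total_params  * bits_per_w / 8 / 1e9   # bytes -> GB
            active_gb = active_params * bits_per_w / 8 / 1e9   # weights read per token
            ram_bw_gbs = 60                                    # assumed dual-channel DDR5 bandwidth

            print(f"~{model_gb:.0f} GB model, ~{active_gb:.1f} GB read per token, "
                  f"~{ram_bw_gbs / active_gb:.0f} tok/s bandwidth-bound ceiling")
            # -> ~73 GB model, ~6.9 GB read per token, ~9 tok/s bandwidth-bound ceiling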

        • 2 days ago | parent
          [deleted]
    • bugglebeetle 2 days ago | parent

      They could, they’re just greedy, self-serving, and short-sighted. China’s doing the AI equivalent of Belt and Road to reap tremendous strategic advantages, as well as encourage large-scale domestic innovation.

  • osti 2 days ago

    For the coding benchmarks, does anyone know what are OJBench and CFEval?

  • nonhaver 3 days ago

    Impressive evals. I wonder how much of that can be attributed to the enhanced context understanding. I feel like that and context length are the bottleneck for the majority of commercial models.

    • Eisenstein 2 days ago | parent

      I don't know, I think that extending context windows is actually detrimental because people assume they can just dump things in there until it fills up. You still have to deal with the limited attention that the models have, and only filling the context with things relevant to the particular thing you are trying to solve is going to be the most effective approach. If you have too much information for it to fit into a 128K window, I think you just have too much information. The entirety of Don Quixote at over 1000 pages is less than 64,000 tokens.

      • CamperBob2 2 days ago | parent

        That sounds low by about 10x, assuming Don Quixote has 430k words (per Google).

        Still, yes, I don't know of a single model that doesn't go off the rails if you actually try to take advantage of its context length specification.

        • Eisenstein 2 days ago | parent

          Well, I loaded up Llama 3 and downloaded the novel: for the English translation we get 545,997 tokens, and in the original Spanish 653,981 tokens. So my estimate was indeed off by an order of magnitude. Thanks for the correction.
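
          The count above can be reproduced with any modern tokenizer - a sketch (the tokenizer repo is gated and the filename is a placeholder):

              from transformers import AutoTokenizer

              # Any BPE tokenizer gives the same order of magnitude; Llama 3's repo requires access approval.
              tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

              with open("don_quixote_en.txt", encoding="utf-8") as f:  # placeholder filename
                  text = f.read()

              print(len(tok.encode(text)))  # on the order of 5e5 tokens for the English translation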

  • Alifatisk 2 days ago

    Alibaba has been on fire lately, do they even sleep?

    • esafak 2 days ago | parent

      A rhetorical question, I suppose: https://en.wikipedia.org/wiki/996_working_hour_system

      • Der_Einzige 2 days ago | parent

        Folks who know how to train SOTA LLMs dictate their own working conditions. No one is doing 996 there unless they want to.