DeepSeek releases open-weights math model with IMO gold medal performance (huggingface.co)
112 points by victorbuilds 3 hours ago | 35 comments
  • victorbuilds 2 hours ago

    Notable: they open-sourced the weights under Apache 2.0, unlike OpenAI and DeepMind whose IMO gold models are still proprietary.

    • SilverElfin 2 hours ago | parent

      If they open source just the weights and not the training code and data, then it's still proprietary.

      • very_illiterate an hour ago | parent

        Stop kvetching and read the submission title.

      • ekianjo 26 minutes ago | parent

        It's just open weights; "source" has no place in this expression.

      • mips_avatar 2 hours ago | parent

        Yeah but you can distill

        • amelius 2 hours ago | parent

          Is that the equivalent of decompiling?

          • c0balt 2 hours ago | parent

            No, that is the equivalent of lossy compression.
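
            To make the analogy concrete, here's a minimal sketch of distillation in plain Python (everything here is illustrative, not any real distillation pipeline). The student only ever sees the teacher's outputs, so it recovers an approximation of the behaviour, never the original internals:

            ```python
            import math
            import random

            def softmax(logits):
                exps = [math.exp(l) for l in logits]
                total = sum(exps)
                return [e / total for e in exps]

            # Teacher: a model we can only query, not inspect (e.g. behind an API).
            def teacher_logits(x):
                return [2.0 * x, -1.0 * x, 0.5 * x]  # stand-in logits for 3 classes

            # Student: a small set of weights trained to mimic the teacher's outputs.
            student_w = [random.uniform(-1, 1) for _ in range(3)]

            lr = 0.05
            for _ in range(5000):
                x = random.uniform(-1, 1)
                t = softmax(teacher_logits(x))
                s = softmax([w * x for w in student_w])
                # Cross-entropy gradient w.r.t. a student logit is (s_i - t_i);
                # the chain rule through logit_i = w_i * x adds a factor of x.
                for i in range(3):
                    student_w[i] -= lr * (s[i] - t[i]) * x

            # The student's logit differences approach the teacher's (softmax is
            # shift-invariant), but the teacher is never recovered exactly: lossy.
            print(student_w)
            ```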

      • falcor84 2 hours ago | parent

        Isn't that a bit like saying that if I open source a tool, but not a full compendium of all the code I read that led me to develop it, then it's not really open source?

        • KaiserPro 33 minutes ago | parent

          No, it's like releasing a binary. I can hook into it and its API and make it do other things, but I can't rebuild it from scratch.

        • nextaccountic an hour ago | parent

          No, it's like saying that even if you release under the Apache license, it's not open source, despite the Apache license being an open source license.

          For something to be open source, it needs to have its sources released. Sources are the things in the preferred format for editing. So the code used for training is obviously source (people can edit the training code to change something about the released weights). So is the training data, under the same rationale: people can select which data is used for training to change the weights.

        • exe34 17 minutes ago | parent

          "open source" as a verb is doing too much work here. are you proposing to release the human readable code or the object/machine code?

          if it's the latter, it's not the source. it's free as in beer. not freedom.

        • fragmede 2 hours ago | parent

          No. In that case, you're providing two things: a binary version of your tool, and the tool's source. That source is available for others to inspect and to build their own copy from. However, given just the weights, we don't have the source, and can't inspect what alignment went into the model. In the case of DeepSeek, we know they had to purposefully train their model to treat Tiananmen Square as something it shouldn't discuss. But without the source used to create the model, we don't know what else is lurking inside it.

          • NitpickLawyer an hour ago | parent

            > However, given just the weights, we don't have the source

            This is incorrect, given the definitions in the license.

            > (Apache 2.0) "Source" form shall mean *the preferred form for making modifications*, including but not limited to software source code, documentation source, and configuration files.

            (emphasis mine)

            In LLMs, the weights are the preferred form for making modifications. Weights are not compiled from something else: you start with the weights (randomly initialised), and at every step of training you adjust the weights. That is not akin to compilation, for many reasons, both theoretical and practical.

            In general, licenses do not give you rights over the "know-how" or "processes" by which the licensed parts were created. What you get is the ability to inspect, modify, and redistribute the work as you see fit. And most importantly, you modify the work just like the creators modify the work (hence "the preferred form"). Just not with the same data (i.e. you can modify the source of Chrome all you want, just not with the know-how of a Google engineer; a license cannot offer that).

            This is also covered in the EU AI act btw.

            > General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models under the condition that the original provider of the model is credited, the identical or comparable terms of distribution are respected.

            • fragmede an hour ago | parent

              > In LLMs, the weights are the preferred form for making modifications.

              No, they aren't. We happen to be able to do things to modify the weights, sure, but why would any lab ever train something from scratch if editing weights were the preferred form?

              • NitpickLawyer an hour ago | parent

                Training is modifying the weights. How you modify them is not the object of a license, and never has been.

                • v9v an hour ago | parent

                  Would you accept the argument that compiling is modifying the bytes in the memory space reserved for an executable?

                  I can edit the executable at the byte level if I so desire, and this is also what compilers do, but the developer would instead be modifying the source code to make changes to the program and then feed that through a compiler.

                  Similarly, I can edit the weights of a neural network myself (using any tool I want) but the developers of the network would be altering the training dataset and the training code to make changes instead.

                  • NitpickLawyer 18 minutes ago | parent

                    I think the confusion for a lot of people comes from what they imagine compilation to be. In LLMs, the process is this (simplified):

                    define_architecture (what the operations are, and the order in which they're performed)

                    initialise_model(defined_arch) -> weights. Weights are "just" hardcoded values. Nothing more, nothing less.

                    The weights are the result of the arch, at "compile" time.

                    optimise_weights(weights, data) -> better_weights.

                    ----

                    You can, should you wish, totally release a model right after initialisation. It would be a useless model, but, again, the license does not deal with that. You would have the rights to run, modify, and release the model, even if it were a random model.

                    tl;dr: Licenses deal with what you can do with a model. You can run it, modify it, redistribute it. They do not deal with how you modify it (i.e. what data you use to arrive at the "optimal" hardcoded values). See also my other reply with a simplified code example.
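
                    To make those steps concrete, a toy version in plain Python (the names and the tiny architecture are illustrative; real training runs the same kind of loop at enormous scale):

                    ```python
                    import random

                    # define_architecture: the operations and their order. Here: y = w * x + b.
                    def forward(weights, x):
                        w, b = weights
                        return w * x + b

                    # initialise_model: weights start as random hardcoded values; nothing is compiled.
                    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]

                    # optimise_weights: training adjusts those same values in place, step by step.
                    def optimise_weights(weights, data, lr=0.01):
                        for x, target in data:
                            grad = forward(weights, x) - target  # gradient of 0.5 * error**2
                            weights[0] -= lr * grad * x
                            weights[1] -= lr * grad
                        return weights

                    # Even the freshly initialised (useless) model is already runnable and releasable.
                    data = [(x / 10, 2 * x / 10) for x in range(-10, 11)] * 200  # samples of y = 2x
                    weights = optimise_weights(weights, data)
                    print(weights)  # close to w=2, b=0: still "just" hardcoded values
                    ```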

                • noodletheworld an hour ago | parent

                  > And most importantly, you modify the work *just like the creators* modify the work

                  Emphasis mine.

                  Weights are not open source.

                  You can define terms to mean whatever you want, but fundamentally, if you cannot modify the "output" the way the original creators could, it's not in the spirit of open source.

                  Isn't that literally what you said?

                  How can you possibly claim both a) that you can modify it the way the creators did, and b) that that's all you need to be open source, but…

                  Also c) the categorically incorrect assertion that the weights allow you to do this?

                  Whatever, I guess, but your argument is logically wrong and philosophically flawed.

                  • NitpickLawyer 35 minutes ago | parent

                    > Weights are not open source.

                    If they are released under an open source license, they are.

                    I think you are confusing two concepts. One is the technical ability to modify the weights; that's what the license grants you: the right to modify. The second is the know-how for modifying the weights toward a given end. That is not something a license has ever granted you.

                    Let me put it this way:

                    ```python
                    # A hardcoded value; how it was chosen is not part of the source.
                    THRESHOLD = 0.73214

                    if float(input()) < THRESHOLD:
                        print("low")
                    else:
                        print("high")
                    ```

                    If I release that piece of code under Apache 2.0, you have the right to study it, modify it, and release it as you see fit. But you do not get the right (at least, the license doesn't deal with that) to know how I reached that threshold value. And my not telling you does not in any way invalidate the license being Apache 2.0. That's simply not something licenses do.

                    In LLMs, the source is a collection of architecture (when and how to apply the "ifs"), inference code (how to optimise the computation of the "ifs"), and hardcoded values (the weights). You are being granted a license to run, study, modify, and release those hardcoded values. You do not, never have, and never will, within the scope of a license, get the right to know how those hardcoded values were reached. The process by which they were found can be anything from "dreamt up" to "found via ML". The fact that you don't know how they were derived does not in any way preclude you from exercising the rights under the license.

                    • roblabla 11 minutes ago | parent

                      You are fundamentally conflating releasing a binary under an open source license with the software being open source. Nobody is saying that they're violating the Apache 2.0 license by not releasing the training data. What people are objecting to is calling this release "open source" when the only thing covered by the open source license is the weights; that is an abuse of the meaning of "Open Source".

                      To give you an example: I can release a binary (without sources) under MIT, an open source license. That gives you the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of said binary. In doing so, I would have released the binary under an open source license. However, most people would agree that the software would not be open source under the conventional definition, as the sources would not be published. While people could modify it by disassembling and patching it, there is a general understanding that Open Source requires distributing the _sources_.

                      This is very similar to what is being done here. They're releasing the weights under an open source license - but the overall software is not open source.

        • nurettin an hour ago | parent

          Is this a troll? They don't want to reproduce your open source code; they want to reproduce the weights.

      • amelius 2 hours ago | parent

        True. But the headline says open weights.

  • ilmj8426 2 hours ago

    It's impressive to see how fast open-weights models are catching up in specialized domains like math and reasoning. Has anyone tested this model on complex logic tasks in coding? Strong math performance sometimes correlates well with debugging and algorithm generation.

    • stingraycharles a minute ago | parent

      kimi-k2 is pretty decent at coding but it’s nowhere near the SOTA models of Anthropic/OpenAI/Google.

  • yorwba 2 hours ago

    Previous discussion: https://news.ycombinator.com/item?id=46072786 (218 points, 3 days ago, 48 comments)

    • victorbuilds an hour ago | parent

      Ah, missed that one. Thanks for the link.

  • H8crilA 40 minutes ago

    How do you run this kind of model at home? On a CPU, on a machine with about 1TB of RAM?

    • pixelpoet 4 minutes ago | parent

      Wow, it's 690GB of downloaded data, so yeah, 1TB sounds about right. Not even my two Strix Halo machines paired can do this, damn.
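
      If someone publishes a quantized GGUF conversion, CPU inference is at least mechanically simple, though a model this size will be very slow. A sketch using the llama-cpp-python bindings (the file name and parameter values are placeholders, not a real release):

      ```python
      from llama_cpp import Llama  # pip install llama-cpp-python

      # Placeholder file name for a hypothetical quantized conversion;
      # quantization trades some quality for a much smaller memory footprint.
      llm = Llama(
          model_path="deepseekmath-v2-q4_k_m.gguf",
          n_ctx=4096,    # context window
          n_threads=32,  # CPU threads
      )

      out = llm("Prove that the sum of two even integers is even.", max_tokens=512)
      print(out["choices"][0]["text"])
      ```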

  • sschueller 23 minutes ago

    How is OpenAI going to be able to serve ads in ChatGPT without everyone immediately jumping ship to another model?

    • Coffeewine 21 minutes ago | parent

      I suppose the hope is that they don’t, and we wind up with commodity frontier models from multiple providers at market rates.

    • KeplerBoy 10 minutes ago | parent

      Google served ads for decades and no one ever jumped ship to another search engine.

      • bootsmann a minute ago | parent

        They pay $30bn (more than OpenAI's lifetime revenue) each year to make sure no one does.

      • sschueller 2 minutes ago | parent

        Because Google gave the best results for a long time.

    • miroljub 22 minutes ago | parent

      I don't care about OpenAI even if they don't serve ads.

      I can't trust any of their output until they become honest enough to change their name to CloseAI.

  • terespuwash 2 hours ago

    Why isn't OpenAI's gold medal-winning model available to the public yet?