Coq: The World's Best Macro Assembler? (2013) [pdf](nickbenton.name)
128 points by addaon 17 hours ago | 58 comments
  • addaon16 hours ago

    Poster here. I re-read this paper about once a year. I continue to think that it may be one of the most important papers I've read. As someone who works on high-reliability safety-critical real-time systems (automotive, avionics), being able to work in an environment where I could prove semantic properties of assembly code is pretty close to my dream -- the cost of demonstrating that code is correct is already so much higher than the cost of writing the code that the incremental cost of going to assembly seems minor, if there's even a small payback in time to demonstrate correctness. In practice, I think my dream language would be a relatively rich macro assembler plus a (guided, greedy) register allocator, designed entirely to simplify and allow semantic proofs. I don't really have a sense as to whether Coq (Rocq) is the right answer today, vs other options, or whether there's newer literature advancing this approach; but I'd truly love to see a deeper focus on making low level languages more useful in this space, rather than moving towards more constrained, higher level languages.

    • quanto16 hours ago |parent

      During the days I was studying/working with Coq, a visiting professor gave a presentation on defense software design. One example presented was the control logic for the F-16, which the professor had presumably worked on. A student asked how you prove "correctness", i.e. operability, of a jet fighter and its control logic. I don't think the professor had a satisfying answer.

      My question is the same, albeit more technically refined. How do you prove the correctness of a numerical algorithm (operating on a quantized continuum) using type-theoretic/category-theoretic tools such as theorem provers like Coq? There are documented tragedies where numerical rounding error in the control logic of a missile cost lives. I have proved mathematical theorems before (Curry-Howard!), but they were driven by discrete mathematical objects (e.g. sets, groups), not continuous numbers.

      • dwohnitmok16 hours ago |parent

        You use floating point numbers instead of real numbers in your theorems and function definitions.

        This sounds flippant, but I'm being entirely earnest. It's a significantly larger pain because floating point numbers have some messy behavior, but the essential steps remain the same. I've proved theorems about floating point numbers, not reals. Although, again, it's a huge pain, and when I can get away with it I'd prefer to prove things with real numbers and assume magically they transfer to floating point. But if the situation demands it and you have the time and energy, it's perfectly fine to use Coq/Rocq or any other theorem prover to prove things directly about floating point arithmetic.

        The article itself is talking about an approach sufficiently low level that you would be proving things about floating point numbers because you would have to -- it's all assembly!

        But even at a higher level you can have theorems about floating point numbers. E.g. https://flocq.gitlabpages.inria.fr/

        There's nothing category theoretic or even type theoretic about the entities you are trying to prove things about with the theorem prover. Type theory is merely the "implementation language" of the prover. (And even if there were, there's nothing tying type theory or category theory to the real numbers rather than to floats.)
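
        To make that concrete, here's a tiny sketch of what "floats instead of reals in the statement" can look like (a sketch only: `float`, `F2R`, `fadd`, and `eps` are made-up names for illustration, not Flocq's actual API):

          (* Sketch only: float, F2R, fadd and eps are assumed names, not Flocq's API. *)
          Require Import Reals.
          Open Scope R_scope.

          (* Over the reals, addition is associative and the proof is one tactic. *)
          Lemma add_assoc_R : forall x y z : R, (x + y) + z = x + (y + z).
          Proof. intros; ring. Qed.

          (* Over floats the analogous exact statement generally fails; what you
             state and prove instead is a rounding-error bound per operation. *)
          Parameter float : Type.
          Parameter F2R   : float -> R.               (* injection into the reals *)
          Parameter fadd  : float -> float -> float.  (* rounded addition *)
          Parameter eps   : R.                        (* assumed relative error bound *)

          Conjecture fadd_relative_error :
            forall a b : float,
              Rabs (F2R (fadd a b) - (F2R a + F2R b)) <= eps * Rabs (F2R a + F2R b).

        The real-number lemma is one tactic; the float-level statements are inequalities carrying rounding error, which is where most of the pain comes from.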

        • quanto15 hours ago |parent

          > when I can get away with it I'd prefer to prove things with real numbers and assume magically they transfer to floating point.

          True for some approaches, but numerical analysis does account for machine epsilon and truncation errors.

          I am aware that Inria works with Coq as your link shows. However, the link itself does not answer my question. As a concrete example, how would you prove an implementation of a Kalman filter is correct?

          • thesmtsolver15 hours ago |parent

            There is nothing inherently more difficult about practical implementations of continuous numbers for automated reasoning compared to more discrete mathematical structures. They can be handled by standard FOL itself.

            See ACL2's support for floating point arithmetic.

            https://www.cs.utexas.edu/~moore/publications/double-float.p...

            SMT solvers also support real number theories:

            https://shemesh.larc.nasa.gov/fm/papers/nfm2019-draft.pdf

            Z3 also supports real theories:

            https://smt-lib.org/theories-Reals.shtml

          • kragen15 hours ago |parent

            I'm curious about how you'd do that, too. I haven't tried doing anything like that, but I'd think you'd start by trying to formalize the usual optimality proof for the Kalman filter, transfer it to actual program logic on the assumption that the program manipulates real numbers, try to get the proof to work on floating-point numbers, and finally extract the program from the proof to run it.

            https://youtu.be/_LjN3UclYzU has a different attempt to formalize Kalman filters which I think we can all agree was not a successful formalization.

            • thesmtsolver3 hours ago |parent

              It is really not that difficult. Here is a paper that formalizes a version of feed forward networks to prove properties about them.

              https://arxiv.org/pdf/2304.10558

              • kragen3 hours ago |parent

                I only skimmed this paper, but it doesn't mention floating point (it's only modeling the FNN as a function on reals), and I don't think you can extract a working FNN implementation from an SMT-LIB or Z3 problem statement, so I think you have to take on faith the correspondence between the implementation you're running and the thing you proved correct.

                But that's the actual problem we're trying to solve; nobody really doubts Kalman's proof of his filter's optimality.

                So this paper is not a relevant answer to the question, "how would you prove an implementation of a Kalman filter is correct?"

          • dwohnitmok2 hours ago |parent

            The link was to show Coq's floating point library.

            > As a concrete example, how would you prove an implementation of a Kalman filter is correct?

            This would be fun enough to implement if I had more time. But I need to spend time with my family. I'm usually loath to just post LLM output, but in this case, since you're looking just for the gist, and I don't have the time to write it out in detail, here's some LLM output: https://pastebin.com/0ZU51vN0

            Based on a quick skim, I can vouch for the overall approach. There are some minor errors/things I would do differently, and I'm extremely doubtful it typechecks (I didn't use a coding agent, just a chat window), but it captures the overall idea and gives some of the concrete syntax. I would also probably have a third implementation of the 1-d Kalman filter and would prefer some more theorems around the correctness of the Kalman filter itself (in addition to numerical stability). And of course a lot of the theorems are just left as proof stubs at the moment (but those could be filled in given enough time).

            But it's enough to demonstrate the overall outline of what it would look like.

            The overall approach I would have with floating point algorithms is exemplified by the following pseudo-code for a simple single argument function. We will first need some floating point machinery that establishes the relationship between floating point numbers and real numbers.

              # Calculates the corresponding real number from the sign, mantissa, and exponent
              floatToR : Float -> R
              
              # Logical predicate that holds if and only if x can be represented exactly in floating point
              IsFloat(x: R)
            
            Then define a set of three functions, `f0` which is on the idealized reals, `f1` which is on floating point numbers as mechanically defined via binary mantissa, exponent, and sign, and `f2` which is on the finite subset of the reals which have an exact floating point representation.

              f0 : R -> R
              f1 : Float -> Float
              f2 : (x : R) -> IsFloat(x) -> R
            
            The idea here is that `f0` is our "ideal" algorithm. `f1` is the algorithm implemented with exact adherence to low-level floating point operations. And `f2` is the ultimate algorithm we'll extract out to runnable code (because it might be extremely tedious, messy, and horrendous for runtime performance, to directly tie `f1`'s explicit representation of mantissa, exponent, and sign, which is likely some record structure with various pointers, to the machine 32-bit float).

            Then I prove the exact correspondence between `f1` and `f2`.

              f1_equal_to_f2 : forall (f: Float), (isFloatProof: IsFloat(floatToR(f))) -> floatToR(f1(f)) = f2(floatToR(f), isFloatProof)
            
            This gives the following corollary FWIW.

              f2_always_returns_floats : forall (x: R), IsFloat(x) -> IsFloat(f2(x))
            
            This lets me throw away `f1` and work directly with `f2`, which lets me more easily relate `f2` and `f0` since they both work on reals and I don't need to constantly convert between `Float` and `R`.

            So then I prove numerical stability on `f2` relative to `f0`. I start with a Wilkinson style backwards error analysis.

              f2_is_backwards_stable : forall (x: R), IsFloat(x) -> exists (y: R), f2(x) = f0(y) and IsCloseTo(y, x)
            
            Then, if applicable, I would prove a forward error analysis (which requires bounding the input and then showing that overflow doesn't happen and optionally underflow doesn't happen)

              f2_is_forwards_stable : forall (x: R), IsFloat(x) -> IsReasonablyBounded(x) -> IsCloseTo(f2(x), f0(x))
            
            And then given both of those I might then go on to prove some conditioning properties.

            Then finally I extract out `f2` to runnable code (and make sure that compilation process after that has all the right floating point flags, e.g. taking into consideration FMA or just outright disabling FMA if we didn't include it in our verification, making sure various fast math modes are turned off, etc.).
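
            Rendered as an actual Coq/Rocq skeleton (same names as the pseudo-code above, everything left abstract, and every theorem stated as a stub rather than proved), the outline looks roughly like this:

              (* Rough skeleton of the outline above; Float and the helper
                 predicates are assumed, and nothing is proved yet. *)
              Require Import Reals.
              Open Scope R_scope.

              Parameter Float    : Type.                        (* mechanical float representation *)
              Parameter floatToR : Float -> R.
              Definition IsFloat (x : R) : Prop := exists f : Float, floatToR f = x.

              Parameter f0 : R -> R.                            (* ideal algorithm on reals *)
              Parameter f1 : Float -> Float.                    (* algorithm on mechanical floats *)
              Parameter f2 : forall x : R, IsFloat x -> R.      (* floats-as-reals version *)

              (* Exact correspondence between f1 and f2. *)
              Conjecture f1_equal_to_f2 :
                forall (f : Float) (p : IsFloat (floatToR f)),
                  floatToR (f1 f) = f2 (floatToR f) p.

              (* Stability of f2 relative to f0. *)
              Parameter IsCloseTo : R -> R -> Prop.
              Parameter IsReasonablyBounded : R -> Prop.

              Conjecture f2_is_backwards_stable :
                forall (x : R) (p : IsFloat x),
                  exists y : R, f2 x p = f0 y /\ IsCloseTo y x.

              Conjecture f2_is_forwards_stable :
                forall (x : R) (p : IsFloat x),
                  IsReasonablyBounded x -> IsCloseTo (f2 x p) (f0 x).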

        • grumpymuppet10 hours ago |parent

          I have been curious about this. Where can you find definitions for the basic operations to build up from?

          IEEE 754 does a good job explaining the representation, but it doesn't define all the operations and possible error codes as near as I can tell.

          Is it just assumed "closest representable number to the real value" always?

          What about all the various error codes?

          • adrian_b7 hours ago |parent

            The standardized operations, e.g. multiplication or square root extraction, are precisely defined: the result is always determined exactly by the corresponding operation on real numbers combined with the rounding rule that is applied.

            IEEE 754 also contains a list of operations that are recommended but not required by the standard, such as the exponential function and other functions whose results are difficult to round exactly.

            For the standardized operations, all the possible errors are precisely defined and they must either generate an appropriate exception or produce as the result a special value that encodes the kind of error, depending on how the programmer configures the processor.

            The standard is perfectly fine. The support of the standard in the popular programming languages is frequently inconvenient or only partial or even absent. For instance it may be impossible to choose to handle the errors by separate exception handlers and it may be impossible to unmask some of the exceptions that are masked by default. Or you may lack the means to control the rounding mode or to choose when to use FMA operations and when to use separate multiplications.

            If you enable all the possible exceptions, including the one for inexact results, then whenever the computation completes without raising an exception the value of an expression computed with IEEE 754 operations is the same as if it were computed with real numbers, so you do not need to prove anything extra about it.

            However this is seldom helpful, because most operations with FP numbers produce inexact results. If you mask only the exception for inexact results, the active rounding rule will be applied after any operation that produces an inexact result.

            Then the expression where you replace the real numbers with FP numbers is equivalent to a more complex expression over real numbers that contains rounding operations in addition to the explicit operations.

            Then you have to prove whatever properties are of interest to you about the more complex expression, which includes the rounding operations.
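
            For example, with an assumed function `rnd` standing for the active rounding rule (a sketch, not any particular library's API), evaluating a*b + c with two FP operations corresponds to a real-number expression with a rounding step after each operation:

              (* Sketch: rnd stands for the correctly-rounded result under the
                 active IEEE 754 rounding mode, taken as an assumed function. *)
              Require Import Reals.
              Open Scope R_scope.

              Parameter rnd : R -> R.

              (* a*b + c evaluated with two FP operations (no FMA): *)
              Definition fp_mul_add (a b c : R) : R := rnd (rnd (a * b) + c).

              (* With FMA, only the final result is rounded: *)
              Definition fp_fma (a b c : R) : R := rnd (a * b + c).

            Whatever you want to establish is then proved about these rounded expressions, e.g. error bounds relative to the exact value a*b + c.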

            The main advantage of the IEEE 754 standard, in comparison with the pathetic way of implementing FP operations before this standard, is that the rounding operations are defined exactly, so you can use them in a formal proof.

            Before this standard, most computer makers rounded the results in whatever way happened to be cheaper to implement and there were no guarantees about what the result of an operation would be after rounding, so it was impossible to prove anything about FP expressions computed on such computers.

            If you want to prove something about the computation of an expression when more exceptions are masked, not only the inexact result exception, that becomes more complex. When a CPU allows a non-standard handling of the masked exceptions, like flush-to-zero on underflow, that can break any proof.

        • anthk12 hours ago |parent

          You use reducing rationals everywhere you can, not floats.

      • auggierose2 hours ago |parent

        Curry-Howard is not needed for theorem proving; it's just that type theorists like it.

        See https://www.cl.cam.ac.uk/~jrh13/papers/thesis.html for your question. John Harrison got a job with Intel based on this after their floating point disaster.

        But in short: theorem proving is not about equalities, it is about inequalities. And theorems about numerical algorithms are a great example of this.

      • OneDeuxTriSeiGo8 hours ago |parent

        So the answer is that you are proving two things:

        1. That the model/specification makes sense. i.e. that certain properties in the model hold and that it does what you expect.

        2. That the SUV/SUT (system under verification/test) corresponds to the model. This encompasses a lot but really what you are doing here is establishing how your system interacts with the world, with what accuracy it does so, etc. And from there you are walking along the internal logic of your system and mapping your representations of the data and the algorithms you are using into some projection from the model with a specified error bound.

        So you are inherently dealing with the discrete nature of the system the entire time but you can reason about that discrete value as some distribution of possible values that you carry through the system with each step either

        - introducing some additional amount of error/variability or

        - tightening the bound of error/variability but trapping outside values into predictable edge cases.

        Then it's a matter of reasoning about those edge cases and whether they break the usefulness of the system compared against the idealised model.
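
        As a toy illustration of carrying a bound through one step (a Coq sketch with the values and error bounds as plain reals; nothing here is specific to any particular system), the error of a sum is bounded by the sum of the input errors:

          (* Toy example: propagating error bounds through a single addition. *)
          Require Import Reals Lra.
          Open Scope R_scope.

          Lemma sum_error_bound :
            forall x x' y y' e1 e2 : R,
              Rabs (x' - x) <= e1 ->
              Rabs (y' - y) <= e2 ->
              Rabs ((x' + y') - (x + y)) <= e1 + e2.
          Proof.
            intros x x' y y' e1 e2 Hx Hy.
            replace ((x' + y') - (x + y)) with ((x' - x) + (y' - y)) by lra.
            eapply Rle_trans; [ apply Rabs_triang | lra ].
          Qed.

        Real systems chain many such lemmas (plus clamping/trapping steps) along the data path, which is where the per-step error/variability bookkeeping comes in.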

      • tensegrist15 hours ago |parent

        > There are documented tragedies where numerical rounding error of the control logic of a missile costed lives.

        curious about this

        • codenaught14 hours ago |parent

          This is likely a reference to the Patriot missile rounding issue that arguably led to 28 deaths.

          https://www-users.cse.umn.edu/~arnold/disasters/Patriot-dhar...

          https://www.gao.gov/assets/imtec-92-26.pdf

          • oersted11 hours ago |parent

            Just to clarify for others because it’s a tiny bit clickbaity:

            The Patriot missile didn't kill 28 people accidentally; it simply failed to intercept an enemy missile.

            And it wasn’t launched on an incorrect trajectory either, the radar was looking at a slightly wrong distance window and lost track. Furthermore, the error only starts having an effect after 100 hours of operation, and it seems to have only been problematic with the faster missiles in Iraq that the system wasn’t designed for. They rushed the update and they did actually write a function to deal with this exact numerical issue, but during the refactor they missed one place where it should have been used.

            28 lives are obviously significant, but just to note that there are many mitigating factors.
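
            For anyone curious about the arithmetic (figures from the GAO report linked above): the system counted time in tenths of a second, and the 24-bit constant it used for 0.1 was off by about 0.000000095, so the converted time drifted by that much on every tick.

              3,600,000 ticks in 100 hours x 0.000000095  ~ 0.34 s of clock skew
              0.34 s x ~1,676 m/s closing speed           ~ over half a kilometre of range-gate error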

    • kragen16 hours ago |parent

      Have you found Coq or other formal-methods tooling useful?

      • addaon16 hours ago |parent

        No, I haven't, but I want to. I haven't had the opportunity to go deep into formal methods for a work project yet, and on personal projects I've mostly played with Ada SPARK, not Coq. I've played with Coq a little (to the extent of re-implementing the linked paper for a subset of a different ISA), but there's definitely a learning curve. One of my personal benchmark projects is to implement a (fixed-maximum-size) MMM heap in a language, and convince myself it's both correct and of high performance; and I've yet to achieve that in any language, even SPARK, without leaning heavily on testing and hand-waving along with some amount of formal proof. I've written the code and comprehensive testing in various assembly languages (again, this is sort of my benchmark bring-up-a-new-platform-in-the-brain example), and I /think/ that, for the way I think, proving properties as invariants maintained across individual instructions maps pretty well onto how I convince myself it's correct, but I haven't had the time, depth, or toolchain to really bottom any of this out.

        • gjadi14 hours ago |parent

          What's a MMM heap? Is it a typo on min-max heap?

          • layer8 9 hours ago |parent

            A min-max-median heap, I would assume.

            • addaon6 hours ago |parent

              Yep, that. Just a bit more fiddly than a min-max heap, and though I rarely actually need the median functionality, fiddly is a pro for learning a language at times.

      • nextos3 hours ago |parent

        I have found Isabelle very useful, and Dafny even more so.

        Amazon AWS uses Dafny to prove the correctness of some complex components.

        Then, they extract verified Java code. There are other target languages.

        Being based on Hoare logic, Dafny is really simple. The barrier to entry is low.

        • kragen3 hours ago |parent

          Have you tried using them as macro assemblers in the way described in the paper?

          • nextos2 hours ago |parent

            No, I haven't. I have used them with shallow embedding techniques that are relatively similar, but not in this way.

            • kragen2 hours ago |parent

              That sounds interesting!

      • throw-qqqqq12 hours ago |parent

        I have found huge value in CBMC and KLEE for statically verifying C code for real time safety critical embedded applications.

        ACL2 is also VERY powerful and capable.

        • addaon5 hours ago |parent

          I'd love some examples here.

    • sylware12 hours ago |parent

      The second your macro-assembler has a significantly complex syntax, you are done: feature creep to 2042 features, then 2043, then 2044... and it mechanically reduces the number of alternative implementations and makes the effort required to develop a new one more and more unreasonable.

      In the end, it is all about complexity and stability over time.

  • dang2 hours ago

    Discussed at the time:

    Coq: The world’s best macro assembler? (2013) [pdf] - https://news.ycombinator.com/item?id=8752385 - Dec 2014 (61 comments)

  • quanto16 hours ago

    This paper reminds me of a class assignment in grad school where the prof asked the students to write a compiler in Coq for some toy Turing-complete language in a week. Having no background in compiler design or functional programming, I found it daunting at first, but eventually managed it. The Coq language's rigor really helps with something like this.

    I wonder if AI compute graphs would benefit from language-level rigor like Coq's.

  • kragen16 hours ago

    The canonical URL, dating back to at least 02015, may be http://research.microsoft.com/en-us/um/people/nick/coqasm.pd..., which now redirects to https://www.microsoft.com/en-us/research/publication/coq-wor.... This site presumably belongs to the Nick Benton who is one of its authors.

  • mmastrac14 hours ago

    It's a bit of an awkward syntax to get a reliable assembler. Does it at least allow you to prove the behaviour of a larger block of assembly? For example, could I use it to prove that a block of assembly is equivalent to a given set of operations?

    • kragen3 hours ago |parent

      Yes.

  • throwaway198846 16 hours ago

    They are going by Rocq today

    • volemo16 hours ago |parent

      I understand and respect the renaming; Coq was a much cooler name, though.

      • graemep10 hours ago |parent

        It was.

        Renaming languages to suit American taste really grates.

        Nimrod to Nim was even worse. Apparently Americans cannot get Biblical references.

        • dugmartin4 hours ago |parent

          Not biblical references but rather Elmer Fudd.

          Bugs Bunny called Elmer Fudd "Nimrod" in a 1940's cartoon to sarcastically refer to Elmer as a great hunter. At that time I think most people probably got the biblical reference. Over time that word morphed into meaning something like an idiot to most Americans due to that cartoon.

          The same thing happened to the word "Acme" - the coyote in the Road Runner cartoons bought all his devices from the "Acme Corporation". Acme means the best/peak and it was a sarcastic reference to none of the gadgets ever working. Now most Americans think Acme means generic/bad.

          They should have kept the name as Nimrod and named the package manager Acme instead of Nimble.

        • UltraSane7 hours ago |parent

          What if it were the word for penis in Japanese or Chinese? I would understand them changing it.

          • gryn5 hours ago |parent

            then we should change bit(s) because it means dick/penis in french /s

            • umanwizard3 hours ago |parent

              bit (in English) is not pronounced the same as bite (in French). The French word is closer in pronunciation to “beet” or “beat” in English.

              Also, “coq” and “cock” are not really pronounced the same either. The English word with the closest pronunciation to “coq” is “coke”.

              • Narishma3 hours ago |parent

                > bit (in English) is not pronounced the same as bite (in French). The French word is closer in pronunciation to “beet” or “beat” in English.

                Wrong, it's pronounced exactly like the English "bit".

                • umanwizard2 hours ago |parent

                  That’s not true, at least in France. Perhaps it’s true in some other dialect, e.g. Quebec French; I don’t know.

                  From Wiktionary, the pronunciation of English bit is /bɪt/, and French bite is /bit/. The sounds represented in IPA by ɪ and i are not the same, which is precisely why “bit” and “beet” sound different to Americans.

    • AdieuToLogic16 hours ago |parent

      The linked PDF was created on 8/5/13.

      • volemo16 hours ago |parent

        Then the title needs (2013), no?

        • addaon16 hours ago |parent

          Yep, sorry. Added.

  • Svoka an hour ago

    This is great. I honestly never thought of a computer as register state and instructions/asm commands as functions applied to it. But that makes so much sense.
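
    A toy sketch of that view in Coq (a made-up two-register machine, nothing like the paper's actual x86 model): the register file is a record, each instruction is a function from state to state, and a program is just their composition, so properties of it become ordinary lemmas.

      (* Toy machine: two registers; instructions are state -> state functions. *)
      Require Import ZArith.
      Open Scope Z_scope.

      Record state := { r0 : Z; r1 : Z }.

      Definition mov_r0_imm (n : Z) (s : state) : state := {| r0 := n; r1 := r1 s |}.
      Definition add_r1_r0 (s : state) : state := {| r0 := r0 s; r1 := r1 s + r0 s |}.

      (* A "program" is just composition of those functions. *)
      Definition prog (s : state) : state := add_r1_r0 (mov_r0_imm 5 s).

      Lemma prog_adds_five : forall s : state, r1 (prog s) = r1 s + 5.
      Proof. intros s. reflexivity. Qed.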

  • asdefghyk16 hours ago

    The x86 architecture and instruction set are complex - so you absolutely need a powerful assembler to help prevent mistakes.

    • kragen16 hours ago |parent

      That doesn't seem right to me. If the problem is that it has too many instructions and addressing modes, you can decide to only use a small subset of those instructions and addressing modes, which really isn't much of a handicap for implementing functionality. (It doesn't help with analyzing existing code, but neither does a powerful assembler.)

      I'm no expert on assembly-language programming, but probably 90% of the assembly I write on i386, amd64, RISC-V, and ARM is about 40 instructions: ldr, mov, bl, cmp, movs, push, pop, add, b/jmp, bl/blx/call, ret, str, beq/jz, bne/jnz, bhi/ja, bge/jge, cbz, stmia, ldmia, ldmdb, add/adds, addi, sub/subs, bx, xor/eor, and, or/orr, lsls/shl, lsrs/sar/shr, test/tst, inc, dec, lea, and slt, I think. Every once in a while you need a mul or a div or something. But the other 99% of the instruction set is either for optimized vectorized inner loops or for writing operating system kernels.

      I think that the reason that i386 assembly (or amd64 assembly) is error-prone is something else, something it has in common with very simple architectures and instruction sets like that of the PDP-8.

      • yjftsjthsd-h15 hours ago |parent

        > I think that the reason that i386 assembly (or amd64 assembly) is error-prone is something else, something it has in common with very simple architectures and instruction sets like that of the PDP-8.

        What reason is that? (And, if it's not obvious, what are ARM/RISC-V doing that make them less bad?)

        • kragen15 hours ago |parent

          I don't think they're particularly less bad. All five architectures just treat memory as a single untyped array of integers, and registers as integer global variables. Their only control structure is goto (and conditional goto.) If you forget to pass an argument to a subroutine, or pass a pointer to an integer instead of the integer, or forget to save a callee-saved register you're using, or do a signed comparison on an unsigned integer, or deallocate memory and then keep using it, or omit or jump past the initialization of a local variable, conventional assemblers provide you no help at all in finding the bug. You'll have to figure it out by watching the program give you wrong answers, maybe single-stepping through it in a debugger.

          There are various minor details of one architecture or the other that make them more or less bug-prone, but those are minor compared to what they have in common.

          None of this is because the instruction sets are complex. It would be closer to the mark to say that it's because they are simple.

          • pjc50 10 hours ago |parent

            I have a postit note idea that says simply "typesafe macro assembler".

            I've not fleshed this out yet, but I think a relatively simple system would help deal with all the issues you mention in the first paragraph while allowing escape hatches.

            • cpeterso2 hours ago |parent

              Check out typed assembly languages like TALx86.

              https://en.wikipedia.org/wiki/Typed_assembly_language

            • kragen4 hours ago |parent

              You could call it "C".

  • anthk12 hours ago

    A Lisp can mimic Prolog and the rigour of Coq with ease.

    • addaon4 hours ago |parent

      Can you give an example? With something like Figure 1 of the paper converted to a convenient form for Lisp (s-expressions, presumably), and assuming a function to convert it to binary, what would it look like to prove correctness of that assembly in Lisp? What's the ecosystem of proof assistants that would get you there?

    • attila-lendvai6 hours ago |parent

      i'm a long time lisper, but i don't think this is justified.

      implementing prolog/backtracking on top of lisp is certainly possible and not even too hard (see e.g. https://github.com/nikodemus/screamer)

      and implementing Coq on top of lisp is also possible.

      but IMO the "with ease" phrase is not justified in this context.

      if you only mean that lisp will not be in the way if you set out to implement these, then i agree. lisp -- the language and the typical opensource implementation -- will be much less of an obstacle than other languages when chosen as foundations.

    • kragen3 hours ago |parent

      You can write a prover like Coq in a Lisp, but, though it's easier than doing it in C, I think it's somewhat harder than doing it in something like OCaml. ACL2 is an example of such a thing.