The C++ standard for the F-35 Fighter Jet [video](youtube.com)
319 points by AareyBaba a day ago | 404 comments

PDF: https://www.stroustrup.com/JSF-AV-rules.pdf

  • bri3d a day ago

    https://web.archive.org/web/20111219004314/http://journal.th... (referenced, at least tangentially, in the video) is a piece from the engineering lead which does a great job discussing Why C++. The short summary is "they couldn't find enough people to write Ada, and even if they could, they also couldn't find enough Ada middleware and toolchain."

    I actually think Ada would be an easier sell today than it was back then. It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be as widely perceived as career pigeonholing today. Plus, Ada is having a bit of a resurgence with stuff like NVidia picking SPARK.

    • ecshafer 15 hours ago |parent

      I've always strongly disliked this argument of not enough X programmers. If the DoD enforces the requirement for Ada, universities, job training centers, and companies will follow. People can learn new languages. And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

      • dexterous 7 hours ago |parent

        I agree that the "there aren't enough programmers for language X" argument is generally flawed. Acceptable cases would be niches like maintenance of seriously legacy or dying platforms. COBOL anyone?

        But, not because I think schools and colleges would jump at the opportunity and start training the next batch of students in said language just because some government department or a bunch of large corporations supported and/or mandated it. Mostly because that hasn't actually panned out in reality for as long as I can remember. Trust me, I _wish_ schools and colleges were that proactive or even in touch with the industry needs, but... (shrug!)

        Like I said, I still think the original argument is flawed, at least in the general case, because any good organization shouldn't be hiring "language X" programmers; they should be hiring good programmers who show the ability to transfer their problem-solving skills across the panoply of languages out there. Investing in getting a _good_ programmer upskilled on a new language is not as expensive as most organizations make it out to be.

        Now, if you go and pick some _really obscure_ (read "screwed up") programming language, there's not much out there that can help you either way, so... (shrug!)

      • exDM69 10 hours ago |parent

        > If the DoD enforces the requirement for Ada, Universities, job training centers, and companies will follow

        DoD did enforce a requirement for Ada but universities and others did not follow.

        The JSF C++ guidelines were created to circumvent the DoD Ada mandate (as discussed in the video).

        • p_l 6 hours ago |parent

          TL;DR Ada programmers were more expensive

          • adolph 5 hours ago |parent

            Since when was expense a problem for defense spending?

            In the video, the narrator also claims that Ada compilers were expensive and thus students were dissuaded from trying it out. However, in researching this comment I found that the GNAT project has been around since the early 90s. Maybe it wasn't complete enough until much later, and maybe potential students of the time weren't using GNU?

              The GNAT project started in 1992 when the United States Air Force awarded New 
              York University (NYU) a contract to build a free compiler for Ada to help 
              with the Ada 9X standardization process. The 3-million-dollar contract 
              required the use of the GNU GPL for all development, and assigned the 
              copyright to the Free Software Foundation.
            
            https://en.wikipedia.org/wiki/GNAT
            • p_l 38 minutes ago |parent

              Since, on paper, the government cares about cost efficiency, and you have to consider that in your lobbying materials.

              Also it enables getting cheaper programmers who, where possible, might be isolated from the actual TS material, to develop on the cheap so that the profit margin is bigger.

              It gets worse outside of the flight side of JSF software - or so it looks from GAO reports. You don't turn around a culture of shittiness that fast, and I've seen earlier code in the same area (but not for JSF) by L-M... and well, it was among the worst code I've seen, including failing even the basic requirement of running on a specific version of a browser at minimum.

            • 0xffff2 3 hours ago |parent

              Take a look at job ads for major defense contractors in jurisdictions that require salary disclosure. Wherever all that defense money is going, it's not engineering salaries. I'm a non-DoD government contractor and even I scoff at the salary ranges that Boeing/Lockheed/Northrop post, which often feature an upper bound substantially lower than my current salary while the job requires an invasive security clearance (my current job doesn't). And my compensation pales in comparison to what the top tech companies pay.

            • jll29 4 hours ago |parent

              The DoD could easily have organized Ada hackathons with a lot of prize money to "make Ada cool" if they had chosen to, in order to get the language into the limelight. They could also have funded developing a free, open source toolchain.

              • jandrese 2 hours ago |parent

                Ada would never have been cool.

                Ironically, I remember one of the complaints was that it took a long time for the compilers to stabilize. They were complex beasts with a small user base, so you had smallish companies trying to develop a tremendously complex compiler for a small crowd of government contractors: a perfect recipe for expensive software.

                I think maybe they were just a little ahead of their time on getting a good open source compiler. The Rust project shows that it is possible now, but back in the 80s and 90s with only the very early forms of the Internet I don't think the world was ready.

                • skepti3 2 hours ago |parent

                  Out of curiosity:

                  1: If you had to guess, how high is the level of complexity of rustc?

                  2: How do you think gccrs will fare?

                  3: Do you like or dislike the Rust specification that originated from Ferrocene?

                  4: Is it important for a systems language to have more than one full compiler for it?

                  • jandrese 2 hours ago |parent

                    Given how much memory and CPU time is burned compiling Rust projects I'm guessing it is pretty complex.

                    I'm not deep enough into the Rust ecosystem to have solid opinions on the rest of that, but I know from the specification alone that it has a lot of work to do every time you execute rustc. I would hope that the strict implementation would reduce the number of edge cases the compiler has to deal with, but the sheer volume of the specification works against efforts to simplify.

              • skepti3 2 hours ago |parent

                > They could also have funded developing a free, open source toolchain.

                If the actual purpose of the Ada mandate was cartel-making for companies selling Ada products, that would have been counter-productive to their goals.

                Not that compiler vendors making money is a bad thing, compiler development needs to be funded somehow. Funding for language development is also a topic. There was a presentation by the maker of Elm about how programming language development is funded [0].

                [0]: https://youtube.com/watch?v=XZ3w_jec1v8

      • lallysingh 11 hours ago |parent

        No they won't. DoD is small compared to the rest of the software market. You get better quality and lower cost with COTS than with custom solutions, unless you spend a crap ton. The labor market for software's no different.

        Everyone likes to crap on C++ because it's (a) popular and (b) tries to make everyone happy with a ton of different paradigms built-in. But you can program nearly any system with it more scalably than anything else.

        • adrianN 8 hours ago |parent

          In my experience people criticize C++ for its safety problems. Safety is more important in certain areas than in others. I’m not convinced that you get better quality with C++ than with Ada

        • nmz 4 hours ago |parent

          Go was built because C++ does not scale. Anybody that's ever used a source-based distro knows that if you're installing/building a large C++ codebase, you'd better forget about your PC for the day, because you will not be using it. Rust also applies here, but at least multiplatform support is easier, so I don't fault it for slow build times.

        • bmitc 5 hours ago |parent

          > more scalably than anything else

          That's quite debatable. C++ is well known to scale poorly.

      • IshKebab 13 hours ago |parent

        I agree. First of all I don't think Ada is a difficult language to learn. Hire C++ programmers and let them learn Ada.

        Secondly, when companies say "we can't hire enough X" what they really mean is "X are too expensive". They probably have some strict salary bands and nobody had the power to change them.

        In other words, there are plenty of expensive good Ada and C++ programmers, but the only cheap programmers are crap C++ programmers.

        • jll29 4 hours ago |parent

          I agree - Ada is very similar to Pascal, and much faster to pick up than, say, C++.

        • blub 12 hours ago |parent

          Actually these kinds of projects are chronically over budget and the US military is notorious for wasting money.

          Using C++ vs wishing an Ada ecosystem into existence may have been one of the few successful cost saving measures.

          Keep in mind that these are not normal programmers. They need to have a security clearance and fulfill specific requirements.

          • reactordev 9 hours ago |parent

            They need to meet very strict security clearance requirements and maintain them throughout the life of the project or their tenure. People don’t realize this isn’t some little embedded app you throw on an ESP32.

            You’ll be interviewed, as will your family, your neighbors, your school teachers, your past bosses, your cousin once removed, your sheriff, your past lovers, and even your old childhood friends. Your life goes under a microscope.

            • nmfisher 8 hours ago |parent

              I went through the TS positive vetting process (for signals intelligence, not writing software for fighter jets, but the process is presumably the same).

              If I were back on the job market, I’d be demanding a big premium to go through it again. It’s very intrusive, puts significant limitations on where you can go, and adds significant job uncertainty (since your job is now tied to your clearance).

              • galangalalgol 7 hours ago |parent

                Not to mention embedded software is often half the pay of a startup, and defense software often isn't work from home. Forget asking what languages they can hire for; they are relying on the work being interesting to compensate for dramatically less pay and substantially less pleasant working conditions. Factor in that some portion of the workforce has ethical concerns about working in the sector, and you can see they will get three sorts of employees: those who couldn't get a job elsewhere, those who want something cool on their resume, and those who love the domain. And they will lose the middle category right around the time they become productive members of the team, because it was always just a stepping stone.

              • reactordev 8 hours ago |parent

                Yes, but like a certification, that clearance is yours, not the company's. You take it with you. It lasts a good while. There are plenty of government contractors that would love you if you had one: Northrop, Lockheed, Boeing, etc.

                • ecshafer 7 hours ago |parent

                  An engineering degree and a TS is basically a guaranteed job. They might not be the flashiest FAANG jobs, but it is job security. In this downturn, where people talk about being unable to find jobs for years in big cities, I look around my local area and Lockheed, BAE, Booz Allen, etc. have openings.

                  • reactordev 7 hours ago |parent

                    My issue is you end up dealing with dopes who don't want to learn, just want to milk the money and the job security, and actively fight you when you try to make things better. Institutionalized.

                  • Xss3 5 hours ago |parent

                    They always have openings so investors think they're hiring and growing. Many ads are for fictional positions.

              • 0xffff2 2 hours ago |parent

                And yet my experience looking at the deluge of clearance-required dev jobs from defense startups in the past couple of years is that there is absolutely no premium at all for clearance-required positions.

            • CWuestefeld 2 hours ago |parent

              I was once interviewed by the FBI due to a buddy applying for security clearance. One thing they asked was, "have you ever known XXX to drink excessively", to which I replied "we were fraternity brothers together, so while we did often drink a lot, it needs to be viewed in context".

        • skepti 13 hours ago |parent

          As I wrote to someone else:

          Why require that companies use a specific programming language instead of requiring that the end product is good?

          > And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

          What is the evidence for this? Companies selling Ada products would almost certainly agree, since they have a horse in the race. Ada does not automatically lead to better, more robust, safer or fully correct software.

          Your line of argument is dangerous and dishonest, as real life regrettably shows.[0]

          [0]: https://en.wikipedia.org/wiki/Ariane_flight_V88

          > The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

          > The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]

          • zeroc8 11 hours ago |parent

            Ada and especially Spark makes it a whole lot easier to produce correct software. That doesn't mean it automatically leads to better software. The programming language is just a small piece of the puzzle. But an important one.

            • skepti2 11 hours ago |parent

              > Ada and especially Spark makes it a whole lot easier to produce correct software.

              Relative to what? There are formal verification tools for other languages. I have heard Ada/SPARK is good, but I do not know the veracity of that. And Ada companies promoting Ada have horses in the race.

              And Ada didn't prevent the Ada code in Ariane 5 from being a disaster.

              > The programming language is just a small piece of the puzzle. But an important one.

              100% true, but the parent of the original post that he agreed with said:

              > And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

              What is the proof for that, especially considering events like Ariane 5?

              And Ada arguably has technical and non-technical drawbacks relative to many other languages.

              When I tried Ada some weeks ago for a tiny example, I found it cumbersome in some ways. Is the syntax worse and more verbose than even C++? Maybe that is just a learning thing, though. Even with a mandate, Ada did not catch on.

              • yoda222 6 hours ago |parent

                > Ada didn't prevent the Ada code in Ariane 5 from being a disaster

                That's a weak argument for saying that Ada could not lead to a better place in terms of software. It's like saying that it's not safer to cross at a crosswalk because you know someone who died while crossing on one.

                (But I guess it's fair for you to say that: the argument should probably be made by the people claiming that Ada would be better, and because they made a claim without evidence, you can counterclaim without any evidence :-) )

                There is no programming language that can prevent software from failing when it is used outside the domain for which it was written, which was the case for Ariane 501. Any language used to write the same software for Ariane 4 might have led to the same exact error. The Ariane 501 failure is a systems engineering problem, not a software problem (even if, in the end, almost the last piece in the chain of events is a software problem).

              • serf 10 hours ago |parent

                >What is the proof for that, especially considering events like Ariane 5?

                Ariane 5 is a nice anti-Ada catchphrase, but Ada is probably the most used language for war machines in the United States.

                Now the argument can be whether or not the US military is superior to X; but the fact that the largest military in the world is filled to the brim with war machines running Ada code is itself a testament to the effectiveness of the language/DoD/grant structure around the language.

                Would it be better off in C++? I don't know about that one way or the other, but it's silly to pretend Ada isn't successful.

                • skepti2 10 hours ago |parent

                  But Ada had, for a number of years, a mandate requiring its usage [0]. That should have been an extreme competitive advantage. And even then, C++ is still used these days for some US military projects, like the F-35. Though I don't know whether the F-35 is successful or not; if it is not, that could be an argument against C++.

                  Ada is almost non-existent outside its niche.

                  The main companies arguing for Ada appear to be the ones selling Ada services, meaning they have a horse in the race.

                  I barely have any experience at all with Ada. My main impression is that it, like C++, is very old.

                  [0]: https://www.militaryaerospace.com/communications/article/167...

                  > The Defense Department's chief of computers, Emmett Paige Jr., is recommending a rescission of the DOD's mandate to use the Ada programming language for real-time, mission-critical weapons and information systems.

                  • galangalalgol 7 hours ago |parent

                    Poking around, it looks like Ada is actually the minority now. Everything current is either transitioning to C++ or started that way. The really old but still used stuff is often written in weird languages like JOVIAL, or in assembly.

                    • p_l 6 hours ago |parent

                      Essentially, the story of DoD mandates comes down to everyone getting waivers all the time, and then the mandate being nuked.

              • zeroc8 5 hours ago |parent

                Well: readability, better type safety, less undefined behaviour, in/out parameters, named parameters, built-in concurrency.

                With C++ it's just too easy to make mistakes.

      • goalieca 6 hours ago |parent

        I’ve learned most languages on the job: C#, PHP, Go, JavaScript, …

        I know others who learned Ada on the job.

        It’s not too terrible.

      • blub 14 hours ago |parent

        The exact opposite of what you suggest already happened: Ada was mandated and then the mandate was revoked. It’s generally a bad idea to be the only customer of a specific product, because it increases costs.

        > And the F35 and America's combat readiness would be in a better place today with Ada instead of C++

        What’s the problem with the F35 and combat readiness? Many EU countries are falling over each other to buy it.

        • KolmogorovComp 14 hours ago |parent

          > Many EU countries are falling over each-other to buy it

          They are not buying it for its capabilities though, but to please their US ally/bully which would have retaliated economically otherwise.

          See the very recent Swiss case, where their pilots had chosen another aircraft (the French Rafale), only to be overruled by their politicians later on.

          • varjag 13 hours ago |parent

            Much of the existing European F-35 fleet predates Trump's first term. In fact, now quite the opposite happens: other options from reliable partners are being eyed, even if technically inferior.

          • gghffguhvc 13 hours ago |parent

            The pilots might have reassessed after Pakistan seemingly shot three of them down from over 200 km away. Intel failure was blamed, but there were likely many factors, some of which may be attributed to the planes.

            • psunavy03 4 hours ago |parent

              Pakistan has never downed an F-35.

              • KolmogorovComp 4 hours ago |parent

                They were talking about the Rafales. But I think the comment is irrelevant anyway as the scandal happened before that iirc.

          • blub 12 hours ago |parent

            Maybe the EU shouldn’t have transformed themselves into US vassals then.

            Nobody respects weakness, not even an ally. Ironically showing a spine and decoupling from the US on some topics would have hurt more short term, but would have been healthier in the long term.

            • jack_tripper 8 hours ago |parent

              >Maybe the EU shouldn’t have transformed themselves into US vassals then.

              I share the same opinion. If you're (on paper) the biggest economic bloc in the world but you can be so easily bullied, then you already failed >20 years ago.

              But I don't think it was bullying; rather the other way around. EU countries were just buying favoritism for US military protection, because it was still way cheaper than ripping off the band-aid and building a domestic military industry of similar power and scale.

              Most defense spending uses the same motivation. You're not seeking to buy the best or cheapest hardware; you seek to buy powerful friends.

        • baud147258 8 hours ago |parent

          > What’s the problem with the F35 and combat readiness?

          For example, the UK would like to use its own air-to-ground missile (the SPEAR missile) with its own F-35 jets, but it's held back by Lockheed Martin's Block 4 software update delays.

          • p_l 6 hours ago |parent

            Also by the software being a black box for everyone outside of the USA / Lockheed-Martin.

        • pixelesque 13 hours ago |parent

          > What’s the problem with the F35 and combat readiness?

          Block 4 is very delayed for starters.

        • ecshafer 7 hours ago |parent

          The F35 was like 10 years and $50B over budget.

        • pandemic_region 13 hours ago |parent

          > Many EU countries are falling over each-other to buy it.

          It's because we are obliged to want more freedom.

      • skepti 13 hours ago |parent

        Why require that companies use a specific programming language instead of requiring that the end product is good?

        > And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

        What is the evidence for this? Companies selling Ada products would almost certainly agree, since they have a horse in the race. Ada does not automatically lead to better, more robust, safer or fully correct software.

        Your line of argument is dangerous and dishonest, as real life regrettably shows.[0]

        [0]: https://en.wikipedia.org/wiki/Ariane_flight_V88

        > The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

        > The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]

        • fauigerzigerk 10 hours ago |parent

          > Why require that companies use a specific programming language instead of requiring that the end product is good?

          I can think of two reasons. First, achieving the same level of correctness could be cheaper using a better language. And second, you have to assume that your testing is not 100% correct and complete either. I think starting from a better baseline can only be helpful.

          That said, I have never used formal verification tools for C or C++. Maybe they make up for the deficiencies of the language.

          • skepti2 10 hours ago |parent

            How do you define a better programming language, how do you judge whether one programming language is better than another, and how do you prevent corruption and cartels from taking over?

            If Ada was "better" than C++, why did Ada not perform much better than C++, both in regards to safety and correctness (Ariane 5), and commercially regarding its niche and also generally? Lots of companies out there could have gotten a great competitive edge with a "better" programming language. Why did the free market not pick Ada?

            You could then argue that C++ had free compilers, but that should have been counter-weighed somewhat by the Ada mandate. Why did businesses not pick up Ada?

            Rust is much more popular than Ada, at least outside Ada's niche. Some of that is organic, for instance arguably due to Rust's nice pattern matching and modules and crates. And some of that is inorganic, like how Rust evangelists push the language through force, threats[0], harassment[1], and organized and paid media spam.

            I also tried Ada some time ago, trying to write a tiny example, and it seemed worse than C++ in some regards. Though I only spent a few hours or so on it.

            [0]: https://github.com/microsoft/typescript-go/discussions/411#d...

            [1]: https://lkml.org/lkml/2025/2/6/1292

            > Technical patches and discussions matter. Social media brigading - no thank you.

            > Linus

            https://archive.md/uLiWX

            https://archive.md/rESxe

            • fauigerzigerk 3 hours ago |parent

              >How do you define a better programming language

              A language that makes avoiding certain important classes of defects easier and more productive.

              >how do you judge whether one programming language is better than another

              Analytically, i.e. by explaining and proving how these classes of bugs can be avoided.

              I don't find empirical studies on this subject particularly useful. There are too many moving parts in software projects. The quality of the team and its working environment probably dominates everything else. And these studies rarely take productivity and cost into consideration.

    • jandrese 2 hours ago |parent

      The funny thing is the promise of Ada was "if it compiles it won't crash at runtime" which has a lot of overlap with Rust.

    • pjmlp 13 hours ago |parent

      Given that there are still 7 vendors selling Ada compilers I always found that argument a bit disingenuous.

      https://www.adacore.com/

      https://www.ghs.com/products/ada_optimizing_compilers.html

      https://www.ptc.com/en/products/developer-tools/apexada

      https://www.ddci.com/solutions/products/ddci-developer-suite...

      http://www.irvine.com/tech.html

      http://www.ocsystems.com/w/index.php/OCS:PowerAda

      http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm

      What is true is that for those vendors, and many others, like UNIX vendors such as Sun that used to have Ada compilers, paying for the Ada compiler was extra, while C and C++ were already there on the UNIX developer's SKU (a tradition that Sun started, having various UNIX SKUs).

      So schools and many folks found it easier to just buy a C or C++ compiler than an Ada one, with its price tag.

      Something that has helped Ada is the great work done by AdaCore, even if a few love hating them. They are the major sponsor for ISO work, and for spreading Ada knowledge in the open source community.

      • skepti 12 hours ago |parent

        Another factor for Ada not being more popular is probably: https://en.wikipedia.org/wiki/Ariane_flight_V88

        > The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

        > The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]

        • adrian_b 8 hours ago |parent

          The failure of Ariane was not specific to Ada.

          It is just an example that it is possible to write garbage programs in any programming language, regardless of whether it is Rust or any other supposedly safer programming language.

          A program written in C, but compiled with the option to trap on overflow errors, would have behaved identically to the Ada program on Ariane.

          A program where exceptions are ignored would have continued to run, but most likely the rocket would have crashed anyway a little later due to nonsense program decisions and the cause would have been more difficult to discover.
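
          A minimal illustrative sketch of that failure mode (names and values invented here, not the actual Ariane code): a 64-bit float is narrowed to a 16-bit signed integer, and everything hinges on what happens when the value is out of range.

          ```c
          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>

          /* Narrowing with an explicit range check, roughly what a C compiler's
             trap-on-overflow option buys you. Ada raised Constraint_Error at the
             equivalent point; the unhandled exception then shut down the unit. */
          int16_t narrow_checked(double horizontal_bias)
          {
              if (horizontal_bias > INT16_MAX || horizontal_bias < INT16_MIN) {
                  fprintf(stderr, "operand error: %f out of int16 range\n",
                          horizontal_bias);
                  abort(); /* trap, instead of silently wrapping to nonsense */
              }
              return (int16_t)horizontal_bias;
          }
          ```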

        • pjmlp 12 hours ago |parent

          People love to point that out, missing the number of failures in C-derived languages.

          • skepti 12 hours ago |parent

            But C-derived languages are also used much more. And it still shows that Ada does not automatically make software correct and robust. It presumably did indeed make Ada less popular than if it had not happened.

            • pjmlp 11 hours ago |parent

              People still die in car crashes when wearing seatbelts, ergo seatbelts are useless.

              • skepti2 10 hours ago |parent

                Yet that was not any of my arguments. It, ironically, applies more to the argument you made in your previous post.

                A better argument would have been based on statistics. But that might both be difficult to do, and statistics can also be very easy to manipulate and difficult to handle correctly.

                I think companies should be free to choose any viable option, and then have requirements that the process and end product are good. Mandating Ada or other programming languages doesn't seem like it would have prevented Ariane 5, and probably wouldn't improve safety, security or correctness; instead it would just open the door to limiting competition, cartels and a false sense of security. I believe that one should never delegate responsibility to the programming language; rather, programmers, organizations and companies are responsible for which languages they choose and how they use them (for instance using a formally verified subset). On the other hand, having standards and qualifications like ISO 26262 and ASIL-D, like what Ferrocene is trying to do with their products for Rust, is fine, I believe. Even though, specifically, some things about the Ferrocene-derived specification seem very off.

    • pyuser583 16 hours ago |parent

      Yeah I find myself wishing it would take off again.

      I’m sure I’m idealizing it, but at least I’m not demonizing it like folks did back in the day.

    • skepti2 11 hours ago |parent

      > It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be perceived as widely as career pidgeonholing today.

      Are you sure? I cannot even find Ada in [0].

      I tried modifying some Hello World example in Ada some weeks ago, and I cannot say that I liked the syntax. Some features were neat. I had some trouble with figuring out building and organizing files. Like C++, and unlike Rust I think, there are multiple source file types, like how C++ has header files. I also had trouble with some flags, but I was trying to use some experimental features, so I think that part was on me.

      [0]: https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...

  • anonymousiam a day ago

    The same is true for the software that runs many satellites. Use of the STL is prohibited.

    The main issue is mission assurance. Using the stack or the heap means your variables aren't always at the same memory address. This can be bad if a particular memory cell has failed. If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.
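
    A minimal sketch of that scheme (illustrative only: GCC-style section attribute, invented names, not from any real flight program). Every variable is statically allocated, so the linker fixes its address, and a patch can relocate it if the underlying memory cell fails:

    ```c
    #include <stdint.h>

    struct nav_state {
        int32_t altitude_cm;
        int32_t velocity_cm_s;
    };

    /* Pinned into a named section whose placement the linker script controls;
       no stack, no heap: one fixed location for the whole mission. */
    __attribute__((section(".mission_data")))
    static struct nav_state g_nav;

    int32_t update_altitude(int32_t delta_cm)
    {
        g_nav.altitude_cm += delta_cm; /* always the same physical address */
        return g_nav.altitude_cm;
    }
    ```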

    • 0xffff2 2 hours ago |parent

      For what it's worth, I am an active developer of space flight software. This might be true somewhere, but it's not true anywhere I've ever encountered. The contortions required to avoid using the stack would be insane and cause far more bugs than it could ever prevent. I'm pretty confident asserting that this is simply not a thing. Even heap allocation is very often allowed, but restricted to program initialization only. Furthermore, these rules are relaxing all the time. I am aware of at least one mission currently in space that is flying the C++14 STL with no restrictions on heap allocation and exceptions enabled. Unmodified `std::map` is currently flying in space with no ill effects.

    • quietbritishjim 11 hours ago |parent

      > Using the stack or the heap means your variables aren't always at the same memory address.

      Your mention of the STL makes it sound like you're talking about C++. But I don't know of any C++ compiler that lets you completely avoid use of the stack, even if you disable the usual suspects (RTTI and exceptions). Sure, you'd have to avoid local variables defined within a function's body (or at block scope), but that's nowhere near enough.

      * The compiler would need to statically allocate space for every function's parameters and return address. That's actually how early compilers did work, but today it would be inefficient because there are surely so many functions defined in a program's binary compared to the number being executed at any given time. (Edit: I suppose you already need the actual code for those functions, so maybe allocating room for their parameters is not so bad.)

      * It would also mean that recursion would not work, even mutual recursion (so you'd need runtime checks, because this would be hard to detect at compile/link time). I suspect this is less of a problem than it sounds, but I'm not aware of a C++ compiler that supports it.

      * You'd also need to avoid creating any temporary variables at all e.g. y = a + b + c would not be allowed if a,b,c are non-trivial types. (y = a + b would be OK because the temporary could be constructed directly into y's footprint, or stored temporarily in the return space of the relevant operator+(), which again would be statically allocated).

      Is that really what you meant? I suspect not, but without all that your point about avoiding the stack doesn't make any sense.

      • RealityVoid 9 hours ago |parent

        Your points are correct, but recursion is banned anyway in safety-critical applications. The main issue is determinism. The point that you have to use the stack for call stacks is correct; OP seems misinformed.

        • adrian_b 8 hours ago |parent

          You have to use the stack for procedure calls on x86/x86-64 CPUs, where the hardware enforces this.

          In most other surviving CPU ISAs the return address is saved in a register, and it is easy to arrange in a compiler to use only procedure arguments that are passed in registers, the only price paid being a reasonable upper limit on the number of parameters of a function, e.g. 12 or 24, depending on the number of general-purpose registers (e.g. 16 or 32). For the very rare case when a programmer would want more parameters, some of them should be grouped into a structure.

          With this convention, which normally should not be a problem, there is no need for a call stack. There can be software-managed stacks, which can be used even for implementing recursion, when that is desired.

          The use of static memory for passing function arguments was necessary only in the very early computers, which were starved of registers.

          • RealityVoid 8 hours ago |parent

            I believe it's possible to do what you've described, but I am not aware of any compiler that does this.

            What do you get by doing it like this?

            Also, in your described structure, how do you handle nested function calls? I'm sure there exists a convoluted scheme that does this, but not sure with the current call assumptions.

            You also lose ABI compatibility with a bunch of stuff.

            And regardless, I mostly program on RISC-V and ARM - most compilers like to pass arguments in the registers, but use the stack anyway for local context.

            • jcalvinowens 7 hours ago |parent

              You can do it on x86 too, just use jmp instead of call and invent your own arbitrary register scheme for it. This x86 program has no stack: https://github.com/jcalvinowens/asmhttpd

              I don't think it's too hard to imagine a compiler that does that, although it would obviously be very limited in functionality (nesting would be disallowed as you note, or I guess limited to the number of registers you're willing to waste on it...).

          • 0xffff2 2 hours ago |parent

            I honestly can't tell if you know a lot more than me or a lot less than me about how computers work... A couple of honest questions:

            1. Where do you save the current value of the return address register before calling a function?

            2. When parameters are "grouped into a structure" and the structure is passed as an argument to a function, where do you store that structure?

            • RealityVoid 2 hours ago |parent

              Not OP, but presumably, the answers are:

              1) You don't... hence my question about no nested function calls. If you push it anywhere else, you can call it whatever you want, but you just re-invented the stack. I _guess_ you could do some weird stuff to technically not have a stack, but... again, it's weird. And for what, again?

              2) Some fixed address. If you have for example:

              ```c
              typedef struct { int payload[1024]; } typeRealBigStructure;

              typeRealBigStructure foo;

              void baz(typeRealBigStructure *s) {
                  /* do whatever to *s */
              }

              void bar(void) {
                  baz(&foo);
              }
              ```

              foo will probably end up in the BSS and will take up that space for the whole lifetime of the program. That's not the heap, not the stack, just... a fixed location in memory where the linker placed it.

              I guess on big PCs stuff is very dynamic and you use malloc for a lot of things, but in embedded C it's a very common pattern.

              • 0xffff2 2 hours ago |parent

                Ah, you're right, the struct case is actually pretty straightforward (especially since recursion is likely forbidden anyway), I just have trouble contorting my brain to such a different viewpoint.

    • 5d41402abc4b 16 hours ago |parent

      > Using the stack or the heap means your variables aren't always at the same memory address

      Where do you place the variables then? As global variables? And how do you detect if a memory cell has gone bad?

      • cminmin 13 hours ago |parent

        Your programs have a data segment. It's not the heap nor the stack, and it can be (depending on loader/linker) a source of reliable object-to-address mappings, as it's not dynamically populated.

        • repelsteeltje 13 hours ago |parent

          That sounds like your answer is: "Yes, global variables".

          That may be a perfectly good solution in many embedded environments, but in most other contexts global variables are considered bad design, or very limiting and impractical.

          • TuxSH 11 hours ago |parent

            > global variables are considered bad design

            Global mutable variables, and they usually tend to be grouped into singletons (solving initialization issues, and fewer people bat an eye)

            • RealityVoid 2 hours ago |parent

              Even global mutable variables are a problem only because they lead to spaghetti code and messy state handling if you use them in multiple places. But you could just... make a global variable and then handle it like you would a variable initialized with malloc. No functional issue there.

          • izacus 12 hours ago |parent

            They're considered impractical mostly because language tooling doesn't support them appropriately.

            • repelsteeltje 11 hours ago |parent

              Can you elaborate?

              For instance, how would better tooling help with storing a TCP buffer in global memory?

              • tatjam 7 hours ago |parent

                As a quick example, compare doing embedded work with a C static uint8_t[MAX_BUFFER_SIZE] alongside a FreeRTOS semaphore and a counter for the number of bytes written, vs using Rust's heapless::Vec<u8, MAX_BUFFER_SIZE> behind an embassy Mutex.

                The first will be a real pain, as you now have 3 global variables, and the second will look pretty much like multi-threaded Rust running on a normal OS, but with some extra logic to handle the buffer growing too big.

                You can probably squeeze more performance out of the C code, especially if you know your system in depth, but (from experience) it's very easy to lose track of the program's state and end up shooting your foot.
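
                A hedged sketch of the C side of that comparison (the FreeRTOS semaphore API is real; the buffer size and names are invented): three separate globals that every call site must keep consistent by hand.

                ```c
                #include <stdint.h>
                #include <string.h>
                #include "FreeRTOS.h"
                #include "semphr.h"

                #define MAX_BUFFER_SIZE 256

                static uint8_t buffer[MAX_BUFFER_SIZE]; /* global 1: storage */
                static size_t bytes_written;            /* global 2: fill count */
                static SemaphoreHandle_t buffer_mutex;  /* global 3: the lock */

                /* Append to the buffer; the lock and the counter must be kept
                   in sync with the contents manually, at every call site. */
                int buffer_append(const uint8_t *data, size_t len)
                {
                    if (xSemaphoreTake(buffer_mutex, portMAX_DELAY) != pdTRUE)
                        return -1;
                    int ok = (bytes_written + len <= MAX_BUFFER_SIZE);
                    if (ok) {
                        memcpy(&buffer[bytes_written], data, len);
                        bytes_written += len;
                    }
                    xSemaphoreGive(buffer_mutex);
                    return ok ? 0 : -1;
                }
                ```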

                • repelsteeltje 4 hours ago |parent

                  Okay, fair enough.

                  So it's mostly about the absence of abstraction, in the C example? C++ would offer the same convenience (with std::mutex and std::array globals), but in C it's more of a hassle. Gotcha.

                  One more question because I'm curious - where would you anticipate C would be able to squeeze out more performance in above example?

    • RealityVoid 9 hours ago |parent

      You can definitely have local variables on the stack. I don't know where you got that you don't use the stack. Heap is kind of a no-no, although memory pools are used.

      > If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.

      You can and do put the stack and heap pool at fixed memory ranges, so you can always do this. I'm not sold at all on this reasoning.

    • coppsilgold 20 hours ago |parent

      > This can be bad if a particular memory cell has failed. If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.

      This seems like a rather manual way to go about things for which an automated solution can be devised, such as special ECC memory where you also account for entire-cell failure with Reed-Solomon coding, or some boot process which blacklists bad cells, etc.

      • j16sdiz 14 hours ago |parent

        It is more than that.

        This is what makes remote debugging possible. It is impossible to do interactive remote debugging over an ultra-low-bandwidth link. If everything has a static address and deterministic state, you can have an exact copy on the ground and debug there.

        • coppsilgold 3 hours ago |parent

          > If everything has a static address and deterministic state, you can have an exact copy on the ground and debug there.

          You can also have deterministic dynamic - the satellite could transmit its dynamic state (a few bits signifying which memory cells failed) and then you proceed deterministically on the ground.

        • varjag 13 hours ago |parent

          Interactive debugging is apparently possible and was reportedly done on the Deep Space One mission. One of the developers involved frequents HN, I believe.

          • unloader6118 11 hours ago |parent

            Kind of. That took hours if not days.

            A local exact replica with deterministic state saves lots of time.

    • menaerus 14 hours ago |parent

      So none of the functions you implement have in/out parameters?

      • cminmin 13 hours ago |parent

        If you use few, you can have 'em all in the registers perhaps (not sure what arch they're rolling?)

    • Thaxll a day ago |parent

      Can't this be done at runtime? Like, the underlying calls could blacklist hardware addresses on read/write faults?

      • amluto a day ago |parent

        If you have memory to spare and are using hardware with an MMU, you can remap your logical address to a different page. Linux can do this, but only for user memory.

        • anonymousiam a day ago |parent

          This assumes that the operating system can run. If the memory corruption impacts the OS, then it may be impossible to recover. As the systems (and software) have become more complex, keeping these Mission Assurance best practices becomes more important, but the modern generation of developers sometimes loses sight of this.

          A good example of what I'm talking about is a program that I was peripherally involved with about 15 years ago. The lead wanted to abstract the mundane details from the users (on the ground), so they would just "register intent" with the spacecraft, and it would figure out how to do what was wanted. The lead also wanted to eliminate features such as "memory dump", which is critical to the anomaly resolution process. If I had been on that team, I would have raised hell, but I wasn't, and at the time, I needed that team lead as an ally.

          • dahart 6 hours ago |parent

            Do embedded satellite systems usually have an OS these days? Is this a custom-made OS, or do you have any examples of an OS that honors the no-stack/heap and fixed-address requirements you mentioned?

            What does the OS do? I don’t know about aerospace specifically, but plenty of embedded microcontroller systems don’t have an OS, and I would assume that having an OS is a massive risk against any mission assurance goals, no?

            • anonymousiam 3 hours ago |parent

              It's a mixed bag. Some programs use Green Hills Integrity, some use Wind River VxWorks, some roll their own. I've done all of the above.

              The main purpose of the OS is to centralize, schedule, and manage the resources needed for the mission. It's usually pretty lightweight. Different philosophies are used on different missions. The OS risks can be mitigated. Usually there's a backup "golden copy" OS that can boot if needed. There's also "Safe Mode", which prioritizes communications with the ground, so anomalies can be worked.

          • charcircuit 20 hours ago |parent

            >This assumes that the operating system can run.

            So does being able to download a new version of software that uses different memory addresses. The point is if you are able to patch software, you are able to patch memory maps.

          • amluto 19 hours ago |parent

            Oh, to be clear, I would not do this if I needed that degree of reliability.

            Or maybe I would use an MMU but drive it with a kernel written in the old fashioned way with no allocation. It would depend on what hardware I had available and what faults I wanted to survive.

            (I’m not an aerospace software developer.)

          • 5d41402abc4b 16 hours ago |parent

            >This assumes that the operating system can run.

            You could have two copies of the OS mapped to different memory regions. The CPU would boot with the first copy; if it fails, the watchdog would trigger and the CPU could try to boot the second copy.

          • d-lisp a day ago |parent

            Wow, but how did they deal with anomalies?

            I mean, even when I have the codebase readily accessible and testable in front of my eyes, I never trust the tests to be enough. I often spot forgotten edge cases and bugs of various sorts in C/embedded projects BECAUSE I run the program, can debug, and can spot memory issues and a whole lot of other things, for which you NEED to gather the most information you can in order to find solutions.

  • don-code a day ago

    > All if, else if constructs will contain either a final else clause or a comment indicating why a final else clause is not necessary.

    I actually do this as well, but in addition I log out a message like, "value was neither found nor not found. This should never happen."

    This is incredibly useful for debugging. When code is running at scale, nonzero probability events happen all the time, and being able to immediately understand what happened - even if I don't understand why - has been very valuable to me.
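
    A minimal sketch of that pattern (the enum, names and log message are invented for illustration):

    ```c
    #include <stdio.h>

    enum lookup_status { FOUND, NOT_FOUND };

    static void handle(enum lookup_status status)
    {
        if (status == FOUND) {
            puts("found");
        } else if (status == NOT_FOUND) {
            puts("not found");
        } else {
            /* Final else, as the JSF rule requires; at scale this branch
               eventually fires and tells you something is badly wrong. */
            fprintf(stderr, "value was neither found nor not found; "
                            "this should never happen\n");
        }
    }
    ```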

    • kace91 20 hours ago |parent

      I like Rust matching for this reason: you need to cover all branches.

      In fact, not using a default (the else clause equivalent) is ideal if you can explicitly cover all cases, because then if the possibilities expand (say, a new value in an enum) you’ll be prompted by the compiler to cover the new case, which might otherwise slip by.

      • uecker 11 hours ago |parent

        And I like using enums in C ;-) The compiler tells you to cover all branches.

        https://godbolt.org/z/bY1P9Kx7n
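
      Presumably something along these lines (an illustrative sketch, not the code behind the link): with -Wswitch, which -Wall enables, GCC and Clang warn when a switch on an enum has no case for some enumerator and no default.

      ```c
      enum color { RED, GREEN, BLUE };

      int to_code(enum color c)
      {
          switch (c) { /* warning: enumeration value 'BLUE' not handled */
          case RED:
              return 1;
          case GREEN:
              return 2;
          }
          return 0;
      }
      ```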

        • kace91 7 hours ago |parent

          Rust is a bit smarter than that, in that it covers exhaustiveness of possible states, for more than just enums:

              fn g(x: u8) {
                  match x {
                      0..=10 => {},
                      20..=200 => {},
                  }
              }

          That for example would complain about the ranges 11 to 19 and 201 to 255 not being covered.

          You could try to map ranges to enum values, but then nobody would guarantee that you covered the whole range while mapping to enums so you’d be moving the problem to a different location.

          Rust's approach is not flawless; larger data types like i32 or floats can't check full coverage (I suppose for performance reasons), but it's still quite useful.

          • uecker 7 hours ago |parent

            In principle C compilers can do this too (https://godbolt.org/z/Ev4berx8d), although you need to trick them into doing it for you. This could certainly be improved.

        • rundev 8 hours ago |parent

          The compiler also tells you that even if you cover all enum members, you still need a `default` to cover everything, because C enums allow non-member values.

    • YesBox 18 hours ago |parent

      Same. I go one step further and create a macro _STOP which is defined as whatever your language's DebugBreak() is. And if it's really important, _CRASH (this forces me to fix the issue immediately).

      • creato 18 hours ago |parent

        The "standard" (typically defined in projects I'm familiar with, and as of C23, an actual standard) is "unreachable": https://en.cppreference.com/w/c/program/unreachable.html

        • vlovich123 17 hours ago |parent

          That is not the same thing at all. Unreachable means that the entire branch cannot be taken, and the compiler is free to inject optimizations assuming that’s the case. It doesn’t need to crash if the assumption is violated - indeed it probably won’t. It’s the equivalent of having something like

              x->foo();
              if (x == NULL) {
                  return error…;
              }
          
          This literally caused a security vulnerability in the Linux kernel, because it’s UB to dereference null (even in the kernel, where engineers assumed it had well-defined semantics), and the compiler elided the null pointer check, which then created a vulnerability.

          I would say that using unreachable() in mission-critical software is super dangerous, more so than an allocation failing. You want to remove all potential for UB (i.e. safe Rust with no or minimal unsafe, not sprinkling in UB as a form of documentation).

          • creato 15 hours ago |parent

            You're right, the thing I linked to does exactly that. I should have read it more closely.

            The projects that I've worked on unconditionally define it as a thing that crashes (e.g. `std::abort` with a message). They don't actually use that C/C++ thing (because C23 is too new), and apparently it would be wrong to do so.
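
            Such a macro might look roughly like this (an illustrative sketch, not from any specific project). Unlike C23 unreachable(), reaching it is well defined: it always aborts with a message.

            ```c
            #include <stdio.h>
            #include <stdlib.h>

            /* Crash loudly instead of invoking undefined behavior. */
            #define UNREACHABLE(msg)                                      \
                do {                                                      \
                    fprintf(stderr, "unreachable: %s (%s:%d)\n",          \
                            (msg), __FILE__, __LINE__);                   \
                    abort();                                              \
                } while (0)
            ```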

            • uecker 11 hours ago |parent

              You can use the standard feature with the unreachable sanitizer: https://godbolt.org/z/3hePd18Tn

          • skepti 13 hours ago |parent

            For many types of projects and approaches, avoiding UB is necessary but not at all sufficient. It's perfectly possible to have critical bugs that can cause loss of health or life or loss of millions of dollars, without any undefined behavior being involved.

            Funnily enough, Rust's pattern matching, an innovation among systems languages without GCs (a small space inhabited by languages like C, C++ and Ada), may matter more regarding correctness and reliability than its famous borrow checker.

            • zozbot234 13 hours ago |parent

              Didn't PASCAL have variant record types with a kind of primitive pattern matching already?

              • skepti3 10 hours ago |parent

                Possibly, I am not sure, though Delphi, a successor language, doesn't seem to advertise itself as having pattern matching.

                Maybe it is too primitive to be considered proper pattern matching, as pattern matching is known these days. Pattern matching has actually evolved quite a bit over the decades.

        • YesBox 18 hours ago |parent

          C++ keeps getting bigger and bigger :D

          Thanks for sharing

          • creato 17 hours ago |parent

            This is actually C; C++ has not done something similar AFAIK.

            • Fulgen 10 hours ago |parent

              https://en.cppreference.com/w/cpp/utility/unreachable.html

            • TuxSH 10 hours ago |parent

              C++23 does have std::unreachable (as a function), and its counterpart [[assume(expr)]]

            • hn_go_brrrrr 13 hours ago |parent

              Fortunately the major compiler vendors all have. Routing around the standards committee is getting more and more common.

  • jandrewrogers a day ago

    For those interested, the F-35 (née Joint Strike Fighter) C++ coding standards can be found here, all 142 pages of it:

    https://www.stroustrup.com/JSF-AV-rules.pdf

    • tgv a day ago |parent

      From quickly glancing over a couple of pages, that looks sensible. Which makes me curious to see some exceptions to the "shall" rules. With a project of this size, that should give some idea about the usefulness of such standards.

    • Animats 21 hours ago |parent

      As is common in hard real time code, there is no dynamic allocation during operation:

          allocation/deallocation from/to the free store (heap) 
          shall not occur after initialization.
      
      This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones?
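
      A minimal sketch of the pattern the rule implies (pool size and names invented for illustration): all free-store allocation happens once during initialization, and at runtime the system only recycles what it already owns.

      ```c
      #include <stdlib.h>

      #define MAX_TRACKS 512

      typedef struct {
          double x, y, z;
          int in_use;
      } track_t;

      static track_t *track_pool; /* allocated once, never freed */

      int tracks_init(void) /* called during initialization only */
      {
          track_pool = calloc(MAX_TRACKS, sizeof *track_pool);
          return track_pool ? 0 : -1;
      }

      track_t *track_acquire(void) /* at runtime: reuse, never allocate */
      {
          for (int i = 0; i < MAX_TRACKS; i++) {
              if (!track_pool[i].in_use) {
                  track_pool[i].in_use = 1;
                  return &track_pool[i];
              }
          }
          return NULL; /* fixed budget exhausted */
      }
      ```
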
      • jandrewrogers 21 hours ago |parent

        Why would the modern environment materially change this? The initialized resource allocation reflects the limitations of the hardware. That budget is what it is.

        I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.

        • Animats17 hours ago |parent

          Basic flight control is a fixed-size problem. More military aircraft systems now depend on what the environment and the enemy are doing.

          • jasonwatkinspdx17 hours ago |parent

            You're just imagining things at this point.

            The overwhelming majority of embedded systems are designed around a max buffer size and a known worst-case execution time. Attempting to balance resources dynamically in a fine-grained way is almost always a mistake in these systems.

            Putting the words "modern" and "drone" in your sentence doesn't change this.

          • jandrewrogers15 hours ago |parent

            The compute side of real-time tracking and analysis of entity behavior in the environment is bottlenecked by what the sensors can resolve at this point. On the software side you really can’t flood the zone with enough drones etc such that software can’t keep up.

            These systems have limits but they are extremely high and in the improbable scenario that you hit them then it is a priority problem. That design problem has mature solutions from several decades ago when the limits were a few dozen simultaneous tracks.

      • mrgaro15 hours ago |parent

        There are missiles in which the allocation rate is calculated per second and then the hardware just has enough memory for the entire duration of the missile's flight plus a bit more. Garbage collection is then done by exploding the missile on the target ;)

        • superxpro126 hours ago |parent

          We call this "explosive deallocation". Destructors have a whole new meaning.

      • m4nu3l19 hours ago |parent

        What you are actually doing here is moving allocation logic from the heap allocator to your program logic.

        In this way you can use pools or buffers of which you know exactly the size. But, unless your program is always using exactly the same amount of memory at all times, you now have to manage memory allocations in your pool/buffers.
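
        As a toy sketch of that tradeoff (illustrative only, nothing like real avionics code), a fixed-capacity pool might look like:

            #include <stddef.h>

            #define POOL_CAP 64

            typedef struct { double state[8]; } Track;

            /* All storage reserved up front; no heap use after init. */
            static Track  pool[POOL_CAP];
            static Track *free_list[POOL_CAP];
            static size_t free_count;

            void pool_init(void) {
                for (size_t i = 0; i < POOL_CAP; i++)
                    free_list[i] = &pool[i];
                free_count = POOL_CAP;
            }

            Track *track_acquire(void) {
                /* exhaustion is an explicit, checkable condition */
                return free_count ? free_list[--free_count] : NULL;
            }

            void track_release(Track *t) {
                free_list[free_count++] = t;
            }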

      • csmantle21 hours ago |parent

        "AI" comes in various flavors. It could be a expert system, a decision forest, a CNN, a Transformer, etc. In most inference scenarios the model is fixed, the input/output shapes are pre-defined and actions are prescribed. So it's not that dynamic after all.

        • vlovich12317 hours ago |parent

          This is also true of LLMs. I’m really not sure of OP’s point - AI (really all ML) generally is like the canonical “trivial to preallocate” problem.

          Where dynamic allocation starts to be really helpful is if you want to minimize your peak RAM usage for coexistence purposes (e.g. you have other processes running) or want to undersize your physical RAM requirements by leveraging temporal differences between different parts of the code (i.e. components A and B never use memory simultaneously, so either can reuse the same RAM). It also simplifies some algorithms, and if you're ever dealing with variable-length inputs it can help you avoid reasoning about maximums at design time (provided you correctly handle allocation failures).

      • dfedbeef17 hours ago |parent

        How do you think these modern AI-guided drones use their AI? What part of the drone uses it?

    • shepherdjerreda day ago |parent

      I wonder if they use static analysis to enforce these rules, or if developers are expected to just know all of this

      • jjmarra day ago |parent

        "shall" recommendations are statically analyzed, "will" are not.

      • ibejoeba day ago |parent

        static analysis

    • genewitcha day ago |parent

      In general, are these good recommendations for building software for embedded or lower-spec devices? I don't know how to do preprocessor macros anyhow, for instance - so as I am reading this I am like "yeah, I agree..." until the "no stdio.h"!

      • dmoy18 hours ago |parent

        Embedded more so than just lower-spec devices. Depends on the domain too.

        stdio.h is fine in some embedded contexts, and very very not fine in others

      • GoblinSlayera day ago |parent

        stdio.h is not what you would use in safe code.

        • fragmede19 hours ago |parent

          do they use f35io.h?

          • ecshafer15 hours ago |parent

            Depends. You use vendor specific libraries for hard real time systems, or in house libraries, or roll your own functions.

          • whaleofatw202218 hours ago |parent

            AFAIR they use a lot of stuff related to the Green Hills toolchain.

    • extraduder_irea day ago |parent

      The first time I came across this document, someone was using it as an example of how the C++ you write for an Arduino Uno is still C++ despite missing so many features.

    • raffael_dea day ago |parent

      Interesting font choice for the code snippets. I wonder if that's been chosen on a whim or if there is a reason for not going with mono space.

      • throwaway2037a day ago |parent

        The font used for code samples looks nearly the same as "The C++ Programming Language" (3rd edition / "Wave") by Bjarne Stroustrup. Looking back, yeah, I guess it was weird that he used italic variable-width text for code samples, but used tab stops to align the comments!

    • mslaa day ago |parent

      Interesting they're using C++ as opposed to Ada.

      • WD-42a day ago |parent

        The video goes into the history of why the military eventually accepted c++ instead of enforcing Ada.

  • kalugaa day ago

    The “90% ban” isn’t about hating C++ — it’s about guaranteeing determinism. In avionics, anything that can hide allocations, add unpredictable control flow, or complicate WCET analysis gets removed. Once you operate under those constraints, every language shrinks to a tiny, fully-auditable subset anyway.

    • grougnax12 hours ago |parent

      They could use 100% of Rust

      • accelbred3 hours ago |parent

        No they could not. Rust's standard library heavily uses dynamic memory allocation and panics, for example. MISRA C:2025 Addendum 6 covers MISRA rules that still apply to Rust, as an example of how one would restrict Rust in safety-critical contexts.

        • steveklabnik2 hours ago |parent

          In safety critical contexts, you're not usually using the standard library. Or at least, you're using core, not alloc or std.

          Panics can still exist, of course, but depending on the system design you probably don't want them either, which is a bit more difficult to remove but not the end of the world.

          I hadn't seen that addendum though yet, that's very cool!

  • time4teaa day ago

    a = a; // misra

    Actual code i have seen with my own eyes. (Not in F-35 code)

    It's a way to avoid removing an unused parameter from a method. Unused parameters are disallowed, but this is fine?

    I am sceptical that these coding standards make for good code!

    • tialaramexa day ago |parent

      Studies have looked at MISRA, I'm not aware of any for the JSF guidelines. For MISRA there's a mix, some of the rules seem to be effective (fewer defects in compliant software), some are the opposite (code which obeys these rules is more likely to have defects) and some were irrelevant.

      Notably this document is from 2005. So that's after C++ was standardized but before their second bite of that particular cherry, and twenty years before its author, Bjarne Stroustrup, suddenly decided, after years of insisting that C++ dialects are a terrible idea and will never be endorsed by the language committee, that in fact dialects (now named "profiles") are the magic ingredient to fix the festering problems with the language.

      While Laurie's video is fun, I too am sceptical about the value of style guides, which is what these are. "TABS shall be avoided" or "Letters in function names shall be lowercase" isn't because somebody's aeroplane fell out of the sky - it's due to using a style Bjarne doesn't like.

      • platinumrad21 hours ago |parent

        The "good" rules are like "don't write off the end of an array", and the bad ones are like "no early returns" or "variable names must not be longer than 6 characters". 95% of the "good" rules are basically just longer ways of saying "don't invoke undefined behavior".

        • bboygravity16 hours ago |parent

          Why is "no early returns" not a good rule?

          I do early returns in code I write, but ONLY because everybody seems to do it. I prefer stuff to be in predictable places: variables at the top, return at the end. Simpler? Delphi/Pascal style.

          • jandrewrogers14 hours ago |parent

            Early returns make the code more linear, reduce conditional/indent depth, and in some cases make the code faster. In short, they often make code simpler. “No early returns” is a soft version of “no gotos”. There are cases where it is not possible to produce good code while following those heuristics. A software engineer should strive to produce the best possible code, not rigidly follow heuristics even when they don’t make sense.

            There is an element of taste. Don’t create random early returns if it doesn’t improve the code. But there are many, many cases where it makes the code much more readable and maintainable.

          • stonemetal125 hours ago |parent

            Because good code checks preconditions and returns early if they are not met.

          • mrgaro14 hours ago |parent

            I remember having this argument with my professor at school, who insisted that a function should have only one "return" clause, at the very end. Even as I tried, I could not get him to explain why this would be valuable and how it produces better code, so I'm interested in hearing your take on this?

            • superxpro126 hours ago |parent

              It helps prevent bugs with state. The apple login bypass bug comes to mind.

              Basically, you have code in an "if" statement, and if you return early in that if statement, you might have code that you needed to run, but didn't.

              Forcing devs to only "return once" encourages the dev to think through any stateful code that may be left in an intermediate state.

              In practice, at my shop, we permit early returns for trivial things at the top of a function, otherwise only one return at the bottom. That seems to be the best of both worlds for this particular rule.

              • Symmetry2 hours ago |parent

                The rule of thumb I use is to only have one return after modifying state that will persist outside the function call.

              • SideburnsOfDoom5 hours ago |parent

                > The apple login bypass bug comes to mind.

                I think you're talking about this "goto fail" bug?

                https://teamscale.com/blog/en/news/blog/gotofail

                > In practice, at my shop, we permit early returns for trivial things

                Are you also writing C or similar? If so, then this rule is relevant.

                In modern languages, there are language constructs to aid cleanup on exit, such as using(resource) {} or try {} finally {}. It really does depend on whether these conveniences are available or not.

                For the rest of us, the opposite of "no early return" is to choose early return only sometimes - in cases where it results in better code, e.g. shorter, less indented, and unlikely to cause issues due to a failure to clean up on exit. And avoid it where it might be problematic. In other words, to taste.

                > Kent Beck, Martin Fowler, and co-authors have argued in their refactoring books that nested conditionals may be harder to understand than a certain type of flatter structure using multiple exits predicated by guard clauses. Their 2009 book flatly states that "one exit point is really not a useful rule. Clarity is the key principle: If the method is clearer with one exit point, use one exit point; otherwise don’t".

                https://en.wikipedia.org/wiki/Structured_programming#Early_e...

                This thinking is quite different to, say, 25 years earlier than that, and IMHO the programming language constructs available play a big role.

          • katzenversteher15 hours ago |parent

            For me it's mostly about indentation / scope depth. I prefer to have some early exits with precondition checks at the beginning; these are things I don't have to worry about afterwards, and I can start with the rest at indentation level "0". The "real" result is at the end.

          • dragonwriter13 hours ago |parent

            > Why is "no early returns" not a good rule?

            It might be a good guideline.

            It's not a good rule because slavishly following it results in harder-to-follow code written to adhere to it.

          • SideburnsOfDoom13 hours ago |parent

            The "no early returns" rule came about because it was a good rule in context, specifically C and FORTRAN code before roughly 1990. It was part of "structured programming", contemporary with "Go To Statement Considered Harmful" (Dijkstra, 1968). And it became received wisdom - i.e. a rule that people follow without close examination.

            For example of the rule, a function might allocate, do something and then de-allocate again at the end of the block. A second exit point makes it easy to miss that de-allocation, and so introduce memory leaks that only happen sometimes. The code is harder to reason about and the bugs harder to find.
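
            A toy C sketch of that failure mode (hypothetical function, not from any real codebase) - the second exit point is where the leak hides:

                #include <stdlib.h>
                #include <string.h>

                int process(const char *input) {
                    char *buf = malloc(256);
                    if (buf == NULL)
                        return -1;
                    if (strlen(input) >= 256) {
                        free(buf);   /* easy to forget on this second exit */
                        return -1;
                    }
                    strcpy(buf, input);
                    /* ... do work ... */
                    free(buf);
                    return 0;
                }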

            source:

            > A problem with early exit is that cleanup statements might not be executed. ... Cleanup must be done at each return site, which is brittle and can easily result in bugs.

            https://en.wikipedia.org/wiki/Structured_programming#Early_r...

            About 90% of us will now be thinking "but that issue doesn't apply to me at all in $ModernLang. We have GC, using (x) {} blocks, try-finally, or we have deterministic finalisation, etc."

            And they're correct. In most modern languages it's fine. The "no early returns" rule does not apply to Java, TypeScript, C#, Rust, Python, etc. Because these languages specifically made early return habitable.

            The meta-rule is that some rules persist past the point when they were useful. Understand what a rule is for and then you can say when it applies at all. Rules without reasons make this harder. Some rules have lasted: we typically don't use goto at all any more, just structured wrappers of it such as if-else and foreach

            • zahlman4 hours ago |parent

              > And they're correct. In most modern languages it's fine. The "no early returns" rule does not apply to Java, TypeScript, C#, Rust, Python, etc. Because these languages specifically made early return habitable.

              Early return is perfectly manageable in C as long as you aren't paranoid about function inlining. You just have a wrapper that does unconditional setup, passes the acquired resources to a worker, and unconditionally cleans up. Then the worker can return whenever it likes, and you don't need any gotos either.
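
              A minimal sketch of that wrapper/worker split (function names are made up for illustration):

                  #include <stdio.h>

                  /* Worker: owns no resources, so it may return anywhere. */
                  static int parse_lines(FILE *f) {
                      char line[128];
                      while (fgets(line, sizeof line, f)) {
                          if (line[0] == '#')
                              continue;     /* skip comments */
                          if (line[0] == '!')
                              return -1;    /* early return, no cleanup owed */
                          /* ... process line ... */
                      }
                      return 0;
                  }

                  /* Wrapper: unconditional setup and cleanup around the worker. */
                  int parse_file(const char *path) {
                      FILE *f = fopen(path, "r");
                      if (f == NULL)
                          return -1;
                      int rc = parse_lines(f);
                      fclose(f);    /* runs no matter where the worker returned */
                      return rc;
                  }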

              • SideburnsOfDoom4 hours ago |parent

                > In C, you just have a wrapper that does unconditional setup, passes the acquired resources to a worker, and unconditionally cleans up. Then the worker can return whenever it like

                Right, so you allow early return only in functions that do not have any setup and clean-up - where it's safe. Something like "pure" functions. And you describe a way to extract such functions from others.

        • lou130610 hours ago |parent

          > variable names must not be longer than 6 characters

          My memory might be lapsing here, but I don't think MISRA has such a rule. C89/C90 states that _external_ identifiers only matter up to their first 6 characters [1], while MISRA specifies uniqueness up to the first 31 characters [2].

          [1] https://stackoverflow.com/questions/38035628/c-why-did-ansi-...

          [2] https://stackoverflow.com/questions/19905944/why-must-the-fi...

      • windward11 hours ago |parent

        Bjarne's just a guy, he doesn't control how the C++ committee vote and doesn't remotely control how you or I make decisions about style.

        And boiling down these guidelines to style guides is just incorrect. I've never had a 'nit: cyclomatic complexity, and uses dynamic allocation'.

      • writtiewrata day ago |parent

        [flagged]

        • tomhow16 hours ago |parent

          We've banned this account for continual guidelines breaches across multiple accounts.

          • menaerus14 hours ago |parent

            You do realize that there's a handful of literally the same people here on HN continuously evangelizing one technology by constantly dissing the other? Because of the pervasiveness of such accounts/comments, it invites other people, myself included, to counter-argue, because most of the time the reality they're trying to portray is misrepresented or simply wrong. This is harmful and obviously invites a flame war, so how is that not a guideline breach too, by the same principle you applied to the above account?

            • tomhow10 hours ago |parent

              We act on what we see, and we see what people make us aware of via flags and emails.

              Comments like yours are difficult because they’re not actionable or able to be responded to in a way you’ll find satisfying if you don’t link to the comments that you mean.

              Programming language flamewars have always been lame on HN and we have no problem taking action against perpetrators when we’re alerted to them.

        • tialaramex21 hours ago |parent

          "No semantic effect" is one of those recurring C++ tropes like the "subset of a superset" or "trading performance for safety" that I think even its defenders ought to call bullshit on. The insistence on "No semantic effect" for attributes has poisoned them badly, and the choice to just ignore the semantic implications for Bjarne's C++ 20 Concepts makes this a poor substitute for the concepts feature as once imagined at the start of the century.

          I doubt I can satisfy you as to whether I'm somehow a paid evangelist. I remember I got a free meal once for contributing to the OSM project, and I bet if I dig further I can find some other occasion that, if you spin it hard enough, can be justified as "payment" for my opinion that Rust is a good language. There was a nice lady giving out free cookies at the anti-racist counter-protests the other week; maybe she once met a guy who worked for an outfit which was contracted to print a Rust book? I sense you may own a corkboard and a lot of red string.

          • vlovich12317 hours ago |parent

            But what’s the relevance of all of this to bird law?

    • unwinda day ago |parent

      For C, the proper/expected/standard way to reference a variable without accessing it is a cast to void:

          (void) a;
      
      I'm sure there are commonly-implemented compiler extensions, but this is the normal/native way and should always work.
      • amlutoa day ago |parent

        Not if you use GCC.

        https://godbolt.org/z/zYdc9ej88

        clang gets this right.

        • comexa day ago |parent

          It does work in GCC to suppress unused variable warnings. Just not for function calls I guess.

          • cminmin13 hours ago |parent

            __attribute__((maybe_unused)) or [[maybe_unused]] or such things (depending on your spec version, I guess?) can be used so as not to disable a whole class of warnings.

        • gpderetta8 hours ago |parent

          interestingly it works for [[nodiscard]]!

          and assigning to std::ignore works for both.

        • Am4TIfIsER0pposa day ago |parent

          You've defined that function with an attribute saying not to ignore the returned value. Is it right to explicitly silence an explicit warning?

          • amluto19 hours ago |parent

            I want some defined way to tell the compiler that I am intentionally ignoring the result.

            I encounter this when trying to do best-effort logging in a failure path. I call some function to log an error and maybe it fails. If it does, what, exactly, am I going to do about it? Log harder?

            • dotancohen14 hours ago |parent

              Yes.

              When my database logging fails, I write a file that logs the database failure (but not the original log entry).

              When my file logging fails, depending on the application, I'll try another way of getting the information (the fact that the file logging failed) out - be that an HTTP request or an email or something else.

              Databases fail, file systems fill up. Logging logging failures is extremely important.

              • amluto14 hours ago |parent

                And when that last way fails, what do you do?

                I like to have a separate monitoring process that monitors my process and a separate machine in a different datacenter monitoring that. But at the end of the day, the first process is still going to try to log, detect that it failed, try the final backup log and then signal to its monitor that it’s in a bad state. It won’t make any decisions that depend on whether the final backup logging succeeds or fails.

                • dotancohen11 hours ago |parent

                  I'm not working on anything life-critical; one additional layer of redundancy is all I budget for. If my database is down and my local filesystem is full simultaneously, things have gone bad and I've likely got lots of other symptoms to direct me.

          • MathMonkeyMana day ago |parent

            Sometimes. For example, you might be setting a non-crucial option on a socket, and if it fails you don't even care to log the fact (maybe the logging would be too expensive), so you just ignore the return value of whatever library is wrapping setsockopt.

    • platinumrad21 hours ago |parent

      I've (unfortunately) written plenty of "safety critical" code professionally, and coding standards definitely have a negative effect overall. The thing keeping planes from falling out of the sky is careful design, which in practice means fail-safes, watchdogs, redundancy, and most importantly, requirements that aren't overly ambitious.

      While maybe 10% of rules are sensible, these sensible rules also tend to be blindingly obvious, or at least table stakes on embedded systems (e.g. don't try to allocate on a system which probably doesn't have a full libc in the first place).

      • dilyevsky20 hours ago |parent

        Many coding standards rules have nothing to do with correctness and everything to do with things like readability and reducing cognitive load (“which style should I use here?”)

    • ivanjermakova day ago |parent

      Zig makes it explicit with

          _ = a;
      
      And you would encounter it quite often because unused variable is a compilation error: https://github.com/ziglang/zig/issues/335
      • bluecalma day ago |parent

        Doesn't it make it more likely unused variables stay in the codebase? You want to experiment, the code doesn't compile, you add this (probably by automatic tool), the code now compiles. You're happy with your experiment. As the compiler doesn't complain you commit and junk stays in the code.

        Isn't it just bad design that makes both experimenting harder and for unused variables to stay in the code in the final version?

        • ivanjermakova day ago |parent

          It is indeed quite a controversial aspect of Zig's design. I would rather it be a warning. The argument "warnings are always ignored" just doesn't hold, because anything can be ignored if there is a way to suppress it.

          • dnautics21 hours ago |parent

            There was a recent interview where Andrew suggested, if I understood correctly, that the future path of Zig is to make all compilations (successful or not) produce an executable. If there's something egregious like a syntax or type error, the produced artifact just prints the error and returns nonzero. For an "unused parameter", the compiler produces the artifact you expect, but returns nonzero (so it gets caught by CI, for example).

            • sumalamana21 hours ago |parent

              Why would the compiler do that, instead of just printing the error at compile-time and exiting with a non-zero value? What is the benefit?

              • dnautics5 hours ago |parent

                if you have a syntax error in file A, and file B is just peachy keen, you can keep compiling file B instead of stopping the world. Then the next time you compile, you have already cached the result of file B compilation.

              • j16sdiz17 hours ago |parent

                It is more a debug/development feature. You can try out some idea without fixing the whole code base.

      • ErroneousBosha day ago |parent

        Golang is exactly the same.

        It's extremely annoying until it's suddenly very useful and has prevented you doing something unintended.

        • bluecalma day ago |parent

          I fail to see how a warning doesn't achieve the same thing while allowing you to iterate faster. Unless you're working with barbarians who commit code that compiles with warnings to your repo and there is zero discipline to stop them.

          • FieryMechanic10 hours ago |parent

            > I fail to see how a warning doesn't achieve the same thing while allowing you to iterate faster.

            In almost every code base I have worked with where warnings weren't compile errors, there were hundreds of warnings. Therefore it's just best to set all warnings as errors and force people to correct them.

            > Unless you're working with barbarians who commit code that complies with warnings to your repo and there is 0 discipline to stop them.

            I work with a colleague who doesn't compile/run the code before putting up an MR. I informed my manager, who did nothing about it after he did it several times (this was after I personally told him he needed to do it and that it was unacceptable).

            This BTW happens more often than you would expect. I have read PRs and had to reject them because I read the code and it wouldn't have worked, so I know the person had never actually run the code.

            I am quite a tidy programmer, but it is difficult to get people even to write commit messages that aren't just "fixed bugs".

            • ErroneousBoshan hour ago |parent

              > I work with a colleague that doesn't compile/run the code before putting up a MR. I informed my manager who did nothing about it after he did it several times (this was after I personally told him he needed to do it and it was unacceptable).

              At this point what you need to do is stop treating compiler warnings as errors, and just have them fire the shock collar.

              Negative reinforcement gets a bad rep, but it sure does work.

          • ErroneousBosh13 hours ago |parent

            Go is a very opinionated language. If you don't like K&R indentation, tough - anything else is a syntax error.

            It's kind of like the olden days.

            • bluecalm12 hours ago |parent

              Yeah, but this case just seems to be strictly worse. It makes experimenting worse and it makes it more likely (not less) that unused variables end up in the final version. I get being opinionated about formatting, style, etc. to cut endless debates, but this choice just seems strictly worse for the two things it influences (experimenting and the quality of the final code).

              • ErroneousBosh12 hours ago |parent

                If you want to leave a variable unused, you can just assign it to _ (underscore) though. IIRC gofmt (which your editor should run when you save) will warn you about it but your code will compile.

                It's a slightly different mindset, for sure, but having gofmt bitch about stuff before you commit it rather than have the compiler bitch about it helps you "clean as you go" rather than writing some hideous ball of C++ and then a day of cleaning the stables once it actually runs. Or at least it does for me...

          • treyd15 hours ago |parent

            You're not supposed to question the wisdom of the Go developers. They had a very good reason for making unused variables be an unconfigurable hard error, and they don't need to rigorously justify it.

            • FieryMechanic10 hours ago |parent

              Warnings are often ignored by developers unless you specifically force warnings to be compile errors (you can do this in most compiler). I work on TypeScript/C# code-bases and unless you force people to tidy up unused imports/using and variables, people will just leave them there.

              This BTW can cause issues with dependency chains and cause odd compile issues as a result.

        • SoKamil21 hours ago |parent

          And what is the unintended thing that happens when you have an unused variable?

    • y1n0a day ago |parent

      The standards don't remove the need for code review. In fact they provide a standard to be used in code review. Anything you can automate is nice, but when you have exceptions to rules that say "Exception, if there's no reasonable way to do X then Y is acceptable" isn't really something you can codify into static analysis.

    • qart14 hours ago |parent

      You're right. MISRA is a cult. Actual studies[1][2] have shown many of their rules to be harmful rather than helpful. I have worked in multiple safety-critical industries. MISRA is almost always enforced by bureaucrats who don't understand source code at all, or by senior developers who rose up the ranks as code monkeys. One such manager was impressed with Matlab because Matlab-generated C code was always MISRA compliant, whereas the code my company was giving them had violations. Never mind the fact that every function of the generated, compliant code had variables like tmp01, tmp02, tmp03, etc.

      There are many areas of software where bureaucracy requires MISRA compliance, but that aren't really safety-critical. The code is a hot mess. There are other areas that require MISRA compliance and the domain is actually safety-critical (e.g. automotive software). Here, the saving grace is (1) low complexity of each CPU's codebase and (2) extensive testing.

      To people who want actual safety, security, and portability, I tell them to learn from the examples set by the Linux kernel, SQLite, OpenSSL, FFmpeg, etc. Modern linters (even free ones) are actually valuable compared to MISRA compliance checkers.

      [1] https://ieeexplore.ieee.org/abstract/document/4658076

      [2] https://repository.tudelft.nl/record/uuid:646de5ba-eee8-4ec8...

      • sam_bristow12 hours ago |parent

        One key point that people overlook with that paper is that they were applying the coding standards retroactively. Taking an existing codebase, running compliance tools, and trying to fix the issues which were flagged. I think they correctly identified the issue with this approach in that you have all the risks of introducing defects as part of reworking the existing code. I don't think they have much empirical evidence for the case where coding standards were applied from the beginning of a project.

        In my opinion, the MISRA C++ 2023 revision is a massive improvement over the 2008 edition. It was a major rethink and has a lot more generally useful guidance. Either way, you need to tailor the standards to your project. Even the MISRA standards authors agree:

        """

          Blind adherence to the letter without understanding is pointless.
        
          Anyone who stipulates 100% MISRA-C coverage with no deviations does not understand what they are asking for.
          
          In my opinion they should be taken out and... well... Just taken out.
            - Chris Hill, Member of MISRA C Working Group (MISRA Matters Column, MTE, June 2012)
        
        """
    • jjmarra day ago |parent

      An unused parameter should be commented out.

      • MobiusHorizons21 hours ago |parent

        Unless it’s there to conform to an interface

        • jjmarr21 hours ago |parent

          Especially if it's there to conform to an interface. You can comment out the variable name and leave the type.

    • binary132a day ago |parent

      It’s very weird how none of the sibling comments understood what it were saying is wrong with this.

      • binary13221 hours ago |parent

        Erm, sorry about the weird typo. Didn’t notice. Can’t edit now.

    • mslaa day ago |parent

      Especially since there is a widely recognized way to ignore a parameter:

          (void) a;
      
      Every C programmer beyond weaning knows that.
      • time4teaa day ago |parent

        The point really was that the unused method parameter should in almost all cases be removed, not that some trick should be used to make it seem used, and this is the wrong trick!

        • addaona day ago |parent

          Sometimes. But sometimes you have a set of functions that are called through function pointers that need the same signature, and one or more of them ignore some of the arguments. These days I’d spell that __attribute__((unused)); but it’s a perfectly reasonable case.
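
          For instance (a hedged sketch; event_handler and device_reset are made-up names):

              #include <stdio.h>

              void device_reset(void *dev);   /* hypothetical, defined elsewhere */

              /* Every handler must match this signature, used or not. */
              typedef void (*event_handler)(int code, void *ctx);

              static void log_event(int code, void *ctx) {
                  (void)ctx;                  /* context not needed for logging */
                  printf("event %d\n", code);
              }

              static void reset_device(int code __attribute__((unused)), void *ctx) {
                  device_reset(ctx);          /* the event code is irrelevant here */
              }

              static event_handler handlers[] = { log_event, reset_device };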

        • bluGill11 hours ago |parent

              #if otherbuild
                  dosomething(param);
              #endif
          
          The above type of thing happens once in a while. Now the parameter is needed, but the normal build doesn't use it.
      • stefan_a day ago |parent

        I'm sure that's disallowed as a C-style cast.

        • cpgxiiia day ago |parent

          FWIW, unused-cast-to-void is a case that GCC and Clang ignore when using -Wold-style-cast, which is what most projects prohibiting C-style casts are going to be using (or whatever equivalent their compiler provides).

        • daringrain32781a day ago |parent

          C++17 has the [[maybe_unused]] attribute.
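
          For example (a small sketch; C23 adopted the same attribute, and do_work is a made-up helper):

              void do_work(void *ctx);   /* hypothetical helper */

              /* Suppresses the warning for this one parameter only,
                 rather than disabling the diagnostic globally. */
              void on_timer([[maybe_unused]] int timer_id, void *ctx) {
                  do_work(ctx);   /* timer_id kept to match the callback signature */
              }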

    • jojobasa day ago |parent

      Isn't it inevitable for some cases of inheritance? A superclass does something basic and doesn't need all parameters, child classes require additional ones.

  • geophpha day ago

    LaurieWired is an awesome follow on YouTube!

    • jamal-kumara day ago |parent

      Her ARM assembly tutorial series is really excellent

  • djfobbz21 hours ago

    I wonder if Lockheed Martin has an Electron based future fighter in the works?

    • sayamqazi10 hours ago |parent

      My brain took so long to identify "Electron" as a framework in your comment.

    • pezgrande9 hours ago |parent

      Didn't it work just fine for SpaceX and Nasa?

  • barfourea day ago

    Do avionics in general subscribe to MISRA C/C++ or do they go even further with an additional (or different) approach?

    • fallingmeata day ago |parent

      The coding standard is a part of the story. Mainly it comes down to the level of rigor and documenting process and outcomes for auditability. See DO-178C.

    • stackghosta day ago |parent

      Depends on the company in my experience. I've seen some suppliers that basically just wire up the diagram in Matlab/simulink and hit Autocode. No humans actually touch the C that comes out.

      Honestly I think that's probably the correct way to write high reliability code.

      • garyfirestorma day ago |parent

        You're joking, right? That autogenerated code is generally garbage and spaghetti code. It was probably the reason for Toyota's unintended-acceleration glitch.

        • cpgxiiia day ago |parent

          In the case of the Toyota/Denso mess, the code in question had both auto-generated and hand-written elements, including places where the autogenerated code had been modified by hand later. That is the worst place to be, where you no longer have whatever structure and/or guarantees the code gen might provide, but you also don't have the structure and choices that a good SWE team would have to develop that level of complexity by hand.

          • superxpro126 hours ago |parent

            The Toyota code was a case of truly abysmal software development methodology. The code they released was so bad that neither NASA, nor Barr, nor Koopman could successfully decipher it. (Although Barr posited that the issue was VERY LIKELY in one of a few places with complex multithreaded interactions.)

            Therein lies the clue. They wrote software that was simply unmaintainable. Autogenerated code isn't any better.

        • creato21 hours ago |parent

          This isn't necessarily a problem if you don't consider the output to be "source" code. Assembly is also garbage spaghetti code but that doesn't stop you from using a compiler does it?

        • vodoua day ago |parent

          Modern autogenerated C code from Simulink is rather effective. It is neither garbage nor spaghetti, it is just... peculiar.

          • addaona day ago |parent

            It’s also much, much more resource intensive (both compute and memory) than what a human would write for the same requirements.

            • stackghosta day ago |parent

              For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory.

              • addaona day ago |parent

                I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there are no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system to multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processor is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level.

                • stackghost21 hours ago |parent

                  >And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”)

                  The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.

                  All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.

                  50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.

                  In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.

                  But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.

                  • addaon21 hours ago |parent

                    > All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.

                    No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up. The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs "human-written code, human-written bugs, but an overall much simpler system." And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.

                    It’s also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That’s accomplished by spending outrageous time (money) on testing, but it’s achievable -- defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.

                    • stackghost20 hours ago |parent

                      HN is a very poor platform for good conversations so we'll have to agree to disagree, as I'm not willing to go further in this format

                      • menaerus14 hours ago |parent

                        Codegen from Matlab/Simulink/whatever is good for proof of concept design. It largely helps engineers who are not very good with coding to hypothesize about different algorithmic approaches. Engineers who actually implement that algorithm in a system that will be deployed are coming from a different group with different domain expertise.

        • fluorinerocketa day ago |parent

        Rockets have flown to orbit on auto-coded Simulink; I've seen it myself.

        • AnimalMuppeta day ago |parent

          > It was probably the reason for Toyotas unintended acceleration glitch.

          Do you have any evidence for "probably"?

          • garyfirestorma day ago |parent

            I know for a fact that Simulink generates spaghetti, and spaghetti code was partially blamed for Toyota's problems. Hence the inference.

            See https://www.safetyresearch.net/toyota-unintended-acceleratio...

            • CamouflagedKiwia day ago |parent

              That's a nonsensical connection. "Spaghetti code" is a very general term, that's nowhere near specific enough for the two to be related.

              "I know for a fact that Italian cooks generate spaghetti, and the deceased's last meal contained spaghetti, therefore an Italian chef must have poisoned him"

            • stackghosta day ago |parent

              SRS is a for-profit corporation whose income comes from lawsuits, so their reports/investigations are tainted by their financial incentive to overstate the significance of their findings.

        • stackghosta day ago |parent

          No I'm not joking at all. The Autocode feature generates code that has high fidelity to the model in simulink, and is immensely more reliable than a human.

          It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`, for example. Any human who tells you they have not made this mistake is a liar.

          Unless there was a second uncommanded acceleration problem with Toyotas, my understanding is that it was caused by poor mechanical design of the accelerator pedal that caused it to get stuck on floor mats.

          In any case, when we're talking about safety critical control systems like avionics, it's better to abstract away the actual act of typing code into an editor, because it eliminates a potential source of errors. You verify the model at a higher level, and the code is produced in a deterministic manner.

          • fl7305a day ago |parent

            > It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`

            The Simulink Coder tool is a piece of software. It is designed and implemented by humans. It will have bugs.

            Autogenerated code is different from human written code. It hits soft spots in the C/C++ compilers.

            For example, autogenerated code can have really huge switch statements. You know, larger than the 15-bit branch offset the compiler implementer thought was big enough to handle any switch statement a sane human would ever write? So now the switch jumps backwards instead when trying to get to the correct case statement.

            I'm not saying that Simulink Coder + a C/C++ compiler is bad. It might be better than the "manual coding" options available. But it's not 100% bug free either.

            • stackghost21 hours ago |parent

              >But it's not 100% bug free either.

              Nobody said it was bug free, and this is a straw man argument of your own construction.

              Using Autocode completely eliminates certain types of errors that human C programmers have continued to make for more than half a century.

          • mmooss21 hours ago |parent

            > It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0`

            That's a classic bias: comparing A and B by showing that B doesn't have some of A's flaws. If they are different systems, of course that's true. But it's also true that A doesn't have some of B's flaws. That is, what flaws does Autocode have that humans don't?

            The fantasy that machines are infallible - another (implicit) argument in this thread - is just ignorance for any professional in technology.

            • coderenegade17 hours ago |parent

              What's the difference between autogenerated C code and compiling to assembly or machine code? Seems academic to me.

              The main flaw of autocode is that a human can't easily read and validate it, so you can't really use it as source code. In my experience, this is one of the biggest flaws of these types of systems. You have to version control the file for whatever proprietary graphical programming software generated the code in the first place, and as much as we like to complain about git, it looks like a miracle by comparison.

              • mmooss17 hours ago |parent

                > What's the difference between autogenerated C code and compiling to assembly or machine code? Seems academic to me.

                It's an interesting question and point, but those are two different things and there is no reason to think you'll get the same results. Why not compile from natural language, if that theory is true?

                • _flux15 hours ago |parent

                  Natural language does not have a specification, while both C and assembly do.

                  • mmooss14 hours ago |parent

                    The C specification is orders of magnitude more complex and is much less defined than assembly. Arguably, the same could be said comparing natural language with C.

                    I admit that's mostly philosophical. But I think saying 'C can autogenerate reliable assembly, therefore a specification can autogenerate reliable C' is also about two different problems.

      • superxpro126 hours ago |parent

        I just vomited in my mouth a little. Please god no.

    • 4gotunameagaina day ago |parent

      Depends on the region. MISRA is widely adopted, and then there are the US MIL standards, ECSS for European aerospace stuff, DO-178C for aviation..

      • pacoWebConsult5 hours ago |parent

        DO-178C is not a coding standard, it's a process standard. Projects following DO-178C processes would adopt a coding standard as part of the process, reviewing that software deliverables adhere to those standards.

      • westurner16 hours ago |parent

        /?hnlog awesome-safety-critical

        From https://news.ycombinator.com/item?id=45562815 :

        > awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/

        From "Safe C++ proposal is not being continued" (2025) https://news.ycombinator.com/item?id=45237019 :

        > Safe C++ draft: https://safecpp.org/draft.html

        Also there are efforts to standardize safe Rust; rust-lang/fls, rustfoundation/safety-critical-rust-consortium

        > How does what FLS enables compare to these [unfortunately discontinued] Safe C++ proposals?

  • arein35 hours ago

    She said it could be estimated how many cycles it takes to complete a calculation, but there are a lot of different paths which take different cycles.

    How does the code deal with timing? Does it count cycles?

    • constantcrying2 hours ago |parent

      Major commercial airliners have system controllers where the measuring is done as follows: Write into the code some instruction which flips some output bit, hook the system up to a test rig and then get an oscilloscope. With the oscilloscope measure how long it takes between bit flips. The instructions for the measurements get commented out for the final release.

      Yes, I have done this. By the way, these measurements of course have to be part of the certification.

  • t1234s6 hours ago

    What compiler is used to build the production F35 code? Something off the shelf or developed by LM?

  • gcr9 hours ago

    Laurie’s work is so good! The other videos on her channel talk about reverse engineering, obfuscation, compilers, etc

    Highly recommend checking her other videos out if you like this

  • greenavocado21 hours ago

    The C++ standard for the F-35 fighter jet prohibits ninety percent of C++ features because what they are actually after is C with destructors. I was just thinking about how to write C in a modern way today and discovered GLib has an enormous amount of useful C++ conveniences in plain C.

    Reading through the JSF++ coding standards I see they ban exceptions, ban the standard template library, ban multiple inheritance, ban dynamic casts, and essentially strip C++ down to bare metal with one crucial feature remaining: automatic destructors through RAII. When a variable goes out of scope, cleanup happens. That is the entire value proposition they are extracting from C++, and it made me wonder if C could achieve the same thing without dragging along the C++ compiler and all its complexity.

    GLib is a utility library that extends C with better string handling, data structures, and portable system abstractions, but buried within it is a remarkably elegant solution to automatic resource management that leverages a GCC and Clang extension called the cleanup attribute. This attribute allows you to tag a variable with a function that gets called automatically when that variable goes out of scope, which is essentially what C++ destructors do but without the overhead of classes and virtual tables.

    The heart of GLib's memory management system starts with two simple macros: g_autofree and g_autoptr. The g_autofree macro is deceptively simple. You declare a pointer with this attribute and when the pointer goes out of scope, g_free is automatically called on it. No manual memory management, no remembering to free at every return path, no cleanup sections with goto statements. The pointer is freed whether you return normally, return early due to an error, or even if somehow the code takes an unexpected path. This alone eliminates the majority of memory leaks in typical C programs because most memory management is just malloc and free, or in GLib's case, g_malloc and g_free.
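
    In code, that looks roughly like this (a sketch; the function is invented, but g_build_filename and g_file_set_contents are real GLib calls):

        #include <glib.h>

        gboolean save_config(const char *dir, const char *data) {
            g_autofree char *path = g_build_filename(dir, "app.conf", NULL);

            if (data == NULL)
                return FALSE;   /* path is g_free'd automatically here */

            /* ...and it is also freed after this return. */
            return g_file_set_contents(path, data, -1, NULL);
        }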

    The g_autoptr macro is more sophisticated. While g_autofree works for simple pointers to memory, g_autoptr handles complex types that need custom cleanup functions. A file handle needs fclose, a database connection needs a close function, a custom structure might need multiple cleanup steps. The g_autoptr macro takes a type name and automatically calls the appropriate cleanup function registered for that type. This is where GLib shows its maturity because the library has already registered cleanup functions for all its own types. GError structures are freed correctly, GFile objects are unreferenced, GInputStream objects are closed and released. Everything just works.

    Behind these macros is something called G_DEFINE_AUTOPTR_CLEANUP_FUNC, which is how you teach GLib about your own types. You write a cleanup function that knows how to properly destroy your structure, then you invoke this macro with your type name and cleanup function, and from that moment forward you can use g_autoptr with your type. The macro generates the necessary glue code that connects the cleanup attribute to your function, handling all the pointer indirection correctly. This is critical because the cleanup attribute passes a pointer to your variable, not the variable itself, which means for a pointer variable it passes a double pointer, and getting this wrong leads to crashes or memory corruption.
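
    Teaching the system about your own type might look like this (a sketch; the Session type is invented):

        #include <glib.h>

        typedef struct {
            int   fd;
            char *name;
        } Session;

        static void session_close(Session *s) {
            if (s == NULL)
                return;
            /* close s->fd, flush buffers, etc. */
            g_free(s->name);
            g_free(s);
        }

        /* Registers session_close as the cleanup for g_autoptr(Session). */
        G_DEFINE_AUTOPTR_CLEANUP_FUNC(Session, session_close)

        void use_session(void) {
            g_autoptr(Session) s = g_new0(Session, 1);
            /* on any exit from this scope, session_close(s) runs */
        }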

    The third member of this trio is g_auto, which handles stack-allocated types. Some GLib types, such as GVariantBuilder, are meant to live on the stack but still need cleanup: the structure itself sits on the stack, yet it internally allocates memory for its contents. The g_auto macro ensures that when the structure goes out of scope, its cleanup function runs to free the internal allocations. Heap pointers, complex objects, and stack structures all get automatic cleanup.
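
    The stack-type variant is registered with G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC. A sketch with an invented type:

        #include <glib.h>

        typedef struct {
            char  *data;    /* heap-allocated contents */
            gsize  len;
        } Buffer;

        static void buffer_clear(Buffer *b) {
            g_free(b->data);    /* frees the contents, not the struct */
        }

        G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(Buffer, buffer_clear)

        void use_buffer(void) {
            g_auto(Buffer) buf = { NULL, 0 };   /* lives on the stack */
            buf.data = g_strdup("payload");
            /* buffer_clear(&buf) runs automatically at scope exit */
        }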

    What's interesting about this system is how it composes. You can have a function that opens a file, allocates several buffers, creates error objects, and builds complex data structures, and you can simply declare each resource with the appropriate auto macro. If any operation fails and you return early, every resource declared up to that point is automatically cleaned up in reverse order of declaration. This is identical to C++ destructors running in reverse order of construction, but you are writing pure C code that works with any GCC or Clang compiler from the past fifteen years.

    The foundation beneath all this is GLib's memory allocation functions. The library provides g_malloc, g_new, g_realloc and friends which are drop-in replacements for the standard C allocation functions. These functions have better error handling because g_malloc never returns NULL. If allocation fails, the program aborts with a clear error message. This might sound extreme but for most applications it is actually the right behavior. When malloc returns NULL in traditional C code, most programmers either do not check it, check it incorrectly, or check it but then do not have a reasonable recovery path anyway. GLib acknowledges this reality and makes the contract explicit: if you cannot allocate memory, the program terminates cleanly rather than stumbling forward into undefined behavior.

    • avadodin13 hours ago |parent

      I'm a big fan of the GLib/old ObjC approach when it comes to UI elements and backwards compatibility with C but I can't imagine a situation where it would be appropriate on the kind of embedded we're discussing here to dynamically create and destroy objects - whether through malloc or oop. Maybe on the HUD but even there I'd favor other approaches if it were my soldiers that I want to return home behind that HUD.

    • greenavocado21 hours ago |parent

      For situations where you do want to handle allocation failure, GLib provides g_try_malloc and related functions that can return NULL. The key insight is making the common case automatic and the exceptional case explicit. The g_new macro is particularly nice because it is type-aware. Instead of writing g_malloc of sizeof times count and then casting, you write g_new of type and count, and it handles the sizing and casting automatically while checking for overflow in the multiplication.
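
      Roughly (a sketch; g_new, g_try_new, and g_free are real GLib calls):

          #include <glib.h>

          typedef struct { double x, y, z; } Point;

          void demo(gsize n) {
              /* Common case: aborts on failure, overflow-checked sizing. */
              Point *pts = g_new(Point, n);

              /* Exceptional case: failure is explicit and must be handled. */
              Point *maybe = g_try_new(Point, n);
              if (maybe == NULL) {
                  /* degrade gracefully instead of aborting */
              }

              g_free(maybe);
              g_free(pts);
          }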

      Reference counting is another critical component of GLib's memory management, particularly for objects. The GObject system, which is GLib's object system for C, uses reference counting to manage object lifetimes. Every object has a reference count starting at one when created. When you want to keep a reference to an object, you call g_object_ref. When you are done with it, you call g_object_unref. When the reference count reaches zero, the object is automatically destroyed. This is the same model used by shared_ptr in C++ or reference counting in Python, but implemented in pure C.

      This also integrates with the autoptr system. Many GLib types are reference counted, and their cleanup functions simply decrement the reference count. This means you can declare a local variable with g_autoptr, the reference count stays positive while you use it, and when the variable goes out of scope the reference is automatically released. If you were the last holder of that reference, the object is freed. If other parts of the code still hold references, the object stays alive. This solves the resource sharing problem that makes manual memory management so difficult in C.
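
      A sketch of that interaction (use_shared is a made-up example function):

          #include <glib-object.h>

          static void use_shared(GObject *shared)
          {
              /* Take our own reference; it is dropped automatically
                 at the end of the scope. The object is only freed if
                 we were the last holder. */
              g_autoptr(GObject) mine = g_object_ref(shared);

              /* ... use mine ... */
          }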

      GLib also provides memory pools through GMemChunk and the newer slice allocator, though the slice allocator is being phased out in favor of standard malloc since modern allocators have improved significantly. The concept was to reduce allocation overhead and fragmentation for programs that allocate many small objects of the same size. You create a pool for objects of a specific size and then allocate from that pool quickly without going through the general purpose allocator. When you are done with all objects from that pool, you can destroy the entire pool at once. This pattern shows up in many high-performance C programs but GLib provided it as a reusable component.
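
      The slice API itself was tiny; a sketch for flavor, given that new code is being steered back toward g_new (Point is a made-up struct):

          #include <glib.h>

          typedef struct { double x, y; } Point;

          static void example(void)
          {
              Point *p = g_slice_new(Point);  /* fast same-size allocation */
              p->x = 1.0;
              p->y = 2.0;
              g_slice_free(Point, p);         /* return it to the pool */
          }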

      The error handling story in GLib deserves special attention because it demonstrates how automatic cleanup enables better error handling patterns. The GError type is a structure that carries error information including a domain, a code, and a message. Functions that can fail take a GError double pointer as their last parameter. If the function succeeds, it returns true or a valid value and leaves the error NULL. If it fails, it returns false or NULL and allocates a GError with details about what went wrong. The calling code checks the return value and if there was an error, examines the GError for details.

      The critical part is that GError is automatically freed when declared with g_autoptr. You can write a function that calls ten different operations, each of which might set an error, and you can check each one and return early if something fails, and the error is automatically freed on all code paths. You never leak the error message string, never double-free it, never forget to free it. This is a massive improvement over traditional C error handling where you either ignore errors or write incredibly tedious cleanup code with goto statements jumping to labels at the end of the function.
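
      Both halves of the convention, sketched; parse_header and its reuse of the file-error domain are invented for illustration:

          #include <glib.h>

          static gboolean parse_header(const char *text, GError **error)
          {
              if (!g_str_has_prefix(text, "HDR")) {
                  g_set_error(error, G_FILE_ERROR, G_FILE_ERROR_INVAL,
                              "bad header: %s", text);
                  return FALSE;
              }
              return TRUE;
          }

          static void caller(void)
          {
              g_autoptr(GError) err = NULL;

              if (!parse_header("XYZ", &err)) {
                  g_printerr("%s\n", err->message);
                  return;  /* err freed here automatically */
              }
              /* ... more fallible calls, more early returns ... */
          }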

      The GNOME developers could have switched to C++ or Rust or any modern language, but instead they invested in making C excellent at what C is good at. They added just enough infrastructure to eliminate the common pitfalls without fundamentally changing the language. A C programmer can read GLib code and understand it immediately because it is still just C. The auto macros are syntactic sugar over a compiler attribute, not a new language feature requiring a custom compiler.

      This philosophy aligns pretty well with what the F-35 programmers want: the performance and predictability of C with the safety of automatic resource management. No hidden allocations, no virtual dispatch overhead, no exception unwinding cost, no template instantiation bloat. Just deterministic cleanup that happens exactly when you expect it to happen because it is tied to lexical scope, which is something you can see by reading the code.

      I found it sort of surprising that the solution to modern C was not a new language or a massive departure from traditional practices. The cleanup attribute has been in GCC since 2003. Reference counting has been around forever. The innovation was putting these pieces together in a coherent system that feels natural to use and composes well.

      Sometimes the right tool is not the newest or most fashionable one, but the one that solves your actual problem with the least additional complexity. GLib proves you can have that feature in C, today, with compilers that have been stable for decades, without giving up the simplicity and predictability that makes C valuable in the first place.

      • pjmlp13 hours ago |parent

        You missed the part that GNOME was started due to differences over KDE's licensing, and that the original FSF was against C++ for what amounted to religious reasons; even if the KDE/Qt license had been GPL-compatible, they would not have adopted it.

        If you look around outside the Linux world, everyone was going into C++: the PC world with OS/2, MS-DOS and Windows, Apple, EPOC (later Symbian), BeOS,.... UNIX was playing with CORBA, OpenInventor,....

        Here is the original wording, from the GNU Coding Standards:

        "Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."

        That is the GNU Coding Standards as of 1994: http://web.mit.edu/gnu/doc/html/standards_7.html#SEC12

        Moving a bit forward to 1998, when GNOME 1.0 was still being made ready,

        "Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. For example, if you write your program in C++, people will have to install the C++ compiler in order to compile your program. Thus, it is better if you write in C. "

        https://www.ime.usp.br/~jose/standards.html#SEC9

        Yes, the current version is a bit more welcoming to programming language variety:

        https://www.gnu.org/prep/standards/html_node/Source-Language...

  • factorialboya day ago

    Isn't the F35 program considered a failure? Or am I confusing it with some other program?

    • ironhavena day ago |parent

      There have been countless articles claiming the demise and failure of the F-35, but that is just one side of the story. There has been an argument, started 50 years ago in the 1970s, about how to build the best next-generation fighter jets. One of these camps was called the "Fighter Mafia"[0], figureheaded by John Boyd. The main argument they made was that the only thing that matters for a jet fighter is how well it performs in one-on-one, short-range dogfighting. They claimed that stealth, beyond-visual-range missiles, electronic warfare, and sensor/datalink systems are useless junk that only hinders dogfighting capability and bloats the cost of new jets.

      The evidence for this claim was found in testing for the F-35, where it dogfought an older F-16. The results were that the F-35 won almost every scenario except one, in which a lightly fitted F-16 was effectively teleported directly behind an F-35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F-35 is junk that can't dogfight.

      In the end, the F-35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is about $80 million each, which is cheaper than retrofitting stealth and sensors onto other airframes, as with the F-15EX.

      [0] https://en.wikipedia.org/wiki/Fighter_Mafia

      • mrlongroots16 hours ago |parent

        Yeah, unfortunately no amount of manoeuvring is a substitute for a kill chain where a distributed web of sensors, relays, and weapon carriers can result in an AAM being dispatched from any direction at light speed.

    • joha4270a day ago |parent

      A lot of people have made careers out of telling you that it's a failure, but while not everything about the F-35 is an unquestionable success, it has produced a "cheap" fighter jet that is more capable than all but a handful of other planes.

      Definitely not a failure.

    • exabrial6 hours ago |parent

      I'm convinced this was a psyop, actually, with the intent of delaying other countries' 5th-gen programs or development of quantum radars. It definitely had some cost overruns, but I think its performance as a fighter is pretty impressive.

      Criticism is fair, however: they probably did overextend themselves with the helmet technology, and I do have concerns about touch screens in cockpits (a touch screen requires you to take your eyes off a target to move your hand to the right location, rather than locating a button by touch).

    • tsunagattaa day ago |parent

      The F-35 was in development hell for a while for sure, but it’s far from a failure. See the recent deals where it’s been used as a political bargaining chip; it still ended up being a very desirable and capable platform from my understanding.

      • fl7305a day ago |parent

        > See the recent deals where it’s been used as a political bargaining chip; it still ended up being a very desirable and capable platform from my understanding.

        From a European perspective, I can tell you that the mood has shifted 180 degrees, from "buy American fighters to solidify our ties with the US" to "we can't rely on the US for anything we'll need when the war comes".

        • hu321 hours ago |parent

          That has nothing to do with the F-35.

          Europe is wise and capable enough to develop its own platform.

          • simonask20 hours ago |parent

            While there are several comparable European alternatives, many countries put their bets on the F-35 a long time ago. It is very much a part of this discussion.

            I’m from one of those countries, and I can assure you a lot of people would now have preferred that we went with an EU competitor instead.

            • jandrewrogers19 hours ago |parent

              What comparable alternative is available today? None of the European companies has a production 5th generation aircraft nor the integrated sensing capabilities. This is what is driving the incredible demand despite misgivings. You can't survive in a near peer combat environment without it.

              Countries are buying it because it is the only game in town for certain high-value capabilities, not because they necessarily like the implications of there being a single seller of those capabilities. For better or worse, the US has been flying these for 30 years and has 6th generation aircraft in production. Everyone else is still figuring out their first 5th generation offering.

              Closing that gap is a tall order. Either way, European countries need these modern capabilities to have a capable deterrent.

              • fl73053 hours ago |parent

                > You can't survive in a near peer combat environment without it.

                How well will the European countries survive with it if the US cuts off access to spare parts, SW maintenance links, etc.?

              • simonask10 hours ago |parent

                I'm no expert, but the narrative is that it really depends what you need them for. And keep in mind that joining the jet fighter programme also means joining its development, exerting a certain amount of influence through your funding. For example, it is conceivable that a sufficiently upgraded Gripen tailored to our needs (which aren't really dogfighting, as I understand it) would be just as effective, and cheaper.

                Anyway, we're all just crossing our fingers that the US is temporarily insane and will eventually come to its senses. What else can you do?

              • monerozcash18 hours ago |parent

                > What comparable alternative is available today?

                You know the answer, but I'll say it anyway. There is no comparable alternative today, and there will not be one in the near future.

    • jasonwatkinspdx21 hours ago |parent

      There's a ton of absolutely garbage reporting on it. No, like, seriously: a ton of the articles are just mainstream media uncritically reposting claims by a couple of cranks in Australia who LARP as a think tank.

      Anyhow, a fair assessment is the program has gone massively over timeline and budget, so in that sense is a failure, however the resulting aircraft is very clearly the best in its class both in absolute capability and value.

      Going forward there's broad awareness in the government that the program management mistakes of the F-35 program cannot be repeated. There's a general consensus that 3 decade long development projects just won't be relevant in a world where drone concepts and similar are evolving rapidly on a year by year basis. There's also awareness the government needs to act more as the integrator that owns the project to avoid lock in issues.

    • constantcrying2 hours ago |parent

      The F-35 program was a complete disaster, with enormous delays, cost overruns and very long lists of issues.

      Right now it is also the single most advanced combat airplane built in any numbers anywhere in the world, and it guarantees that the USA will be able to convincingly assert air dominance in any conflict.

    • wat1000021 hours ago |parent

      You’re confusing clickbait articles with reality.

      There have been over 1,200 F-35s built so far, with new ones being built at a rate of about 150 per year. For comparison, that's nearly as many F-35s built per year as F-22s were built ever, and 1,200 is a large number for a modern jet fighter. The extremely successful F-15 has seen about that many built since it first entered production over 50 years ago.

      That doesn’t mean it must be good, but it’s a strong indicator. Especially since the US isn’t the only customer. Many other countries want it too. Some are shying away from it now, but only for political reasons because the US is no longer seen as a reliable supplier.

      In terms of actual capabilities, it’s the best fighter jet out there save for the F-22, which was far more expensive and is no longer being made. It’s relatively cheap, comparable in cost to alternatives like the Gripen or Rafale while being much more capable.

      There have been a lot of articles out there about how terrible it is. These fall into a few different categories:

      * Reasonable critiques of its high development costs, overruns, and delays, baselessly extrapolated to “it’s bad.”

      * Teething problems extrapolated to “it’s terrible” as if these things never get fixed.

      * Analyses of outcomes from exercises that misunderstand the purpose and design of exercises. You might see that, say, an F-35 lost against an F-16 in some mock fights. But they’re not going to set up a lot of exercises where the F-35 and F-16 have a realistic engagement. The result of such an exercise would be that the F-16 gets shot out of the sky without ever knowing the F-35 was there. This is uninformative and a waste of time and money. So such a matchup will be done with restrictions that actually make it useful. This might end up in a dogfight, where the F-16 is legitimately superior. This then gets reported as “F-35 worse than F-16,” ignoring the fact that a real situation would have the F-35 victorious long before a dogfight could occur.

      * Completely legitimate arguments that fighter jets are last century’s weapons, that drones and missiles are the future, and the F-35 is like the most advanced battleship in 1941: useful, powerful, but rapidly becoming obsolete. This may be true, but if it is, it only means the F-35 wasn’t the right thing to focus on, not that it’s a failure. The aircraft carrier was the decisive weapon of the Pacific war but that didn’t make the Iowa class battleships a failure.

      • jandrewrogers21 hours ago |parent

        In many regards, the F-35 was the first aircraft explicitly engineered for the requirements of drone-centric warfare. Its limitations are that this capability was grafted onto an older (by US standards) 5th generation tech stack that wasn't designed for this role from first principles. I think this is what ultimately limited production of the F-22, which is not upgradeable even to the standard of the F-35 for drone-centric environments.

        The new 6th generation platforms being rolled out (B-21, F-47, et al) are all pure first-principles drone-warfare native platforms.

    • throwaway2037a day ago |parent

      My slightly trollish reply: If you have infinite money, it is hard to fail.

      Ok, joking aside: If it is considered a failure, what 100B+ military programme has not been considered a failure?

      In my totally unqualified opinion, the best cost performance fighter jet in the world is the Saab JAS 39 Gripen. It is very cheap to buy and operate, and has pretty good capabilities. It's a good option for militaries that don't have the infinite money glitch.

    • TiredOfLife21 hours ago |parent

      It freely flies in territory protected by state-of-the-art air defenses made by the country that spreads those claims.

    • TimorousBestiea day ago |parent

      The research and development effort went way over budget, the first couple rounds of production were fraught with difficulty, and the platform itself has design issues from being a “one-size-fits-all” compromise (despite also having variants for each service).

      I haven’t heard anything particularly bad about the software effort, other than the difficulties they had making the VR/AR helmet work (the component never made it to production afaik).

      • themafiaa day ago |parent

        The oxygen delivery system has failed and left pilots hypoxic.

        https://www.nwfdailynews.com/story/news/local/2021/08/02/f-3...

        The electrical system performs poorly under short circuit conditions.

        https://breakingdefense.com/2024/10/marine-corps-reveals-wha...

        They haven't even finished delivering and now have to overhaul the entire fleet due to overheating.

        https://nationalsecurityjournal.org/the-f-35-fighters-2-big-...

        This program was a complete and total boondoggle. It was entirely the wrong thing to build in peacetime. It was a moonshot for no reason other than to mollify bored generals and greedy congresspeople.

  • thenobstaa day ago

    I wonder how these compare to high-frequency trading coding standards. It seems like they'd have similar speed/reliability/predictability requirements in the critical paths.

    • perbua day ago |parent

      JSF C++ bans exceptions because you would lose control over the execution of the program. The HFT crowd didn't like them because they'd add 10 ns to a function call.

      At least, that was before we had zero-cost exceptions. These days, I suspect the HFT crowd is back to counting microseconds or milliseconds, as trades are being done smarter, not faster.

    • clankya day ago |parent

      There are at least some HFT players who actually use exceptions to avoid branches on the infrequent-but-speed-critical execution path: https://youtu.be/KHlI5NBbIPY?si=VjFs7xVN0GsectHr

  • manoDeva day ago

    You mean fighters ARE coded in C++? My god

    • fweimera day ago |parent

      I think the late Robert Dewar once quipped that modern jet fighters aren't safety-critical applications because the aircraft disintegrates immediately if the computer system fails.

    • GoblinSlayera day ago |parent

      "Launching nuclear rockets" just became literal.

  • flamedoge15 hours ago

    Surprised it's not that strict about types. I remember Google doesn't allow unsigned?

  • mainecoder6 hours ago

    Did they really have to tell their programmers this? (See page 52.) AV Rule 174 (MISRA Rule 107): The null pointer shall not be de-referenced.

    • jandrewrogers4 hours ago |parent

      There are old idioms in C where null pointers are intentionally dereferenced to induce the expected outcome. Not the best way to write that code because beyond being less explicit about intent it also isn't guaranteed to work.

      The rule is likely speaking to this code.

    • constantcrying2 hours ago |parent

      Commercial jet airliners, which in all likelihood you have flown on, have system controllers which intentionally dereference the null pointer. Yes, I have seen the code; the intention was an integrity check at startup, which computed a checksum of the memory, and that memory included the value stored at address zero.
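
      A heavily hedged sketch of what such code can look like; the bounds are illustrative, and this is undefined behavior in standard C that only "works" on bare-metal targets where address zero is ordinary, readable memory:

          #include <stdint.h>

          static uint32_t startup_checksum(void)
          {
              /* Sum the first 4 KiB of memory, starting at address 0.
                 UB per the C standard; only meaningful on bare metal. */
              const volatile uint8_t *p = (const volatile uint8_t *) 0;
              uint32_t sum = 0;

              for (uint32_t i = 0; i < 0x1000u; i++)
                  sum += p[i];
              return sum;
          }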

  • dzongaa day ago

    I guess a bigger conversation could be had in regards to:

    what leads to better code in terms of understandability & preventing errors

    Exceptions (what almost every language does) or Error codes (like Golang)

    Are there folks here who choose to use error codes and forgo exceptions completely?

    • jandrewrogersa day ago |parent

      There isn't much of a conversation to be had here. For low-level systems code, exceptions introduce a bunch of issues and ugly edge cases. Error codes are cleaner, faster, and easier to reason about in this context. Pretty much all systems languages use error codes.

      In C++, which supports both, exceptions are commonly disabled at compile-time for systems code. This is pretty idiomatic, I've never worked on a C++ code base that used exceptions. On the other hand, high-level non-systems C++ code may use exceptions.

      • bluGill11 hours ago |parent

        What you wrote is historically correct, but newer analyses show exceptions are faster than error codes if you actually check the error codes. Of course, checking error codes is tedious, so often you don't. Also, in microbenchmarks error codes are faster; only when you run more complex benchmarks do exceptions show up as faster.

        • jandrewrogers3 hours ago |parent

          The performance benefits of exceptions are not borne out in practice in my experience relative to other error handling mechanisms. It doesn't replicate. But that is not the main reason to avoid them.

          Exceptions have very brittle interaction with some types of low-level systems code because unwinding the stack can't be guaranteed to be safe. Trying to make this code robustly exception-safe requires a lot of extra code and has runtime overhead.

          Using exceptions in these kinds of software contexts is strictly worse from a safety and maintainability standpoint.

      • dzonga21 hours ago |parent

        thanks for the explanation.

  • zerofor_conduct19 hours ago

    program like a fighter pilot? that makes no sense.

  • rramadass16 hours ago

    AUTOSAR's free PDF, Guidelines for the use of the C++14 language in critical and safety-related systems (defined as an update to MISRA C++ 2008): http://www.autosar.org/fileadmin/standards/R18-10_R4.4.0_R1....

    Note that both MISRA and AUTOSAR's guidelines have been combined into a single standard "MISRA C++ 2023" which has been updated for C++17.

    Breaking Down the AUTOSAR C++14 Coding Guidelines - https://www.parasoft.com/blog/breaking-down-the-autosar-c14-...

  • FpUsera day ago

    Her point about exceptions vs. error codes was that someone failed to catch an exception of a particular type and things went south, whereas if they had instead "caught" an error code all would have been nice and dandy. Well, one might fail to handle error codes just as well.

    That is of course not to say that exceptions and error codes are the same.

  • semiinfinitelya day ago

    Even with 90% of C++ features banned, the language remains ~5x larger than every other programming language.

    • pjmlpa day ago |parent

      Check C# 10, Python 3.14, D, Haskell,...

      • metaltyphoon17 hours ago |parent

        I don't consider C# a very large language. Most of what has been added removes boilerplate code. Swift, a much younger language, is way more complicated IMO.

        • pjmlp15 hours ago |parent

          Someone doing maintenance work on a C# project might find code going all the way back to C# 1.0.

          Also, the improvements to low-level programming that have been made since C# 7, and a few semantic changes, aren't about removing boilerplate code.

          Then, since a language is useless without its standard library, there have been plenty of changes in how to do P/Invoke, COM interop, and web application development, and naturally you need to know in which release specific features were introduced.

  • nikanja day ago

    In 1994 C++ compilers were buggy, and a modernization of the C++ allowed features list is still stuck in a committee somewhere?

  • ltbarcly3a day ago

    Paging our Ada fanboys! You're missing it!

    • __patchbit__14 hours ago |parent

      If in 1994 Joe Armstrong and Alan Kay were to list 7 alternative languages to C++ for programming fighter jets, what would they have done?

  • mwkaufmaa day ago

    TL;DR

    - no exceptions

    - no recursion

    - no malloc()/free() in the inner-loop

    • thefourthchimea day ago |parent

      I've worked on a playout system for broadcast television. The software has to run for years at a time without any leaks. We need to send out one frame of television exactly on time, every time.

      It is "C++", but we also follow the same standards: static memory allocation, no exceptions, no recursion. We don't use templates. We barely use inheritance. It's more like C with classes.

      • EliRiversa day ago |parent

        I worked on the same for many years; same deal - playout system for broadcast, years of uptime, never miss a frame.

        The C++ was atrocious. Home-made reference counting that was thread-dangerous; depending on what kind of object the multi-multi-multi diamond inheritance produced, sometimes it would increment and sometimes it wouldn't. Entire objects made out of weird inheritance chains. Even the naming system was crazy: "pencilFactory" wasn't a factory for making pencils, it was anything that was made by the factory for pencils. Inheritance rather than composition was very clearly the model; if some other object had a function you needed, you would inherit from that too. Which led to some objects inheriting from the same class a half-dozen times in all.

        The multi-inheritance system was given weird control by objects defining, on creation, what kinds of objects (from the set of all kinds they actually were) they could be cast to via a special function, but any time someone wanted one that wasn't on that list they'd just cast to it using C++ anyway. You had to cast, because the functions were all deliberately private, to force you to cast. But not how C++ would expect you to cast, oh no!

        Crazy, home-made containers that were like Win32 opaque objects: you'd just get a void pointer to the object you wanted, and to get the next one you'd pass that void pointer back in. They were obviously trying to copy MS COM, with IUnknown and other such home-made QueryInterface nonsense, in effect creating their own inheritance system on top of C++.

        What I really learned is that it's possible to create systems that maintain years of uptime and keep their frame accuracy even with the most atrocious, utterly insane architecture decisions that make it so clear the original architect was thinking in C the whole time and using C++ to build his own terrible implementation of C++, and THAT'S what he wrote it all in.

        Gosh, this was a fun walk down memory lane.

        • ueckera day ago |parent

          A multi-inheritance system is certainly not something somebody who "was thinking in C" would ever come up with. This sounds more like a true C++ mess.

          • throwaway2037a day ago |parent

            I worked on a pure C system early in my career. They implemented multiple inheritance (a bit like Perl/Python MRO style) in pure C. It was nuts, but they didn't abuse it, so it worked OK.

            Also, serious question: Are there any GUI toolkits that do not use multiple inheritance? Even Java Swing uses multiple inheritance through interfaces. (I guess DotNet does something similar.) Qt has it all over the place.

            • aninteger18 hours ago |parent

              The best example I can think of is the Win32 controls UI (user32, CreateWindow/RegisterClass) in C. You likely can't read the source code for this, but you can see how Wine did it, or Wine alternatives (like NetBSD's PEACE runtime, now abandoned).

              Actually, the only toolkit I know that sort of copied this style is Nakst's Luigi toolkit (also in C).

              Neither really used inheritance; both use composition, with "message passing" sent to different controls.

            • uecker11 hours ago |parent

              I take this back ;-) People come up with crazy things. Still I would not call this "C thinking". Building object-oriented code in C is common though and works nicely.

            • nottorp21 hours ago |parent

              One could say toolkits done in C++ use multiple inheritance because C++ doesn't have interfaces though.

            • WD-4218 hours ago |parent

              GTK does not support multiple inheritance afaik.

              • aninteger17 hours ago |parent

                It doesn't but it definitely "implements" a single inheritance tree (with up casting/down casting) which I believe Xt toolkits (like Motif) also did.

        • webdevvera day ago |parent

          It is also interesting that places where you would expect quite 'switched-on' software development practices tend to be the opposite, while the much-maligned 'codemonkeys' at 'big tech' in fact tend to be pretty damn good.

          It was painful for me to accept that the most elite programmers I have ever encountered were the ones working in high-frequency trading, finance, and the mass-producers of 'slop' (adtech, etc.).

          I still ache to work in embedded fields, in an 8 kB constrained environment, writing perfectly correct code without a cycle wasted, but I know from (others') experience that embedded software tends to have the worst software developers and software development practices of them all.

    • krashidova day ago |parent

      Has anyone else here banned exceptions (for the most part) in less critical settings (like a web app)?

      I feel like that's the way to go, since you don't obscure control flow. I have also been considering adding assertions like TigerBeetle does:

      https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...

      • tonfaa day ago |parent

        Google style bans them: https://google.github.io/styleguide/cppguide.html#Exceptions

      • fweimera day ago |parent

        Most large open-source projects ban exceptions, often because the project was originally converted from C and is just not compatible with non-local control flow. Or the project originated within an organization which has tons of C++ code that is not exception-safe and is expected to integrate with that.

        Some large commercial software systems use C++ exceptions, though.

        Until recently, pretty much all implementations seemed to have a global mutex on the throw path. With higher and higher core counts, the affordable throw rate in a process was getting surprisingly slow. But the lock is gone in GCC/libstdc++ with glibc. Hopefully the other implementations follow, so that we don't end up with yet another error handling scheme for C++.

      • mwkaufmaa day ago |parent

        Lots of games, and notably the Unreal Engine, compile without exceptions. EASTL back in the day was in part written to avoid the poor no-exception support in Dinkumware STL and STLport.

        • jesse__21 hours ago |parent

          Basically all high profile engine teams I know of ban exceptions. They're worse than useless

    • Taniwhaa day ago |parent

      Yup, same for any real-time code: new/malloc/free/delete take hidden mutexes and can cause priority inversion as a result - heisenbugs, that audio/video dropout that happens rarely and you can't quite catch - best to code so as to avoid them.

      • AnimalMuppeta day ago |parent

        They also can simply fail, if you are out of memory or your heap is hopelessly fragmented. And they take an unpredictable amount of time. That's very bad if you're trying to prove that you satisfy the worst-case timing requirement.

    • pton_xda day ago |parent

      That's standard in the games industry as well. Plus many others like no rtti, no huge dependencies like boost, no smart pointers, generally avoid ctors / dtors, etc.

    • wiseowisea day ago |parent

      That’s hardly 90% of C++.

      • eltetoa day ago |parent

          If you compile with -fno-exceptions you lose almost all of the STL.

          You can compile with exceptions enabled, use the STL, but strictly enforce no allocations after initialization. It depends on how strict the spec you are trying to hit is.

        • vodoua day ago |parent

          Not my experience. I work with a -fno-exceptions codebase. Still quite a lot of std left. (Exceptions come with a surprisingly hefty binary size cost.)

          • theICEBeardka day ago |parent

            Apparently, according to some ACCU and CppCon talks by Khalil Estell, this can be largely mitigated even in embedded, lowering the size cost by orders of magnitude.

            • vodoua day ago |parent

              Need to check it out. I guess you mean these:

              - C++ Exceptions Reduce Firmware Code Size, ACCU [1]

              - C++ Exceptions for Smaller Firmware, CppCon [2]

              [1]: https://www.youtube.com/watch?v=BGmzMuSDt-Y

              [2]: https://www.youtube.com/watch?v=bY2FlayomlE

            • Espressosaurusa day ago |parent

              Yeah. I unfortunately moved to an APU where code size isn't an issue so I never got the chance to see how well that analysis translated to the work I do.

              Provocative talk though, it upends one of the pillars of deeply embedded programming, at least from a size perspective.

          • eltetoa day ago |parent

            Not exactly sure what your experience is, but if you work in an -fno-exceptions codebase then you know that STL containers are not usable in that regime (with the exception of std::tuple, it seems; see the freestanding comment below). I would argue that the majority of use cases for the STL are its containers.

            So, what exact parts of the STL do you use in your codebase? Must be mostly compile-time stuff (types, type traits, etc.).

            • alchemioa day ago |parent

              You can use std containers in a no-exceptions environment. Just know that if an error occurs the program will terminate.

              • WD-4218 hours ago |parent

                We've banned exceptions! If any occur, we just don't catch them.

              • elteto21 hours ago |parent

                So you can’t use them then.

                • _flux14 hours ago |parent

                  I don't think it would be typical to depend on exception handling when dealing with boundary conditions with C++ containers.

                  I mean .at is great and all, but it's really for the benefit of eliminating undefined behavior and if the program just terminates then you've achieved this. I've seen decoders that just catch the std::out_of_range or even std::exception to handle the remaining bugs in the logic, though.

        • theICEBeardka day ago |parent

          Are you aware of the freestanding definition of the STL? See here: https://en.cppreference.com/w/cpp/freestanding.html Large and useful parts of it are available if you build with a newer C++ standard.

          • eltetoa day ago |parent

            Well, it's mostly type definitions and compiler stuff, like type_traits. Although I'm pleasantly surprised that std::tuple is fully supported. It looks like C++26 will bring in a lot more support for freestanding stuff.

            No algorithms or containers, which to me are probably 90% of what is most heavily used in the STL.

      • bluGilla day ago |parent

        Large parts of the standard library call malloc/free.

        • gmueckla day ago |parent

          But you won't miss those parts much if all your memory is statically initialized at boot.

        • canypa day ago |parent

          Or throw.

    • jandrewrogersa day ago |parent

      i.e. standard practice for every C++ code base I've ever worked on

      • DashAnimala day ago |parent

        What industry do you work in? Modern RAII practices are pretty prevalent

        • Cyan488a day ago |parent

          This is common in embedded systems, where there is limited memory and no OS to run garbage collection.

          • criddell18 hours ago |parent

            Garbage collection in C++?

        • jandrewrogersa day ago |parent

          What does RAII have to do with any of the above?

          • WD-42a day ago |parent

            0 allocations after the program initializes.

            • tialaramexa day ago |parent

              RAII doesn't imply allocating.

              My guess is that you're assuming all user defined types, and maybe even all non-trivial built-in types too, are boxed, meaning they're allocated on the heap when we create them.

              That's not the case in C++ (the language in question here) and it's rarely the case in other modern languages because it has terrible performance qualities.

            • Gupiea day ago |parent

              Open a file in the constructor, close it in the destructor. RAII with 0 allocations.

              • dh202219 hours ago |parent

                  A std::vector<int> allocated and freed on the stack will allocate an array for its ints on the heap…

                • Gupie6 hours ago |parent

                  Sure, but my point was that RAII doesn't need to involve the heap. Another example would be acquiring and releasing a mutex.

                • usefulcat15 hours ago |parent

                  I've heard that MSVC does (did?) that, but if so that's an MSVC problem. gcc and clang don't do that.

                  https://godbolt.org/z/nasoWeq5M

                  • menaerus13 hours ago |parent

                    WDYM? Vector is an abstraction over dynamically sized arrays so sure it does use heap to store its elements.

                    • aw16211073 hours ago |parent

                      I think usefulcat interpreted "std::vector<int> allocated and freed on the stack" as creating a default std::vector<int> and then destroying it without pushing elements to it. That's what their godbolt link shows, at least, though to be fair MSVC seems to match the described GCC/Clang behavior these days.

            • nicoburnsa day ago |parent

              RAII doesn't necessarily require allocation?

            • jjmarra day ago |parent

              Stack "allocations" are basically free.

              • grougnax12 hours ago |parent

                No. And they're unsafe. Avoid them at all costs.

          • DashAnimala day ago |parent

            Well if you're using the standard library then you're not really paying attention to allocations and deallocations for one. For instance, the use of std::string. So I guess I'm wondering if you work in an industry that avoids std?

            • jandrewrogersa day ago |parent

              I work in high-scale data infrastructure. It is common practice to do no memory allocation after bootstrap. Much of the standard library is still available despite this, though there are other reasons to not use the standard containers. For example, it is common to need containers that can be paged to storage across process boundaries.

              C++ is designed to make this pretty easy.

          • nmhancoca day ago |parent

            Not an expert but I’m pretty sure no exceptions means you can’t use significant parts of std algorithm or the std containers.

            And if you’re using pooling I think RAII gets significantly trickier to do.

            • theICEBeardka day ago |parent

              https://en.cppreference.com/w/cpp/freestanding.html to see the parts you can use.

          • astrobe_a day ago |parent

            And what does "modern" have to do with it, anyway?

    • tialaramexa day ago |parent

      Forbidding recursion is pretty annoying. One of the nice things on the distant horizon for Rust is an explicit tail recursion operator, perhaps named `become`. Unlike naive recursion, which (as this video explains; I haven't followed the link, but I'm assuming it is Laurie's recent video) risks stack overflow, optimized tail recursion doesn't grow the stack.

      The idea of `become` is to signal "I believe this can be tail recursive" and then the compiler is either going to agree and deliver the optimized machine code, or disagree and your program won't compile, so in neither case have you introduced a stack overflow.

      Rust's Drop mechanism throws a small spanner into this: in principle, if every function foo makes a Goose and then in most cases calls foo again, we shouldn't Drop each Goose until the functions return, which is too late, since that's now our tail instead of the call. So the `become` feature AIUI will spot this, and Drop that Goose early (or refuse to compile) to support the optimization.

      • tgva day ago |parent

        In C, tail recursion is a fairly simple rewrite. I can't think of any complications.

        But ... that rewrite can increase the cyclomatic complexity of the code, on which they have some hard limits, so perhaps that's why it isn't allowed? And the stack overflow, of course.
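
        For instance, a sum over an array, first tail-recursive and then rewritten as the loop a compiler (or programmer) would produce:

            #include <stddef.h>

            static long sum_rec(const int *a, size_t n, long acc)
            {
                if (n == 0)
                    return acc;
                return sum_rec(a + 1, n - 1, acc + a[0]);  /* tail call */
            }

            static long sum_iter(const int *a, size_t n)
            {
                long acc = 0;
                for (; n > 0; a++, n--)  /* same computation, constant stack */
                    acc += a[0];
                return acc;
            }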

        • AnimalMuppeta day ago |parent

          I don't know that it's just cyclomatic complexity. I think at least part of it is proving that you meet hard real-time constraints. Recursion is harder to analyze that way than "for (i = 0; i < 16; i++) ..." is.

      • morshu90017 hours ago |parent

        Thinking recursively is one thing, but I can't remember the last time I've wanted to use recursion in real code.

      • zozbot234a day ago |parent

        The tail recursion operator is a nice idea, but the extra `become` keyword is annoying. I think the syntax should be `return as`: it uses existing keywords, is unambiguous and starts with `return` which tail recursion is a special case of.

        • tialaramexa day ago |parent

          Traditionally the time for bike shedding the exact syntax is much closer to stabilization.

          Because Rust is allowed (at this sort of distance in time) to reserve new keywords via editions, it's not a problem to invent more, so I generally do prefer new keywords over re-using existing words but I'm sure I'd be interested in reading the pros and cons.

          • zozbot234a day ago |parent

            The usual argument against a decorated `return` keyword is that a proper tail call is not a true "return" since it has to first drop any locals that aren't passed thru to the tail call. I don't think it's a very good argument because if the distinction of where exactly those implicit drops occur was that important, we'd probably choose to require explicit drops anyway.

    • petermcneeleya day ago |parent

      This is basically video games prior to 2010

      • mwkaufmaa day ago |parent

        Relax the dynamic-memory restriction to "limit per-event memory allocation to the bump allocator" and it's still mostly true for many AAA/AAAA games I work on today.

        • petermcneeleya day ago |parent

            Developers have gotten lazy. I'm glad to hear that where you are, they are at least still trying.

          • mwkaufmaa day ago |parent

            Nah I'm lazy, too.

            • petermcneeleya day ago |parent

              but_you_were_the_chosen_one.jpeg

    • mslaa day ago |parent

      At that point, why not write in C? Do they think it's C/C++ and not understand the difference?

      > no recursion

      Does this actually mean no recursion, or does it just mean limiting stack use? Because processing a tree, for example, is recursive even if you use an array instead of the call stack to keep track of your progress. The real trick is limiting memory consumption, which requires limiting input size.
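
      For instance, a preorder walk with an explicit fixed-size stack; this is a sketch, with MAX_DEPTH standing in for whatever bound the input is guaranteed to respect:

          #include <stddef.h>

          typedef struct Node { struct Node *left, *right; } Node;

          #define MAX_DEPTH 64  /* must follow from a bound on input size */

          static void visit_all(Node *root, void (*visit)(Node *))
          {
              Node *stack[MAX_DEPTH];
              size_t top = 0;

              if (root != NULL)
                  stack[top++] = root;
              while (top > 0) {
                  Node *n = stack[--top];
                  visit(n);
                  /* Guards keep the walk memory-safe; a tree deeper
                     than MAX_DEPTH would violate the input bound. */
                  if (n->right != NULL && top < MAX_DEPTH)
                      stack[top++] = n->right;
                  if (n->left != NULL && top < MAX_DEPTH)
                      stack[top++] = n->left;
              }
          }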

      • billforsternz19 hours ago |parent

        Semi serious idea: A lot of people (including me) write C++ but it's basically C plus a small set of really ergonomic and useful C++ features (eg references). This should be standardised as a new language called C+

        • zeroc810 hours ago |parent

          That probably would see more success than the monster they've created. I've been out of the C++ world for a while, but I hardly recognize the language anymore.

      • mwkaufmaa day ago |parent

        For a long time, at least in MS and Intel, the C++ compilers were better than the C compilers.

      • drnick1a day ago |parent

        You may still want to use classes (where they make sense), references (cleaner syntax than pointers), operator overloading, etc. For example, a linear algebra library is far nicer to write and use in C++.

        • jesse__21 hours ago |parent

          Function overloading is nice, too

      • mwkaufmaa day ago |parent

        Re: recursion. She explains in her video. Per requirements, the stack capacity has to be statically verifiable, and not dependent on runtime input.

  • _rpxpx9 hours ago

    A very jovial discussion of systems that have killed millions of innocent people. Maybe you could do the same treatment of Nazi gas chambers or something for the next video?

    • lan3219 hours ago |parent

      Weird link to gas chambers... Do you think 'genocides' will go down if we bring first-world militaries to third-world standards?

      • _rpxpx3 hours ago |parent

        I think Israel would find it harder to kill children, yes. In my view a good thing.

    • hyperbolablabla8 hours ago |parent

      Technology is not evil, the people wielding it are. It's a little disingenuous and dare I say insensitive to make this analogy.

      • _rpxpx3 hours ago |parent

        Disingenuous is making a bald statement like that about a very long and involved debate in philosophy. Suggest you read around the subject a bit first before making such haughty comments... Could start here:

        https://plato.stanford.edu/entries/technology/

        https://bpb-us-e2.wpmucdn.com/sites.uci.edu/dist/a/3282/file...

  • grougnax12 hours ago

    They should use Rust

    • zeroc810 hours ago |parent

      It's not quite there yet: https://ferrocene.dev/en?ref=blog.pictor.us

  • chairmanstevea day ago

    Ahhh. They use C++...

    That explains all the delays on the F-35...

    • smlacya day ago |parent

      You think a fighter jet should run Ruby on rails instead?

      • zenlota day ago |parent

        No jet should be on rails.

        • da_chicken17 hours ago |parent

          What about the launch rail on an aircraft carrier?

    • riku_ikia day ago |parent

      what would be so obviously better choice of language in your opinion?

      • throwaway2037a day ago |parent

      You raise a good point. No trolling: I wonder what languages they seriously considered? Example: I am sure the analysis included C in the mix. Also, I wonder if they considered compiler extensions. Example: since C doesn't have destructors, maybe you could add a compiler extension providing a defer keyword to let people schedule cleanup. Even when they decided upon C++, I am sure there was a small holy war over which features were allowed. When they started the JSF program in the 1990s, C++ compilers were pretty terrible!

        • jasonwatkinspdxa day ago |parent

          Ada and C++ were the only realistic options at the time, and Ada developers are difficult to hire.

        But honestly, with this sort of programming the language distinctions matter less. As the guide shows, you restrict yourself to a subset of the language where distinctions between languages aren't as meaningful. Basically everything runs out of statically allocated global variables and arrays. You don't have to worry about fragmentation and garbage collection if there's no allocation at all. Basically, remove every possible source of variability in execution.

          So really you could do this in any c style language that gives you control over the memory layout.

        • riku_ikia day ago |parent

        My recollection is that traditionally they used Ada for avionics, but per some internet claims they had difficulty hiring enough Ada programmers for such large projects, so they switched to C++.

      • grougnax12 hours ago |parent

        Rust

  • xvilka13 hours ago

    Time to rewrite it in Rust.