> In her “diagram of development,” Lovelace gives the fourth operation as v5 / v4. But the correct ordering here is v4 / v5. This may well have been a typesetting error and not an error in the program that Lovelace devised. All the same, this must be the oldest bug in computing. I marveled that, for ten minutes or so, unknowingly, I had wrestled with this first ever bug.
The real mark of a non-trivial program is that it doesn't work on the first try.
It's incredible how Babbage, frustrated that the mass-production precision machining technology necessary to make his simpler engine work didn't exist yet, decided that the best way forward was to design a new system an order of magnitude more complex and then go to Italy to somehow find more advanced manufacturing.
I had an employee like that.
He'd want to do something and hit a roadblock, so he'd design his own tool. (He wrote his own font once, because he didn't like the way the built-in ones worked at teeny point sizes.)
Best damn engineer I ever knew, but I had to keep an eye out for rabbitholing.
Obviously yak shaving is a hazard in that it can result in the original project getting abandoned or deadlines being missed, but often the tools you develop along the way are more (economically) valuable than the original project. They're often more widely applicable and narrower in scope, so they're more likely to get done and more likely to find an audience.
An example that comes to my mind is the Rust library mio. The Metal database for which it was the I/O component never materialized. But mio is a core component in the Rust ecosystem.
Similarly, many applications could benefit from a font that's legible at tiny sizes, not just the one that it was developed for. (Though obviously in most work cultures, this would be considered inappropriate, and for good reasons. My remarks apply mostly to greenfield research/personal projects where deadlines are loose.)
Babbage would likely have had more success if he had stayed in England and opened his own precision machine shop.
> He wrote his own font, once, because he didn't like the way...
Wonder how many folks here have done the same thing, building and discarding in the throes of creation like Tibetan monks:
> The real mark of a non-trivial program is that it doesn't work on the first try.
Not true.
A really cool article. From the Intro:
> She thought carefully about how operations could be organized into groups that could be repeated, thereby inventing the loop. She realized how important it was to track the state of variables as they changed, introducing a notation to illustrate those changes. As a programmer myself, I’m startled to see how much of what Lovelace was doing resembles the experience of writing software today.
> So let’s take a closer look at Lovelace’s program. She designed it to calculate the Bernoulli numbers. To understand what those are, we have to go back a couple millennia to the genesis of one of mathematics’ oldest problems.
It does a nice job getting into just enough detail to make you appreciate what she did. If she were alive today, you could imagine her down the hall grinding away on some problem in Rust (I have a feeling she'd have a strong preference for statically typed languages).
However much credit Ada deserves for her programming techniques, to me the thing that always stood out is her ability to see the big picture wrt computation:
> Again, it [Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
Imagine coming up with this idea in 1842, a whole century before the first actual programmable computers would be built, based solely on a description of a prototype of a mechanical computer. This is hacking extraordinaire.
I agree, this is the thing that stood out to me. There's this kind of amazing leap you have to do to understand how computers do what they do. How does a thing that adds and subtracts numbers paint pictures? Once you grasp that you can transform those things into numbers and then operate on them, the whole world of computation opens up. It's amazing Ada was thinking about this 100 years before computers really existed.
I agree she was a visionary, but take note that by the time she was active, people were already building complex mechanical automata that executed stored programs implemented using cams and gears: https://en.wikipedia.org/wiki/Jaquet-Droz_automata (see also https://en.wikipedia.org/wiki/Maillardet%27s_automaton). I think a small number of very intelligent people would see Babbage's work and Jaquet-Droz and conclude "hmm, if we mash these together with some creativity, it seems reasonable the result would be a programmable automaton capable of painting".
Programmable looms (which used a type of punch card), such as the Jacquard loom, had existed for a little while - if I recall correctly, she specifically referenced this as inspiration for some of her ideas. Not trying to diminish how impressive her work was, but I do believe some form of primitive mechanical computation had already been done for a while.
The Jacquard loom was indeed well known, and one of the sources of inspiration for Babbage, but it is still fundamentally a system designed around a specific task - the cards directly encode operations on hooks.
What Ada is saying here is that, once you have a machine that lets you do generic operations on numbers, you can use it to do all kinds of non-math stuff, so long as you can come up with ways to encode other things as numbers (= finite sets of symbols). This was not at all obvious to other people who worked on the Engine, including Babbage himself.
Given that the looms’ punched cards already represented non-math stuff, the thought wasn’t entirely far-fetched.
Tide prediction machines came about 30 years later as an application of the "science of harmony".
> She realized how important it was to track the state of variables as they changed, introducing a notation to illustrate those changes.
The thing that really stuck out to me was how similar it was to static single assignment. https://en.wikipedia.org/wiki/Static_single-assignment_form#...
I think this is a state-of-the-art technique today and she had it, what, 180 years ago?
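To illustrate the resemblance, here's a rough sketch in plain C (my own example, not Lovelace's notation or the article's code): the same computation written with one mutable variable versus in SSA-style single-assignment form, where each intermediate state gets its own name.

```c
#include <stdio.h>

/* Ordinary imperative style: one storage location, overwritten in place,
 * so the intermediate states of the computation disappear as you go. */
int imperative(int a, int b, int c) {
    int v = a + b;
    v = v * c;   /* the old value of v is lost here */
    v = v - a;
    return v;
}

/* SSA-style: every value is assigned exactly once and gets its own name,
 * so each successive state of the computation stays visible -- loosely
 * analogous to the way Lovelace's table tracks a variable's changing
 * values row by row (an analogy only, not her actual notation). */
int ssa_style(int a, int b, int c) {
    int v1 = a + b;
    int v2 = v1 * c;
    int v3 = v2 - a;
    return v3;
}

int main(void) {
    printf("%d %d\n", imperative(2, 3, 4), ssa_style(2, 3, 4)); /* prints 18 18 */
    return 0;
}
```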
> In 1975, Paul Allen flew out to Albuquerque to demonstrate the BASIC interpreter that he and Bill Gates had written for the Altair microcomputer. Because neither of them had a working Altair, Allen and Gates tested their interpreter using an emulator that they wrote and ran on Harvard’s computer system. The emulator was based on nothing more than the published specifications for the Intel 8080 processor. When Allen finally ran their interpreter on a real Altair—in front of the person he and Gates hoped would buy their software—he had no idea if it would work. But it did.
So, the real unsung heroes here are the Intel engineers who wrote a spec so exact that software running on an emulator built from nothing more than that spec would also run without a hitch on the actual hardware?
What I think is the coolest part is her actual work in the "notes" she attached to the translation.
See: https://upload.wikimedia.org/wikipedia/commons/c/cf/Diagram_...
and: https://en.wikipedia.org/wiki/Note_G
The article also references this Python translation of her work:
Half the article is about Note A and Note G.
Sure, but it is spoken about in the abstract. I enjoyed the article, but why not at least include "some" of the actual notes she wrote or at least a screenshot?
There was a link directly to Note G in the article; in fact, it's the exact same URL that you linked to.
Yes, and I said that explicitly in my post.
The difference is that in my post it's one of the featured things, while in the article, which claims to show what the program actually did, it's buried in the text.
Has anyone built a virtual machine out of Babbage's instruction set and then tried Ada's program?
John Walker built a virtual machine for Babbage's instruction set, and it has a web emulator: https://fourmilab.ch/babbage/emulator.html.
I don't think Ada's program is available as an example though, so you'll need to input it manually.
Fun fact: my compiler course project was creating a C compiler targeting the emulator https://github.com/Christopher-Chianelli/ccpa (warning, said code is terrible).
Not quite, but this emulates her program.
> In fact, aside from the profusion of variables with unhelpful names, the C translation of Lovelace’s program doesn’t look that alien at all.
Clearly the author never met my coworkers.
I'm reminded of a high school programming class where a project partner named variables with the most crude and lewd words he could imagine. Not that I was prudish, but he unsurprisingly never remembered what "butts" was for and somehow never figured out why he kept getting confused by his own code.
...or worked with any mathematicians/physicists/engineers who program. As soon as I saw that, I thought "typical quant".
Like my dad (a chemical engineer) learned to program in FORTRAN, which used to insist variable names were one letter and up to two digits. He later learned Basic, but his code was still spiritually FORTRAN, so the one-letter-two-digits thing stuck. I thought that was just him, but then much later I went to work on Wall St and had to work with quants who were copying code out of "Numerical Recipes", and it was exactly the same, just now in C.
I helped port a physicist's assembly code long ago; variables were named alphabetically in the order encountered in the code, e.g. A, B, ...A1, ..., AA1, etc. up to ZZ23.
Still amazed that the nearly incomprehensible code (and the port) worked.
Not sure which Fortran this refers to. I never used Fortran I, but as I understand it, names were up to 6 characters long, first character alphabetic; names with initial letter A-H or O-Z were REAL, I-N INTEGER (Fortran II added declarations to override the defaults). Dartmouth Basic restricted names to a single letter and an optional digit.
Incidentally, the various Autocode languages of the 1950s in Britain had 1-character variable names.
That naming convention makes perfect sense to the mathematician, so why not? It's why we use `for (int i = 0; i < n; i++)` in for loops; it's the same naming convention as the index in a mathematical sigma sum.
A loop counter doesn't carry much semantic weight so it gets a short name. Doing that for important things that deserve a descriptive name is the problem. Maybe passable with literate programming, but even Knuth's code is pretty inscrutable due to transclusions everywhere.
The question to me has always been: does it make sense in the way of being intuitively understandable, or does it only make sense if it was drilled into you long enough?
(I suspect the latter)
Oh yeah. And if you're like my dad you call them "do loops" not "for loops"
> The Difference Engine was not a computer, because all it did was add and subtract.
The definition of "computer" is pretty grey for the pre-digital era, and it wasn't Turing complete, but is it actually controversial whether it was a computer?
The Difference Engine basically implemented one algorithm in hardware, while the Analytical Engine was supposed to run a program. I believe that could make the latter a computer.
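To make "one algorithm" concrete: the Difference Engine tabulated polynomials by the method of finite differences, i.e. by nothing but repeated addition. A minimal sketch of that arithmetic in C (just the math, not the Engine's mechanics; the polynomial is my own example):

```c
#include <stdio.h>

/* Tabulate p(x) = 2x^2 + 3x + 5 for x = 0..5 using only additions.
 * For a degree-2 polynomial the second difference is constant (here 4),
 * so once the initial value and differences are set up, each new entry
 * is produced by two additions -- which is all the Engine had to do. */
int main(void) {
    long value = 5;  /* p(0) */
    long d1    = 5;  /* first difference:  p(1) - p(0) = 10 - 5 */
    long d2    = 4;  /* second difference: constant for a quadratic */

    for (int x = 0; x <= 5; x++) {
        printf("p(%d) = %ld\n", x, value);
        value += d1;  /* next tabulated value */
        d1    += d2;  /* next first difference */
    }
    return 0;
}
```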
The first stored program computer is a remarkable achievement, even if they didn't actually build it.
The Analytical Engine wasn't a stored program computer. It most closely followed the Harvard architecture, with instructions read from punch card memory. The Analytical Engine's claim to fame is that it was the first Turing complete computer to be designed.
> with instructions read from punch card memory
If that isn't a stored program, I don't know what is.
A stored program computer refers to the computer architecture where program instructions and data are stored in the same memory. This is also referred to as the Von Neumann architecture.
In contrast, a lot of early computers were built with separate instruction memory like punch cards. This is called the Harvard Architecture. If the instructions were immutable, which they usually were, then things like modifying the program at runtime were not possible.
Concrete examples of this difference are the Harvard Mk 1 and the Manchester Mk 1, the former being a Harvard architecture computer and the latter a stored program computer, i.e. a von Neumann architecture.
> Difference Engine basically implemented one algorithm in hardware
So, did Pong run on a computer?
No.
That the Difference Engine and Analytical Engine belong on the timeline of computing history isn't particularly controversial, but I've never seen anyone try to claim that the Difference Engine itself was a computer (it's a mechanical calculator). The Wikipedia page doesn't even try to link it directly to the history of computers; you have to go to the Analytical Engine to see the Difference Engine's place in the "history of computing" timeline.
Probably not. As stated in TFA, the controversy is because Lovelace was a woman and some people think propping her up is basically a DEI retcon of history; the rest of us don't care. But I don't think it has anything whatsoever to do with actual computers.
https://en.wikipedia.org/wiki/Ada_Lovelace#Controversy_over_...
> All but one of the programs cited in her notes had been prepared by Babbage from three to seven years earlier. The exception was prepared by Babbage for her, although she did detect a "bug" in it. Not only is there no evidence that Ada ever prepared a program for the Analytical Engine, but her correspondence with Babbage shows that she did not have the knowledge to do so.
> Bruce Collier wrote that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way"
The common claims are that Ada Lovelace was the first person to write a computer program, or that she was actually the primary driver in developing the Analytical Engine. Both such claims fall into the area of "DEI retcon", as you choose to phrase it.
Although on a more pedantic note, Babbage wasn't the first person to program a computer either. Computers that aren't Turing complete are still computers. The Jacquard loom is one such example, and unlike the Analytical Engine it was actually built and put to practical use.
That's so funny...
Mathematicians for 150 years: Ada Lovelace is kind of on top of it.
Random from 2024: probably just a diversity footnote.
Funny indeed. Ada Lovelace has been persistently recognised for a very long time, but has never been held up as a suffragette-type martyr; by all accounts, she enjoyed herself out on the bleeding edge and is still making people uncomfortable 150 years later for not fitting into any of the stereotypes of her time. It's clear from the footnotes that the crowd around Babbage and Lovelace grasped the possibilities. Also interesting: during the Apollo moon missions, the memory modules for the guidance computers were crafted by hand by some of the last lace makers to survive the introduction of the Jacquard looms and their punch cards.
Seriously. As the article states, everyone else was like, "Wow, cool, we will make a machine that makes calculating things easier."
Meanwhile Ada over here was going, "Oh shit, this can do literally anything that can be done by steps of math. Someday machines following that formula will make music."
Ada is not the first programmer. Ada is the first computer scientist. She understood the ramifications of what we would eventually call "Turing complete" systems, understood the value of "general purpose" in a general-purpose computer, and seemingly understood that more than just numbers could be represented and calculated in a computer.
Yes this is the most interesting thing about her writing - she foresaw a lot of later work.
An entire programming language was named after her in 1980 (by a man), back when such things didn't exist.
I don't think there is anything controversial here: the Difference Engine was a calculator that could only do a predefined set of hardwired computations, while the Analytical Engine was a true Turing-complete computer.
Is an early 20th-century mechanical desk calculator a computer? There is no consensus on a definition, but for me, a computer follows a program. Maybe even only one fixed program. But a program. If there is no stepping through a program, it is not a computer.
Does the iterative method used by the difference engine constitute a program?
I'm not sure I have a direct answer, but I agree something shouldn't be called a computer if it just does a one-shot, fixed-length calculation before requiring further human intervention. To be a "computer", and be associated with that general conceptspace, it should be Turing-complete and thus capable of running arbitrarily long (up to the limits of memory and hardware rot).
Earlier comment expressing annoyance at a mislabeling:
Separate comment to address a subtlety that comes up a lot:
Often you'll hear about fully homomorphic encryption (FHE) being Turing-complete. But you can't actually have a Turing complete system with variable-run-time loops that's homomorphically encrypted, because that leaks information about the inputs.
When they say FHE is Turing-complete, what they mean is that you can take an arbitrary program requiring Turing completeness, then time-bound it, unroll it into a fixed-length circuit, and run that homomorphically. Since you can keep upping the time bound, you can compute any function. So the system that translates your programs into those circuits, with no limit on the bound you set, could then be called Turing-complete -- but you couldn't say that about any of those circuits individually.
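As a concrete (if contrived) picture of that unrolling step, with no encryption involved at all, here's just the control-flow shape in C, using a made-up bound: a data-dependent loop whose running time leaks its input, versus a fixed-iteration version that always does BOUND steps and "freezes" once the real work is done. An FHE scheme would then evaluate something with the latter shape gate by gate.

```c
#include <stdio.h>

/* Data-dependent loop: the number of iterations (and hence the running
 * time) depends on the secret input n, which an observer could measure. */
int collatz_steps(int n) {
    int steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        steps++;
    }
    return steps;
}

/* Time-bounded version: always runs exactly BOUND iterations regardless
 * of n, selecting "keep going" vs "already finished" arithmetically.
 * This is the fixed-length-circuit shape; if n actually needs more than
 * BOUND steps the answer is wrong, which is why you keep raising the
 * bound until it is large enough for the inputs you care about. */
#define BOUND 1000
int collatz_steps_bounded(int n) {
    int steps = 0;
    for (int i = 0; i < BOUND; i++) {
        int done = (n == 1);                         /* 1 once finished */
        int next = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        n = done ? n : next;                         /* freeze when done */
        steps += 1 - done;                           /* count only live steps */
    }
    return steps;
}

int main(void) {
    printf("%d %d\n", collatz_steps(6), collatz_steps_bounded(6)); /* prints 8 8 */
    return 0;
}
```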
Earlier related comment: https://news.ycombinator.com/item?id=40078494
Discussed at the time (of the article):
What Did Ada Lovelace’s Program Actually Do? - https://news.ycombinator.com/item?id=17797003 - Aug 2018 (52 comments)
Also relevant is "Untangling the Tale of Ada Lovelace" from December, 2015 at https://writings.stephenwolfram.com/2015/12/untangling-the-t... with 35 comments from the time at https://news.ycombinator.com/item?id=10709730 .
> Discussed at the time (of the article)
Thank you for that careful clarification. The discussion in "Bell's Life in London and Sporting Chronicle" was far less enlightening.
Good article. This is the clearest explanation I've read of how and why Ada was meaningfully innovative and worthy of her recognition.