Lucent 7 R/E 5ESS Telephone Switch Rescue (2024)(kev009.com)
52 points by gjvc 3 days ago | 34 comments
  • luckyturkey 2 days ago

    This is such a stark contrast with how "critical infrastructure" is built now.

    A university bought a 5ESS in the 80s, ran it for ~35 years, did two major retrofits, and it just kept going. One physical system, understandable by humans with schematics, that degrades gracefully and can be literally moved with trucks and patience. The whole thing is engineered around physical constraints: -48V, cable management, alarm loops, test circuits, rings. You can walk it, trace it, power it.

    Modern telco / "UC" is the opposite: logical sprawl over other people's hardware, opaque vendor blobs, licensing servers, soft switches that are really just big Java apps hoping the underlying cloud doesn't get "optimized" out from under them. When the vendor loses interest, the product dies no matter how many 9s it had last quarter.

    The irony is that the 5ESS looks overbuilt until you realize its total lifecycle cost was probably lower than three generations of forklifted VoIP, PBX, and UC migrations, plus all the consulting. Bell Labs treated switching as a capital asset with a 30-year horizon. The industry now treats it as a revenue stream with a 3-year sales quota.

    Preserving something like this isn't just nostalgia, it's preserving an existence proof: telephony at planetary scale was solved with understandable, serviceable systems that could run for decades. That design philosophy has mostly vanished from commercial practice, but it's still incredibly relevant if you care about building anything that's supposed to outlive the current funding cycle.

    • gjvc 2 days ago | parent

      "UC" == Unified Communications https://en.wikipedia.org/wiki/Unified_communications

  • jakedata 3 days ago

    Visiting Bletchley Park and seeing step-by-step telephone switching equipment repurposed for computing reinforced my appreciation for the brilliance of the telecommunication systems we created in the past 150 years. Packet switching was inevitable and IP everything makes sense in today's world, but something was lost in that transition too. I am glad to see that enthusiasts with the will and means are working to preserve some of that history. -Posted from SC2025-

    • dekhn 3 days ago | parent

      I wanted to learn more about computer hardware in college so I took a class called "Cybernetics" (taught by D. Huffman). I thought we were going to focus on modern stuff, but instead, it was a tour of information theory, which included various mathematical routing concepts (kissing spheres/spherical codes, Karnaugh maps). At the time I thought it was boring, but a couple decades later, when working on Clos topologies, it came in handy.

      Other interesting notes: the invention of telegraphy and improvements to the underlying electrical systems really helped me understand communications in the 1800s better. And reading/watching The Cuckoo's Egg (with the German relay-based telephones) made me appreciate modern digital transistor-based systems.

      Even today, when I work on electrical projects in my garage, I am absolutely blown away with how much people could do with limited understanding and technology 100+ years ago compared to what I'm able to cobble together. I know Newton said he saw farther by standing on the shoulders of giants, but some days I feel like I'm standing on a giant, looking backwards and thinking "I am not worthy".

      • Animats 2 days ago | parent

        When the Bell System broke up, the old guys wrote a 3-volume technical history of the Bell System.[1] So all that is well documented.

        The history of automatic telephony in the Bell System is roughly:

        - Step-by-step switches. 1920s. Very reliable in terms of hardware failures, but about 1% of calls misdirected or failed. Totally distributed. You could remove any switch, and all it would do is reduce the capacity of the system slightly. Too much hardware per line.

        - Panel. 1930s. Scaled better, to large-city central offices. Less hardware per line. Beginnings of common control. Too complex mechanically. Lots of driveshafts, motors, and clutches.

        - Crossbar. 1940s. #5 Crossbar was a big dumb switch fabric controlled by a distributed set of microservices, all built from relays. Most elegant architecture. All reliable wire relays, no more motors and gears. If you have to design high-reliability systems, it's worth knowing how #5 Crossbar worked.

        - 1ESS. 1960s. First US electronic switching. Two mainframe computers (one spare) controlling a big dumb switch fabric. Worked, but clunky.

        - 5ESS. 1980s. Good US electronic switching. Two or more minicomputers controlling a big dumb switch fabric. Very good.

        The Museum of Communications in Seattle has step by step, panel, and crossbar systems all working and interconnected.

        In the entire history of electromechanical switching in the Bell System, no central office was ever fully down for more than 30 minutes for any reason other than a natural disaster and, in one case, a fire in the cable plant. That record has not been maintained in the computer era. It is worth understanding why.

        [1] https://archive.org/details/bellsystem_HistoryOfEngineeringA...

        • kev009 2 days ago | parent

          The more I study the 5E, the more I see it as a multicomputer or distributed system. The minicomputers were responsible for OAM and for orchestrating the symphony over time, but the communications happen across the CM, which implements the Time/Space/Time fabric, and a sea of microcontrollers. I think this clarification is worthwhile because it drives your point about faults in this computer era, and by extension this (micro)services era, home even more: it's much less a mainframe and more a distributed system than commonly chronicled, which can be a harder problem, especially with the tooling back then.
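For readers unfamiliar with the jargon: a Time/Space/Time fabric interchanges time slots on each inbound link, switches between links on a per-slot basis, then interchanges slots again on the way out. A toy Python sketch of the general idea (generic TST, not 5ESS CM code; every name and number here is mine):

```python
def time_stage(frame, slot_map):
    """Time-slot interchange: output slot i carries the sample that
    arrived in input slot slot_map[i] of the same frame."""
    return [frame[slot_map[i]] for i in range(len(frame))]

def space_stage(frames, crosspoints):
    """Per-slot space switch: for each output link o and time slot s,
    take the sample from input link crosspoints[o][s]."""
    n_slots = len(frames[0])
    return [[frames[crosspoints[o][s]][s] for s in range(n_slots)]
            for o in range(len(crosspoints))]

def tst(frames, in_maps, crosspoints, out_maps):
    """Time/Space/Time: interchange slots on each inbound link, switch
    between links per slot, interchange slots on each outbound link."""
    timed = [time_stage(f, m) for f, m in zip(frames, in_maps)]
    spaced = space_stage(timed, crosspoints)
    return [time_stage(f, m) for f, m in zip(spaced, out_maps)]

# Two links, two slots per frame: route link 0 slot 0 ("A") to
# link 1 slot 1, using the inbound time stage to dodge a slot clash.
frames = [["A", "B"], ["C", "D"]]
out = tst(frames,
          in_maps=[[1, 0], [0, 1]],      # swap slots on inbound link 0
          crosspoints=[[0, 1], [1, 0]],  # out link 1 reads link 0 in slot 1
          out_maps=[[0, 1], [0, 1]])     # outbound stages pass through
# "A" now leaves on link 1, slot 1
```

The point of the first time stage is exactly the clash-dodging shown here: two calls that want the same space-stage slot can be shifted to different slots before they compete for it.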

        • Aloha 2 days ago | parent

          It's actually an 8-volume history (I have all 8 on my shelf); three were just on switching systems. You left out the parallel development to Panel: Rotary.

          The museum in Seattle also has a working 3ESS (likely the only one left in the world), and has recently added a DMS-10 as well.

        • palmotea 2 days ago | parent

          > That record has not been maintained in the computer era. It is worth understanding why.

          Go on.

          • Animats 2 days ago | parent

            Briefly,

            The big dumb switch fabric of #5 Crossbar has no processing power at all, but it has persistent state. The units that have processing power all go down to their ground state at the end of each call processing event, and have no state that persists over transactions. The various processing units (markers, junctors, senders, originating registers, etc.) are all at least duplicated, and usually there's a pool of them. Requests "seize" a unit at random from a pool, the unit does its thing, and the unit is quickly released.

            Units have self-checking, and if they fail, they drop out of their pool and raise an alarm. The call capacity or connection speed of the exchange is reduced but it keeps working. Everything has short hardware stall timers which will prevent some unit failure from hanging the exchange.

            #5 Crossbar has almost no persistent memory. End offices (for connecting subscriber lines) did not log call info. Toll offices did, but that used an output-only paper tape punch. There's so little state in the switch that matching up call start and call end events was done later in a billing office where the paper tape was read.

            The combination of statelessness and resource pools prevented total failure. Errors and unit failures happened occasionally but could not take down the whole switch.

            There's plenty of info about #5 Crossbar online, but 1950s telephony jargon is so different from 2020s server jargon that it's not obvious that #5 Crossbar is a microservices architecture.
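The seize/use/release pattern described above maps cleanly onto modern code. A minimal Python sketch (all names and numbers here are invented for illustration, not modeled on actual crossbar circuits): stateless units are seized at random from a pool, self-check on every transaction, and drop out of service on failure instead of taking the exchange down.

```python
import random

class Unit:
    """A pooled processing unit (a 'marker', 'sender', etc.): it keeps no
    state between transactions, and a failed self-check takes the unit out
    of service and raises an alarm instead of hanging the exchange."""
    def __init__(self, name):
        self.name = name
        self.in_service = True

    def process(self, request):
        # Stand-in for the unit's self-check circuitry: simulate a fault.
        if random.random() < request.get("fault_rate", 0.0):
            self.in_service = False  # drop out of the pool
            raise RuntimeError(f"{self.name}: self-check failed, alarm raised")
        return f"{self.name} handled call to {request['call']}"

class Pool:
    """Seize a unit at random, let it do its thing, release it.
    Failed units reduce capacity; they never stop call processing."""
    def __init__(self, units):
        self.units = units

    def handle(self, request):
        in_service = [u for u in self.units if u.in_service]
        if not in_service:
            raise RuntimeError("pool exhausted: exchange out of capacity")
        return random.choice(in_service).process(request)  # seize, use, release

pool = Pool([Unit(f"marker-{i}") for i in range(4)])
print(pool.handle({"call": "555-0100", "fault_rate": 0.0}))
```

Losing one marker only shrinks the candidate list for the next seizure; the exchange keeps switching calls until the whole pool is gone.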

            • Animats a day ago | parent

              Thinking about this: this is why Erlang, designed for phone switches, is built around small processes that can fail and be restarted.
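Loosely, the Erlang supervisor idea in Python terms (a toy sketch, not OTP semantics; all names are mine): when a stateless worker crashes, throw it away and start a fresh one rather than repairing it in place, the same "drop the faulty unit, seize a fresh one" move the crossbar made in hardware.

```python
def supervise(make_worker, jobs, max_restarts=3):
    """Run jobs through a worker; on a crash, discard the worker and
    start a fresh one (very loosely, Erlang's one_for_one restart)."""
    worker, restarts, results = make_worker(), 0, []
    for job in jobs:
        while True:
            try:
                results.append(worker(job))
                break
            except Exception:
                restarts += 1
                if restarts > max_restarts:
                    raise  # give up and escalate, like a supervisor's limit
                worker = make_worker()  # fresh ground-state worker; retry job
    return results
```

Because the worker carries no state across jobs, a restart loses nothing but the one transaction in flight, which is exactly the property the crossbar pools relied on.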

  • hasbot 2 days ago

    My first development job was as a software developer at Bell Labs in Naperville working on the 5E. I started at the end of 5E4 (the 4th revision) and then worked on 5E5 and 5E6. I went from school writing maybe 1000-line programs to maintaining and enhancing a system comprising millions of lines of code and hundreds of developers. Most of the code itself was very simple, but it was the interactions between modules and switching features that were very complex.

  • kev009 2 days ago

    Author here; surprised to see this here, but happy to see interest in these historical machines.

    I will also plug the Connections Museum, which has an already-neat installation in Seattle and is working on its own 5ESS recovery for display at a new site in Colorado: https://www.youtube.com/watch?v=3X3-xeuGI5o

    • Aloha 2 days ago | parent

      Did you end up bringing up the AM and playing around with it?

      • kev009 2 days ago | parent

        I've been squeezed a bit hard since then, so not yet. I did get a couple of DC power supplies with enough capacity to at least get the AM and a few other shelves up, so if things slow down over the holidays I may try to image the disks for a recovery point and then see about doing it.

        • Aloha 12 hours ago | parent

          I'd be interested in helping; I have a more than passing interest in the 3B-series and the 5E. I figured by that point it'd be an emulated 3B, not a real 3B21.

  • palmotea 2 days ago

    > In particular, the machine had an uptime of approximately 35 years including two significant retrofits to newer technology culminating in the current Lucent-dressed 7 R/E configuration...

    Pretty impressive. It makes me sad that the trend is to move away from rock-solid stuff towards more and more unreliable and unpredictable stuff (e.g. LLMs that need constant human monitoring because they mess up so much).

  • g-mork 3 days ago

    Talk about a gargantuan project... also awesome to bag such a thing. He's lucky to even have the resources to store^W warehouse it.

    • userbinator 2 days ago | parent

      It's not that much space in some parts of the US where properties are measured in acres.

  • yborg 3 days ago

    I wonder how many operating 5ESS are left now.

    • Aloha 3 days ago | parent

      A fairly large number. A bigger question is what happens to all the CO buildings once all the copper is turned down.

      There is a huge opportunity about 5 years from now for edge datacenters. You have these buildings with highly reliable power and connectivity; all that's needed is servers that can live in a NEBS environment.

      • kayfox 3 days ago | parent

        COs are already being used for edge datacenters; it's just not been talked about much outside the industry.

      • kjs3 2 days ago | parent

        The CO closest to me was turned into condos. A friend was the general contractor. It was by all accounts a nightmare.

      • bluedino 3 days ago | parent

        Most of those COs are in buildings that don't have all that much space, were built in the '40s and '50s, and likely aren't suitable for that kind of thing. Cooling would be a big deal.

        • Aloha 3 days ago | parent

          I have been in ~15 COs. There is tons of floor space in them, and the only thing telephone switching equipment has done since the '50s is shrink. Beyond that, most existing CO buildings were expanded when electronic switching came about, because they couldn't add the new electronic switches (1/1A/5 ESS) without additional floor space. Cooling is addressed by the requirement for NEBS-compliant equipment.

        • Animats 2 days ago | parent

          The older ones have lots of tall windows. It's the newer windowless ones that cannot be easily repurposed, unless you want to build a data center.

      • bediger4000 2 days ago | parent

        Central offices are everywhere, too. You've driven or walked by any number of them, and the most you noticed was a Bell System logo. The downtown COs in big cities are on expensive real estate.

        • ipdashc 2 days ago | parent

          I want to know so badly what the telcos are still doing with all that space. Some, like mentioned above, are probably edge data centers. I imagine there's a lot of Internet infrastructure in them as well.

          But even the biggest IXP is surely tiny compared to the space required for an electromechanical exchange (that would host human operators as well). Are there just floors and floors of empty space? Like you said, on very expensive downtown real estate?

        • gjvc a day ago | parent

          It makes sense when you realise they are actually machines disguised as buildings.

      • peterbecich 2 days ago | parent

        Relevant: https://www.co-buildings.com/

    • shrubble 2 days ago | parent

      Across the USA? Very likely a few thousand.

  • CaliforniaKarl 3 days ago

    (2024), but still a good read!

    • gjvc 2 days ago | parent

      this date obsession is moronic, especially when we are talking about technology over forty years old. Next time you are tempted to spam the date, wait, and see if conversation still happens without your vital input.

      There are many articles missing a (2025) addition, so get to work!