Nvidia is gearing up to sell servers instead of just GPUs and components(tomshardware.com)
177 points by giuliomagnifico 3 days ago | 77 comments
  • alecco 3 days ago

    Guys, please read the article. Yes, NVIDIA already sells servers. What they mean is that they are also going to build the other system parts that the partners currently do.

    > Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with a pre-installed Vera CPU, Rubin GPUs, and a cooling system instead of allowing hyperscalers and ODM partners to build their own motherboards and cooling solutions. This would not be the first time the company has supplied its partners with a partially integrated server sub-assembly: it did so with its GB200 platform when it supplied the whole Bianca board with key components pre-installed. However, at the time, this could be considered as L7 – L8 integration, whereas now the company is reportedly considering going all the way to L10, selling the whole tray assembly — including accelerators, CPU, memory, NICs, power-delivery hardware, midplane interfaces, and liquid-cooling cold plates — as a pre-built, tested module.

  • nijave 3 days ago

    I talked to someone at Nvidia ~2019 or ~2020, and their plan at the time was to completely vertically integrate and sell compute as a service via their own managed data centers, with their own software, drivers, firmware, and hardware, so this seems like just another incremental step in that direction.

    • michaelt 3 days ago | parent

      Maybe in 2019, but I find it hard to believe nvidia looks at google’s TPU business model enviously these days.

      • skybrian 3 days ago | parent

        Are TPUs a bad business model?

        • AnthonyMouse 3 days ago | parent

          Customers are increasingly wary of building their business on someone else's land because they've seen it happen too many times that once you're locked in the price goes up, or the company who is now your only viable supplier decides to enter your own market.

          And at least if anyone can buy the hardware you'll have your own or have multiple competing providers you can lease it from. If you can only lease it and only from one company, who would want to touch that? It's like purposely walking into a trap.

          • skybrian 2 days ago | parent

            Lock-in is a valid concern, but on the other hand, for many apps it seems like this can be fairly easily mitigated? If you can swap in a different LLM, I don't think it matters whether it's running on Google TPUs or NVidia.

            Meanwhile, at the hardware level, TPUs provide some competition for NVidia.

            • AnthonyMouse 2 days ago | parent

              But then what's the case for making them Google-exclusive instead of selling the hardware to anyone who wants to buy one?

          • cyanydeez 3 days ago | parent

            This might be the source of the AI bubble burst, just like the 2000 bubble. Eventually someone's gonna raise the price to cover a bill, and suddenly everyone looks at actual revenue and power bills, calculates that within 6 months they'll not get their MBA turnip-squeezed bonus, and walks away.

    • noir_lord 3 days ago | parent

      That's one way to arrive at an IBM Mainframe like model I guess.

      It'll work until you can buy comparable expansion cards for open systems (if history is any guide).

      • mrbungie 3 days ago | parent

        Yep, tech is incredibly circular. Once Nvidia gets there, it's highly probable that "disruptive" competition will appear due to the sheer desire/pressure for more freedom and options (and knowing NVDA, also costs).

        • cyanydeez 3 days ago | parent

          AMD is already knocking on the same door. If they had focused more on drivers it'd be an equal comparison.

    • amelius 3 days ago | parent

      > vertically integrate

      Sounds like they are going the Apple way. How long until we have to pay 30% to get our apps in their AI-Store?

      • DevKoala 3 days ago | parent

        We already pay a 100% premium on AWS.

        • amelius 3 days ago | parent

          At least that premium is not directly proportional to our revenue.
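          The distinction being drawn here — a fixed hosting markup vs. a cut that scales with revenue — can be sketched as a toy calculation (all numbers invented for illustration):

          ```python
          # Toy comparison (all numbers invented): a fixed infrastructure markup
          # vs. a fee that grows in direct proportion to revenue, like a 30% store cut.
          def hosting_markup(base_cost: float, markup: float = 1.0) -> float:
              """Premium paid on infrastructure, independent of revenue."""
              return base_cost * markup

          def revenue_cut(revenue: float, rate: float = 0.30) -> float:
              """Premium directly proportional to revenue."""
              return revenue * rate

          # Doubling revenue doubles the store-style fee, but not the hosting markup.
          print(hosting_markup(50_000), revenue_cut(1_000_000))  # 50000.0 300000.0
          print(hosting_markup(50_000), revenue_cut(2_000_000))  # 50000.0 600000.0
          ```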

      • diamond559 3 days ago | parent

        More like IBM

    • SV_BubbleTime 3 days ago | parent

      In 2019 or 2020 that probably seemed reasonable.

      Now? You would have to tell me nVidia was also building multiple nuclear power plants to get the scale to make sense.

      • Kye 3 days ago | parent

        They're invested in nuclear power. Pairing datacenters with small modular reactors is at least on the minds of all the AI companies.

        • pishpash 3 days ago | parent

          One day they will build AI out of radioactive source material directly and skip the reactor. Maybe.

  • ecshafer 3 days ago

    Nvidia already sells servers?

    What I don't really get is that Nvidia is worth like $4.5T on $130B revenue. If they want to sell servers, why don't they just buy Dell or HP? If they want CPUs, why not buy AMD, Qualcomm, Broadcom, or TI? (I know they got blocked on their ARM attempt before the AI boom.) Their revenue is too low to support their value, so shouldn't they use this massive value to just buy up companies to up their revenue?

    • jack_tripper 3 days ago | parent

      >why don't they just buy Dell or HP?

      Why buy a complex but relatively low margin business that comes with a lot of baggage they don't need, when they can focus on what they do best and let Dell and HP compete against each other for Nvidia's benefit?

      Same reason why Apple doesn't buy Foxconn or TSMC.

    • btian 3 days ago | parent

      But nVidia already sells servers (NVL72), and CPUs (Grace). Why buy a bunch of overlapping companies?

      And no sane regulator on the planet will allow them to takeover AMD, Qualcomm, or Broadcom.

      • disqard 2 days ago | parent

        A $1m dinner at Mar-a-Lago will take care of that.

        https://www.npr.org/2025/04/09/nx-s1-5356480/nvidia-china-ai...

      • cyanydeez 3 days ago | parent

        Lucky for them Americans have recently gone insane.

    • andreasmetsala 3 days ago | parent

      Sometimes building a new organization is easier than trying to improve a legacy one.

    • mr_toad 3 days ago | parent

      > If they want to sell servers, why don't they just buy Dell or HP?

      They want to sell HPC servers, not general purpose servers.

  • kj4ips 3 days ago

    It's my opinion that nvidia does good engineering at the nanometer scale, but it gets worse the larger it gets. They do a worse job at integrating the same aspeed BMC that (almost) everyone uses than SuperMicro does, and the version of Aptio they tend to ship has almost nothing available in setup. With the price of a DGX, I expect far better. (Insert obligatory bezel grumble here)

  • hvenev 3 days ago

    Don't they already sell servers? https://www.nvidia.com/en-us/data-center/dgx-platform/

    • p4ul 3 days ago | parent

      I had the same reaction. Haven't they been selling DGX boxes for almost 10 years now? And they've been selling the rack-scale NVL72 beast for probably a few years.[1]

      What is changing?

      [1] https://www.nvidia.com/en-us/data-center/gb200-nvl72/

      • reactordev 3 days ago | parent

        Cutting out vendors like SuperMicro or HPE; they're going straight to the customer now.

      • AlanYx 3 days ago | parent

        When nVIDIA sells DGX directly they usually still partner with SuperMicro, etc. for deployment and support. It sounds like they're going to be offering those services in-house now, competing with their resellers on that front.

    • rlupi 3 days ago | parent

      Hyperscalers and similar clients don't use DGX, but their own designs that integrate better with their custom-designed datacenters.

      https://www.nvidia.com/en-us/data-center/products/mgx/

  • foruhar 3 days ago

    This video shows the systems being built and shipped with cooling, cabling, etc.

    It’s pretty mind blowing what this crisis shows from the manipulation of atoms and electrons all the way up to these clusters. Particularly mind blowing for me who has cable management issues with a ten port router.

    https://youtu.be/1la6fMl7xNA?si=eWTVHeGThNgFKMVG

    • dzonga 3 days ago | parent

      what's mind blowing about the video you shared was the amount of copper cable used.

      I thought with fiber we wouldn't need copper cables, maybe just for electricity distribution, but clearly I was wrong.

      thanks for sharing

  • wmf 3 days ago

    I always wondered why a bunch of different companies make identical graphics cards then complain that it's a horrible business and Nvidia is screwing them. I wondered even more strongly when I saw a dozen flavors of the NVL72 rack. If the rack is so complex and difficult to manufacture, why have N companies do redundant work?

    • AnthonyMouse 3 days ago | parent

      Designing the board is a different business from designing the chip. You're negotiating with the DRAM fabs instead of the logic fabs, selling to a thousand retailers instead of a dozen integrators, etc. And once you've done all that, you do it more than once. ASRock isn't just making Nvidia GPUs, they're making AMD and Intel GPUs, motherboards, wireless routers, etc.

      It's more efficient to have companies that specialize in making all kinds of boards than to make each of the companies making chips have to do that too. And it's a competitive market so the margins are low and the chip makers have little incentive to enter it when they can just have someone else do it for little money.

  • modeless 3 days ago

    They're not stopping at servers. They want to sell datacenters.

  • heisenbit 3 days ago

    Competing with your customers can be a risky strategy for a platform provider. If the platform abandons the neutral stance its customers will be a lot more open to alternatives.

  • jpecar 3 days ago

    Servers? I thought they left even racks behind, they're now selling these "AI factories".

  • dmboyd 3 days ago

    Aren’t they already supply constrained? Seems like this would be counterproductive in further limiting supply vs a strategy of commoditizing your complements. This seems closer to PR designed to boost share price rather than a cogent strategy.

    • mikeryan 3 days ago | parent

      Huh. I view it the other way. If you’re supply constrained go straight to the consumer and capture the value that the middlemen building on top of your tech are currently profiting from.

    • MattRix 3 days ago | parent

      They’re only supply constrained on the chips themselves. Selling fully integrated racks allows them to get even more money per chip.

    • energy123 3 days ago | parent

      > Further limiting supply

      Even if they don't increase their GPU production capacity, that's not "limiting" supply. It's keeping it the same. Only now they can sell each unit for a larger profit margin.

    • dboreham 3 days ago | parent

      In MBA-speak this is "capturing more of the value chain".

  • JCM9 3 days ago

    This would basically start to turn cloud providers into CoLo facilities that just host these servers.

    Makes sense longer term for NVidia to build this but adds to the bear case for AWS et al long term on AI infrastructure.

  • thefourthchime 3 days ago

    In a sense, they already do, since they're heavily invested in CoreWeave. For those unfamiliar, CoreWeave was a crypto company that pivoted to building out data centers.

    • zerosizedweasle 3 days ago | parent

      It's interesting to see the market try to do anything to rally. The problem is you guys are rallying on the thought that you've scared the Fed into cutting rates, but by rallying you actually short-circuit it. You ensure they won't cut. And that's how the market's lilypad-hopping thinking is actually just stupidity. You rallied, so now there are no rate cuts, so the crash will be even more brutal.

    • wmf 3 days ago | parent

      GPU "neoclouds" are a different topic than whose logo is on the server.

  • re-thc 3 days ago

    Soon Nvidia will sell AI itself instead of servers.

    • Cthulhu_ 3 days ago | parent

      To a point / by some definitions of the phrase AI they already do: https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

      I wouldn't be surprised if we see some major acquisitions or mergers happening in the next few years by one of the independent AI vendors like OpenAI and Nvidia.

    • michaelbuckbee 3 days ago | parent

      Considering they have a pure service in selling Geforce Now (game streaming), that doesn't seem in any way far fetched.

    • Palomides 3 days ago | parent

      why? selling GPUs is way more profitable

      • giuliomagnifico 3 days ago | parent

        Selling a whole infrastructure is more profitable than selling components, and it also puts the customers in "sandboxes" with you.

      • reactordev 3 days ago | parent

        Why sell when you can rent?

        • re-thc 3 days ago | parent

          That's what Coreweave etc does and Nvidia already invests in them (i.e. has a stake).

        • Palomides 3 days ago | parent

          so you don't get stuck with many billions of dollars in useless GPUs and data centers when the bubble pops

  • idatum 3 days ago

    Can anyone comment on wafer-scale systems, multiple equivalent chips on an entire wafer?

    Seems like where things are heading?

    • wmf 3 days ago | parent

      Only Cerebras is doing wafer-scale. It seems to be working for them but no one is copying them. The minimum unit (one wafer) costs millions and it's not clear how good their multi-wafer scaling is.

      • matthews3 2 days ago | parent

        A 300mm wafer on a recent process node (TSMC N3) is estimated to be around $20k at quantity[1]. I don't know what kind of testing and crazy packaging processes would cost for a wafer-scale chip, but I can't imagine it would put the price anywhere near the millions.

        [1]: https://www.tomshardware.com/tech-industry/tsmcs-wafer-prici...
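        To put rough numbers on the comment above — a minimal back-of-envelope sketch, where only the ~$20k wafer price comes from [1] and the yield and packaging figures are pure assumptions for illustration:

        ```python
        # Back-of-envelope cost for one wafer-scale module.
        # Only the ~$20k N3 wafer price comes from [1]; the other
        # figures below are assumptions, not vendor data.
        wafer_cost = 20_000          # est. TSMC N3 300 mm wafer price [1]
        usable_fraction = 0.7        # assume defect-tolerant design keeps ~70% of cores
        test_packaging_mult = 5      # assume test + exotic packaging at 5x silicon cost

        module_cost = wafer_cost / usable_fraction * test_packaging_mult
        print(f"~${module_cost:,.0f} per module")  # ~$142,857 — well short of millions
        ```

        Even with generous multipliers, the silicon-plus-packaging cost lands in the low six figures, which supports the point that the sale price (not the build cost) is what reaches into the millions.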

  • thesuperbigfrog 3 days ago

    What software will those Nvidia servers run?

    Are they creating their own software stack or working with one or more partners?

    • kj4ips 3 days ago | parent

      They have an Ubuntu derivative called DGX OS that they use on their current lines.

      • overfeed 3 days ago | parent

        I wonder which [publicly listed] companies would look at the abandonment of Jetson and still commit to having Nvidia set the depreciation schedule for them.

    • nijave 3 days ago | parent

      They already have a pretty robust software stack that goes all the way up to code/analytics libraries. I'm not sure of the current state of things, but ~2020 they were automatically testing chip designs for performance regressions in analytics libraries, across the entire stack from hardware to each piece of software.

  • m_ke 3 days ago

    Soon OpenAI will make its own chips and Nvidia its own foundational models

  • 2OEH8eoCRo0 3 days ago

    Why would they chase a lower margin business area? Are they out of ideas?

    • nijave 3 days ago | parent

      More vertical integration

      • 2OEH8eoCRo0 3 days ago | parent

        Like IBM?

        • wmf 3 days ago | parent

          Like IBM in the 1960s.

  • alberth 3 days ago

    How does their failed attempt to acquire ARM impact this?

    • wmf 3 days ago | parent

      It doesn't.

  • btown 3 days ago

    “Nobody gets fired for choosing NVIDIA.”

  • czbond 3 days ago

    Didn't they watch Silicon Valley to learn that lesson? Don't sell the box.

  • lvl155 3 days ago

    We’re not far from Nvidia exclusively bundling ChatGPT. It’s a classic playbook from Microsoft.

    • gruturo 3 days ago | parent

      ChatGPT doesn't really have much of a moat. If it becomes Microsoft or Nvidia exclusive, it just opens an opportunity for its competitors. I barely notice which LLM I'm using unless it's something super specific where one is known to be stronger.

    • mcintyre1994 3 days ago | parent

      I'm pretty sure they'd like to keep selling chips to all of OpenAI's competitors too.

    • ptero 3 days ago | parent

      Chatgpt is not the only game in town. Any exclusivity deal will likely backfire against chatgpt.

    • MangoToupe 3 days ago | parent

      Why would Nvidia ever agree to that?

  • 3 days ago
    [deleted]