Asus Ascent GX10 (asus.com)
212 points by jimexp69 6 days ago | 202 comments
  • abtinf6 days ago

    From the FAQ… doesn’t seem promising when they ask and then evade a crucial question.

    > What is the memory bandwidth supported by Ascent GX10? AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.

    • Youden6 days ago |parent

      They seem to have another FAQ here that gives a real answer (273GB/s): https://www.asus.com/us/support/faq/1056142/

      • suprjami6 days ago |parent

        Now we can see why they avoided giving a straight answer.

        File this one in the blue folder like the DGX

        • stogot6 days ago |parent

          Noob here. Why is that number bad?

          • TomatoCo6 days ago |parent

            LLM performance depends on doing a lot of math on a lot of different numbers. For example, if your model has 8 billion parameters, and each parameter is one byte, then at 256GB/s you can't do better than 32 tokens per second. So if you try to load a model that's 80 gigs, you only get 3.2 tokens per second, which is kinda bad for something that costs $3-4k.

            There are newer models called "Mixture of Experts" that are, say, 120B parameters but only use 5B parameters per token (the specific parameters are chosen via a much smaller routing model). That is the kind of model that excels on this machine. Unfortunately, again, those models also work really well when doing hybrid inference, because the GPU can handle the small-but-computationally-complex fully connected layers while the CPU can handle the large-but-computationally-easy expert layers.
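
            A back-of-envelope sketch of that bandwidth ceiling (the ~273GB/s figure and 8-bit weights are assumptions; real throughput will be lower):

              # A bandwidth-bound decoder can't emit tokens faster than it can
              # stream the active weights once per generated token.
              def max_tok_per_s(bandwidth_gb_s, active_params_b, bytes_per_param=1):
                  return bandwidth_gb_s / (active_params_b * bytes_per_param)
              print(max_tok_per_s(273, 8))    # dense 8B at 8-bit      -> ~34 tok/s
              print(max_tok_per_s(273, 80))   # dense ~80GB of weights -> ~3.4 tok/s
              print(max_tok_per_s(273, 5))    # 120B MoE, ~5B active   -> ~55 tok/s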

            This product doesn't really have a niche for inference. Training and prototyping are another story, but I'm a noob on those topics.

          • abtinf6 days ago |parent

            My Mac laptop has 400GB/s bandwidth. LLMs are bandwidth-bound.

          • kennethallen6 days ago |parent

            Running LLMs will be slow and training them is basically out of the question. You can get a Framework Desktop with similar bandwidth for less than a third of the price of this thing (though that isn't NVIDIA).

            • embedding-shape5 days ago |parent

              > Running LLMs will be slow and training them is basically out of the question

              I think it's the reverse, the use case for these boxes are basically training and fine-tuning, not inference.

              • kennethallen4 days ago |parent

                The use case for these boxes is a local NVIDIA development platform before you do your actual training run on your A100 cluster.

          • NaomiLehman6 days ago |parent

            Refurbished M1 MacBooks for $1,500 have more bandwidth, with less latency.

    • fancyfredbot6 days ago |parent

      They have failed to provide answers to other FAQ entries as well. The answers are really awkward and don't read like LLM output, which I'd expect to be much more fluent. Perhaps a model which was lobotomized through FP4 quantisation and "fine tuning" on one of these.

    • LeifCarrotson6 days ago |parent

      It sounds good, but it ultimately fails to comprehend the question: ignoring the word "bandwidth" and just spewing pretty nonsense.

      Which is appropriate, given the applications!

      I see that they mention it uses LPDDR5x, so bandwidth will not be nearly as fast as something using HBM or GDDR7, even if bus width is large.

      Edit: I found elsewhere that the GB10 has a 256-bit LPDDR5X-9400 memory interface, allowing for ~300GB/s of memory bandwidth.
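
      That figure checks out from the bus width and transfer rate alone (a quick sketch, assuming those two numbers are right):

        bus_bits = 256
        transfers_per_s = 9400e6                     # LPDDR5X-9400
        print(bus_bits / 8 * transfers_per_s / 1e9)  # -> ~301 GB/s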

      • tuhgdetzhh6 days ago |parent

        For comparison, the RTX 5090 has a memory bandwidth of 1,792 GB/s. The GX10 will likely be quite disappointing in terms of tokens per second and therefore not well suited for real-time interaction with a state-of-the-art large language model or as a coding assistant.

      • guerrilla6 days ago |parent

        It doesn't sound good at all. It sounds like malicious evasion and marketing bullshit.

        • exe346 days ago |parent

          It gives you a very good idea of the capability of the models you'll be running on it!

          • guerrilla6 days ago |parent

            It doesn't give a good idea of anything. We already know it has 128GB unified memory from the first bullet point on the page.

            • darkwater6 days ago |parent

              GP was subtly implying that the text was written by an LLM (running in the very same Ascent GX10).

              • guerrilla6 days ago |parent

                Ah! Thanks for explaining. haha

              • BikiniPrince6 days ago |parent

                With a little tinkering we can just have the AI gaslight us about its capabilities.

            • epolanski6 days ago |parent

              I think the previous user was making a joke about LLMs spewing nonsense on top of AI BS, thus this product being quite fitting.

    • curvaturearth6 days ago |parent

      Written by a LLM?

  • npalli6 days ago

    Seems this is basically a DGX Spark with 1TB of disk, so about $1,000 cheaper. The DGX Spark has not been received well (at least online; Carmack saying it runs at half the spec, low memory bandwidth, etc.), so perhaps this is a way to reduce buyer's regret: you are out only $3,000 and not $4,000 (with the DGX Spark).

    • simlevesque6 days ago |parent

      Simon Willison seems to like his: https://til.simonwillison.net/llms/codex-spark-gpt-oss

      • npalli6 days ago |parent

        He is very enthusiastic about new things, but even he struggled (for example, the first link is about his out-of-the-box experience with the Spark, and it wasn't a smashing success).

          Should you get one? #
          It’s a bit too early for me to provide a confident recommendation concerning this machine. As indicated above, I’ve had a tough time figuring out how best to put it to use, largely through my own inexperience with CUDA, ARM64 and Ubuntu GPU machines in general.

          The ecosystem improvements in just the past 24 hours have been very reassuring though. I expect it will be clear within a few weeks how well supported this machine is going to be.
      • jandrese6 days ago |parent

        Performance wise it was able to spit out about half of a buggy version of Space Invaders as a single HTML file in roughly a minute.

        • badgersnake6 days ago |parent

          I’m pretty sure I could spit out something that doesn’t work in half a minute.

          • jandrese6 days ago |parent

            Don't undersell it. The game is playable in a browser. The graphics are just blocks, the aliens don't return fire. There are no bunkers. The aliens change colors when they descend to a new level (whoops). But for less than 60 seconds of effort it does include the aliens (who do properly go all the way to the edges, so the strategy of shooting the sides off of the formation still works--not every implementation gets that part right), and it does detect when you have won the game. The tank and the bullets work, and it even maintains the limit on the number of bullets you can have in the air at once. However, the bullets are not destroyed by the aliens so a single shot can wipe out half of a column. It also doesn't have the formation speed up as you destroy the aliens.

            So it is severely underbaked but the base gameplay is there. Roughly what you would expect out of a LLM given only the high level objective. I would expect an hour or so of vibe coding would probably result in something reasonably complete before you started bumping up into the context window. I'm honestly kind of impressed that it worked at all given the minuscule amount of human input that went into that prompt.

            • JohnBooty6 days ago |parent

              I do think that people typically undersell the ability of LLMs as coding assistants!

              I'm not quite sure how impressed to be by the LLM's output here. Surely there are quite a few simple Space Invaders implementations that made it into the training corpus. So the amount of work the LLM did here may have been relatively small; more of a simple regurgitation?

              What do you think?

            • ofalkaed6 days ago |parent

              >The aliens change colors when they descend to a new level (whoops).

              That is how Space Invaders originally worked: it used strips of colored cellophane to give the B&W graphics color, and the aliens moved behind a different colored strip on each level down. So, maybe not a whoops?

              Edit: After some reading, I guess it was the second release of Space Invaders which had the aliens change color as they dropped, first version only used the cellophane for a couple parts of the screen.

        • npalli6 days ago |parent

          I think this is the key: it can do impressive stuff, but it won't be fast. For that, you have to go to an NVIDIA data center / AI Factory.

      • BoredPositron6 days ago |parent

        He likes everything.

      • colordrops6 days ago |parent

        "I don't think I'll use this heavily"

    • cma6 days ago |parent

      Some of the stuff in the Carmack thread made it sound like it could be due to thermals, so maybe it could reach, or come a lot closer to, the advertised spec but not sustain it. If this has better cooling, maybe it does better? I might be off on that.

      • nxobject6 days ago |parent

        I'd love to see how far shucking it and using aftermarket cooling will go. Or perhaps it's hard-throttled for market segmentation purposes?

    • killerstorm5 days ago |parent

      I don't understand DGX Spark hate. It's clearly not about performance (a small, low-TDP device), but ability to experiment with bigger models. I.e. a niche between 5090 and 6000 Pro, and specifically for people who want CUDA

    • justinclift5 days ago |parent

      Wasn't it shown that Carmack just had incorrect expectations, based upon misunderstanding the details of the GPU hardware?

      From rough memory, something along the lines of "it's an RTX, not RTX Pro class of GPU" so the core layout is different from what he was basing his initial expectations upon.

    • sirlancer6 days ago |parent

      Except Carmack, as much as I hate to say it, was simply wrong. If you run the GPU at full throttle then you get the power draw that he reported. However, if you run the CPU AND the GPU at full throttle, then you can draw all the power that’s available.

  • dinkleberg6 days ago

    This is a tangent, but the little pop-up example for their AI chatbot, trying to entice me to use it, was something along the lines of “what are the specs?”

    How great would it be if instead of shoving these bots to help decipher the marketing speak they just had the specs right up front?

    • yndoendo6 days ago |parent

      I find all these popup assistant bots to be a bad user experience.

      No, I don't want to use your assistant, and you are forcing me to pointlessly click on the close button. Sometimes they even hide vital information behind their popup.

      They seem to be the reincarnation of 2000s popups; there to satisfy a business manager versus actually being a useful tool.

    • arcanemachiner6 days ago |parent

      But how would that boost their KPIs for user engagement and AI usage?

      • mey6 days ago |parent

        Why not burn down some trees and show the wrong information instead of putting up a simple table?

  • sparkler1236 days ago

    I had one of these on pre-order/reservation from when they announced the DGX Spark and ended up returning it after a couple of days. I thought I'd give it a shot, though. The 128GB of unified memory was the big selling point (as it is for any of the DGX Spark boxes), but the memory bandwidth was very disappointing. Being able to load a 100B+ parameter model was cool in terms of novelty but not particularly great for local inferencing.

    Also, the software NVIDIA has you install on another machine to use it is garbage. They tried to make it sort of appliance-y, but most people would rather just have SSH work out of the box and go from there. IMO just totally unnecessary. The software aspect was what put me over the edge.

    Maybe the gen 2 will be better, but unless you have a really specific use case that this solves well, buy credits or something somewhere else.

    • mirekrusin6 days ago |parent

      I have a weird feeling that "Spark 2" may have an apple logo on it.

  • mindcrash6 days ago

    ServeTheHome has already benchmarked the DGX Spark architecture against the (very obvious) Ryzen AI Max 395+ with 128G RAM:

    https://www.servethehome.com/nvidia-dgx-spark-review-the-gb1...

    If (and in case of Nvidia that's a big if at the moment) they get their software straight on Linux for once this piece of hardware seems to be something to keep an eye on.

    • canucker20166 days ago |parent

      GMKtec, maker of the EVO-X2 mini-PC that uses a Ryzen AI Max 395+, posted a blog post with a comparison between the DGX Spark and their EVO-X2 miniPC.

      from https://www.gmktec.com/blog/evo-x2-vs-nvidia-dgx-spark-redef... (text taken from https://wccftech.com/forget-nvidia-dgx-spark-amd-strix-halo-... since the GMKtec table was an image, but wccftech converted to an HTML table - EDIT-reformatted to make table look nicer in monospace font w/o tabs)

        Test Model     Metric                           EVO-X2    NVIDIA GB10    Winner
        Llama 3.3 70B  Generation Speed (tok/sec)         4.90           4.67    AMD
                       First Token Response Time (s)      0.86           0.53    NVIDIA
        Qwen3 Coder    Generation Speed (tok/sec)        35.13          38.03    NVIDIA
                       First Token Response Time (s)      0.13           0.42    AMD
        GPT-OSS 20B    Generation Speed (tok/sec)        64.69          60.33    AMD
                       First Token Response Time (s)      0.19           0.44    AMD
        Qwen3 0.6B     Generation Speed (tok/sec)       163.78         174.29    NVIDIA
                       First Token Response Time (s)      0.02           0.03    AMD
      • mindcrash6 days ago |parent

        And additionally, Framework apparently benchmarked GPT-OSS 120B (!) on the maxed-out 395+ Desktop and reached a generation speed of 38.0 tok/sec. Given that Nvidia can't even keep up on a 20B model, I assume they can't keep up on the 120B model as well.

        https://frame.work/nl/en/desktop?tab=machine-learning

        So to me the only thing which seems to be interesting about the Spark atm is the ability to daisy-chain several units together so you can create an InfiniBand-ish network of Sparks at InfiniBand speeds.

        But overall for just plain development and experimentation, and since I don't work at Big AI, I'm pretty sure I would not purchase Nvidia at the moment.

        • aseipp6 days ago |parent

          Unfortunately, comparing tok/sec in a vacuum right now, and especially across weeks of time, is kind of pointless. Everything is still evolving; there were patches within days that bumped GB10 performance by double-digit percentages in some frameworks. You just kind of have to accept things are a moving target.

          For comparison, as of right now, I can run GPT-OSS 120b @ 59 tok/sec, using llama.cpp (revision 395e286bc) and Unsloth dynamic 4-bit quantized models.[1] GPT-OSS 20b @ 88 tok/sec [2]. The MXFP4 variant comes in the same, at ~89 tok/sec[3]. It's probably faster on other frameworks, llama.cpp is known to not be the fastest. I don't know what LM Studio backend they used. All of these numbers put the GB10 well ahead of Strix Halo, if only going by the numbers we see here.

          If the AMD software wasn't also comparatively optimized by the same amount in the same timeframe, then the GB10 would be faster, now. Maybe it was optimized just as much; I don't have a Strix Halo part to compare. But my point is, don't just compare numbers from two various points in time, it's going to be very misleading.

          [1]: https://huggingface.co/unsloth/gpt-oss-120b-GGUF/tree/main/U... [2]: https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main... [3]: https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main...

          • nl6 days ago |parent

            These are valid points but the numbers are still useful as a floor on performance.

            Given Strix Halo is so much cheaper I'd expect more people to work on improving it, but the NVIDIA tools are better so unclear which has more headroom.

            • aseipp6 days ago |parent

              Yeah that's fair. 60 tok/sec on a gpt-oss-120b is certainly nice to know if you should even think about it at all. I'm quite happy with it anyway.

              The pricing is definitely by far the worst part of all of this. I suspect the GB10 still has more perf left on the table, Blackwell has been a rough launch. But I'm not sure it's $2000 better if you're just looking to get a fun little AI machine to do embeddings/vision/LLMs on?

      • EnPissant5 days ago |parent

        This is nonsense. The NVIDIA will slightly win on all generation speed, and be _much_ faster on first token response time.

    • embedding-shape5 days ago |parent

      > If (and in case of Nvidia that's a big if at the moment) they get their software straight on Linux for

      What exactly isn't working for you? The last two/three months I've been almost exclusively doing ML work (+CUDA) with a NVIDIA card on Linux, and everything seems to work out of the box, including debugging/introspection tools and everything else I've tried. As an extra plus, everything runs much faster on Linux than the very same hardware and software does on Windows.

  • canucker20166 days ago

    Dell and Lenovo have product pages for their versions of the DGX Spark.

    Dell:

    https://www.dell.com/en-us/shop/desktop-computers/dell-pro-m...

    - $3,998.99 4TB SSD

    - $3,699.00 2TB SSD

    Lenovo:

    https://www.lenovo.com/us/en/p/workstations/thinkstation-p-s...

    - $3,999.00 4TB SSD

    https://www.lenovo.com/us/en/p/workstations/thinkstation-p-s...

    - $3,539.00 1TB SSD

  • WhitneyLand6 days ago

    GX10 vs MacBook Pro M4 Max:

    - Price: $3k / $5k

    - Memory: same (128GB)

    - Memory bandwidth: ~273GB/s / ~546GB/s

    - SSD: same (1 TB)

    - GPU advantage: ~5x-10x depending on memory bottleneck

    - Network: same 10GbE (via TB)

    - Direct cluster: 200Gb / 80Gb

    - Portable: No / Yes

    - Free Mac included: No / Yes

    - Free monitor: No / Yes

    - Linux out of the box: Yes / No

    - CUDA Dev environment: Yes / No

    • tassadarforaiur6 days ago |parent

      On the networking side, the M4 Max does have Thunderbolt 5, 80Gbps advertised. Would IP over TB not allow for significantly faster interconnects when clustering Macs?

      • wmf6 days ago |parent

        Yes, people use Thunderbolt networking to build Mac AI clusters. The Spark has 200G Ethernet that is even faster though.

      • WhitneyLand6 days ago |parent

        Made the correction to 80Gb/sec thank you.

        W.r.t. IP, the fastest I'm aware of is 25Gb/s via TB5 adapters, like those from Sonnet.

        • tgma6 days ago |parent

          You should not be using an adapter to get IP over Thunderbolt. Just connect a Thunderbolt5 cable to both machines.

          • WhitneyLand6 days ago |parent

            For point to point sure, but if you want to connect multiple machines in an actual fabric you’ll need some kind of network interop.

            The Asus clustering speed is not limited to p2p.

            • tgma6 days ago |parent

              Fair enough. On the other hand, you have more Thunderbolt ports to make up a clique mesh of seven point-to-point Macs.

    • hasperdi6 days ago |parent

      AMD 395+ is more bang for the buck IMO.

      GMKtec EVO-X2 vs GX10 vs MacBook Pro M4 Max

        Price:  $2,199.99 / $3,000 / $5,000
        CPU:  Ryzen AI Max 395+ (Strix Halo, 16C/32T) / NVIDIA Grace Blackwell GB200 Superchip (20-core ARM v9.2) / Apple M4 Max (12C)
        GPU:  Radeon 890M (RDNA3 iGPU) / Integrated Blackwell GPU (up to 1 PFLOP FP4) / 40-core integrated GPU
        Memory:  128GB LPDDR5X / 128GB LPDDR5X unified / 128GB unified
        Memory bandwidth:  ???GB/s / ~500GB/s / ~546GB/s
        SSD:  1TB PCIe 4.0 / 4TB PCIe 5.0 / 1TB NVMe
        GPU advantage:  Similar (EVO-X2 trades blows with GB10 depending on model and framework)
        Network:  2.5GbE / 10GbE / 10GbE (via TB)
        Direct cluster:  40Gb (USB4/TB4) / 200Gb / 80Gb
        Portable:  Semi (compact desktop) / No / Yes
        Free Mac included:  No / No / Yes
        Free monitor:  No / No / Yes
        Linux out of the box:  Yes / Yes / No
        CUDA dev environment:  No (ROCm) / Yes / No
      • wtallis6 days ago |parent

        The DGX Spark, Ascent GX10, and related machines have no relation to NVIDIA Grace Blackwell GB200. The chip they are based on is called GB10, and is architecturally very different from NVIDIA's datacenter solutions, in addition to being vastly smaller and less powerful. They don't have anything resembling the Grace CPU NVIDIA used in Grace Hopper and Grace Blackwell datacenter products. The CPU portion of GB10 is a Mediatek phone chip's CPU complex that metastasized, not NVIDIA's datacenter CPU cut down.

        • tgma6 days ago |parent

          Where does MediaTek come into the picture? Don't they take some ARM Cortex IP directly from ARM just like MediaTek and many others?

          • wtallis5 days ago |parent

            Mediatek is in the picture because NVIDIA outsourced everything in GB10 but the GPU to Mediatek. GB10 is two chiplets, and the larger one is from Mediatek. Yes, Mediatek uses off the shelf ARM CPU core IP, but they still had to make a lot of decisions about how to implement it: how many cores, what cluster and cache arrangements, none of which resemble NVIDIA's Grace CPU.

            • tgma5 days ago |parent

              Thanks for the clarification. I was surprised to learn it is not a single chip; thought they did something akin to Apple Silicon integrating some ARM cores on their main chip. Kind of disappointing: they basically asked MediaTek to build a CPU with an NV-Link I/O.

              • wtallis5 days ago |parent

                The big picture is probably that GB10 is destined to show up in laptops, but NVIDIA couldn't be bothered to do all the boring work of building the rest of the SoC and Mediatek was the cheapest and easiest partner available. It'll eventually be followed by an Intel SoC with NVIDIA providing the GPU chiplet, but in the meantime the Mediatek CPU solution is good enough.

                From NVIDIA's perspective, they need an answer to the growing segment of SoCs with decent sized GPUs and unified memory; their existing solutions at the far end of a PCIe link with a small pool of their own memory just can't work for some important use cases, and providing GPU chiplets to be integrated into other SoCs is how they avoid losing ground in these markets without the expense of building their own full consumer hardware platform and going to war with all of Apple, Intel, AMD, Qualcomm.

    • bigyabai6 days ago |parent

      > Linux out of the box: Yes / No

      For homelab use, this is the only thing that matters to me.

    • josefresco6 days ago |parent

      > Free monitor: No / Yes

      How is the monitor "free" if the Mac costs more?

  • embedding-shape6 days ago

    I wonder why they even added this to the FAQ if they're gonna weasel their way around it and not answer properly?

    > What is the memory bandwidth supported by Ascent GX10?

    > AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.

    Never seen anything like that before. I wonder if this product page is actually done and was ready to be public?

    • skrebbel6 days ago |parent

      Maybe they had a local llm write it but the memory bandwidth was too low for a decent answer.

    • schainks6 days ago |parent

      Taiwanese companies are legendary for producing baller hardware with terrible marketing and documentation that fails to answer important questions. It's like those teams don't talk to each other inside the business.

      Fortunately, their products are also easy to crack open and probe.

      • LtdJorge6 days ago |parent

        Also terrible software and firmware. Examples are the programs for motherboard RGB control from Asus, Asrock, MSI, Gigabyte, etc.

        • schainks4 days ago |parent

          This is a very specific example of something the big BIOS players just don't give a crap about supporting well, either.

          It's a feature that requires two different _companies_ to collaborate to build. Mayhem.

    • moffkalast6 days ago |parent

      It seamlessly combines Nvidia's price gouging and ASUS's shady tactics. God forbid you ever have to RMA it; they'll probably break it and blame it on you.

    • porphyra6 days ago |parent

      Probably LLM slop, but also it's the same GB10 chip as the DGX Spark so why would the memory bandwidth be significantly different?

      • tgma6 days ago |parent

        How is it different from their consumer GPU marketing? They initially have a Founders Edition under the NVIDIA brand, but the ecosystem is supposed to mass-produce. It appears to be the same for the DGX Spark, where PNY produced the NVIDIA-branded unit, and now you're going to see ASUS, Dell, and others make similar PCs under their own brands.

      • baby_souffle6 days ago |parent

        As far as I can tell these are all the same hardware just different enclosures. I'm not sure why Nvidia went this route given that they have a first party device. Usually you only see this when the original manufacturer doesn't want to be in the distribution or support game.

        • 6 days ago |parent
          [deleted]
        • jsheard6 days ago |parent

          If this is anything like their consumer graphics cards, the first-party version will only be available in the dozen or so countries where Nvidia has established direct distribution channels and they'll defer to the third parties everywhere else.

        • jonfw6 days ago |parent

          Distribution channels to orgs or countries that don't buy from nvidia. Ability to cut discounts w/o discounting the Nvidia brand

  • joelthelion6 days ago

    "Nvidia dgx os", ugh. It would be a lot more enticing if that thing could run stock Linux.

    • aseipp6 days ago |parent

      It's just Ubuntu with precanned Nvidia software, otherwise it's a "normal" UEFI + ACPI booting machine, just like any x86 desktop. People have already installed NixOS and Fedora 43, and you can even go ahead and then install CUDA and it will work, too. (You might be able to forgo the nvidia modules and run upstream Mesa+NVK, even.) It's very different from Jetson and much more like a normal x86 desktop.

      The kernel is patched (and maintained by Canonical, not Nvidia) but the patches hanging off their 6.17-next branch didn't look outrageous to me. The main hitch right now is that upstream doesn't have a Realtek r8127 driver for the ethernet controller. There were also some mediatek-related patches that were probably relevant as they designed the CPU die.

      Overall it feels close to full upstream support (to be clear: you CAN boot this system with a fully upstream kernel, today). And booting with UEFI means you can just use the nvidia patches on $YOUR_FAVORITE_DISTRO and reboot, no need to fiddle with or inject the proper device trees or whatever.

      • BoredPositron6 days ago |parent

        I got burned more than once with Nvidia not providing kernel updates straight after release...

        • aseipp6 days ago |parent

          That was also my experience with their Jetson series [1], but my understanding is that these DGX kernels are maintained not by Nvidia but by Canonical, so they operate directly out of their package repos and on Canonical's release and support schedule (e.g. 24.04 supported until 2029). You can already get 6.14 from the package repos, and 6.17 can be built from source and is regularly updated if you follow the Git repositories. It's also not like the system is unusable without patches, and I suspect most will go upstream.

          Based on my experience it feels quite different and much closer to a normal x86 machine, probably intentional. Maybe it helped that Nvidia did not design the full CPU complex, Mediatek did that.

          [1] They even claim that Thor is now fully SBSA compliant (Xavier had UEFI, Orin had better UEFI, and now this) -- which would imply it has full UEFI + ACPI like the Spark. But when I looked at the kernel in their Thor L4T release, it looked like it was still loaded with Jetson-specific SOC drivers on top of a heavy fork of the PREEMPT_RT patch series for Linux 6.8; I did not look too hard, but it still didn't seem ideal. Maybe you can boot a "normal" OS missing most of the actual Jetson-specific peripherals, I guess.

      • joelthelion5 days ago |parent

        Thanks! So, then, they are terrible at marketing it, at least for people like me.

      • blmarket6 days ago |parent

        Wait, x86? you mean arm64?

        • aseipp6 days ago |parent

          It's a bit ambiguous but I can't edit now, sorry. What I meant to say was that it boots using the same mechanism as x86 machines that you are familiar with, not that it is an x86 machine itself.

    • 9front6 days ago |parent

      DGXOS is a customized Ubuntu Noble!

      /etc/os-release:

        PRETTY_NAME="Ubuntu 24.04.3 LTS"
        NAME="Ubuntu"
        VERSION_ID="24.04"
        VERSION="24.04.3 LTS (Noble Numbat)"
        VERSION_CODENAME=noble
        ID=ubuntu
        ID_LIKE=debian
        HOME_URL="https://www.ubuntu.com/"
        SUPPORT_URL="https://help.ubuntu.com/"
        BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
        PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
        UBUNTU_CODENAME=noble
        LOGO=ubuntu-logo
      
      and /etc/dgx-release:

        DGX_NAME="DGX Spark"
        DGX_PRETTY_NAME="NVIDIA DGX Spark"
        DGX_SWBUILD_DATE="2025-09-10-13-50-03"
        DGX_SWBUILD_VERSION="7.2.3"
        DGX_COMMIT_ID="833b4a7"
        DGX_PLATFORM="DGX Server for KVM"
        DGX_SERIAL_NUMBER="Not Specified"
      
      While other Linux distros were already reported to work, some tools provided by Nvidia won't work with Fedora or NixOS. Not yet!

      I couldn't get Nvidia AI Workbench to start on Neon KDE after changing to DISTRIB_ID=Ubuntu in /etc/lsb-release. Neon is based on Ubuntu Noble too.

    • colechristensen6 days ago |parent

      I assume the driver code just isn't in mainline Linux and installing the correct toolchain isn't always easy. Having it turnkey-available is nice, and fundamentally new hardware just isn't going to have day-1 Linux support.

      You're free to lift the kernel and any drivers/libraries and run them on your distribution of choice, it'll just be hacky.

    • simlevesque6 days ago |parent

      Yeah that's a bummer. They do the same for all their boards like the Jetson Nano.

    • porphyra6 days ago |parent

      It's basically just Linux with a custom kernel and CUDA preinstalled.

    • CamperBob26 days ago |parent

      What would be the advantages, exactly?

      • CamperBob26 days ago |parent

        Guess I have my answer.

  • nycdatasci6 days ago

    I ordered one that arrived last week. It seems like a great idea with horrible execution. The UI occasionally shows strange glitches/artifacts as if there's a hardware failure.

    To get a sense for use cases, see the playbooks on this website: https://build.nvidia.com/spark.

    Regarding limited memory bandwidth: my impression is that this is part of the onramp for the DGX Cloud. Heavy lifting/production workloads will still need to be run in the cloud.

    • tcdent6 days ago |parent

      The graphics company has given up on graphics.

  • brian_herman6 days ago

    Couldn't you buy a Mac Ultra with more memory for the same price?

    • jsheard6 days ago |parent

      This Asus box costs $3000, and the cheapest Mac Studio with the same amount of RAM costs $3500, or $3700 if you also match the SSD capacity.

      You do get about twice as much memory bandwidth out of the Mac though.

      • chrsw6 days ago |parent

        What's the cheapest way to get the same memory and memory bandwidth as a Mac Studio but also CUDA support?

        • embedding-shape6 days ago |parent

          CUDA is only on NVIDIA GPUs. I guess an RTX Pro 6000 would get you close; two of them are 192GB in total, with vastly increased memory bandwidth too. Maybe two or four of the older A100/A6000 cards could do the trick too.

          • deeviant6 days ago |parent

            The RTX Pro does not have NVLink, however, because money. Otherwise, people might not have to drop $40,000 for a true inference GPU.

        • 6 days ago |parent
          [deleted]
        • bigyabai6 days ago |parent

          Somehow, it is still cheaper to own 10x RTX 3060s than it is to buy a 120gb Mac.

          • woodson6 days ago |parent

            The Mac will be much smaller and use less power, though.

            • embedding-shape5 days ago |parent

              What do the introspection/debugging tools look like for Apple/Mac hardware when it comes to GPU programming?

            • bigyabai6 days ago |parent

              Would almost be a no-brainer if the Mac GPU wasn't a walled garden.

              • tuna746 days ago |parent

                Is that any different from nVidia?

                • bigyabai6 days ago |parent

                  Yes? Apple does not document their GPUs or provide any avenue for low-level API design. They cut ties with Khronos, refuse to implement open GPU standards and deliberately funnel developers into a proprietary and non-portable raster API.

                  Nvidia cooperates with Khronos, implements open-source and proprietary APIs simultaneously, documents their GPU hardware, and directly supports community reverse-engineering projects like nouveau and NOVA with their salaried engineers.

                  Pretty much the only proprietary part is CUDA, and Nvidia emphatically supports the CUDA alternatives. Apple doesn't even let you run them.

          • 6 days ago |parent
            [deleted]
        • 6 days ago |parent
          [deleted]
      • Someone12346 days ago |parent

        The resale value shouldn't be ignored either; that Mac Studio will definitely resell for significantly more than this will. Not least because the Mac Studio is useful in all kinds of industries, whereas this is quite niche.

      • brian_herman6 days ago |parent

        Oh, thanks for clarifying!

    • simlevesque6 days ago |parent

      Cuda is king

      • MangoToupe6 days ago |parent

        Still? Really? Why?

        • baby_souffle6 days ago |parent

          Inertia. Almost everybody else was asleep at the wheel for the last decade and you do not catch up to that kind of sustained investment overnight.

        • whywhywhywhy6 days ago |parent

          Better support than MPS and nothing Apple is shipping today can compete with even the high end consumer CUDA devices in actual speed.

          • MangoToupe6 days ago |parent

            Presumably the second point is irrelevant if you're choosing among devices with unified memory.

            • bigyabai6 days ago |parent

              It is not. Unified memory is not a panacea, it says nothing about the compute performance of the hardware.

              The Spark's GPU gets ~4x the FP16 compute performance of an M3 Ultra GPU on less than half the Mac Studio's total TDP.

              • MangoToupe6 days ago |parent

                right, but that doesn't describe a "high end consumer CUDA device". Nothing under that description has unified memory.

                • bigyabai6 days ago |parent

                  Every CUDA-compatible GPU has had support for unified memory since 2014: https://developer.nvidia.com/blog/unified-memory-cuda-beginn...

                  Can you be a bit more specific what technology you're actually referring to? "Unified memory" is just a marketing term, you could mean unified address space, dual-use memory controllers, SOC integration or Northbridge coprocessors. All are technologies that Nvidia has shipped in consumer products at one point or another, though (Nintendo Switch, Tegra Infotainment, 200X MacBook to name a few).

                  • nl6 days ago |parent

                    They mean the ability to run a large model entirely on the GPU without paging it out of a separate memory system.

                    • bigyabai6 days ago |parent

                      They're basically describing the Jetson and Tegra lineup, then. Those were featured in several high-end consumer devices, like smart-cars and the Nintendo Switch.

                      • nl6 days ago |parent

                        Sure but neither had enough memory to be useful for large LLMs.

                        And neither were really consumer offerings.

            • whywhywhywhy6 days ago |parent

              Depends if you care how fast the result arrives. Imagery gen is a very different tool at <12 seconds an image vs nearer to 1 minute.

        • embedding-shape6 days ago |parent

          For how shit it all is, it's still the easiest to use, with the most available resources for when you inevitably need to dig through stuff. Just things like the Nsight GUI and the available debugging options end up making for a better developer experience compared to other ecosystems. I do hope the competitors get better though, because the current de facto monopoly helps no one.

    • aljgz6 days ago |parent

      My reasons for not choosing an Apple product for such a use-case:

      1- I vote with my wallet: do I want to pay a company to be my digital overlord, doing everything they can to keep me inside their ecosystem? I put in too much effort to earn my freedom to give it up that easily.

      2- Software: Almost certainly, I would want to run linux on this. Do I want to have something that has or eventually will have great mainstream linux support, or something with closed specs that people in Asahi try to support with incredible skills and effort? I prefer the system with openly available specs.

      I've extensively used Macs, iPhones, and iPads over time. The only Apple device I ever bought was an iPad, and I would never have bought it if I had known they deliberately disable multitasking on it.

      • dbtc6 days ago |parent

        Not disagreeing with any of your points, but this is a good trend right?

        https://github.com/apple/container

        > container is a tool that you can use to create and run Linux containers as lightweight virtual machines on your Mac. It's written in Swift, and optimized for Apple silicon.

        • bigyabai6 days ago |parent

          That would have been an impressive piece of technology in 2015, when WSL was theoretical. To release it in 2025 is a very bad trend, and it reflects Apple's isolation from competition and reluctance to officially support basic dev features.

          Container does nothing to progress the state of supporting Linux on Apple Silicon. It does not replace macOS, iBoot or the other proprietary, undocumented or opaque software blobs on the system. All it does is keep people using macOS and purchasing Apple products and viewing Apple advertisements.

  • dang6 days ago

    One past related thread. Any others?

    The Asus Ascent GX10 a Nvidia GB10 Mini PC with 128GB of Memory and 200GbE - https://news.ycombinator.com/item?id=43425935 - March 2025 (50 comments)

    Edit: added via wmf's comment below:

    "DGX Spark has only half the advertised performance" - https://news.ycombinator.com/item?id=45739844 - Oct 2025 (24 comments)

    Nvidia DGX Spark: When benchmark numbers meet production reality - https://news.ycombinator.com/item?id=45713835 - Oct 2025 (117 comments)

    Nvidia DGX Spark and Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0 - https://news.ycombinator.com/item?id=45611912 - Oct 2025 (20 comments)

    Nvidia DGX Spark: great hardware, early days for the ecosystem - https://news.ycombinator.com/item?id=45586776 - Oct 2025 (111 comments)

    NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference - https://news.ycombinator.com/item?id=45575127 - Oct 2025 (93 comments)

    Nvidia DGX Spark - https://news.ycombinator.com/item?id=45008434 - Aug 2025 (207 comments)

    Nvidia DGX Spark - https://news.ycombinator.com/item?id=43409281 - March 2025 (10 comments)

    • wmf6 days ago |parent

      It's the same as DGX Spark so there are several:

      https://news.ycombinator.com/item?id=45586776

      https://news.ycombinator.com/item?id=45008434

      https://news.ycombinator.com/item?id=45713835

      https://news.ycombinator.com/item?id=45575127

      https://news.ycombinator.com/item?id=45611912

      https://news.ycombinator.com/item?id=43409281

      https://news.ycombinator.com/item?id=45739844

      • dang6 days ago |parent

        Thanks! Added above.

  • qwertox6 days ago

    My hope was to find a system which does ASR, then LLM processing with MCP use and finally TTS: "Put X on my todo list" / "Mark X as done" -> LLM thinks, reads the todo list, edits the todo list, and tells me "I added X to your todo list", ... "Turn all the lights off" -> llm thinks and uses MCP to turn off the lights -> "Lights have been turned off". "Send me an email at 8pm reminding me to do" .... "Email has been scheduled for 8pm"

    That's all I want. It does not have to be fast, but it must be capable of doing all of that.

    Oh, and it should be energy efficient. Very important for a 24/7 machine.

    • mindcrash6 days ago |parent

      You can already do that on most desktop GPUs (even going as far back as prev-gen Nvidia 1050/1060/1070, for example).

      You'll need a model able to work with tools, like Llama 3.2 (https://huggingface.co/meta-llama), serve it, hook up MCPs, include an STT interface, and you're cooking.
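
      A minimal sketch of the tool-calling half against a local OpenAI-compatible server (the endpoint, model name, and set_lights tool here are placeholders/assumptions; wiring the call through to MCP or your actual lights is up to you):

        import json, urllib.request

        payload = {
            "model": "llama3.2",  # assumed local model name
            "messages": [{"role": "user", "content": "Turn all the lights off"}],
            "tools": [{"type": "function", "function": {
                "name": "set_lights",  # hypothetical tool exposed to the model
                "description": "Turn the lights on or off",
                "parameters": {"type": "object",
                               "properties": {"state": {"type": "string", "enum": ["on", "off"]}},
                               "required": ["state"]}}}],
        }
        req = urllib.request.Request(
            "http://localhost:11434/v1/chat/completions",  # assumed llama.cpp/Ollama-style endpoint
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        msg = json.load(urllib.request.urlopen(req))["choices"][0]["message"]
        for call in msg.get("tool_calls") or []:
            # dispatch to your MCP server / home-automation bridge here
            print(call["function"]["name"], json.loads(call["function"]["arguments"]))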

      • bayindirh6 days ago |parent

        Even a bottom of the barrel N95 has audio acceleration features helping with speech to text, but the LLM inference part still will be far from being efficient.

        Plus, you need to keep the card at "ready" state, you can't idle/standby it completely.

    • bayindirh6 days ago |parent

      Energy-efficient LLM inference (for now) is as realistic as the existence of perpetual motion.

      • qwertox6 days ago |parent

        I wouldn't mind if it burns 200 watts while it does the task, as long as it idles at below 30W

        • bayindirh6 days ago |parent

          NVIDIA H200 idles at 75 watts. I'm not keeping my hopes high on that, either.

          • janstice6 days ago |parent

            To be fair, if I paid $30k+ for an H200, I’d want it to be making money 24/7 rather than idling, so the idle power draw would be strictly theoretical.

            • bayindirh6 days ago |parent

              It's not always about money.

    • nl6 days ago |parent

      You probably can do this now. Non-generative LLMs don't need to be as big so something like Gemma 4B on the CPU will work.

      You may have better results with semi-templated responses though.

  • irusensei6 days ago

    Why does every computer listing nowadays look the same, with the glowing golden and blue chip images and the dynamic images that appear when you scroll down?

    Please give me a good old HTML table with specs, will ya?

    • malfist6 days ago |parent

      But the ai chatbot popup suggests you can conversationally ask for the specs

  • Aurornis6 days ago

    These are primarily useful for developing CUDA targeted code on something that sits on your desk and has a lot of RAM.

    They're not the best choice for anyone who wants to run LLMs as fast and cheap as possible at home. Think of it like a developer tool.

    These boxes are confusing the internet because they've let the marketing teams run wild (or at least the marketing LLMs run wild) trying to make them out to be something everyone should want.

  • whatever16 days ago

    Any good ideas for what these can be used for?

    I am still trying to think of a use case that a Ryzen AI Max/MacBook or a plain gaming GPU cannot cover.

    • aseipp6 days ago |parent

      It's very, very good as an ARM Linux development machine; the Cortex-X925s are Zen5 class (with per-core L2 caches twice as big, even!) and it has a lot of them; the small cores aren't slouches either (around Apple M1 levels of perf IIRC?) GB10 might legitimately be the best high-performance Linux-compatible ARM workstation you can buy right now, and as a bonus it comes with a decent GPU.

    • addaon6 days ago |parent

      Laptop-class bandwidth without that annoying portability.

    • MurkyLabs6 days ago |parent

      A GPU cluster would work better, but if you're only testing things out using CUDA and want 200Gb networking and somewhat low power all in one, this would be the device for you.

    • cmrdporcupine6 days ago |parent

      AI stuff aside I'm frankly happy to see workstation-class AArch64 hardware available to regular consumers.

      Last few jobs I've had were for binaries compiled to target ARM AArch64 SBC devices, and cross compiling was sometimes annoying, and you couldn't truly eat your own dogfood on workstations as there's subtle things around atomics and memory consistency guarantees that differ between ISAs.

      Mac M series machines are an option except that then you're not running Linux, except in VM, and then that's awkward too. Or Asahi which comes with its own constraints.

      Having a beefy ARM machine at my desk natively running Linux would have pleased me greatly. Especially if my employer was paying for it.

  • simlevesque6 days ago

    I really wish I had the kind of money to try my hands at it.

    • hamdingers6 days ago |parent

      You can rent GPUs from many providers for a few bucks an hour.

      • uyzstvqs6 days ago |parent

        Even cheaper, unless you want the really high-end enterprise stuff. You can run ComfyUI pretty comfy for $0.30 to $0.40 per hour, if AI art is your goal.

  • lend0006 days ago

    Is there something similar with twice the memory/bandwidth? That's a use case I would seriously consider: running any frontier open-source model locally at usable speed. 128GB is almost enough.

    • wmf6 days ago |parent

      I should also mention that if you want twice the performance of DGX Spark you can buy... two Sparks and link them together.

    • wmf6 days ago |parent

      Mac Studio

      • bigyabai6 days ago |parent

        Even an M3 Ultra won't put up similar GPU compute to a DGX Spark: https://blog.exolabs.net/nvidia-dgx-spark/

        Fill up the memory with a large model, and most of your memory bandwidth will be waiting on compute shaders. Seems like a waste of $5,000 but you do you.

  • maxbaines6 days ago

    Looks like a pretty useful offering: 128GB of unified memory, with the ability to be chained. In the UK the release price looks to be £2999.99. Nice to see AI inference becoming available to us all, rather than using a GPU (3090 etc.).

    https://www.scan.co.uk/products/asus-ascent-gx10-desktop-ai-...

    • atwrk6 days ago |parent

      All Sparks only have a memory bandwidth of 270 GB/s though (about the same as the Ryzen AI Max+ 395), while the 3090 has 930 GB/s.

      (Edit: GB of course, not MB, thanks buildbot)

      • postalrat6 days ago |parent

        The 3090 also has 24GB of RAM vs 128GB for the Spark.

        • Gracana6 days ago |parent

          You'd have to be doing something where the unified memory is specifically necessary, and it's okay that it's slow. If all you want is to run large LLMs slowly, you can do that with split CPU/GPU inference using a normal desktop and a 3090, with the added benefit that a smaller model that fits in the 3090 is going to be blazing fast compared to the same model on the spark.
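
          For reference, a minimal sketch of that CPU/GPU split with llama-cpp-python (the model path and layer count are placeholders; n_gpu_layers puts that many layers on the 3090 and the rest run on the CPU):

            from llama_cpp import Llama  # pip install llama-cpp-python (CUDA build)

            llm = Llama(
                model_path="models/some-large-model-q4.gguf",  # placeholder path
                n_gpu_layers=40,  # as many layers as fit in the 3090's 24GB; the rest stay on the CPU
                n_ctx=8192,
            )
            out = llm("Explain memory bandwidth in one sentence.", max_tokens=64)
            print(out["choices"][0]["text"])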

      • buildbot6 days ago |parent

        I believe you mean GB/s?

      • Jackson__6 days ago |parent

        Eh, this is way overblown IMO. The product page claims this is for training, and as long as you crank your batch size high enough you will not run into memory bandwidth constraints.

        I've fine-tuned diffusion models streaming from an SSD without a noticeable speed penalty at a high enough batch size.
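
        A crude arithmetic-intensity sketch of why batch size helps (every number below is an assumption, and it ignores activation/gradient/optimizer traffic):

          params    = 8e9      # assumed 8B-parameter model
          bytes_ea  = 2        # bf16 weights
          flops     = 100e12   # assumed sustained training FLOP/s, well below the FP4 peak
          bandwidth = 273e9    # bytes/s
          stream_s = params * bytes_ea / bandwidth     # time to read the weights once
          for tokens in (1, 64, 1024):
              compute_s = 6 * params * tokens / flops  # ~6 FLOPs per parameter per trained token
              print(tokens, "compute-bound" if compute_s > stream_s else "bandwidth-bound")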

    • cmxch6 days ago |parent

      At that price (roughly 4000 USD), one could build a full HBM powered Xeon system from the Sapphire Rapids generation.

      Either build a single socket system and give it some DDR5 to work alongside, or go dual socket and a bit less DDR5 memory.

    • BoredPositron6 days ago |parent

      I would hold my horses and see if the specs are actually true and not overblown like for the Spark; otherwise there are better options.

      • eightysixfour6 days ago |parent

        This is a Spark, so it is not going to be any different.

      • exasperaited6 days ago |parent

        And if waiting six months is possible, do that.

        Asus make some really useful things, but the v1 Tinker Board was really a bit problem-ridden, for example. This is similarly way out on the edge of their expertise; I'm not sure I'd buy an out-there Asus v1 product this expensive.

  • cbsmith6 days ago

    This bit of the FAQ was such a non-answer to their own question, you really have to wonder:

    >> What is the memory bandwidth supported by Ascent GX10?

    > AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.

    • palmotea6 days ago |parent

      > This bit of the FAQ was such a non-answer to their own FAQ, you really have to wonder:

      You don't have to wonder: I bet they're using generative AI to speed up delivery velocity.

      • cbsmith6 days ago |parent

        I guess that's the kindest possible interpretation. The other interpretation is that the answer is not a good one.

        • palmotea5 days ago |parent

          > I guess that's the kindest possible interpretation. The other interpretation is that the answer is not a good one.

          If they wanted to do that, they should have just omitted the question from their FAQ. An evasive answer in a FAQ is a giant footgun, because it just calls attention to the evasion.

          • cbsmith5 days ago |parent

            It's possible the FAQs were generated by one process and the answers were generated by another.

  • buildbot6 days ago

    Funny to wake up and see this on the front page - I literally just bought a pair last night for work (and play), somewhat on a whim, after comparing the available models. This one was available the soonest and cheapest; CDW is even giving $100 off, so $2,900 pre-tax.

    • binary1326 days ago |parent

      I presume this is not yet in your possession. Please do let us know how it goes.

      • buildbot6 days ago |parent

        Nope not shipped/processed yet even. It was listed as in stock with a realistic number though!

        • binary1325 days ago |parent

          somehow my brain read “bought” as “physically acquired at the store” :)

  • jauntywundrkind6 days ago

    Really interested to see if anyone starts using the fancy high-end ConnectX-7 NIC in these DGX Spark / GB10 derived systems. 200Gbit RDMA is available and would be incredible to see in use here.

  • nik7366 days ago

    Which models will this be able to run at an acceptable token/s rate?

    • simlevesque6 days ago |parent

      gpt-oss:120b

      https://til.simonwillison.net/llms/codex-spark-gpt-oss

      • hamdingers6 days ago |parent

        Am I missing it or is there no information about performance? Looking for a tokens/sec

        • aseipp6 days ago |parent

          Right now I get 59 tok/sec on GPT-OSS 120B using Unsloth's dynamic 4-bit quants, via llama.cpp https://news.ycombinator.com/item?id=45881049

        • simlevesque6 days ago |parent

          He didn't give that info but the transcript linked at the end shows how much time was spent for each query.

  • 6 days ago
    [deleted]
  • DiabloD36 days ago

    What a shame. This would have been a much more powerful machine if it was wrapped around AMD products.

    At least with this, you get to pay both the Nvidia and the Asus tax!

    • wmf6 days ago |parent

      In this case the Asus "tax" is negative $1,000.

  • sneilan16 days ago

    Does anyone have any information on how much this will cost? Or is it one of those products where if you have to ask you can't afford it.

    • sbarre6 days ago |parent

      Lots of existing posts in this discussion talking about prices in various regions and configurations.

  • Stevvo6 days ago

    These AI boxes resemble gaming consoles in both form factor and architecture, which makes me curious if they could make good gaming machines.

    • vinkelhake6 days ago |parent

      That would depend on your idea of "good". It would be an upstream swim in most regards, but you could certainly make it work. The Asahi team has shown that you can get Steam working pretty well on ARM-based machines.

      But if gaming is what you're actually interested in, then it's a pretty terrible buy. You can get a much cheaper x86-based system with a discrete GPU that runs circles around this.

    • Havoc6 days ago |parent

      Likely not. A bit like how the AI-focused cards get their ass kicked by much cheaper gaming cards; the focus has diverged.

      Plus, of course, the software stack for gaming on this isn't available.

      • bigyabai6 days ago |parent

        Eh, I wouldn't be so hasty:

        1) This still has raster hardware, even ray tracing cores. It's not technically an "AI focused card" like the AMD Instinct hardware or Nvidia's P40-style cards.

        2) It kinda does have a stack. ARM is the hardest part to work around, but Box86 will get the older DirectX titles working. The GPU is Vulkan compliant too, so it should be able to leverage Proton/DXVK to accommodate the modern titles that don't break on ARM.

        The tough part is the price. I don't think ARM gaming boxes will draw many people in with worse performance at a higher price.

  • NSUserDefaults6 days ago

    Really looking forward to getting this used for $50 in 6 years just for kicks.

  • oblio6 days ago

    How much does that thing cost? I don't see a price on the page.

  • jdprgm6 days ago

    Memory bandwidth is a joke. You would think by now somebody would come out with a well balanced machine for inference instead of always handicapping one of the important aspects. Feels like a conspiracy.

    At least the M5 Ultra should finally balance things, given the significant improvements to prompt processing in the M5 from what we've seen. Apple has had significantly higher memory bandwidth since the M1 series, now approaching 5 years old. Surely an Nvidia machine like this could have, at bare minimum, 500GB/s+ if they cared in the slightest about competition.

  • amelius6 days ago

    > and support larger models like LLMs

    To turn your petaFLOP into petaSLOP.

  • varispeed6 days ago

    I was really hyped about this, but then I watched videos and it's just meh.

    What is the purpose of this thing?

  • frogperson5 days ago

    That is a seriously infuriating website, at least on mobile anyway.

  • mahirsaid6 days ago

    Is this another product they're pushing out for publicity? I mean, how much testing has been done for this product? We need more specs and testing results to illuminate its capabilities and practicality.

  • 77341286 days ago

    If you touch the image when scrolling on mobile then it opens when you lift your finger. Then when you press the cross in the corner to close the image, the search button behind it is activated.

    How can a serious company not notice these glaring issues in their websites?

    • tomalaci6 days ago |parent

      AI powered business value provider frontend developers.

    • schainks6 days ago |parent

      Taiwanese companies still don't value good software engineering, so talented developers who know how to make money leave. This leaves enterprise darlings like Asus stuck with hiring lower tier talent for numbers that look good to accounting.

    • speedgoose6 days ago |parent

      On desktop, clicking on an image opens it but then you can't close it, and the zoom seems to be glitchy.

      But I'm not surprised, this is ASUS. As a company, they don't really seem to care about software quality.

    • fodkodrasz6 days ago |parent

      Wait until you start using an ASUS computer, and hit the BIOS/UEFI issues...

      I learned the hard way that ASUS translates to "don't buy ever again".

    • the_real_cher6 days ago |parent

      Enshittification.

      It's not that they don't notice.

      They don't care.

      • janlukacs6 days ago |parent

        but it has AI in it.

  • RachelF6 days ago

    These very narrow speed measurements are getting out of hand:

    1 petaFLOP using FP4, that's 4 petaFLOPS using FP1 and infinite petaFLOPS using FP0.