I'm looking at my Jetson Nano in the corner, which is fulfilling its post-retirement role as a paperweight because Nvidia abandoned it within 4 years.
The Nvidia Jetson Nano, an SBC for "AI", debuted with an already-aging custom Ubuntu 18.04. When 18.04 went EOL, Nvidia abandoned it completely, with no further updates to its proprietary JetPack or drivers, and without those the whole machine learning stack — CUDA, PyTorch, etc. — became useless.
I'll never buy an SBC from Nvidia unless all the SW support is upstreamed to the Linux kernel.
This is a very important point.
In general, Nvidia's relationship with Linux has been... complicated. On the one hand, at least they offer drivers for it. On the other, I have found few more reliable ways to irreparably break a Linux installation than trying to install or upgrade those drivers. They don't seem to prioritize Linux as a first-class citizen, more just tolerate it to the bare minimum required to claim it works.
For those unfamiliar with Linus Torvalds' two-word opinion of Nvidia:

> Nvidia's relationship with Linux has been... complicated.
Wow. Torvalds' distaste for Nvidia in that (albeit 12-year-old) clip leaves little to the imagination. Re: gaming GPUs, Windows is their main OS, but is that the main reason why Huang only mentioned Windows in his CES 2025 keynote? Their gaming chips are a small portion of the company now, but they want to focus dev on Windows??
Nvidia has its own Linux distribution, DGX OS, based on Ubuntu LTS, but installing other Linux distros on machines with Nvidia GPUs is less than ideal.
Now that the majority of their revenue is from data centers instead of Windows gaming PCs, you'd think their relationship with Linux should improve or already has.
Nvidia segments its big-iron AI hardware from the consumer/prosumer segment. They do this by forbidding the use of GeForce drivers in datacenters[1]. All that to say, it is possible for the H100 to have excellent Linux support while support for the 4090 is awful.
1. https://www.datacenterdynamics.com/en/news/nvidia-updates-ge...
They have been making real improvements the last few years. Most of their proprietary driver code is in firmware now, and the kernel driver is open-source[1] (the userland-side is still closed though).
They've also significantly improved support for Wayland and stopped trying to force EGLStreams on the community. Wayland + Nvidia works quite well now, especially after they added explicit sync support.
Because Red Hat announced that the next RHEL is going to be Wayland-only; that's why they fixed everything you mentioned. They don't care about users, only servers.
>complicated
... as in, remember the time a ransomware hacker outfit demanded they release the drivers or else...
https://www.webpronews.com/open-source-drivers-or-else-nvidi...
It's possible. I haven't had a system completely destroyed by Nvidia in the last few years, but I've been assuming that's because I've gotten in the habit of just not touching it once I get it working...
I have been having a fine time with a 3080 on recent Arch, FWIW.
HDR support is still painful, but that seems to be a Linux problem, not specific to Nvidia.
I update drivers regularly. I've only had one display failure, and it was solved by a simple rollback. To be a bit fair (:/), it was specifically a combination of a new beta driver and a newer kernel. It's definitely improved a ton since 10 years ago, when I just would not update them except very carefully.
I've bricked multiple systems just running apt install on the Nvidia drivers. I have no idea how, but I run the installation, everything works fine, and then when I reboot I can't even boot.
That was years ago, but it happened multiple times and I've been very cautious ever since.
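For what it's worth, a common failure mode with apt installs is the DKMS module not getting built for the running kernel, or Secure Boot refusing to load the unsigned module, so the machine comes back up with no display. A rough sketch of the sanity checks I'd do on an Ubuntu-family system before rebooting (assuming the ubuntu-drivers tooling is present; exact package versions vary):

    # Let the distro tooling pick the driver instead of mixing .run installers and apt packages
    ubuntu-drivers devices
    sudo ubuntu-drivers autoinstall

    # Before rebooting: confirm the kernel module actually built for the running kernel
    dkms status | grep -i nvidia

    # And check whether Secure Boot will refuse to load an unsigned module
    mokutil --sb-state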
Interesting. I've never had that issue (~15 years experience), but I've always had CPUs with integrated graphics. Do you think that might be it? The danger zone was always at `startx` and never before. (I still buy CPUs with integrated graphics because I think it is always good to have a fallback, and hey, sometimes I want to sacrifice graphics for GPU compute :)
I've had a similar experience. I really prefer switching CUDA versions by switching the whole PC. What's more, the speed and memory of the hardware improve quickly over time as well.
The Digits device runs the same Nvidia DGX OS (Nvidia's custom Ubuntu distro) that they run on their cloud infra.
I've had a similar experience, my Xavier NX stopped working after the last update and now it's just collecting dust. To be honest, I've found the Nvidia SBC to be more of a hassle than it's worth.
Xavier AGX owner here to report the same.
My Jetson TX2 developer kit didn't stop working, but it's on a very out of date Linux distribution.
Maybe if Nvidia makes it to four trillion in market cap they'll have enough spare change to keep these older boards properly supported, or at least upstream all the needed support.
Back in 2018 I was involved in product development based on the TX2. I had to untangle the entire nasty mess of Bash and Python spaghetti that is the JetPack SDK to get everything sensibly integrated into our custom firmware build system and workflow (no, copying your application files over a prebaked rootfs on a running board is absolutely NOT how it's normally done). You basically need a few deb packages with Nvidia libs for your userspace, plus a few binaries swiped from JetPack that have to be run with like 20 undocumented arguments in the right order to do the rest (image assembly, flashing, signing, secure boot stuff, etc.); the rest of the system could be anything. Right when I was finished, a 3rd-party Yocto layer appeared implementing essentially the same stuff I came up with, and the world could finally forget about the horrors of JetPack for good. I also heard that it has somewhat improved since, but I have not touched any Nvidia SoCs after that (due to both trauma and moving to a different field).
Are you aware that mainline linux runs on these Jetson devices? It's a bit of annoying work, but you can be running ArchLinuxARM.
https://github.com/archlinuxarm/PKGBUILDs/pull/1580
Edit: It's been a while since I did this, but I had to manually build the kernel, maybe overwrite a dtb file (and Linux_for_Tegra/bootloader/l4t_initrd.img), and run something like this (for Xavier):
sudo ./flash.sh -N 128.30.84.100:/srv/arch -K /home/aeden/out/Image -d /home/aeden/out/tegra194-p2972-0000.dtb jetson-xavier eth0
How close does any of that get a person to having Ubuntu 24.04 running on their board?
(I guess we can put aside the issue of Nvidia's closed source graphics drivers for the moment)
You could install Ubuntu 24.04 using debootstrap. That would just get you the user space, though, you'd still have to build your own kernel image.
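Roughly, as a sketch (assuming an arm64 host, or qemu-user-static if you're cross-bootstrapping from x86; the release name and mirror are just the standard Ubuntu ports ones):

    # Bootstrap a minimal Ubuntu 24.04 (noble) arm64 userspace into a directory
    sudo debootstrap --arch=arm64 noble /mnt/rootfs http://ports.ubuntu.com/ubuntu-ports

    # Then chroot in to set up users, fstab, network config, etc.
    sudo chroot /mnt/rootfs /bin/bash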
Isn't the Jetson line more of an embedded line and not an end-user desktop? Why would you run Ubuntu?
Jetsons are embedded devices that run Ubuntu. Ubuntu is the OS they ship with.
The Jetson TX2 developer kit makes a very nice developer machine - an ARM64 machine with good graphics acceleration, CUDA, etc.
In any case, Ubuntu is what it comes with.
If you spent enough time and energy on it, I'm fairly confident you could get the newest Ubuntu running. You'd have to build your own kernel, manually generate the initramfs, figure out how flashing works, and then flash it. You'd probably run into stupid little problems, like the partition table the flash script makes not allocating enough space for the kernel you've built. I'm sure there would be hiccups, at the very least, but everything's out there to do it.
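For anyone curious, the broad strokes would look something like this (a sketch only; paths are illustrative, and the flashing itself still goes through the L4T BSP's flash.sh as shown upthread):

    # Cross-compile an arm64 kernel plus device trees on an x86 host
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image dtbs modules

    # Install the modules into the target rootfs before regenerating its initramfs
    sudo make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=/mnt/rootfs modules_install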
Wait, my AGX is still working, but I have kept it offline and away from updates. Do the updates kill it? Or is it a case of not supporting newer pytorch or something else you need?
Xavier AGX is awesome for running ESXi aarch64 edition, including aarch64 Windows vms
The Orin series and later use UEFI, and you can apparently run upstream, non-GPU-enabled kernels on them; there's a user guide page documenting it. So I think it's gotten a lot better, but it's sort of moot, because the non-GPU limitation exists because the JetPack Linux fork has a specific 'nvgpu' driver for Tegra devices that hasn't been unforked from that tree. So you can buy better alternatives, unless you're explicitly doing the robotics + AI inference edge stuff.
But the impression I get from this device is that it's closer in spirit to the Grace Hopper/datacenter designs than to the Tegra designs, due to the naming, the design (DGX style), and the software (DGX OS?) which goes on their workstation/server designs. Those are also UEFI, and in those scenarios you can (I believe?) use the upstream Linux kernel with the open-source Nvidia driver on whatever distro you like. In that case, this would be a much more "familiar" machine with a much more ordinary Linux experience. But who knows. Maybe GH200/GB200 need custom patches, too.
Time will tell, but if this is a good GPU paired with a good ARM Cortex design, and it works more like a traditional Linux box than the Jetson series, it may be a great local AI inference machine.
AGX also has UEFI firmware which allows you to install ESXi. Then you can install any generic EFI arm64 iso in a VM with no problems, including windows.
It runs their DGX OS, and Jensen specifically said it would be a full part of their HW stack.
If this is DGX OS, then yes, this is what you'll find installed on their 4-card workstations.
This is more like a micro-DGX then, for $3k.
And unless there is some expanded maintenance going on, 22.04 is EOL in 2 years. In my experience, vendors are not as on top of security patches as upstream. We will see, but given NVIDIA's closed ecosystem, I don't have high hopes that this will be supported long term.
Is there any recent, powerful SBC with fully upstream kernel support?
I can only think of raspberry pi...
The RK3588 is pretty close; I believe it's usable today, just missing a few corner cases with HDMI or some such. I believe the last patches are either pending or already applied to an RC.
Radha but that’s n100 aka x64
The ODROID H series, but that packs an x86 CPU.
If its stack still works, you might be able to sell or donate it to a student experimenting. They can still learn quite a few things with it. Maybe even use it for something.
Using outdated TensorFlow (v1 from 2018) or outdated PyTorch makes learning harder than it needs to be, considering most resources online use much newer versions of the frameworks. If you're learning the fundamentals, working from first principles, and creating the building blocks yourself, then it adds to the experience. However, most people just want to build different types of nets, and that's hard to do when the code won't work for you.
If you're expecting this device to stay relevant for 4 years you are not the target demographic.
Compute is evolving way too rapidly to be setting-and-forgetting anything at the moment.
Today I'm using 2x 3090s, which are over 4 years old at this point and still very usable. To get 48GB of VRAM I would need 3x 5070 Ti - still over $2k.
In 4 years, you'll be able to combine 2 of these to get 256gb unified memory. I expect that to have many uses and still be in a favorable form factor and price.
Eh? By all indications compute is now evolving SLOWER than ever. Moore's Law is dead, Dennard scaling is over, the latest fab nodes are evolutionary rather than revolutionary.
This isn't the 80s when compute doubled every 9 months, mostly on clock scaling.
Indeed, generational improvements are at an all time low. Most of the "revolutionary" AI and/or GPU improvements are less precision (fp32 -> fp16 -> fp8 -> fp4) or adding ever more fake pixels, fake frames, and now in the most recent iteration multiple fake frames per computed frame.
I believe Nvidia published some numbers for the 5000 series that showed DLSS-off performance, which allowed a fair comparison to the previous generation (on the order of 25%), then removed them.
Thankfully the 3rd party benchmarks that use the same settings on old and new hardware should be out soon.
Fab node size is not the only factor in performance. Physical limits were reached, and we're pulling back from the extremely small stuff for the time being. That is the evolutionary part.
Revolutionary developments are: multi-layer wafer bonding, chiplets (collections of interconnected wafers) and backside power delivery. We don't need the transistors to keep getting physically smaller, we need more of them, and at increased efficiency, and that's exactly what's happening.
All that comes with linear increases of heat, and exponential difficulty of heat dissipation (square-cube law).
There is still progress being made in hardware, but for most critical components it's looking far more logarithmic now as we're approaching the physical material limits.
I feel this is bigger than the 5x-series GPUs. Given the craze around AI/LLMs, this can also potentially eat into Apple's slice of the enthusiast AI dev segment once the M4 Max/Ultra Mac minis are released. I sure wish I had held some Nvidia stock; they seem to be doing everything right in the last few years!
This is something every company should make sure they have: an onboarding path.
Xeon Phi failed for a number of reasons, but one where it didn't need to fail was the availability of software optimised for it. Now we have Xeons and EPYCs, and MI300Cs with lots of efficient cores, but we could have been writing software tailored for those for 10 years now. Extracting performance from them would be a solved problem at this point. The same applies to Itanium - the very first thing Intel should have made sure of was good Linux support. They could have had it before the first silicon was released. Itanium was well supported for a while, but it's long dead by now.
Similarly, Sun failed with SPARC, which also didn't have an easy onboarding path after they gave up on workstations. They did some things right: OpenSolaris ensured the OS remained relevant (it still is, even if a bit niche), and looking the other way on x86 Solaris helped people learn and train on it. Oracle Cloud could, at least, offer it on cloud instances. That would be nice.
Now we see IBM doing the same - there is no reasonable entry level POWER machine that can compete in performance with a workstation-class x86. There is a small half-rack machine that can be mounted on a deskside case, and that's it. I don't know of any company that's planning to deploy new systems on AIX (much less IBMi, which is also POWER), or even for Linux on POWER, because it's just too easy to build it on other, competing platforms. You can get AIX, IBMi and even IBMz cloud instances from IBM cloud, but it's not easy (and I never found a "from-zero-to-ssh-or-5250-or-3270" tutorial for them). I wonder if it's even possible. You can get Linux on Z instances, but there doesn't seem to be a way to get Linux on POWER. At least not from them (several HPC research labs still offer those).
1000%. All these AI hardware companies will fail if they don't have this. You must have a cheap way to experiment and develop. Even if you only want to sell a $30,000 datacenter card, you still need a very low-cost way to play.
Sad to see that big companies like Intel and AMD don't understand this, but they've never come to terms with the fact that software killed the hardware star.
Isn’t the cloud GPU market covering this? I can run a model for $2/hr, or get a 8xH100 if I need to play with something bigger.
People tend to limit their usage when it's time-billed. You need some sort of desktop computer anyway, so if you spend the $3k this one costs, you get unlimited time with Nvidia's software stack. When you need to run on bigger metal, then you pay $2/hour.
$3k is still very steep for anyone not on a Silicon Valley-like salary.
Yes. Most people make do with a generic desktop and an Nvidia GPU. What makes this machine attractive is the beefy GPU and the full Nvidia support for the whole AI stack.
I have the skills to write efficient CUDA kernels, but $2/hr is 10% of my salary, so no way I'm renting any H100s. The electricity price for my computer is already painful enough as is. I am sure there are many eastern European developers who are more skilled and get paid even less. This is a huge waste of resources all due to NVIDIA's artificial market segmentation. Or maybe I am just cranky because I want more VRAM for cheap.
This has 128GB of unified memory. A similarly configured Mac Studio costs almost twice as much, and I'm not sure the GPU is on the same league (software support wise, it isn't, but that's fixable).
A real shame it's not running mainline Linux - I don't like their distro based on Ubuntu LTS.
$4,799 for an M2 Ultra with 128GB of RAM, so not quite twice as much. I'm not sure what the benchmark comparison would be. $5,799 if you want an extra 16 GPU cores (60 vs 76).
We'll need to look into benchmarks when the numbers come out. Software support is also important, and a Mac will not help you that much if you are targeting CUDA.
I have to agree the desktop experience of the Mac is great, on par with the best Linuxes out there.
A lot of models are optimized for Metal already, especially Llama, DeepSeek, and Qwen. You're still taking a hit, but there wasn't an alternative for getting that much VRAM for less than $5k before this Nvidia project came out. I'll definitely look at it closely if it isn't just vaporware.
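As a rough sketch of what that looks like in practice today (the model files and tags below are illustrative, not recommendations; llama.cpp uses Metal by default on Apple Silicon):

    # llama.cpp: offload all layers to the GPU with -ngl
    ./llama-cli -m qwen2.5-32b-instruct-q4_k_m.gguf -ngl 99 -p "Explain unified memory in one paragraph"

    # or, with less ceremony, via ollama
    ollama run llama3.1:70b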
They can't walk it back now without some major backlash.
The one thing I wonder is noise. That box is awfully small for the amount of compute it packs, and high-end Mac Studios are 50% heatsink. There isn’t much space in this box for a silent fan.
> Sad to see big companies like intel and amd don't understand this
And it's not like they were never bitten (Intel has) by this before.
Well, Intel management is very good at snatching defeat from the jaws of victory
Intel does have https://www.clearlinux.org/
At least they don’t suffer from a lack of onboarding paths for x86, and it seems they are doing a nice job with their dGPUs.
Still unforgivable that their new CPUs hit the market without excellent Linux support.
It really mystifies me that Intel, AMD, and other hardware companies (obviously Nvidia in this case) don't either form a consortium or each have their own in-house Linux distribution with excellent support.
Windows has always been a barrier to hardware feature adoption for Intel. You had to wait 2 to 3 years, sometimes longer, for Windows to get around to providing hardware support.
Any OS optimizations in Windows had to go through Microsoft. So say you added some instructions, custom silicon, or whatever to speed up enterprise databases, or to provide high-speed networking that needed some special kernel features - there was always Microsoft in the way.
And not just the foot-dragging communication and the problem of getting the tech people aligned.
Microsoft would look at every single change: whether or not it would challenge their monopoly, whether or not it was in their business interest, whether or not it kept you, the hardware vendor, in a subservient role.
From the consumer perspective, it seems that MSFT has provided scheduler changes fairly rapidly for CPU changes, like X3D, P/e cores, etc. At least within a couple of months, if not at release.
AMD/Intel work directly with Microsoft when shipping new silicon that would otherwise require it.
> From the consumer perspective, it seems that MSFT has provided scheduler changes fairly rapidly
Now they have some competition. This is relatively new, and Satya Nadella reshaped the company because of that.
Raptor Computing provides POWER9 workstations. They're not cheap, still use last-gen hardware (DDR4/PCIe 4 ... and POWER9 itself) but they're out there.
It kind of defeats the purpose of an onboarding platform if it’s more expensive than the one you think of moving away from.
IBM should see some entry-level products as loss leaders.
They're not offering POWER10 either because IBM closed the firmware again. Stupid move.
Raptor's value proposition is a 100% free and open platform, from the firmware and up, but, if they were willing to compromise on that, they'd be able to launch a POWER10 box.
Not sure it'd be competitive in price with other workstation-class machines. I don't know how expensive IBM's S1012 deskside is, but with only 64 threads it'd be a meh workstation.
There were Phi cards, but they were pricey and power hungry (at the time, now current GPU cards probably meet or exceed the Phi card's power consumption) for plugging into your home PC. A few years back there was a big fire sale on Phi cards - you could pick one up for like $200. But by then nobody cared.
Imagine if they were sold at cost in the beginning. Also, think about having one as the only CPU rather than a card.
The developers they are referring to aren’t just enthusiasts; they are also developers who were purchasing SuperMicro and Lambda PCs to develop models for their employers. Many enterprises will buy these for local development because it frees up the highly expensive enterprise-level chip for commercial use.
This is a genius move. I am more baffled by the insane form factor that can pack this much power inside a Mac Mini-esque body. For just $6000, two of these can run 400B+ models locally. That is absolutely bonkers. Imagine running ChatGPT on your desktop. You couldn’t dream about this stuff even 1 year ago. What a time to be alive!
The 1 petaFLOP and 200-billion-parameter model capacity specs are for FP4 (4-bit floating point), which means inference, not training/development; at 4 bits per parameter, a 200B-parameter model is roughly 100GB of weights, which fits in the 128GB of unified memory. It could still be a decent personal development machine, but not for models of that size.
This looks like a bigger brother of Orin AGX, which has 64GB of RAM and runs smaller LLMs. The question will be power and performance vs 5090. We know price is 1.5x
How does it run 400B models across two? I didn’t see that in the article
> Nvidia says that two Project Digits machines can be linked together to run up to 405-billion-parameter models, if a job calls for it. Project Digits can deliver a standalone experience, as alluded to earlier, or connect to a primary Windows or Mac PC.
Point to point ConnectX connection (RDMA with GPUDirect)
Not sure exactly, but they mentioned linking two together with ConnectX, which could be Ethernet or IB. No idea on the speed though.
I think the enthusiast side of things is a negligible part of the market.
That said, enthusiasts do help drive a lot of the improvements to the tech stack so if they start using this, it’ll entrench NVIDIA even more.
I’m not so sure it’s negligible. My anecdotal experience is that since Apple Silicon chips were found to be “ok” enough to run inference with MLX, more non-technical people in my circle have asked me how they can run LLMs on their macs.
Surely a smaller market than gamers or datacenters, though.
It's annoying. I do LLMs for work and have a bit of an interest in them, and in doing stuff with GANs etc.
I have a bit of an interest in games too.
If I could get one platform for both, I could justify $2k, maybe a bit more.
I can't justify that for just one half. Running games on a Mac (right now, via Linux): no thanks.
And on the PC side, Nvidia consumer cards only go to 24GB, which is a bit limiting for LLMs while being very expensive - and I only play games every few months.
The new $2k card from Nvidia will be 32GB but your point stands. AMD is planning a unified chiplet based GPU architecture (AI/data center/workstation/gaming) called UDNA, which might alleviate some of these issues. It's been delayed and delayed though - hence the lackluster GPU offerings from team Red this cycle - so I haven't been getting my hopes up.
Maybe (LP)CAMM2 memory will make model usage just cheap enough that I can have a hosting server for it and do my usual midrange gaming GPU thing before then.
Grace + Hopper, Grace + Blackwell, and the GB10 discussed here are much like the currently shipping AMD MI300A.
I do hope that a AMD Strix Halo ships with 2 LPCAMM2 slots for a total width of 256 bits.
Unified architecture is still on track for 2026-ish.
32gb as of last night :)
I mean negligible to their bottom line. There may be tons of units bought or not, but the margin on a single datacenter system would buy tens of these.
It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
>It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
100%
The people who prototype on a 3k workstation will also be the people who decide how to architect for a 3k GPU buildout for model training.
> It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
It will be massive for research labs. Most academics have to jump through a lot of hoops to get to play with not just CUDA, but also GPUDirect/RDMA/Infiniband etc. If you get older/donated hardware, you may have a large cluster but not newer features.
Academic minimal-bureaucracy purchasing card limit is about $4k, so pricing is convenient*2.
Developers, developers, developers - the Ballmer monkey dance - the key to being entrenched is the platform ecosystem.
Also why AWS is giving away Trainium credits for free.
Yes, but people already had their Macs for other reasons.
No one goes to an Apple store thinking "I'll get a laptop to do AI inference".
They have, because until now Apple Silicon was the only practical way for many to work with larger models at home because they can be configured with 64-192GB of unified memory. Even the laptops can be configured with up to 128GB of unified memory.
Performance is not amazing (roughly 4060 level, I think?) but in many ways it was the only game in town unless you were willing and able to build a multi-3090/4090 rig.
I would bet that people running LLMs on their Macs, today, is <0.1% of their user base.
People buying Macs for LLMs—sure I agree.
Since current macOS comes with small LLMs built in, that number might be closer to 50%, not 0.1%.
I'm not arguing about whether or not Macs are capable of doing it, but about whether it's a material force that drives people to buy Macs; it's not.
Higher than that buying the top end machines though, which are very high margin
All macs? Yes. But of 192GB mac configs? Probably >50%
I'm currently wondering how likely it is I'll get into deeper LLM usage, and therefore how much Apple Silicon I need (because I'm addicted to macOS). So I'm some way closer to your steel man than you'd expect. But I'm probably a niche within a niche.
Tons of people do, my next machine will likely be a Mac for 60% this reason and 40% Windows being so user hostile now.
my $5k m3 max 128gb disagrees
Doubt it; a year ago, useful local LLMs on a Mac (via something like ollama) were barely taking off.
If what you say is true, you were among the first 100 people on the planet doing this; which, btw, further supports my argument about how extremely rare that use case is for Mac users.
No, I got a MacBook Pro 14”with M2 Max and 64GB for LLMs, and that was two generations back.
People were running llama.cpp on Mac laptops in March 2023 and Llama2 was released in July 2023. People were buying Macs to run LLMs months before M3 machines became available in November 2023.
You could have said the same about gamers buying expensive hardware in the 00's. It's what made Nvidia big.
I keep thinking about stocks that have 100xd, and most seemed like obscure names to me as a layman. But man, Nvidia was a household name to anyone that ever played any game. And still so many of us never bothered buying the stock
Incredible fumble for me personally as an investor
Unless you predicted AI and crypto, it was just really good, not 100x. It 20x'd from 2005-2020, but ~500x'd from 2005-2025.
And if you truly did predict that Nvidia would own those markets and those markets would be massive, you could have also bought Amazon, Google or heck even Bitcoin. Anything you touched in tech really would have made you a millionaire really.
Survivorship bias though. It's hard to name all the companies that failed in the dot-com bust, but even among the ones that made it through, because they're not around any more, they're harder to remember than the winners. But MCI, Palm, RIM, Nortel, Compaq, Pets.com, and Webvan all failed and went to zero. There's an uncountable number of ICOs and NFTs that ended up nowhere. SVB isn't exactly a tech stock, but they were strongly connected to the sector and they failed.
It is interesting to think about crypto as a stairstep that Nvidia used to get to its current position in AI. It wasn't games > ai, but games > crypto > ai.
Nvidia joined S&P500 in 2001 so if you've been doing passive index fund investing, you probably got a little bit of it in your funds. So there was some upside to it.
There are a lot more gamers than people wanting to play with LLMs at home.
There's a titanic market with people wanting some uncensored local LLM/image/video generation model. This market extremely overlaps with gamers today, but will grow exponentially every year.
How big is that market you claim? Local LLM/image generation already exists out of the box on the latest Samsung flagship phones, and it's mostly a gimmick that gets old pretty quickly. Hardly comparable to gaming in terms of market size and profitability.
Plus, YouTube and Google Images are already full of AI-generated slop and people are already tired of it. "AI fatigue" among the majority of general consumers is a documented thing. Gaming fatigue is not.
> Gaming fatigue is not.
It is. You may know it as the "I prefer to play board games (and feel smugly superior about it) because they're ${more social, require imagination, $whatever}" crowd.
The market heavily disagrees with you.
"The global gaming market size was valued at approximately USD 221.24 billion in 2024. It is forecasted to reach USD 424.23 billion by 2033, growing at a CAGR of around 6.50% during the forecast period (2025-2033)"
Farmville style games underwent similar explosive estimates of growth, up until they collapsed.
Much of the growth in gaming of late has come from exploitive dark patterns, and those dark patterns eventually stop working because users become immune to them.
>Farmville style games underwent similar explosive estimates of growth, up until they collapsed.
They did not collapse, they moved to smartphones. The "free"-to-play gacha portion of the gaming market is so successful it is most of the market. "Live service" games are literally traditional game makers trying to grab a tiny slice of that market, because it's infinitely more profitable than making actual games.
>those dark patterns eventually stop working because users become immune to them.
Really? Slot machines have been around for generations and have not become any less effective. Gambling of all forms has relied on the exact same physiological response for millennia. None of this is going away without legislation.
> Slot machines have been around for generations and have not become any less effective.
Slot machines are not a growth market. The majority of people wised to them literal generations ago, although enough people remain susceptible to maintain a handful of city economies.
> They did not collapse, they moved to smartphones
Agreed, but the dark patterns being used are different. The previous dark patterns became ineffective. The level of sophistication of psychological trickery in modern f2p games is far beyond anything Farmville ever attempted.
The rise of live service games also does not bode well for infinite growth in the industry as there's only so many hours to go around each day for playing games and even the evilest of player manipulation techniques can only squeeze so much blood from a stone.
The industry is already seeing the failure of new live service games to launch, possibly analogous to what happened in the MMO market when there was a rush of releases after WoW. With the exception of addicts, most people can only spend so many hours a day playing games.
I think he implied AI generated porn. Perhaps also other kind of images that are at odds with morality and/or the law. I'm not sure but probably Samsung phones don't let you do that.
I'm sure a lot of people see "uncensored" and think "porn" but there's a lot of stuff that e.g. Dall-E won't let you do.
Suppose you're a content creator and you need an image of a real person or something copyrighted like a lot of sports logos for your latest YouTube video's thumbnail. That kind of thing.
I'm not getting into how good or bad that is; I'm just saying I think it's a pretty common use case.
Apart from the uncensored bit, I'm in this small market.
Do I buy a MacBook with a silly amount of RAM when I only want to mess with images occasionally?
Do I get a big Nvidia card, topping out at 24GB - still small for some LLMs - but at least I could occasionally play games on it?
>There's a titanic market
Titanic - so about to hit an iceberg and sink?
> There's a titanic market with people wanting some uncensored local LLM/image/video generation model.
No. There's already too much porn on the internet, and AI porn is cringe and will get old very fast.
AI porn is currently cringe, just like Eliza for conversations was cringe.
The cutting edge will advance, and convincing bespoke porn of people's crushes/coworkers/bosses/enemies/toddlers will become a thing. With all the mayhem that results.
It will always be cringe due to how so-called "AI" works. Since it's fundamentally just log-likelihood optimization under the hood, it will always be a statistically most average image. Which means it will always have that characteristic "plastic" and overdone look.
The current state of the art in AI image generation was unimaginable a few years back. The idea that it'll stay as-is for the next century seems... silly.
If you're talking about some sort of non-existent sci-fi future "AI" that isn't just log-likelihood optimization, then most likely such a fantastical thing wouldn't be using NVidia's GPU with CUDA.
This hardware is only good for current-generation "AI".
I think there are a lot of non-porn uses. I see a lot of YouTube thumbnails that seem AI generated, but feature copyrighted stuff.
(example: a thumbnail for a YT video about a video game, featuring AI-generated art based on that game. because copyright reasons, in my very limited experience Dall-E won't let you do that)
I agree that AI porn doesn't seem a real market driver. With 8 billion people on Earth I know it has its fans I guess, but people barely pay for porn in the first place so I reallllly dunno how many people are paying for AI porn either directly or indirectly.
It's unclear to me if AI generated video will ever really cross the "uncanny valley." Of course, people betting against AI have lost those bets again and again but I don't know.
> No. There's already too much porn on the internet, and AI porn is cringe and will get old very fast.
I needed an uncensored model in order to, guess what, make an AI draw my niece snowboarding down a waterfall. All the online services refuse on basis that the picture contains -- oh horrors -- a child.
"Uncensored" absolutely does not imply NSFW.
Yeah, and there's that story about "private window" mode in browsers because you were shopping for birthday gifts that one time. You know what I mean though.
I really don't. Censored models are so censored they're practically useless for anything but landscapes. Half of them refuse to put humans in the pictures at all.
I think scams will create far more demand. Spear-phishing targets by creating persistent, elaborate online environments is going to be big.
> There's a titanic market
How so?
Only 40% of gamers use a PC, a portion of those use AI in any meaningful way, and a fraction of those want to set up a local AI instance.
Then someone releases an uncensored, cloud based AI and takes your market?
Sure, but those developers will create functionality that requires advanced GPUs, and people will want that functionality. Eventually the OS will expect it and it will become the default everywhere. So it's an important step that will keep Nvidia growing in the following years.
AMD thought the enthusiast side of things was a negligible side of the market.
That’s not what I’m saying. I’m saying that the people buying this aren’t going to shift their bottom line in any kind of noticeable way. They’re already sold out of their money makers. This is just an entrenchment opportunity.
If this is gonna be widely used by ML engineers, in biopharma, etc., and they land $1,000 margins on half a million units sold, that's half a billion in revenue, with potential to grow.
today’s enthusiast, grad student, hacker is tomorrow’s startup founder, CEO, CTO or 10x contributor in large tech company
> tomorrow’s startup founder, CEO, CTO or 10x contributor in large tech company
Do we need more of those? We need plumbers and people that know how to build houses. We are completely full on founders and executives.
If they're already an "enthusiast, grad student, hacker", are they likely to choose the "plumbers and people that know how to build houses" career track?
True passion for one's career is rare, despite the clichéd platitudes encouraging otherwise. That's something we should encourage and invest in regardless of the field.
We might not, but Nvidia would certainly like it.
If I were NVidia, I would be throwing everything I could at making entertainment experiences that need one of these to run...
I mean, this is awfully close to being "Her" in a box, right?
I feel like a lot of people miss that Her was a dystopian future, not an ideal to hit.
Also, it’s $3000. For that you could buy subscriptions to OpenAI etc and have the dystopian partner everywhere you go.
We already live in dystopian hell and I'd like to have Scarlett Johansen whispering in my ear, thanks.
Also, I don't particularly want my data to be processed by anyone else.
Fun fact: Her was set in the year 2025.
Boring fact: The underlying theme of the movie Her is actually divorce and the destructive impact it has on people; the futuristic AI stuff is just set dressing!
The overall theme of Her was human relationships. It was not about AI, and not just about divorce in particular. The AI was just a plot device to include a bodiless person in the equation. Watch it again with this in mind and you will see what I mean.
The universal theme of Her was the set of harmonics that define what is something and the thresholds, boundaries, windows onto what is not thatthing but someotherthing, even if the thing perceived is a mirror, not just about human relationships in particular. The relationship was just a plot device to make a work of deep philosophy into a marketable romantic comedy.
This is exactly the scenario where you don't want "the cloud" anywhere.
OpenAI doesn’t make any profit. So either it dies or prices go up. Not to mention the privacy aspect of your own machine and the freedom of choice which models to run
> So either it dies or prices go up.
Or efficiency gains in hardware and software catch up, making the current price point profitable.
Training data gets increasingly expensive, and they need constant input, otherwise the AI's knowledge goes out of date.
OpenAI built a 3 billion dollar business in less than 3 years of a commercial offering.
3 billion revenue and 5 billion loss doesn’t sound like a sustainable business model.
Rumor has it they run queries at a profit, and most of the cost is in training and staff.
If that is true, their path to profitability isn't super rocky. Their path to achieving their current valuation may end up being trickier, though!
The real question is what the next 3 years look like. If it's another 5 billion burned for 3 billion or less in revenue, that's one thing... But...
How...
Recent report says there are 1M paying customers. At ~30USD for 12 months this is ~3.6B of revenue which kinda matches their reported figures. So to break even at their ~5B costs assuming that they need no further major investment in infrastructure they only need to increase the paying subscriptions from 1M to 2M. Since there are ~250M people who engaged with OpenAI free tier service 2x projection doesn't sound too surreal.
One man's dystopia is another man's dream. There's no "missing" in the moral of a movie, you make whatever you want out of it.
If Silicon Valley could tell the difference between utopias and dystopias, we wouldn't have companies named Soylent or iRobot, and the recently announced Anduril/Palantir/OpenAI partnership to hasten the creation of either SkyNet or Big Brother wouldn't have happened at all.
I mean, we still act like a "wild goose chase" is a bad thing.
We still schedule "bi-weekly" meetings.
We can't agree on which way charge goes in a wire.
Have you seen the y-axis on an economists chart?
The dystopian overton window has shifted, didn't you know, moral ambiguity is a win now? :) Tesla was right.
they don't miss that part. they just want to be the evil character.
Please name the dystopian elements of Her.
The real interesting stuff will happen when we get multimodal LMs that can do VR output.
Yeah, it's more about preempting competitors from attracting any ecosystem development than the revenue itself.
Jensen did say in recent interview, paraphrasing, “they are trying to kill my company”.
Those Macs with unified memory is a threat he is immediately addressing. Jensen is a wartime ceo from the looks of it, he’s not joking.
No wonder AMD is staying out of the high end space, since NVIDIA is going head on with Apple (and AMD is not in the business of competing with Apple).
From https://www.tomshardware.com/pc-components/cpus/amds-beastly...
The fire-breathing 120W Zen 5-powered flagship Ryzen AI Max+ 395 comes packing 16 CPU cores and 32 threads paired with 40 RDNA 3.5 (Radeon 8060S) integrated graphics cores (CUs), but perhaps more importantly, it supports up to 128GB of memory that is shared among the CPU, GPU, and XDNA 2 NPU AI engines. The memory can also be carved up to a distinct pool dedicated to the GPU only, thus delivering an astounding 256 GB/s of memory throughput that unlocks incredible performance in memory capacity-constrained AI workloads (details below). AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation benchmarks.
[...]
AMD also shared some rather impressive results showing a Llama 70B Nemotron LLM AI model running on both the Ryzen AI Max+ 395 with 128GB of total system RAM (32GB for the CPU, 96GB allocated to the GPU) and a desktop Nvidia GeForce RTX 4090 with 24GB of VRAM (details of the setups in the slide below). AMD says the AI Max+ 395 delivers up to 2.2X the tokens/second performance of the desktop RTX 4090 card, but the company didn’t share time-to-first-token benchmarks.
Perhaps more importantly, AMD claims to do this at an 87% lower TDP than the 450W RTX 4090, with the AI Max+ running at a mere 55W. That implies that systems built on this platform will have exceptional power efficiency metrics in AI workloads.
"Fire breathing" is completely inappropriate.
Strix Halo is a replacement for the high-power laptop CPUs from the HX series of Intel and AMD, together with a discrete GPU.
The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W, which is the maximum TDP recommended for Strix Halo. The faster laptop dGPUs want more than 120 W only for themselves, not counting the CPU.
So any claims of being surprised that the TDP range for Strix Halo is 45 W to 120 W are weird, like the commenter has never seen a gaming laptop or a mobile workstation laptop.
> The thermal design power of a laptop CPU-dGPU combo is normally much higher than 120 W
Normally? Much higher than 120W? Those are some pretty abnormal (and dare I say niche?) laptops you're talking about there. Remember, that's not peak power - thermal design power is what the laptop should be able to power and cool pretty much continuously.
At those power levels, they're usually called DTR: desktop replacement. You certainly can't call it "just a laptop" anymore once we're in needs-two-power-supplies territory.
Any laptop that in marketed as "gaming laptop" or "mobile workstation" belongs to this category.
I do not know what the proportion of gaming laptops and mobile workstations vs. thin-and-light laptops is. While obviously there must be many more light laptops, gaming laptops cannot be a niche product, because there are too many models offered by a lot of vendors.
My own laptop is a Dell Precision, so it belongs to this class. I would not call Dell Precision laptops a niche product, even if they are typically used only by professionals.
My previous laptop was some Lenovo Yoga that also belonged to this class, having a discrete NVIDIA GPU. In general, any laptop having a discrete GPU belongs to this class, because the laptop CPUs intended to be paired with discrete GPUs have a default TDP of 45 W or 55 W, while the smallest laptop discrete GPUs may have TDPs of 55 W to 75 W, but the faster laptop GPUs have TDPs between 100 W and 150 W, so the combo with CPU reaches a TDP around 200 W for the biggest laptops.
You can't usually just add up the TDPs of CPU and GPU, because neither cooling nor the power circuitry supports that kind of load. That's why AMDs SmartShift is a thing.
People are very unaware just how much better a gaming laptop from 3 years ago is (compared to a copilot laptop). These laptops are sub $500 on eBay, and Best Buy won’t give you more than $150 for it as a trade in (almost like they won’t admit that those laptops outclass the new category type of AI pc).
> since NVIDIA is going head on with Apple
I think this is a race that Apple doesn't know it's part of. Apple has something that happens to work well for AI, as a side effect of having a nice GPU with lots of fast shared memory. It's not marketed for inference.
Which interview was this?
https://fortune.com/2023/11/11/nvidia-ceo-jensen-huang-says-...
I can't find the exact Youtube video, but it's out there.
You missed the Ryzen AI Max+ Pro 395 product announcement.
From the people I talk to, the enthusiast market is saturated with Nvidia 4090s/3090s, because people want to do their fine-tunes - and also porn - in their off time. The Venn diagram of users who post about diffusion models and LLMs running at home is pretty much a circle.
Not your weights, not your waifu
Yeah, I really don't think the overlap is as much as you imagine. At least in /r/localllama and the discord servers I frequent, the vast majority of users are interested in one or the other primarily, and may just dabble with other things. Obviously this is just my observations...I could be totally misreading things.
> I sure wished I held some Nvidia stocks, they seem to be doing everything right in the last few years!
They were propelled by the unexpected LLM boom. But plan 'A' was robotics, in which Nvidia has invested a lot for decades. I think their time is about to come, with Tesla's humanoids at $20-30k and the Chinese already selling theirs for $16k.
This is somewhat similar to what GeForce was for gamers back in the day, but for AI enthusiasts. Sure, the price is much higher, but at least it's a completely integrated solution.
Yep that's what I'm thinking as well. I was going to buy a 5090 mainly to play around with LLM code generation, but this is a worthy option for roughly the same price as building a new PC with a 5090.
It has 128 GB of unified RAM. It will not be as fast as the 32 GB VRAM of the 5090, but what gamer cards have always lacked was memory.
Plus you have fast interconnects, if you want to stack them.
I was somewhat attracted by the Jetson AGX Orin with 64 GB RAM, but this one is a no-brainer for me, as long as idle power is reasonable.
Having your main PC as an LLM rig also really sucks for multitasking: if you want to keep a model loaded to use when needed, you have zero resources left to do anything else - GPU memory maxed out, most of the RAM used. Having a dedicated machine, even if it's slower, is a lot more practical IMO, since you can actually do other things while it generates instead of having to sit there and wait, unable to do anything else.
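That's also how I'd expect a box like this to get used: leave the model server running on it and point your main machine at it over the LAN. A sketch with ollama (the bind address and IP are placeholders):

    # On the dedicated box: listen on the network instead of localhost only
    OLLAMA_HOST=0.0.0.0 ollama serve

    # On your desktop: talk to that remote instance instead of a local one
    OLLAMA_HOST=http://192.168.1.50:11434 ollama run llama3.1:70b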
>enthusiast AI dev segment
I don't think it's about enthusiasts. To me it looks like Huang/NVDA is pushing a small revolution, using the opening provided by the AI wave. Up until now the GPU was an add-on to the general computing core, which offloaded some computing to it. With AI, that offloaded computing becomes de facto the main computing, and Huang/NVDA is turning the tables by making the CPU just a small add-on to the GPU, with some general computing offloaded to that CPU.
The CPU being located that "close", with unified memory, would stimulate parallelization of a lot of general computing so that it runs on the GPU (much faster that way) instead of on the CPU. Take a classic of enterprise computing, the SQL database: a lot (if not, with some work, everything) in these databases can be executed on the GPU with a significant performance gain vs. the CPU. Why isn't it happening today? Loading/unloading data onto the GPU eats into performance, the complexity of having only some operations offloaded to the GPU is very high in dev effort, etc. Streamlined development on a platform with unified memory will change that. That way Huang/NVDA may pull the rug out from under CPU-first platforms like AMD/INTC and end up owning both the new AI computing and a significant share of classic enterprise computing.
> these databases can be executed on GPU with a significant performance gain vs. CPU
No, they can’t. GPU databases are niche products with severe limitations.
GPUs are fast at massively parallel math problems; they aren't useful for all tasks.
>GPU databases are niche products with severe limitations.
Today, for the reasons I mentioned.
> GPUs are fast at massively parallel math problems; they aren't useful for all tasks.
GPUs are fast at massively parallel tasks. Their memory bandwidth is 10x that of a CPU, for example. So typical database operations that are massively parallel in nature, like join or filter, would run about that much faster.
The majority of computing can be parallelized and thus benefit from being executed on a GPU (with unified memory of a practically usable size for enterprise workloads, like 128GB).
> So, typical database operations, massively parallel in nature like join or filter, would run about that faster.
Given workload A, how much of the total runtime would JOIN or FILTER take compared to the storage-engine layer, for example? My gut feeling tells me not much, since to see an actual gain you'd need to be able to parallelize everything, including the storage-engine side.
IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.
With cheap large RAM and SSDs, storage has already become much less of an issue, especially when the database is primarily an in-memory one.
How about attaching SSD based storage to NVLink? :) Nvidia does have the direct to memory tech and uses wide buses, so i don't see any issue for them to direct attach arrays of SSD if they feel like it.
>IIRC all the startups building databases around GPUs failed to deliver in the last ~10 years. All of them are shut down if I am not mistaken.
As I already said, the model of a database offloading some ops to a GPU with its own separate memory isn't feasible, and those startups confirmed it. Especially when the GPU has 8-16GB while the main RAM can easily be 1-2TB with 100-200 CPU cores. With 128GB of unified memory like on the GB10, the situation looks completely different (that Nvidia allows only 2 to be connected by NVLink is just market segmentation, not a real technical limitation).
I mean, you wouldn't run a database on a GB10 device, or a cluster thereof. GH200 is another story; however, the potential improvement for databases-on-GPUs still comes down to whether there are enough workloads that are compute-bound for a substantial part of their total wall-clock time.
In other words, and hypothetically, if you can improve logical plan execution to run 2x faster by rewriting the algorithms to make use of GPU resources but physical plan execution remains to be bottlenecked by the storage engine, then the total sum of gains is negligible.
But I guess there could perhaps be some use-case where this could be proved as a win.
The unified memory is no faster for the GPU than for the CPU, so it's not 10x the CPU. HBM on a GPU is much faster.
No. The unified memory on GB10 is much faster than regular RAM to CPU system:
https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe...
"The GB10 Superchip enables Project DIGITS to deliver powerful performance using only a standard electrical outlet. Each Project DIGITS features 128GB of unified, coherent memory and up to 4TB of NVMe storage. With the supercomputer, developers can run up to 200-billion-parameter large language models to supercharge AI innovation."
https://www.nvidia.com/en-us/data-center/grace-cpu-superchip...
"Grace is the first data center CPU to utilize server-class high-speed LPDDR5X memory with a wide memory subsystem that delivers up to 500GB/s of bandwidth "
As far as I can see, it's about 4x that of Zen 5.
> I sure wished I held some Nvidia stocks
I'm so tired of this recent obsession with the stock market. Now that retail is deeply invested, it is tainting everything, even here on a technology forum. I don't remember people mentioning Apple stock every time Steve Jobs made an announcement in past decades. Nowadays it seems everyone is invested in Nvidia and just wants the stock to go up, and every product announcement is a means to that end. I really hope we get a crash so that we can get back to a saner relationship with companies and their products.
> hope we get a crash
That's the best time to buy. ;)
But if you buy and it crashes, you lose the money, no?
I bet $100k on NVIDIA stocks ~7 years ago, just recently closed out a bunch of them
“Bigger” in what sense? For AI? Sure, because this an AI product. 5x series are gaming cards.
Not expecting this to compete with the 5x series in terms of gaming; but it's interesting to note that the increase in gaming performance Jensen was talking about with Blackwell was largely related to frames generated by the tensor cores.
I wonder how it would go as a productivity/tinkering/gaming rig? Could a GPU potentially be stacked in the same way an additional Digit can?
It could, had Nvidia not crippled NVLink on GeForce.
Bigger in the sense of the announcements.
Eh. Gaming cards, but also significantly faster. If the model fits in the VRAM the 5090 is a much better buy.
> they seem to be doing everything right in the last few years
About that... it's not like there isn't a lot to be desired from the Linux drivers: I'm running a K80 and an M40 in a workstation at home, and the thought of ever having to touch the drivers, now that the system is operational, terrifies me. It is by far the biggest "don't fix it if it ain't broke" thing in my life.
Use a filesystem that snapshots AND do a complete backup.
Buy a second system which you can touch?
That IS the second system (my AI home rig). I've given up on Nvidia for using it on my main computer because of their horrid drivers. I switched to Intel ARC about a month ago and I love it. The only downside is that I have a xeon on my main computer and Intel never really bothered to make ARC compatible with xeons so I had to hack my bios around, hoping I don't mess everything up. Luckily for me, it all went well so now I'm probably one of a dozen or so people worldwide to be running xeons + arc on linux. That said, the fact that I don't have to deal with nvidia's wretched linux drivers does bring a smile to my face.
The Nvidia price (USD 3k) is closer to a top Mac mini, but I trust Apple more than Nvidia for end-to-end support from hardware to apps. Not an Apple fanboy, but a user/dev, and I don't think we realize what Apple really achieved, industrially speaking. The M1 was launched in late 2020.
Will there really be a Mac mini with Max or Ultra CPUs? This feels like somewhat of an overlap with the Mac Studio.
There will undoubtedly be a Mac Studio (and Mac Pro?) bump to M4 at some point. Benchmarks [0] reflect how memory bandwidth and core count [1] compare to processor improvements. Granted, YMMV depending on your workload.
0. https://www.macstadium.com/blog/m4-mac-mini-review
1. https://www.apple.com/mac/compare/?modelList=Mac-mini-M4,Mac...
Did they say anything about power consumption?
Apple M chips are pretty efficient.
Not only that, but it should help free up the GPUs for the gamers.