This reminds me of the recent LaurieWired video presenting a hypothetical of, "what if we stopped making CPUs": https://www.youtube.com/watch?v=L2OJFqs8bUk
Spoiler, but the answer is basically that old hardware rules the day because it lasts longer and is more reliable over timespans of decades.
DDR5 32GB is currently going for ~$330 on Amazon
DDR4 32GB is currently going for ~$130 on Amazon
DDR3 32GB is currently going for ~$50 on Amazon (4x8GB)
For anyone for whom cost is a concern, using older hardware seems like a particularly easy choice, especially if a person is comfortable with a Linux environment, since the massive droves of recently retired Windows 10 incompatible hardware work great with your Linux distro of choice.
If everyone went for DDR4 and DDR3, surely the cost would go up. There is no additional supply there, as they are no longer being made.
Currently trying to source a large amount of DDR4 to upgrade a 3 year old fleet of servers at a very unfortunate time.
It's very difficult to source in quantity, and is going up in price more or less daily at this point. Vendor quotes are good for hours, not days when you can find it.
We're looking to buy 1TB of DDR5 for a large database server. I'm pretty sure that we're just going to pay the premium and move on.
And that’s why everybody’s B2B these days. The decision-making people at companies are not spending their own money.
At some point that’s true, but don’t they run the n-1 or 2 generation production lines for years after the next generation launches? There’s a significant capital investment there and my understanding is that the cost goes down significantly over the lifetime as they dial in the process so even though the price is lower it’s still profitable.
Unless plans have changed, the foundries making DDR4 are winding down, with the last shipments going out as we speak.
This is only true as long as there's enough of a market left. You tend to end up with oversupply and excessive inventory during the transition, and that pushes profit margins negative and removes all the supply pretty quickly.
Undoubtedly the cost would go up, but nobody is building out datacenters full of DDR4 either, so I don't figure it would go up nearly as much as DDR5 is right now.
https://pcpartpicker.com/trends/price/memory/
You can see the cost rise of DDR4 here.
Awesome charts, thanks! I think it bears out that the multiplier for older hardware isn't as extreme as for the newer hardware, right?
~2.8x for DDR4, ~3.6x for DDR5. DDR5 is still being made, though, so it will be interesting to see how it changes in the future.
Either way, it's going to be a long few years at the least.
Unless the AI bubble pops.
One possible outcome is the remaining memory manufacturers have dedicated all their capacity for AI and when the bubble pops, they lose their customer and they go out of business too.
I wouldn't be too surprised to find that at least some of the big RAM foundries are deeply bought into the fake-money circles where everybody is "paying" each other with unrealised equity in OpenAI/Anthropic/whoever, resulting in a few trillion dollars' worth of on-paper "money" vanishing overnight, at which point a whole bunch of actual-money loans will get called in and billion-dollar companies will get gutted by asset strippers.
Maybe larger makerspaces and companies like Adafruit, RasPi, and Pine should start stockpiling (real) money, and pick themselves up an entire fab full of gear at firesale prices so they can start making their own ram...
That's the average price for new DDR4, which has dwindling supply. Meanwhile, used DDR4 is being retired in both desktops and data centers.
DDR4 production is winding down or done. Only "new old stock" will remain, or used DDR4 modules. Good luck buying that in quantity.
Unfortunately, older RAM also means an older motherboard, which also means older socket and older CPUs. It works, but it's not usually a drop in replacement.
Can't you use DDR3 in a DDR5-compatible board?
Unfortunately not, each version of RAM uses a different physical slot.
I bought 384 GB of DDR5-4800 RDIMM a few months back for a Zen 4c system with lots of I/O like dual 25G NICs and ten MCIO x8 ports... So far it has been the best value for money compared to any memory before it. The bandwidth is nuts. Power consumption went DOWN compared to DDR4. Doesn't matter much if you've got two sticks, but as soon as you get into 6+ territory, it does matter a lot. The same goes for NVMe drives.
> works great with your Linux distro of choice.
...or you could go with FreeBSD. There's even a brand new release that just came out!
Yes, DDR3 has the lowest CAS latency and lasts a LOT longer.
Just like SSDs from 2010 have 100,000 writes per bit instead of below 10,000.
CPUs might even follow the same durability pattern but that remains to be seen.
Keep your old machines alive and backed up!
CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.
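To put numbers on that, here's a quick Python sketch (the kits below are just common retail speed/CL pairs I'm assuming, not any official survey):

    # Convert CAS latency from clock cycles to nanoseconds.
    def cas_ns(cl_cycles, transfer_rate_mts):
        clock_mhz = transfer_rate_mts / 2      # DDR moves data twice per clock
        return cl_cycles / clock_mhz * 1000    # cycles / MHz = microseconds; *1000 = ns

    for name, cl, rate in [("DDR3-1600 CL9", 9, 1600),
                           ("DDR4-3200 CL16", 16, 3200),
                           ("DDR5-6000 CL30", 30, 6000)]:
        print(f"{name}: {cas_ns(cl, rate):.2f} ns")
    # DDR3-1600 CL9:  11.25 ns
    # DDR4-3200 CL16: 10.00 ns
    # DDR5-6000 CL30: 10.00 ns

So the CL number roughly doubles each generation while the real latency stays flat or improves a little.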
Not for small amounts of data.
Bandwidth increases, but if you only need a few bytes, DDR3 is faster.
Also slower speed means less heat and longer life.
You can feel the speed advantage by just moving the mouse on a DDR3 PC...
RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125000ns between samples, much bigger than any CAS latency. And most mice run substantially slower.
Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.
I only felt it on Windows, maybe that is due to the special USB mouse drivers Microsoft made? Still, motion-to-photon latency is really lower on my DDR3 PCs; would be cool to know why.
You are conflating two things that have nothing to do with each other. Computers have had mice since the 80s.
> Still motion-to-photon latency is really lower on my DDR3 PCs, would be cool to know why.
No it isn't, your computer is doing tons of stuff and the cursor on windows is a hardware feature of the graphics card.
Should I even ask why you think memory bandwidth is the cause of mouse latency?
CAS latency doesn't matter so much as ns of total random-access latency and the raw clockspeed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten faster in years (around DDR2 IIRC).
> 100,000 writes per bit
per cell*
Also, that SSD example is wildly untrue, especially with the context of available capacity at the time. You CAN get modern SSDs with mind-boggling write endurance per cell, AND they have multitudes more cells, resulting in vastly more durable media than what was available pre-2015. The one caveat to modern stuff being better than older stuff is Optane (the enterprise stuff like the 905P or P5800X, not that memory-and-SSD-combo shitshow that Intel was shoveling out the consumer door). We still haven't reached parity with the 3DXpoint stuff, and it's a damn shame Intel hurt itself in its confusion and cancelled that, because boy would they and Micron be printing money hand over fist right now if they were still making them. Still, point being: not everything is a TLC/QLC 0.3DWPD disposable drive like has become standard in the consumer space. If you want write endurance, capacity, and/or performance, you have more and better options today than ever before (Optane/3DXPoint excepted).
Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to perform with the margins that they used to "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC market that is having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.
I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. Notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made it (e.g., thermal pads not making contact with the GDDR modules).
Maybe but in my experience a good old <100GB SSD from 2010-14 will completely demolish any >100GB from 2014+ in longevity.
Some say they have the opposite experience; mine is ONLY with Intel drives, maybe that is why.
X25-E is the diamond peak of SSDs probably forever since the machines to make 45nm SLC are gone.
What if you overprovision the newer SSD to the point that it can run the entirety of the drive in pseudo-SLC ("caching") mode? (You'd need to store no more than 25% of the nominal capacity, since QLC has four bits per cell.) That should have fairly good endurance, though still a lot less than Optane/XPoint persistent memory.
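Rough arithmetic on that, purely illustrative:

    # QLC stores 4 bits per cell; pseudo-SLC mode stores 1 bit per cell,
    # so usable capacity is roughly a quarter of the number on the box.
    nominal_gb = 2000            # hypothetical QLC drive
    usable_pslc_gb = nominal_gb / 4
    print(usable_pslc_gb)        # 500.0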
Nice. My current PC uses DDR4. Time to dust off my 2012 PC and put Linux on it.
a 2x16GB DDR4 kit I bought in 2020 for $160 is now $220. Older memory is relatively cheap, but not cheaper than before at all.
I wondered how much of this is inflation -- after adjusting for CPI inflation, $160 in 2020 is worth $200 in today's dollars [$], so the price of that ddr4 kit is 10% higher in real terms.
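Spelling that adjustment out (using the parent comment's own $160-then ≈ $200-now figure rather than an official CPI series):

    nominal_2020 = 160
    cpi_factor = 200 / 160                      # implied by "$160 in 2020 = $200 today"
    real_then = nominal_2020 * cpi_factor       # 200.0 in today's dollars
    print(220 / real_then - 1)                  # ~0.10 -> ~10% higher in real terms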
USD is also weaker than it was in the past.
A friend built a new rig and went with DDR4 and a 5800x3d just because of this as he needed a lot of ram.
I think the OpenAI deal to lock wafers was a wonderful coup. OpenAI is more and more losing ground against the regularity[0] of the improvements coming from Anthropic, Google and even the open weights models. By creating a choke point at the hardware level, OpenAI can prevent the competition from increasing their reach because of the lack of hardware.
[0]: For me this is really an important part of working with Claude: the model improves with time but stays consistent. Its "personality", or whatever you want to call it, has been really stable over the past versions, which allows a very smooth transition from version N to N+1.
Is anyone else deeply perturbed by the realization that a single unprofitable corporation can basically buy out the entire world's supply of computing hardware so nobody else can have it?
How did we get here? What went so wrong?
I don't see this working for Google though, since they make their own custom hardware in the form of the TPUs. Unless those designs include components that are also susceptible?
That was why OpenAI went after the wafers, not the finished products. By buying up the supply of the raw materials they bottleneck everybody, even unrelated fields. It's the kind of move that requires a true asshole to pull off, knowing it will give your company an advantage but screw up life for literally billions of people at the same time.
TPUs use HBM, which are impacted.
Even their TPU based systems need RAM.
Still susceptible, TPUs need DRAM dies just as much as anything else that needs to process data. I think they use some form of HBM, so they basically have to compete alongside the DDR supply chain.
Could this generate pressure to produce less memory hungry models?
There has always been pressure to do so, but there are fundamental bottlenecks in performance when it comes to model size.
What I can think of is that there may be a push toward training for exclusively search-based rewards so that the model isn't required to compress a large proportion of the internet into their weights. But this is likely to be much slower and come with initial performance costs that frontier model developers will not want to incur.
I wonder if this maintains the natural language capabilities, which are what make LLMs magic to me. There is probably some middle ground, but not having to know what expressions or idiomatic speech an LLM will understand is really powerful from a user experience point of view.
Yeah that was my unspoken assumption. The pressure here results in an entirely different approach or model architecture.
If openAI is spending $500B then someone can get ahead by spending $1B which improves the model by >0.2%
I bet there's a group or three that could improve results a lot more than 0.2% with $1B.
Or maybe models that are much more task-focused? Like models that are trained on just math & coding?
> so that the model isn't required to compress a large proportion of the internet into their weights.
The knowledge compressed into an LLM is a byproduct of training, not a goal. Training on internet data teaches the model to talk at all. The knowledge and ability to speak are intertwined.
> exclusively search-based rewards so that the model isn't required to compress a large proportion of the internet into their weights.
That just gave me an idea! I wonder how useful (and for what) a model would be if it was trained using a two-phase approach:
1) Put the training data through an embedding model to create a giant vector index of the entire Internet.
2) Train a transformer LLM, but instead of only utilising its weights, it can also do lookups against the index.
It's like an MoE where one (or more) of the experts is a fuzzy Google search.
The best thing is that adding up-to-date knowledge won’t require retraining the entire model!
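For what it's worth, the inference-time half of this is basically retrieval-augmented generation. A toy sketch of the "fuzzy search expert" (everything here is invented for illustration: the embed() stand-in, the tiny index, the documents; a real system would use a proper embedding model and an ANN index):

    import hashlib
    import numpy as np

    docs = ["DDR5 launched in 2021",
            "HBM stacks DRAM dies vertically",
            "The sky is blue"]

    def embed(text, dim=8):
        # Stand-in for a real embedding model: a repeatable pseudo-random vector per string.
        seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
        return np.random.default_rng(seed).normal(size=dim)

    index = np.stack([embed(d) for d in docs])   # phase 1: build the vector index offline

    def retrieve(query, k=2):
        q = embed(query)
        scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
        return [docs[i] for i in np.argsort(-scores)[:k]]

    # Phase 2 idea: feed the retrieved text to the model as context, so the weights
    # don't have to memorize it. (With a real embedding model the hits would
    # actually be relevant; here they're just structural placeholders.)
    print(retrieve("When did DDR5 launch?"))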
Of course and then watch those companies reined in.
> By creating a choke point at the hardware level, OpenAI can prevent the competition from increasing their reach because of the lack of hardware
I already hate OpenAI, you don't have to convince me
Sure, but if the price is being inflated by inflated demand, then the suppliers will just build more factories until they hit a new, higher optimal production level, and prices will come back down, and eventually process improvements will lead to price-per-GB resuming its overall downtrend.
Micron has said they're not scaling up production. Presumably they're afraid of being left holding the bag when the bubble does pop
Why are they building a foundry in Idaho?
I mean it says on the page
>help ensure U.S. leadership in memory development and manufacturing, underpinning a national supply chain and R&D ecosystem.
It's more political than supply based
Future demand aka DDR6.
The 2027 timeline for the fab is when DDR6 is due to hit market.
Not just Micron, SK Hynix has made similar statements (unfortunately I can only find sources in Korean).
DRAM manufacturers got burned multiple times in the past scaling up production during a price bubble, and it appears they've learned their lesson (to the detriment of the rest of us).
Hedging is understandable. But what I don't understand is why they didn't hedge by keeping Crucial around but more dormant (higher prices, less SKUs, etc)
The theory I've heard is built on the fact that China (CXMT) is starting to properly get into DRAM manufacturing - Micron might expect that to swamp the low end of the market, leaving Crucial unprofitable regardless, so they might as well throw in the towel now and make as much money as possible from AI/datacenter (which has bigger margins) while they can.
But yeah even if that's true I don't know why they wouldn't hedge their bets a bit.
Memory fabs take billions of dollars and years to build, and the memory business is a tough one where losses are common, so no such relief is in sight.
With a bit of luck OpenAI collapses under its own weight sooner than later, otherwise we're screwed for several years.
Chip factories need years of lead time, and manufacturers might be hesitant to take on new debt in a massive bubble that might pop before they ever see any returns.
Please explain to me like I am five: Why does OpenAI need so much RAM?
2024 production was (according to openai/chatgpt) 120 billion gigabytes. With 8 billion humans that's about 15 GB per person.
What they need is not so much memory but memory bandwidth.
For training, their models need a certain amount of memory to store the parameters, and this memory is touched for every example of every iteration. Big models have ~10^12 (>1T) parameters, and with typical values of 10^3 examples per batch and 10^6 iterations, they need ~10^21 memory accesses per run. And they want to do multiple runs.
DDR5 RAM bandwidth is ~100 GB/s = 10^11 B/s; graphics RAM (HBM) is ~1 TB/s = 10^12 B/s. By buying the wafers they get to choose which types of memory they get.
10^21 / 10^12 = 10^9 s ≈ 30 years of memory access just to update the model weights (you also need to add a factor of 10^1-10^3 to account for the memory accesses needed for the model computation).
But the good news is that it parallelizes extremely well. If you replicate your 1T parameters 10^3 times, your run time is brought down to ~10^6 s ≈ 12 days. But then you need 10^3 * 10^12 = 10^15 bytes of RAM per run for weight updates and ~10^18 for computation (your 120 billion gigabytes is ~10^20, so not so far off).
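Putting those orders of magnitude into a quick script (same rough assumptions as above, nothing measured):

    params             = 1e12   # ~1T parameters
    examples_per_batch = 1e3
    iterations         = 1e6
    accesses = params * examples_per_batch * iterations   # ~1e21 weight touches per run

    hbm_bandwidth = 1e12                        # ~1 TB/s for one HBM stack
    seconds_single = accesses / hbm_bandwidth   # ~1e9 s
    print(seconds_single / 3.15e7)              # ~32 years on one stack

    replicas = 1e3
    print(seconds_single / replicas / 86400)    # ~12 days across 1,000 replicas
    # Replicating the weights 1e3 times also means ~1e3 * 1e12 = 1e15 parameter copies
    # to keep in memory, before counting activations and optimizer state.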
Are all these memory accesses technically required? No, if you use other algorithms, but more compute and memory is better if money is not a problem.
Is it strategically good to deprive your competitors of access to memory? In a very short-sighted way, yes.
It's a textbook cornering of the computing market to prevent the emergence of local models, because customers won't be able to buy the minimal RAM necessary to run the models locally, even just the inference part (not the training). Basically a war on people where little Timmy won't be able to get a RAM stick to play computer games at Xmas.
Thanks - but this seems like fairly extreme speculation.
> if money is not a problem.
Money is a problem, even for them.
large language models are large and must be loaded into memory to train or to use for inference if we want to keep them fast. older models like gpt3 have around 175 billion parameters. at float32 that comes out to something like 700GB of memory. newer models are even larger. and openai wants to run them as consumer web services.
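Quick sanity check on that 700GB figure (weights only, float32; activations and KV cache come on top):

    params = 175e9            # GPT-3-class parameter count from the comment above
    bytes_per_param = 4       # float32
    print(params * bytes_per_param / 1e9, "GB")   # 700.0 GB
    # fp16/bf16 halves this and 8-bit quantization quarters it, but serving at scale
    # still means many replicas of whatever the per-copy footprint is.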
I mean, I know that much. The numbers still don't make sense to me. How is my internal model this wrong?
For one, if this was about inference, wouldn't the bottleneck be the GPU computation part?
This "memory shortage" is not about AI companies needing main memory (which you plug into mainboards), but manufacturers are shifting their production capacities to other types of memory that will go onto GPUs. That brings supply for other memory products down, increasing their market price.
Concurrency?
Suppose some parallelized, distributed task requires 700GB of memory (I don't know if it does or does not) per node to accomplish, and that speed is a concern.
A singular pile of memory that is 700GB is insufficient not because it lacks capacity, but instead because it lacks scalability. That pile is only enough for 1 node.
If more nodes were added to increase speed but they all used that same single 700GB pile, then RAM bandwidth (and latency) gets in the way.
The conspiracy theory (which, to be clear, may be correct) is that they don't actually need so much RAM, but they know they and all their competitors do still need quite a bit of RAM. By buying up all the memory supply they can, for a while, keep everyone else from being able to add compute capacity/grow their business/compete.
This became very clear with the outrage, rather than excitement, of forcing users to upgrade to ChatGPT-5 over 4o.
Think the article should also mention how OpenAI is likely responsible for it. Good article I found from another thread here yesterday: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...
yes. on the moore's law is dead podcast they were talking about rumors where some 'AI enterprise company's representatives' were trying to buy memory in bulk from brick and mortar stores. in some cases openai was mentioned. crazy if true. also interesting considering none of those would be ECC certified, like what you would opt for in a commercial server.
Perhaps we'll have to start optimizing software for performance and RAM usage again.
I look at MS Teams currently using 1.5GB of RAM doing nothing.
Likewise, Slack. Currently at 2.3GB.
I'm very curious if this AI wave will actually lead to more native apps being made, since the barrier to doing so will be lower.
RIP electron apps and PWAs. Need to go native, as chromium based stuff is so memory hungry. PWAs on Safari use way less memory, but PWA support in Safari is not great.
PCIe 5.0 NVMe drives used as page cache can help a lot.
now that's a word i haven't heard in a while...optimising
I truly hate how bloated and inefficient MS/Windows is.
My hope is that with the growing adoption of Linux that MS takes note...
That's a very indirect way of enjoying the benefits of Linux.
So you’ve hated Windows for the last 30 years?
Can we get web sites to optimize as well? I use a slower laptop and a lot of sites are terrible. My old SPARCstation (40MHz) had a better web experience in 1997, because back then people cared about this.
Anyone want to start a fab with me? We can buy an ASML machine and figure out the rest as we go. Toronto area btw
A dozen or so well-resourced tech titans in China are no doubt asking themselves this same question right now.
Of course, it takes quite some time for a fab to go from an idea to mass production. Even in China. Expect prices to drop 2-3 years from now when all the new capacity comes online?
My napkin math:
According to my research, these machines can etch around 150 wafers per hour and each wafer can fit around 50 top-of-the-line GPUs. This means we can produce around 7500 AI chips per hour. Sell them for $1k a piece. That's $7.5 million per hour in revenue. Run the thing for 3 days and we recover costs.
I'm sure there's more involved but that sounds like a pretty good ROI to me.
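Taking those numbers at face value (which, as the replies point out, assumes 100% yield, single-pass patterning, no packaging or test, and no facility costs):

    wafers_per_hour = 150
    dies_per_wafer  = 50
    price_per_die   = 1_000
    revenue_per_hour = wafers_per_hour * dies_per_wafer * price_per_die
    print(revenue_per_hour)                   # 7,500,000 -> $7.5M/hour

    tool_cost = 400e6                         # rough EUV tool price mentioned downthread
    print(tool_cost / revenue_per_hour / 24)  # ~2.2 days to pay off the machine alone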
What about the $10b to build the facility (including clean air/water/chemicals/etc)?
Rent a warehouse.
It would be cheaper to bulldoze the warehouse and start over.
A photolithography machine doesn't etch anything (well, some EUV machines do it as an unwanted side effect because of plasma generation), it just patterns some resist material. Etching is happening elsewhere. Also, keep in mind, you'll need to do multiple passes through a photolithography machine to pattern different steps of the process - it's not a single pass thing.
The catch is, if you started today with plenty of money (billions of dollars!) and could hire the right experts as you need them (this is a big if!), there would still be a couple of years between today and producing 150 wafers per hour. So the question isn't what the math looks like today, it is what the math looks like in 2 years - and if you could answer that, why didn't you start two years ago so you could get the current prices?
that's 100% yield which ain't happening
Not with that attitude.
My man.
What would you expect yield to be?
With no prior experience? 0%. Those machines are not just like printers :-)
We'll have to gain some experience then :)
Sure - once you have dozens of engineers and 5 years under your belt you'll be good to go!
This will get you started: https://youtu.be/B2482h_TNwg
Keep in mind that every wafer makes multiple trips around the fab, and on each trip it visits multiple machines. Broadly, one trip lays down one layer, and you may need 80-100 layers (although I guess DRAM will be fewer). Each layer must be aligned to nanometer precision with previous layers, otherwise the wafer is junk.
Then as others have said, once you finish the wafer, you still need to slice it, test the dies, and then package them.
Plus all the other stuff....
You'll need billions in investment, not millions - good luck!
Especially when the plan is to just run them in a random rented commercial warehouse.
I drive by a large fab most days of the week. A few breweries I like are down the street from a few small boutique fabs. I got to play with some experimental fab equipment in college. These aren't just some quickly thrown together spaces in any random warehouse.
And it's also ignoring the wafer manufacturing process, and having the right supply chain to receive and handle these ultra-clean discs without introducing lots of gunk into your space.
Sounds like the kind of question ChatGPT would be good at answering...
At that point, it'll be the opposite problem as more capacity than demand will be available. These new fabs won't be able to pay for themselves. Every tic receives a tok.
it's just a bunch of melted sand. How hard can it be?
I think it would be more like 5-7 years from now if they started breaking ground on new fabs today.
China cannot buy ASML machines. All advanced semiconductor manufacturing in China is done with stockpiled ASML machines from before the ban.
That restriction is only for the most advanced systems. According to ASML's Q3 2025 filing, 42% of all system sales went to China.
SK Hynix also has a significant memory manufacturing presence in China, about 40% of the company's entire DRAM capacity.
Would you really need ASML machines to do DDR5 RAM? Honest question, but I figured there was competition for the non-bleeding edge - perhaps naively so.
Yes. You need 16nm or better for DDR5.
As someone who knows next to nothing about this space, why can China not build their own machines? Is ASML the only company making those machines? If so, why? Is it a matter of patents, or is the knowledge required for this so specialized only they've built it up?
They can - if they are willing to invest a lot of money over several years. The US got nuclear bombs in a few years during WWII with this thinking, and China (or anyone else) could too. This problem might be harder than a bomb, but the point remains: all it takes is a willingness to invest.
Of course the problem is we don't see what would be missed by doing this investment. If you put extra people into solving this problem that means less people curing cancer or whatever. (China has a lot of people, but not unlimited)
Yes, ASML is the only company making these machines. And it's both: they own thousands of patents, and they are also the only ones with the institutional knowledge required to build them anyway.
I thought the Fermi paradox was about nukes; I increasingly think it's about chips.
As someone with no skills in the space, no money, and who lives near Ottawa: I'd love to help start a fab in Ontario.
Right on, partner. I think there's a ton of demand for it tbh
I'll do the engineering so we're good on that front. Just need investors.
I think I have a few dollars left on a Starbucks gift card. I'm in!
Hear me out- artisanal DRAM.
Bathtub semiconductors! Shameless plug: https://wiki.fa-fo.de/orga:donate
I'm listening.
But what if it's a bubble driven by speculation?
It wouldn't pay off.
Starting a futures exchange on RAM chips, on the other hand...
I hope you have very deep pockets. But I'm cheering you on from the sidelines.
Just need a half billion in upfront investment. And thank you for the support :)
I'll happily invest $100 into this project in exchange for stock. let me know.
So, playing the Mega Powerball are you?
I made a website: https://ontario-chips.vercel.app/
I suppose it would help if I could read the whole page: I cannot see the left few characters on Firefox on Android. What did you make this with?
Note that fixing the site won't increase my chances of donating, I'm from the ASML country ;)
That's annoying... I made it with Next.js and Tailwind CSS tho. Hosted on Vercel.
wonder which one will find a winner faster...
It's a toss up.
that is an understatement
Only if you put up the 10 billion dollars.
Machines are less than 400 million.
You're just talking about a lithography machine. Patterning is one step out of thousands in a modern process (albeit an important one). There's plenty more stuff needed for a production line, this isn't a 3D printer but for chips. And that's just for the FEOL stuff, then you still need to do BEOL :). And packaging. And testing (accelerated/environmental, too). And failure analysis. And...
Also, you know, there's a whole process you'll need to develop. So prepare to be not making money (but spending tons of it on running the lines) until you have a well tested PDK.
> 3D printer but for chips
how about a farm of electron microscopes? these should work
Canon has been working on an alternative to EUV lithography called nanoimprint lithography. It would be a bit closer to the idea of having an inkjet printer make the masks to etch the wafers. It hasn't been proven in scale and there's a lot of thinking this won't really be useful, but it's neat to see and maybe the detractors are wrong.
https://global.canon/en/technology/nil-2023.html
https://newsletter.semianalysis.com/p/nanoimprint-lithograph...
They'll still probably require a good bit of operator and designer knowledge to work around whatever rough edges exist in the technology to keep yields high, assuming it works. It's still not a "plug it in, feed it blank wafers, press PRINT, and out comes finished chips!" kind of machine some here seem to think exist.
And the cost of the people to run those machines, and the factories that are required to run the machines?
I'll do it for free. And I'm sure we could rent a facility.
Dont forget RAM.
Sure, we can work on bringing TinyTapeout to a modern fab.
I wonder if these RAM shortages are going to cause the Steam Machine to be dead on arrival. Valve is probably not a big enough player to have secured production guarantees like Sony or Nintendo would have. If they try to launch with a price tag over $750, they're probably not going to sell a lot.
Yeah, I think (sadly) this kills the Steam Machine in the short term if the competition is current consoles.
At least until the supply contracts Sony & Microsoft have signed come up for renewal, at which point they’re going to be getting the short end of the RAM stick too.
In the short term the RAM shortage is going to kill homebrew PC building & small PC builders stone dead - prebuilts from the larger suppliers will be able to outcompete them on price so much that it simply won’t make any sense to buy from anyone except HP, Dell etc etc. Again, this applies only until the supply contracts those big PC firms have signed run out, or possibly only until their suppliers find they can’t source DDR5 ram chips for love nor money, because the fabs are only making HBM chips & so they have to break the contracts themselves.
It’s going to get bumpy.
> At least until the supply contracts Sony & Microsoft have signed come up for renewal, at which point they’re going to be getting the short end of the RAM stick too.
Allegedly Sony has an agreement for a number of years, but Microsoft does not: https://thegamepost.com/leaker-xbox-series-prices-increase-r...
Eesh.
The fight over RAM supply is going to upend a lot of product markets. Just random happenstance over whether a company decided to lock in supply for a couple of years is going to make or break individual products.
If anyone has hidden cash reserves that could buy out even Apple, it would probably be Valve.
Lol, wacky reality if they say "hey we had spare cash so we bought out Micron to get DDR5 for our gaming systems"
To save my fellow ignorami some math, Valve is estimated to be valued at approximately 3.2% of Micron, or 0.2% of Apple. :)
Value =/= cash reserves. Valve has run VERY lean for quite some time.
Sure, but valuation should always exceed cash reserves. It's very odd when it does not. I think I recall that SUNW (by then JAVA, probably) was at one point valued below cash, prior to the ORCL acquisition.
If the argument is that Valve's secrecy is so good that they have (substantially more than) 30-500x cash stashed away in excess of their public valuation estimates, then perhaps I underestimate Valve's secrecy!
Or it was a humorous exaggeration which I lacked the context to realize. I am quite ignorant of the games industry.
> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
> So they're shutting down their consumer memory lines, and devoting all production to AI.
Okay this was the missing piece for me. I was wondering why AI demand, which should be mostly HBM, would have such an impact on DDR prices, which I’m quite sure are produced on separate lines. I’d appreciate a citation so I could read more.
It's kind of a weird framing. Of course RAM companies are going to sell their limited supply to the highest bidder!
Just like the GPUs.
NVIDIA started allocating most of the wafer capacity for 50k GPU chips. They are a business, its a logical choice.
Red chip supply problems in your factory are usually caused by insufficient plastic bars, which is usually caused by oil production backing up because you're not consuming your heavy oil and/or petroleum fast enough.
Crack heavy oil to light, and turn excess petroleum into solid fuel. As a further refinement, you can put these latter conversions behind pumps, and use the circuit network to only turn the pumps on when the tank storage of the respective reagent is higher than ~80%.
hth, glhf
Yup, and stockpiling solid fuel is not a waste because you need it for rocket fuel later on. Just add more chests.
You don't happen to play Foxhole [0] do you?
Because if not, the logistics Collies in SOL could make good use of a person with your talents. :-)
Hadn't heard of it before..
> Foxhole is a massively multiplayer game where
nope, not my cup of tea, but thanks for the "if you like this you might like this" rec :)
Can someone explain why OpenAI is buying DDR5 RAM specifically? I thought LLMs typically ran on GPUs with specialised VRAM, not on main system memory. Have they figured out how to scale using regular RAM?
They're not. They are buying wafers / production capacity to make HBM so there is less DDR5 supply.
OK, fair enough, but what are OpenAI doing buying production capacity rather than, say, paying NVIDIA to do it? OpenAI aren’t the ones making the hardware?
Just because Nvidia happily sells people discrete GPU's, DGX systems, etc., doesn't mean they would turn down a company like OpenAI paying them $$$ for just the packaged chips and the technical documentation to build their own PCBs; or, let OpenAI provide their own DRAM supply for production on an existing line.
If you have a potentially multi-billion dollar contract, most businesses will do things outside of their standard product offerings to take in that revenue.
> doesn't mean they would turn down a company like OpenAI paying them $$$ for just the packaged chips and the technical documentation to build their own PCBs
FWIW, this was the standard state of affairs of the GPU market for a long while. nVidia and AMD sold the chips they paid someone to produce to integrators like EVGA, PNY, MSI, ZOTAC, GIGABYTE, etc. Cards sold under the AMD or nVidia name directly were usually partnered with one of these companies to actually build the board, place the RAM, design the cooling, etc. From a big picture perspective, it's a pretty recent thing for nVidia to only really deliver finished boards.
On top of this, OpenAI/Sam Altman have been pretty open about making their own AI chips and what not. This might point to them getting closer to actually delivering on that (pure speculation) and wanting to ensure they have other needed supplies like RAM.
Got it, thank you.
Because they can provide the materials to NVIDIA for production and prevent Google, Anthropic, etc from having them.
> OpenAI aren’t the ones making the hardware?
how surprised would you be if they announced that they are?
They didn't buy DDR5 - they bought raw wafer capacity and a ton of it at that.
"dig into that pile of old projects you never finished instead of buying something new this year."
You don't need a new PC. Just use the old one.
I just bought some 30pin SIMMs to rehab an old computer. That market is fine.
I have a bag of SIMMs that I saved, no idea why, because I clearly wrote BAD on the mylar bag.
At the time I was messing around with the "badram" patch for Linux.
I wonder if Apple will budge. The margins on their RAM upgrades were so ludicrous before that they're probably still RAM-profitable even without raising their prices, but do they want to give up those fat margins?
I know contract prices are not set in stone. But if there’s one company that probably has their contract prices set for some time in the future, that company is Apple, so I don’t think they will be giving up their margins anytime soon.
RAM upgrades are such a minor, insignificant part of Apple's income - and play no part in plans for future expansion/stock growth.
They don't care. They'll pass the cost on to the consumers and not give it a second thought.
> I wonder if Apple will budge.
Perhaps I don't understand something so clarification would be helpful:
I was under the impression that Apple's RAM was on-die, and so baked in during chip manufacturing and not a 'stand alone' SKU that is grafted onto the die. So Apple does not go out to purchase third-party product, but rather self-makes it (via ASML) when the rest of the chip is made (CPU, GPU, I/O controller, etc).
Is this not the case?
Apple's RAM is on-package, not on-die. The memory is still a separate die which they buy from the same suppliers as everyone else.
https://upload.wikimedia.org/wikipedia/commons/d/df/Mac_Mini...
That whole square is the M1 package, Apple's custom die is under the heatspreader on the left, and the two blocks on the right are LPDDR packages stacked on top of the main package.
https://wccftech.com/apple-m2-ultra-soc-delidded-package-siz...
Scaled up, the M2 Ultra is the same deal just with two compute dies and 8 separate memory packages.
I'd like to believe that their pricing for RAM upgrades is like that so the base model can hit a low enough price. I don't believe they have the same margin on the base model compared to the base model + memory upgrade.
On one hand they are losing profit; on the other hand they are gaining market share. They will probably wait a short while to assess how much they are willing to sacrifice profit for market share.
I read online that Apple uses three different RAM suppliers supposedly? I wonder if Apple has the ability to just make their own RAM?
Apple doesn't own any foundries, so no. It's not trivial to spin up a DRAM foundry either. I do wonder if we'll see TSMC enter the market though. Maybe under pressure from Apple or nvidia...
There are no large scale pure play DRAM fabs that I’m aware of, so Apple is (more or less) buying from the same 3 companies as everyone else.
Apple doesn't own semiconductor fabs. They're not capable of making their own RAM.
I am fully expecting a 20%+ price bump on new mac hardware next year.
Not me. It’s wildly unusual for Apple to raise their prices on basically anything… in fact I'm not sure if it's ever happened. *
It’s been pointed out by others that price is part of Apple's marketing strategy. You can see that in the trash can Mac Pro, which logically should have gotten cheaper over the ridiculous six years it was on sale with near-unchanged specs. But the marketing message was, "we're selling a $3000 computer."
Those fat margins leave them with a nice buffer. Competing products will get more expensive; Apple's will sit still and look even better by comparison.
We are fortunate that Apple picked last year to make 16GB the new floor, though! And I don't think we're going to see base SSDs get any more generous for a very, very long time.
* okay I do remember that Macbook Airs could be had for $999 for a few years, that disappeared for a while, then came back
It’s 4D chess my dude, they were just training people to accept those super high ram prices. They saw this coming I tell you!
My understanding is that this is primarily hitting DDR5 RAM (or better). With prices so inflated, is there an argument to revert and downgrade systems to DDR4 RAM technology in many use cases (which is not so inflated)?
DDR4 shot up too. It was bad enough that instead of trying to put together a system with the AM4 m/b I already have, I just bought a Legion Go S.
It will be hit just as hard, they have stopped new DDR4 production to focus on DDR5 and HBM.
Linked in the article, DDR4 and LPDDR4 are also 2-4x more expensive now, forcing smaller manufacturers to raise prices or cancel some products entirely.
DDR4 manufacturing is mostly shut down, so if any real demand starts there, the prices will shoot up.
No, DDR4 is affected too. It's a simple question of production and demand, and the biggest memory manufacturers are all winding down their DDR4/DDR5 memory production for consumers (they still make some DDR5 for OEMs and servers).
DDR4 prices have gone up 4x in the last 3 months.
I think I paid like $500 ($1300 today) in 1989ish to upgrade from 2MB to 5MB of ram (had to remove 1MB to add 4MB)
I upgraded a $330 new HP laptop (it flexes like cardboard) from 8GB to 32GB in May. Cost back then: $44. Today, the same kit costs a ridiculous $180.
https://tomverbeure.github.io/2025/03/12/HP-Laptop-17-RAM-Up...
Not a bad time for the secondary market to be created. We keep buying everything new, when the old stuff works just as well. There is a ton of e-waste. The enthusiast market can benefit, while the enterprise market can just eat the cost.
Also, a great incentive to start writing efficient software. Does Chrome really need 5GB to run a few tabs?
> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.
I wouldn't ascribe that much intent. More simply, datacenter builders have bought up the entire supply (and likely future production for some time), hence the supply shortfall.
This is a very simple supply-and-demand situation, nothing nefarious about it.
That makes it sound like they are powerless, which is not the case. They don’t have to have their capacity fully bought out, they could choose to keep a proportion of capacity for maintaining the existing PC market, which they would do if they thought it would benefit them in the long term.
They’re not doing that, because it benefits them not to.
$20B, 5 years and you can have your own DDR5 fab to print money with.
jokes aside, if the AI demand actually materializes, somebody will look at the above calculation and say 'we're doing it in 12 months' with a completely straight face - incumbents' margin will be the upstart's opportunity.
> maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.
Always good advice.
anybody care to speculate on how long this is likely to last? is this a blip that will resolve itself in six months, or is this demand sustainable and we are talking years to build up new manufacturing facilities to meet demand?
Pure speculation, nobody can say for sure, but my guess is 2-3 years.
The article suggests that because the power and cooling are customized, it would take a ton of effort to run the new AI servers in a home environment, but I'm skeptical of that. Home-level power and cooling are not difficult these days. I think when the next generation of AI hardware comes out (in 3-5 years), there will be a large supply of used AI hardware that we'll probably be able to repurpose. Maybe we'll sell them as parts. It won't be plug-and-play at first, but companies will spring up to figure it out.
If not, what would these AI companies do with the huge supply of hardware they're going to want to get rid of? I think a secondary market is sure to appear.
A single server is 20 KW. A rack is 200 KW.
These are not the old CPU servers of yesterday.
I just gathered enough money to build my new PC. I'll even go to another country to pay less tax, and this spike hit me hard. I'll buy anyway because I don't believe it will slow down so soon. But yeah, for me it is a lot of money.
Me too. I had saved up to make a small server. I guess I'll have to wait up until 2027–2028 at this rate.
Buy used gear and rip out the guts???
I can't help but be the pessimist angle. RAM production will need to increase to supply AI data centers. When the AI bubble bursts (and I do believe it will), the whole computing supply chain, which has been built around it, will take a huge hit too. Excess production capacity.
Wonder what would happen if it really takes a dive. The impact on the SF tech scene will be brutal. Maybe I'll go escape on a sailboat for 3 years or something.
Anyway, tangential, but something I think about occasionally.
Prices are high because no one believes it's not a bubble. Nvidia's strategy has been to be careful with volume this whole time as well.
The thing is, it's also not a conventional-looking bubble: what we're seeing here is cashed-up companies ploughing money into the only thing in their core business they could find to do so with, rather than a lot of over-exuberant public trading and debt financing.
Hopefully Apple uses their volume as leverage to avoid getting affected by this for as long as possible. I can ride it out if they manage to.
Bullwhip effect on this will be funny. At least, we are in for some cheap ram in like… a dozen months or so.
I’m really hoping this doesn’t become another GPU situation where every year we think it’s going to get lower and it just stays the same or gets worse
Called it! About a year ago (or more?) I thought nVidia was overpriced and if AI was coming to PCs RAM would be important and it might be good to invest in DRAM makers. As usual I didn't do anything with my insight, and here we are. Micron has more than doubled since summer.
I wonder how much of that RAM is sitting in GPUs in warehouses waiting for datacenters to be built or powered?
The big 3 memory manufacturers (SK Hynix, Samsung, Micron) are essentially all moving upmarket. They have limited capacity and want to use it for high-margin HBM for GPUs and DDR5 for servers. At the same time, CXMT, Winbond, and Nanya are stepping in at the lower end of the market.
I don't think there is a conspiracy or price fixing going on here. Demand for high-profit-margin memory is insatiable (at least until 2027, maybe beyond), and by the time extra capacity comes online and the memory crunch eases, the minor memory players will have captured such a large part of the legacy/consumer market that it makes little sense for the big 3 to get involved anymore.
Add to that the scars from overbuilding capacity during previous memory super cycles and you end up with this perfect storm.
Companies are adamant about RAMming AI down our throats, it seems.
If the OpenAI Hodling Company buys and warehouses 40% of global memory production or 900,000 memory wafers (i.e. not yet turned into DDR5/DDR6 DIMMs) per month at price X in October 2025, leading to supply shortages and tripling of price, they have the option of later un-holding the warehoused memory wafers for a profit.
https://news.ycombinator.com/item?id=46142100#46143535
What's the economic value per warehoused and insured cubic inch of 900,000 memory wafers?
Had Samsung known SK Hynix was about to commit a similar chunk of supply, or vice versa, the pricing and terms would have likely been different. It's entirely conceivable they wouldn't have both agreed to supply such a substantial part of global supply if they had known more. But at the end of the day, OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that each company assumed the other wasn't giving up this much wafer volume simultaneously, in order to make a surgical strike on the global RAM supply chain.
Grok's response:
> As of late 2025, 900,000 finished 300 mm 3D NAND memory wafers (typical high-volume inventory for a major memory maker) are worth roughly $9 billion and occupy about 104–105 million cubic inches when properly warehoused in FOUPs. → Economic value ≈ $85–90 per warehoused cubic inch.
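Checking the quoted figure against its own numbers:

    wafer_value_usd = 9e9        # "roughly $9 billion"
    volume_cubic_in = 104.5e6    # midpoint of "104-105 million cubic inches"
    print(wafer_value_usd / volume_cubic_in)   # ~$86 per warehoused cubic inch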
Sounds like Silver Thursday all over again. I hope OpenAI ends up like the Hunt Brothers.
> To save the situation, a consortium of US banks provided a $1.1 billion line of credit to the brothers which allowed them to pay Bache which, in turn, survived the ordeal.
It seems once you amass a certain amount of wealth, you just get automatically bailed out from your mistakes
I know this is mostly paranoid thinking on my behalf, but it almost feels like this is a conscious effort to attempt to destroy "personal" computing.
I've been a huge advocate for local, open, generative AI as the best resistance to massive take-over by large corporations controlling all of this content creation. But even as it is (or "was" I should say), running decent models at home is prohibitively expensive for most people.
Micron has already decided to just eliminate the Crucial brand (as mentioned in the post). It feels like if this continues, once our nice home PCs start to break, we won't be able to repair them.
The extreme version of this is that even dumb terminals (which still require some ram) will be as expensive as laptops today. In this world, our entire computing experience is connecting a dumb terminal to a ChatGPT interface where the only way we can interact with anything is through "agents" and prompts.
In this world, OpenAI is not overvalued, and there is no bubble because the large LLM companies become computing.
But again, I think this is mostly a dystopian sci-fi fiction... but it does sit a bit too close to the realm of possible for my tastes.
I share your paranoia.
My kids use personal computing devices for school, but their primary platform (just like their friends) is locked-down phones. Combining that usage pattern with business incentives to lock users into walled gardens, I kind of worry we are backing into the destruction of personal computing.
Wouldn't the easy answer to this be increased efficiency of RAM usage?
RAM being plentiful and cheap led to a lot of software development being very RAM-unaware, allowing the inefficiencies of programs to be mostly obfuscated from the user. If RAM prices continue rising, the semi-apocalyptic consumer fiction you've spun here would require that developers not change their behaviors when it comes to the software they write. There will be an equilibrium in the market that still allows the entry of consumer PCs; it will just mean the devices people buy will have less available RAM than is typical. The demand will eventually match up to the change in supply, as is typical of supply/demand issues, rather than continuously rising toward an infinite horizon.
I believe that while centralized computing excels at specific tasks like consumer storage, it cannot compete with the unmatched diversity and unique intrinsic benefits of personal computing. Kindle cannot replace all e-readers. Even Apple’s closed ecosystem cannot permit it to replace macOS with iPadOS. These are not preferences but constraints of reality.
The goal shouldn’t be to eliminate one side or the other, but to bridge the gap separating them. Let vscode.dev handle the most common cases, but preserve vscode.exe for the uncommon yet critical ones.
The first "proper" "modern" computer I had, initially came with 8 megabytes of RAM.
It's not a lot, but it's enough for a dumb terminal.
That's not disproving OP's comment; OpenAI is, in my opinion, making it untenable for a regular Joe to build a PC capable of running a local LLM. It's an attack on all our wallets.
Why do you need an LLM running locally so badly that the inflated RAM prices are an attack on your wallet? One can always opt not to play this losing game.
I remember when the crypto miners rented a plane to deliver their precious GPUs.
It’s not a conspiracy, it’s just typical dumb short term business decisions amplified and enabled by a cartel supply market.
If Crucial screws up by closing their consumer business they won’t feel any pain from it because the idea of new competitors entering the space is basically impossible.
I don’t think you need a conspiracy theory to explain this. This is simply capitalism, a system that seems less and less like the way forward. I’m not against markets, but I believe most countries need more regulations targeted at the biggest companies and richest people. We need stronger welfare states, smaller income gaps and more democracy. But most countries seem to vote in the absolute opposite direction.
The end goal of capitalism is the same as the end goal of monopoly.
1 person has all the money and all the power and everyone else is bankrupt forever and sad.
I think we kiss of deathed the article haha. Here's an archive https://archive.is/6QD8c
Is this a shortage of every type of RAM simultaneously?
Every type of DRAM is ultimately made at the same fabs, so if one type is suddenly in high demand then the supply of everything else is going to suffer.
Wait, really? For CPUs each generation needs basically a whole new fab, I thought… are they more able to incrementally upgrade RAM fabs somehow?
The old equipment is mothballed because China is the only buyer and nobody wants to do anything that the Trump admin will at some point decide is tariff-worthy. So it all sits.
Essentially yes, not necessarily equivalently but every type has increased substantially
> But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.
Let's be honest here - the projects I'm going to do in 2026, I bought the parts for those back in 2024. But this is definitely going to make me put off some projects that I might have finally gotten around to in 2028.
I'm way ahead of all of you, I'm hoarding DDR2.
time to stop using python boys, it's zig from here on out
1. Buy up 40% of global fab capacity
2. Resell wafers at huge markup to competitors
3. Profit
Very happy to have bought a MBP M4 with 64 gb of Ram last year.
And sharing the RAM between your CPU/GPU/NPU instead of using separate memories.
I am very excited for a few years when the bubble bursts and all this hardware is on the market for cheap like back in the early to mid 2000's after that bubble burst and you had tons of old servers available for homelabs. I can't wait to fill a room with 50kW of bulk GPUs on a pallet and run some cool shit.
Only a matter of time before supply catches up and then likely overshoots (maybe combined with AI / datacenter bubble popping), and RAM becomes dirt cheap. Sucks for those who need it now though.
I grabbed a Framework desktop with 128GB due to this. I can't imagine they can keep the price down for the next batches. If you bought 128GB of RAM with specs close to what it uses, just that would be 1200 EUR at retail (and retailers are obviously taking advantage).
Every shortage is followed by a glut. Wait and see for RAM prices to go way down. This will happen because RAM makers are racing to produce units to reap profits from the higher price. That overproduction will cause prices to crash.
They aren't overproducing consumer modules, they're actively cutting production of those. They're producing datacenter/AI specific form factors that won't be compatible with consumer hardware.
somebody will step up to pick up the free money if this continues.
Pricing compute out of the average person's budget to prop up investment in data centers and stocks, and ultimately to control agency.
If an RTX 5000 series price topped out at historical prices no one would need hosted AI
Then it came to be that models were on a path to run well enough loaded into RAM... uh oh
This is in line with ISPs long ago banning running personal services and the long held desire to sell dumb thin clients that must work with a central service
Web developers fell for the confidence games of the old elders hook, line, and sinker. Nothing but the insane ego and vanity of some tech oligarchs is driving this. They cannot appear weak. Vain aura farming, projection of strength.
> Micron's killing the Crucial brand of RAM and storage devices completely,
More rot economy. Customers are such a drag. Let's just sell to other companies in billion-dollar deals all at once. These AI companies have bottomless wallets. No one has thought of this before; we will totally get rich.
"I don't want to make a little bit of money every day. I want to make a fuck ton of money all at once."
32GB should be more than enough.
You can go 16GB if you go native and throw some assembly in the mix. Use old school scripting languages. Debloat browsers.
It has been long delayed.
16GB is more than fine if you're not doing high-end gaming, or heavy production workloads. No need for debloating.
But it doesn't matter either way, because both 16 and 32GB have what, doubled, tripled? It's nuts. Even if you say "just buy less memory", now is a horrible time to be building a system.
I found web browser tabs eating too much memory when you only have 16GB
Use adblock, stop visiting nasty websites and open less tabs. Problem is easily solved.
That's strange. How many tabs are we talking? Are they running something intense or just ordinary webpages?
“640K ought to be enough for anybody.” - Bill Gates
there's no reliable evidence he ever uttered that phrase
Doesn't matter. It's folklore.
Hey hey, 16K, what does that get you today?
I bought a motherboard to build a DIY NAS... takes DDR5 SO-DIMM RAM and only 16gb costs more than double the motherboard (which includes an intel processor)
16GB is more than enough on Linux, but Win11 eats resources like crazy
Sure but 32GB DDR5 ram has just jumped from ~$100 to $300+ in a flash. The 2x16GB I have in my recent build went from $105 for the pair to $250 each. $500 total!
SSDs are also up. Hell, I am seeing refurbished enterprise HDDs at 2x right now. It’s sharp increases basically across the board except for CPUs/GPUs.
Every PC build basically just cranked up $400-$600 easily, and that’s not accounting for the impact of inflation over the last few years weakening everyone’s wallets. The $1600 machine I spec’d out for my buddy 5 weeks ago to buy parts for this Black Friday now runs $2k even.
I'm using 1GB with TWM, Dillo, TUI tools, XTerm, MuPDF and the like. As most tools are small, from https://t3x.org, https://luxferre.top and https://howerj.github.io/subleq.htm with EForth (and I try to use cparser instead of clang), my requirements are really tiny.
You can achieve a lot by learning Klong and reading the intro on statistics. And xargs to parallelize stuff. Oh, and vidir to edit directories at crazy speeds with any editor, even nano or gedit if you like them.
panem et circenses
But what will happen when people are priced out from the circus?
Ha! Maybe Javascript developers will finally drop memory usage! You need to display the multiplication table? Please allocate 1GB of RAM. Oh, you want alternate row coloring? Here is another 100MB of CSS to do that.
edit: this is a joke
I do sometimes reflect on how 64MB of memory was enough to browse the Web with two or three tabs open, and (if running BeOS) even play MP3s at the same time with no stutters. 128MB felt luxurious at that time, it was like having no (memory-imposed) limits on personal computing tasks at all.
Now you can't even fit a browser doing nothing into that memory...
HN works under Dillo and you don't need JS at all. If some site needs JS, don't waste your time. Use mpv+yt-dlp where possible.
Haha, the number of downvotes on your very true comment just proves how many web developers there are on HN.
Its funny because the blogpost author makes the same joke
> The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago—that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.
So it's the Bitcoin craze all over again. Sigh. The bubble will eventually collapse, it has to - but the markets can stay irrational longer than you can stay solvent... or, to use a more appropriate comparison, have a working computer.
As for myself? I hope that once this bubble collapses, we see actual punishments again. Too-large-to-fail companies broken up, people getting prosecuted for the wash trading masquerading as "legitimate investments" throughout the entire bubble (which looks more like the family tree of the infamously incestuous Habsburgs), greedy executives jailed or, at least where national security is impacted due to chip shortages, permanently gotten rid of. I'm sick and tired of large companies being able to just get away with gobbling up everything, killing off the economy at large; they are not just parasites - they are a cancer, killing its host society.
I mean what's the big deal, can't we just download more ram
This is ultimately the first stage of human economic obsolescence and extinction.
This https://cdna.pcpartpicker.com/static/forever/images/trends/2... will happen to every class of thing (once it hits energy, everything is downstream of energy).
If your argument is that value produced per-cpu will increase so significantly that the value produced by AGI/ASI per unit cost exceeds what humans can produce for their upkeep in food and shelter, then yes that seems to be one of the significant risks long term if governments don't intervene.
If the argument is that prices will skyrocket simply because of long-term AI demand, I think that ignores the fact that manufacturing vastly more products will stabilize prices up to the point that raw materials start to become significantly more expensive, and is strongly incentivized over the ~10-year timeframe for IC manufacturers.
>the value produced by AGI/ASI per unit cost exceeds what humans can produce for their upkeep in food and shelter
The value of AGI/ASI is not only defined by its practical use; it is also bounded by the purchasing power of potential consumers.
If humans aren’t worth paying, those humans won’t be paying anyone either. No business can function without customers, no matter how good the product.
Precisely the place where government intervention is required to distribute wealth equitably.
I'm no economist, but if (when?) the AI bubble bursts and demand collapses at the price point memory and other related components are at, wouldn't price recover?
not trying to argue, just curious.
I'm no economist either, but I imagine the manufacturing processes for the two types of RAM are too different for supply to quickly bounce back.
If a theoretical AI bubble bursts, sure. However, the largest capitalized companies in the world and all the smartest people able to do cutting-edge AI research are betting otherwise. This is also what the start of a takeoff looks like.
Why should we believe in another apocalypse prediction?
One of them has to be right, eventually!
Because the collapse of complex societies is real - https://github.com/danielmkarlsson/library/blob/master/Josep...
Unbounded increases in complexity lead to diminishing returns on energy investment and increased system fragility, which both contribute to an increased likelihood of collapse, as solutions to old problems generate new problems faster than new solutions can be created, since energy that should be dedicated to new solutions is needed to maintain the layers of complexity generated by previous solutions.