$1T in tech stocks sold off as market grows skeptical of AI (gizmodo.com)
149 points by pabs3 13 hours ago | 194 comments
  • uyzstvqs12 hours ago

    Original: https://www.ft.com/content/8c6e3c18-c5a0-4f60-bac4-fcdab6328...

    Gizmodo is just regurgitating this Financial Times article into a poor-quality opinion piece. Journalism is preferable to someone ranting from an armchair, IMO.

    • jahsome11 hours ago |parent

      Journalism is too boring. And expensive.

  • nba456_13 hours ago

    NASDAQ hasn't been this low since 2 weeks ago!

    • JKCalhoun13 hours ago |parent

      When was the last $1T tech sell-off?

      That's what people are grousing about.

      • chmod775 12 hours ago |parent

        Never. This wasn't such a sell-off either.

        What actually happened is that market cap declined by that amount, where market cap of course is just latest share price multiplied by shares outstanding.
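        To make the mechanics concrete, here is a toy sketch (share count and prices are made-up round numbers, not any company's actual figures):

```python
# Market cap is just share price times shares outstanding, so a modest
# per-share move mechanically produces a headline-sized "loss" without
# anyone selling anywhere near that amount of stock.
shares_outstanding = 25e9     # hypothetical share count
price_yesterday = 200.00      # hypothetical close
price_today = 190.00          # hypothetical close after the dip

cap_change = shares_outstanding * (price_today - price_yesterday)
print(f"market cap change: {cap_change / 1e9:+.0f}B")  # -> -250B
```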

        Nobody should be surprised or care that this number fluctuates, which is why certain people try really hard to make it seem more interesting than it really is. Otherwise they'd be out of a job.

        There is really nothing dumber than finance news.

        • spwa4 12 hours ago |parent

          Also known as mark-to-market. Especially with the circular deals done in stock at fictional valuations (i.e. prices the shares never actually traded at), which are now all the rage.

          Reminds me of Enron, really.

        • expedition32 12 hours ago |parent

          The casino is rigged anyway. While people are standing outside food banks under god emperor Trump the billionaires are making more money than ever.

          We will never see another 1929 crash in which rich people had to sell off their cars.

          • mensetmanusman11 hours ago |parent

            https://link.springer.com/article/10.1007/s11266-018-0039-2

            Do you have this data out to 2025?

            • ben_w11 hours ago |parent

              Why is food bank use in Vancouver, Canada relevant to a complaint about Trump?

              Sure, Trump wants to add Canada to his kingdom, but unless something wild happened while I was out shopping, still a different country.

      • Jabbles12 hours ago |parent

        Well according to the FT article that this article is based on:

        a) it's $800B

        b) this is the largest such selloff since April

        https://archive.ph/bzr5G

      • pllbnk6 hours ago |parent

        With the pace of inflation we have been witnessing over the past years $1T has become unimpressive. Let's talk percentages. And if somebody wants to talk about absolute numbers, they should talk not only about negatives, but positives too, as in how much the stock market has gained before losing that $1T.
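        The percentage framing is easy to sketch (all figures hypothetical):

```python
# An absolute "$1T lost" headline versus the same move in percentage terms,
# and versus the gain that preceded it.
total_value = 60e12      # hypothetical aggregate tech market cap today
drop = 1e12              # the headline number
prior_value = 45e12      # hypothetical value before the recent run-up

drop_pct = drop / total_value * 100
gain_pct = (total_value - prior_value) / prior_value * 100
print(f"drop: {drop_pct:.1f}%  prior gain: {gain_pct:.1f}%")
```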

      • epolanski13 hours ago |parent

        Who cares? Volatility is uninteresting.

        • NaomiLehman11 hours ago |parent

          Volatility is where the big money is made, i.e. retail gets scalped.

          • fullshark11 hours ago |parent

            Is this narrative that retail is constantly panicking and selling every time the market drops based on anything? I'm under the impression that the massive volume of buying/selling every day is institutional, and that institutional investors/hedge funds are the ones constantly adjusting as data moves.

            • epolanski4 hours ago |parent

              Second this.

              All we know is that over time retail investors tend to underperform the markets, but that's true of sophisticated institutional investors too.

              Plus: in 2022, a bear year, retail was the one buying the dips, according to the news.

        • cess11 12 hours ago |parent

          Eh, I like keeping an eye on S&P 500 VIX to get a sense of the current mood among oligarchs and their institutions.

          I wouldn't use it for investment decisions, however.

    • Ologn12 hours ago |parent

      Yes...NVDA closed at $188.15 yesterday, a price it was never at until October. It did hit $212.19 last week, but retreated.

      After spring 2023, Nvidia stock seems to follow a pattern. It has a run-up prior to earnings, it beats the forecast, with the future forecast replaced with an even more amazing forecast, and then the stock goes down for a bit. It also has runs - it went up in the first half of 2024, as well as from April to now.

      Who knows how much longer it can go on, but I remember 1999 and things were crazier then. In some ways things were crazier three years ago, with FAANG salaries etc. There is a lot of capital spending; the question is whether these LLMs, with some tweaking, are worth it, and it's too early to tell that fully. Of course a big theoretical breakthrough, like the utility of deep learning or transformers, would help, but those only come along every few years (if at all).

      • nextworddev12 hours ago |parent

        Don’t think faang salaries came down meaningfully

        • conorcleary11 hours ago |parent

          buying power has, and usd

          • nextworddev9 hours ago |parent

            Nah, stocks up more than inflation

      • conorcleary11 hours ago |parent

        blow-off top

  • qoez12 hours ago

    Must be nice writing stock narrative stories. Always new content every day to make up stories about the cause of why stocks go this way or that

    • afavour12 hours ago |parent

      If you don’t think we’re due a massive correction after the hype of AI then I don’t know what to tell you. Every sign is right there.

      • gregoryhinkle12 hours ago |parent

        I am going to show you a chart: https://i.imgur.com/q7l3lJt.png

        This is a weekly chart of Nvidia from 2023 to 2024. During that period, the stock dropped from $95 to $75 in just two weeks. How would you defend the idea that a major correction wouldn’t have happened back in 2023–2024? Would you have expected a correction at that time? After all, given a long enough timescale, corrections are inevitable.

        • afavour12 hours ago |parent

          I don’t know how to start a reply to you. Because Nvidia stock dipped for two weeks in the past, there’s no chance we’re due a massive correction? Makes no sense whatsoever.

          Nvidia’s stock price is not the start and end of AI investments. OpenAI is losing over $11bn a quarter. More than they were losing in 2023, and debt accumulates over time. Reality will set in eventually when investors realize their promised future isn’t coming any time soon. Nvidia’s valuation is in large part due to the money OpenAI and others are giving it right now. What do you think will happen when that money goes away?

        • estimator7292 11 hours ago |parent

          Friend, you're seeing signs in tea leaves here.

      • raffael_de8 hours ago |parent

        How about all the signs are _so_ right there that they have been priced in by now?

    • shevy-java12 hours ago |parent

      It will be written with AI, of course. :)

      I am also getting annoyed at AI. In the last few days, more and more total-garbage AI videos have been swarming YouTube. This is a waste of my time, because what I see is no longer real but a fabrication. It never happened.

    • epistasis12 hours ago |parent

      Just wait until you hear about sports reporting! Or the weather.

      • marcosdumay12 hours ago |parent

        We can predict the weather, with extreme reliability, hours in advance!

  • shevy-java12 hours ago

    I swear, we need better ways to control the superrich. They are milking us dry here. There is a reason a certain president is constantly associated with the naughty term "insider trading".

    • Gormo10 hours ago |parent

      Yeah, let's definitely try to solve complex problems by degenerating into conflict-oriented tribalism, rather than looking at structural incentives and systemic constraints.

      That definitely works, and doesn't just create cycles of escalating social dysfunction while leaving the problems intractable.

      • archagon6 hours ago |parent

        Wolf picking sheep fur out of teeth: “stop being so tribal and let’s focus on the systemic issues here.”

    • bigyabai12 hours ago |parent

      Yet, if anyone ever says "stop buying tech tchotchkes" then they become the villain. The superrich have us in their pockets.

    • jcfrei12 hours ago |parent

      Not gonna happen. They'll just threaten to move to another country. It's a prisoner's dilemma for all countries: you can either give in to demands for lower taxation and hope that re-domiciling will overcompensate for the tax cut, or increase taxation and lose taxpayers to other countries.

      • estebank11 hours ago |parent

        The US taxes all citizens on global income, even those living abroad. The only way to avoid that is to give up your citizenship, but then you have to pay an exit tax as if you had liquidated your assets. Even if you move to a country with a tax treaty, you'll still be paying the lower of the two countries' rates to some country; if the other country taxes you less, you pay the difference to the US.

        • penguin_booze11 hours ago |parent

          That's the same as saying you eventually pay the maximum tax rate of the countries involved. I think that's standard practice under double-taxation avoidance agreements (DTAAs).
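          The credit mechanics read roughly like this in code (illustrative flat rates only, ignoring brackets, exclusions, and everything else real tax law has):

```python
def residual_us_tax(income: float, us_rate: float, foreign_rate: float) -> float:
    """Foreign-tax-credit sketch: the foreign country is paid first, and the
    US collects only the top-up if its rate is higher."""
    foreign_tax = income * foreign_rate
    us_liability = income * us_rate
    return max(us_liability - foreign_tax, 0.0)

# Total tax ends up at the higher of the two rates either way:
print(residual_us_tax(100_000, 0.30, 0.20))  # ~10000 owed to the US
print(residual_us_tax(100_000, 0.30, 0.35))  # 0.0, the foreign rate wins
```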

      • RugnirViking12 hours ago |parent

        What's the correct solution to an iterated prisoner's dilemma?

        • ben_w11 hours ago |parent

          Even without iteration, enforcers change the payoff matrix.

      • JKCalhoun12 hours ago |parent

        "They'll just threaten to move to another country."

        Okay?

        • phyzix5761 12 hours ago |parent

          Yeah, but they'll take all the jobs and then you get another Trump Tariff Tantrum trying to bring jobs back.

          • swat535 11 hours ago |parent

            Are they really creating local jobs?

            Last I heard they are bent on mass firings, outsourcing for cheap labor, cutting costs and enriching themselves.

            Unless there is strong regulation that forces them to actually contribute or be punished, they will do whatever they can to profit.

            • phyzix5761 11 hours ago |parent

              The one thing about rich people is that they’re very greedy. But that greed often fuels economic activity. They don’t take the money their companies earn and hide it under a mattress, because inflation would make it lose value. Instead, they reinvest it, which either creates new jobs or expands the company’s assets.

              Expanding assets can mean building new factories, ordering more raw materials, or entering new markets. Each of these steps involves third-party vendors: construction firms to build facilities, delivery companies to transport materials, mining companies to extract resources, suppliers, logistics providers, marketers, and contractors.

              All of this spending creates jobs. Maybe not directly within their own company, but across the many other businesses that support their growth.

              • _DeadFred_9 hours ago |parent

                That was in pre-2010 America. Their wealth doesn't appear to be doing this in a way that benefits the US today.

          • JKCalhoun11 hours ago |parent

            You're assuming they'll follow through with their threat.

            • phyzix5761 11 hours ago |parent

              I'm not. Just responding to the person saying it would be good.

          • BeFlatXIII11 hours ago |parent

            That's what they always threaten.

  • rorylawless13 hours ago

    It seems like AI companies have grown skeptical too. The recent spate of browser releases and OpenAI launching a social network suggests to me that they’ve hit a dead end for now and are falling back on tried and true methods of monetizing.

    • natebc12 hours ago |parent

      It's ads right?

      • OptionOfT9 hours ago |parent

        It has always been. The longer Google can keep you on Google.com the more ads they can serve you.

        • natebc8 hours ago |parent

          Well, I'm sure Google will be just fine, as they've been in the ad business for quite some time, but all these hundreds (thousands?) of also-AI companies won't fare very well when they pivot to the selling-eyeballs business.

  • groundzeros2015 12 hours ago

    "As" is a journalism term used to imply to careless readers that two events are connected, without actually saying so. It just means they were observed at a similar time.

  • softwaredoug12 hours ago

    This says it all.

    > There are also companies like Sweetgreen, the salad company that has tried to position itself as an automation company that serves salads on the side. Indeed, Sweetgreen has tried to dabble in a variety of tech, including AI and robots

    Please just make me a good salad.

    • Moto7451 12 hours ago |parent

      As an aside that absolutely is not intended to detract from your point: Sweetgreen has always had some sort of "this is what we do, but we also make salads because money" angle, like when they were a lifestyle brand with annual concerts. The first time I went into a Sweetgreen, I was very confused by the 10-foot-tall poster of Kendrick Lamar performing and the promotion for that year's convention and concert.

      Could you imagine being offered a ticket to Arbyfest or Jambacon?

      Like you said, please just make a good salad.

      Google’s DPA update email I got included a number of Uber’s model-training side gigs and analytics products. I’m guessing this all came out of the self-driving car project, but it’s another, albeit less goofy, data point of “we’re an AI company, but we do X for money.”

      I feel like I see a few of these every month.

      A few years ago Foursquare realized their business model was less profitable than that of the data aggregator they used, so they bought the aggregator and basically became that company, given the other hooks they have. I sort of wonder if that’s what is running through some of these companies’ C-suite meetings.

      • vrosas12 hours ago |parent

        It’s not good but it’s commonplace for every company these days to be a {whatever pumps the stock price/gets us the most VC interest} company that does {original business model} on the side.

        • macNchz12 hours ago |parent

          AI, web3, Blockchain, VR, Big Data, Mobile Apps, Social, Cloud, Mashups, Semantic Web, Portals, “put .com after or e before your company name”…

        • rvnx12 hours ago |parent

          https://about.starbucks.com/press/2025/how-ai-powered-automa...

          • ethbr1 10 hours ago |parent

          The difference between good CEOs and bad CEOs is how aware they are of the game.

          Good CEOs say they're a {whatever pumps the stock price/gets us the most VC interest} company, but continue to invest and excel in being a {core competency / original business model} company.

          Bad CEOs confuse the Kool Aid for water.

            • Moto7451 9 hours ago |parent

            Ugh, your last point has been the last half decade of CEOs at the places I’ve worked. Add in a smattering of other C-suite members as well.

            My personal theory is that we’re experiencing people who came into senior leadership in the 2010s and could make money even with poor choices by riding the hype cycle, and we’re all paying for their one year of experience ten times over.

    • pton_xd12 hours ago |parent

      Sweetgreen investing in robotics and AI is central to maintaining US salad making superiority, you see. We don't want to live in a world where we're not a leader in this space.

      • dotxlem12 hours ago |parent

        Mr. President, we must not allow a salad gap!

      • saltcured9 hours ago |parent

        Unfortunately, it was on borrowed time as soon as we outsourced manufacturing of the original Salad Shooter.

      • debo_12 hours ago |parent

        It's good for the employees at least. Lots of opportunities for greenfield projects.

    • amarcheschi12 hours ago |parent

      Your salad may have a non-predetermined amount of olives, vinegar, and tomatoes.

      The unpredictability of salad composition is what makes our products so unique and loved by people all around the world!*

      *While on average it's a very good salad, there's a non-zero chance that it may contain asbestos, plutonium, chalk, antimony, rubber, NaN, or steel rods.

      • falcor84 11 hours ago |parent

        >NaN

        I don't want any of those other ones, but I do think that everything else being equal, it's generally a good thing that my salad ingredients can't be directly serialized to float64.

      • spwa4 12 hours ago |parent

        Polonium, not plutonium. Also known as "Russian salad"

    • afavour12 hours ago |parent

      But just making good salad doesn’t make the stock market valuation go brrrr!

      • jsheard12 hours ago |parent

        Flashback to when Long Island Iced Tea rebranded as Long Blockchain Corp and their stock price tripled overnight for no sensible reason.

        • Esophagus4 11 hours ago |parent

          And then Kodak said, “Wait for me!”

          https://en.wikipedia.org/wiki/KodakCoin

        • cess11 12 hours ago |parent

          Thanks for mentioning it, I had missed that one. Apparently it was an obvious insider hustle:

          https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

    • pinkmuffinere12 hours ago |parent

      I think they’re looking at AI and robotics because it is expensive to make a good salad, and consumers won’t buy at the expensive price point. They need automation to work in a big way for their business to succeed

      • mjevans12 hours ago |parent

        I don't need AI for that automation, I need _good robotics that are cheap_.

        • rvnx12 hours ago |parent

          This is the approach of 1X NEO Home Robot, and of Starship Technologies delivery robots.

          They do the robotics part, and then remotely operate the robots (though on paper it is officially "hybrid").

        • pinkmuffinere11 hours ago |parent

          I agree the AI play is dumb. But lots of people see unsolved problems in robotics and think, “I’m sure an LLM can do that.” I think Sweetgreen is doing the same.

        • spwa4 12 hours ago |parent

          For what, actually?

      • garciasn12 hours ago |parent

        Crisp and Green say otherwise. Their salads are absurd and absurdly expensive.

        No, I’m not paying $15+ for a salad. But plenty of people do.

      • afavour12 hours ago |parent

        Automation != AI, though. And I don’t think the state of the robotics required to make salads on demand has changed meaningfully lately. “AI” is a nonsense cover.

        (and fwiw lots of people will pay a lot for a good salad…)

        • mensetmanusman11 hours ago |parent

          Advanced automation in unstructured environments is AI.

    • BrokenCogs12 hours ago |parent

      "You are a three time Michelin star chef. Make me a salad that will convince me salads are tasty! Do not ask questions, just make the salad. Do not give me the salad's background story, just make the salad and feed it to me."

      • xhkkffbf12 hours ago |parent

        Blue cheese adds salt and fat. Bacon adds protein and fat and usually salt.

        Anything after that doesn't matter.

    • ponector9 hours ago |parent

      If there's a success story of an iced-tea company adding blockchain, why can't a salad company do it with AI?

    • thenthenthen12 hours ago |parent

      chef’s kiss!

    • tvaughan9 hours ago |parent

      > Please just make me a good salad.

      No.

      > sudo Please just make me a good salad.

      Ok.

  • JKCalhoun13 hours ago

    "At the heart of the stock stumbles, there appears to be a growing anxiety about the AI business, which is massively expensive to operate and doesn’t appear to be paying off in any concrete way."

    I wonder if this is a thing the U.S. should be worrying about with regard to China taking the lead. As long as the U.S. is … idling … it seems China could catch up, if in fact there is any there there with AI.

    But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even. Or, it is already being demonstrated in the lab, but we just don't know about it yet.

    • gishh13 hours ago |parent

      Catch up to what? LLMs have clearly stalled in terms of advancement, they just kind of sort of get a little bit different at this point. Not even better, just different.

      • epolanski13 hours ago |parent

        I don't see that stall.

        All of the tools I use get increasingly better every quarter at the very least (coding tools, research, image generation, etc).

        • afavour12 hours ago |parent

          As always it’s hype vs. reality. Today we’re told AGI is just around the corner. It isn’t. You’re right that a lot of tools are improving iteratively, and that’s great, but the hyped-up valuations we’re seeing aren’t valuing coding assistants; they’re valuing a fictional reality where AGI solves everything and the first one there gets all the rewards.

          • epolanski12 hours ago |parent

            I haven't said a word about hype nor AGI. I merely said that LLM evolution, in the tools available right now (not in the future), has neither plateaued nor stalled.

            I'm not expressing any judgement on the economics of it.

        • Bender11 hours ago |parent

          It probably depends on the topics people are interacting with. I've spent the last few days teaching Grok how to manage iptables and nftables, when all I wanted was to translate my u32 module rules into nftables and I was being lazy; that is something the provided translate scripts cannot do. Grok would confidently give me an answer, I would say it was wrong, it would admit straight away that it was wrong, and then it would confidently give me another wrong answer, and I would teach it the right answer after a bit of bumbling on my part. It feels worse than being an editor on serverfault, but that's just the silly topics I play with. It could be that it does fine on most topics, but that has not been my experience thus far. At the end of the day I would have been better off just sticking with the man pages and tcpdump.
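          For concreteness, this is the sort of rule the bundled translate scripts can't handle, with a hand translation (table/chain names are made up, and the match is illustrative only):

```shell
# iptables u32: load 4 bytes at offset 0 of the IP header, shift right 24,
# compare the top byte (0x45 = IPv4 with a 5-word header):
#   iptables -A INPUT -m u32 --u32 "0>>24=0x45" -j ACCEPT
# nftables has no u32 module; the rough equivalent is a raw payload
# expression, @base,offset,length, with offset and length in bits:
nft add rule inet filter input @nh,0,8 0x45 accept
```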

      • bubblelicious13 hours ago |parent

        Where does this view come from? I’m not aware of any real evidence for it. Also consider that the data center buildouts in ’26 and ’27 will be absolutely extraordinary, and scaling is only at the beginning. You have a growing flywheel and plenty of synthetic data to break through the data wall.

        • candiddevmike13 hours ago |parent

          We need a fundamental paradigm shift beyond transformers. Throwing more compute or data at it isn't moving the needle.

          • marcosdumay12 hours ago |parent

            Just to point out: there's no more data.

            LLMs were always going to bottleneck on one of those two, since compute demand grows extremely fast with the amount of data, and data is necessarily limited. It turns out people threw crazy amounts of compute at it, so we hit the other limit.

            • bubblelicious5 hours ago |parent

              Epoch has a pretty good analysis of bottlenecks here:

              https://epoch.ai/blog/can-ai-scaling-continue-through-2030

              There is plenty of data left; we don’t just train on crawled text data. Power constraints may turn out to be the real bottleneck, but we’re like 4 orders of magnitude away.

            • Mistletoe12 hours ago |parent

              Yeah, I’m constantly reminded of a quote about this: you can’t make another internet. LLMs already digested the one we have.

            • bigyabai12 hours ago |parent

              Synthetic data works.

              • marcyb5st12 hours ago |parent

                There's a limit to that, according to https://www.nature.com/articles/s41586-024-07566-y . Basically, if you use an LLM to augment a training dataset, it will become "dumber" with every subsequent generation, and I am not sure how you can generate synthetic data for a language model without using a language model.

                • yorwba11 hours ago |parent

                  Synthetic data doesn't have to come from an LLM. And that paper only showed that if you train on a random sample from an LLM, the resulting second LLM is a worse model of the distribution that the first LLM was trained on. When people construct synthetic data with LLMs, they typically do not just sample at random, but carefully shape the generation process to match the target task better than the original training distribution.

          • bubblelicious12 hours ago |parent

            And you don’t think that’s already happening? Also where is your evidence for this?

            • bigyabai12 hours ago |parent

              > Also where is your evidence for this?

              The fact that "scaling laws" didn't scale? Go open your favorite LLM in a hex editor, oftentimes half the larger tensors are just null bytes.

              • bubblelicious12 hours ago |parent

                Show me a paper; this makes no sense. Of course scaling laws are scaling.

        • ModernMech12 hours ago |parent

          Let me put it this way: when ChatGPT tells me I've hit the "Free plan limit for GPT-5", I don't even notice a difference when it goes away or when it comes back. There's no incentive for me to pay them for access to 5 if the downgraded models are just as good. That's a huge problem for them.

          • riffraff12 hours ago |parent

            Ditto for Gemini Pro and Flash, which I have on my phone.

            I've been traveling in a country where I don't speak the language or know the customs, and I found LLMs useful.

            But I see almost zero difference between paid and unpaid plans, and I doubt I'd pay much or often for this privilege.

          • bubblelicious12 hours ago |parent

            Is this based on any non-anecdotal evidence, by chance?

            • ModernMech12 hours ago |parent

              Of course not, but explain how I am ever going to pay OpenAI, a for-profit company, any dollars. Sam Altman gets explosively angry when he's asked how he's going to collect revenue, and that is why. He knows that when push comes to shove, his product isn't worth to people what it costs him to operate it. It's Homejoy at trillion-dollar scale; the man has learned nothing. He can't make money off this thing, which is why he's trying to get the government to back it: first through some crazy "Universal Basic Compute" scheme, now I guess through cosigning loans? I dunno, I just don't buy that this thing has any legs as a viable business.

              • bubblelicious3 hours ago |parent

                I think you’re welcome to that opinion and are far from alone, but (1) I am very happy to pay for Claude, even at $200/mo, and (2) I don’t know if people just lose track of how far things have come in the span of literally a single year, with training infra growing insanely and people solving one fundamental problem after another.

                • ModernMech2 hours ago |parent

                  We live in a time when you can't even work for an hour and afford to eat a hamburger. You having the liquid cash to spend $200 a month on a digital assistant is the height of privilege, and that's the whole problem the AI industry has.

                  The pool of people willing to pay for these premium services for their own sake is not big. You've got your power users and your institutional users like universities, but that's it. No one else is willing to shell out that kind of cash for what it is. You keep pointing to how far it's come but that's not really the problem, and in fact that makes everything worse for OpenAI et al. Because, as they don't have a moat, they don't have customer lock-in, and they also soon will not have technological barriers either. The models are not getting good enough to be what they promise, but they are getting good enough to put themselves out of business. Once this version of ChatGPT gets small enough to fit on commodity hardware, OpenAI et al will have a very hard time offering a value proposition.

                  Basically, if OpenAI can't achieve AGI before ChatGPT4-type LLM can fit on desktop hardware, they are toast. I don't like those odds for them.

              • noir_lord12 hours ago |parent

                Sell at a loss and make it up in volume.

                It's been tried before, it generally ends in a crater.

          • _aavaa_12 hours ago |parent

            It is a problem easily solved with advertising.

            • ModernMech12 hours ago |parent

              No, because as the history of hardware scaling shows us, things that run on supercomputers today will run on smartphones tomorrow. Current models already run fairly well on beefy desktop systems. Eventually models the quality of ChatGPT 4 will be open sourced and running on commodity systems. Then what? There's no moat.

              • treis 12 hours ago |parent

                10-20 years of your data in the form of chat history.

                Billions of users allowing them to continually retrain their models.

                Hell, by then your phone might be the OpenAI 1, the world's first AI-powered phone (tm).

                • overfeed10 hours ago |parent

                  > The world's first AI powered phone

                  Do you remember the Facebook phone? Not many people do, because it was a failed project, and that was back when Android was way more open. Every couple of years, a tech company with billions has the brilliant idea: "Why don't we have a mobile platform that we control?", followed by failure. Amazon is the only qualified success in this area.

                  • treis an hour ago |parent

                    I agree that a slight twist on Android doesn't make sense. A phone with an integrated LLM, with apps that are essentially prompts to the LLM, might be different enough to gain market share.

        • skywhopper12 hours ago |parent

          There is zero evidence that synthetic data will provide any real benefit. All common sense says it can only reinforce and amplify the existing problems with LLMs and other generative “AI”.

          • bubblelicious12 hours ago |parent

            Sounds like someone has no knowledge of the literature; synthetic data isn’t like asking ChatGPT to give you a bunch of fake internet data.

    • OtherShrezzing12 hours ago |parent

      > But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even

      It was this time last year we were told “2025 will be the year of the agent”, with suggestions that the general population would be booking their vacations and managing their tax returns via Agents.

      We’re 7 weeks from the end of the year, and although there are a few notable use cases in coding and math research, agents haven’t proven meaningfully disruptive to most people’s economic activity.

      Something most people agree is AGI might arrive in the near future, but there’s still a huge effort required to diffuse that technology & its benefits throughout the economy.

    • delaminator12 hours ago |parent

      > "At the heart of the stock stumbles, there appears to be a growing anxiety about the AI business, which is massively expensive to operate and doesn’t appear to be paying off in any concrete way."

      And stock holders realized this last week, all at the same time?

      • riffraff12 hours ago |parent

        Well, a WSJ article came out last week, showing OpenAI lost 12B dollars last quarter.

        https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...

        I'm not saying this triggered a sell off, but it is indicative of perception changes.

      • coliveira12 hours ago |parent

        This is standard media lingo to try to give a reason for a move that has been decided by the big players. Yes, because only when these large funds decide to move together can you get a movement of this magnitude (it's not done by mom-and-pop investors).

    • shortrounddev213 hours ago |parent

      Why do we even want AGI so badly? It seems like a cataclysmic technology. Like after we invent it, market cap and stocks and money won't mean anything anymore.

      • bubblelicious13 hours ago |parent

        Why do people think this is any different than other major economic revolutions like electricity or the Industrial Revolution? Society is not going to collapse, things will just get weirder in both unbelievably positive ways and then also unbelievably negative ways, like the internet.

        • wartywhoa2312 hours ago |parent

          The question is why humankind must strive for unbelievably positive things at the expense of being forever plagued by the unbelievably negative.

          I'd much rather live in a world of tolerable good and bad opposing each other in moderate ways.

          • bubblelicious12 hours ago |parent

            Right let’s not have done the Industrial Revolution or the Internet or electricity

            • wartywhoa2312 hours ago |parent

              If that undoes the suffering of the tens of millions of human beings killed and maimed in WWI and WWII, which the Industrial Revolution enabled, let us have not!

            • shortrounddev212 hours ago |parent

              I think the value of the internet has proven to be pretty dubious. It seems to have only made things worse

        • ToValueFunfetti12 hours ago |parent

          Electricity doesn't remove the need for human labor, it just increases productivity. If we produced AGI that could match top humans across all fields, it would mean no more jobs (knowledge jobs at least; physical labor elimination depends on robotics). That would make the university model obsolete: training researchers would be a waste of money, and the well-paid positions that require a degree and thus justify tuition would vanish. The economy would have to change fundamentally or else people would have to starve en masse.

          If we produced ASI, things would become truly unpredictable. There are some obvious things on the table: fusion, synthetic meat, actual VR, immortality, ending hunger, global warming, or war, etc. We probably get these if they can be gotten. And then it's into unknown unknowns.

          Perfectly reasonable to believe ASI is impossible or that LLMs don't lead to AGI, but there is not much room to question how impactful these would be.

          • bubblelicious4 hours ago |parent

            I disagree; you have to take yourself back to when electricity was not widely available. How much labor did electricity eliminate? A lot, I imagine.

            AI will make a lot of things obsolete but I think that is just the inherent nature of such a disruptive technology.

            It makes labor costs way lower for many things. How the economy reorganizes itself around it seems unclear, but I don’t really share this fear of the world imploding. How could cheap labor be bad?

            Robotics for physical labor lags way behind, e.g., coding, but only because we haven’t mastered the data flywheel and/or how to transfer knowledge sufficiently and efficiently (though people are trying).

        • skywhopper12 hours ago |parent

          The promise of AGI is that no human would have a job anymore. That is societal collapse.

          • cess1112 hours ago |parent

            Famously expressed as 'socialism or barbarism' by Rosa Luxemburg, who traced it back to Engels.

        • shortrounddev212 hours ago |parent

          Because if you replace all of humans with machines, what jobs will be left?

      • loeg12 hours ago |parent

        AGI level isn't necessarily superhuman or "singularity." It's just human-level. That alone wouldn't make money meaningless.

        • shortrounddev212 hours ago |parent

          If it can displace all of knowledge work the way that machines displaced manual labor, then the economy is pretty much fucked, right?

          • loeg5 hours ago |parent

            Is there no manual labor anymore? Was the economy fucked by the industrial revolution? It's hard to say how transformative it will be; we're all just kind of speculating.

      • BeFlatXIII11 hours ago |parent

        > Like after we invent it, market cap and stocks and money wont mean anything anymore

        For those of us who survive the transition, good.

      • marcyb5st12 hours ago |parent

        But imagine how much money it will create for shareholders for a little bit /s

        Seriously though, there's a part of me that hopes the technology can help with technological advancement: fusion, room-temperature superconductors, working solid-state batteries, ... which would all help in leaping ahead and making sure everyone on the planet has a good life. Is the risk worth it? I don't know, but that's my reason for wanting AGI.

        • skywhopper12 hours ago |parent

          Why do you think AGI would help develop things that are mainly limited not by ideas, but by the time and resources it takes to do the experiments and engineering in the real world?

    • skywhopper12 hours ago |parent

      I don’t think anyone should be worried about AGI except for all the money, energy, and focus that’s being wasted chasing it. It’s not anywhere close, and the sooner we realize that and start focusing on actual problems, the better off we’ll be.

    • techblueberry13 hours ago |parent

      What if we already have AGI? What if it is ChatGPT 5? What then?

      https://aimagazine.com/articles/openai-ceo-chatgpt-would-hav...

      Edit: this was serious, if I read the Wikipedia definition of AGI, ChatGPT meets the historical definition at least. Why have we moved the goal posts?

      • junon12 hours ago |parent

        Sam is wrong here; AGI has never had a clear definition, thus there's no "we have" or "we haven't". I'd say most agree we're still a ways off.

        • techblueberry12 hours ago |parent

          I mean, 20 years ago it was intelligence that could be used in multiple domains; the ability to reason in natural language. Which is what we have? Really, beyond the fantastical "AGI is when we have luxury automated space communism," I sort of legitimately don’t understand why ChatGPT 5 isn’t AGI. (Other than the fact that it would be super disappointing, which is maybe my point.) Maybe it’s AGI v1? Maybe it’s AGI with an IQ of 47? But it’s super bipolar, since it can also talk like someone with an IQ of 150.

          • acdha12 hours ago |parent

            Lack of reasoning or true understanding. It’s not just that it’s like IQ 47 but that it’s unreliable and inconsistent so you can’t safely deploy it in adversarial contexts.

            • techblueberry12 hours ago |parent

              Humans with true intelligence are unreliable and inconsistent, why would AGI be different?

              • acdha12 hours ago |parent

                AGI wouldn’t be, LLMs are because they’re still far from that level.

          • loeg12 hours ago |parent

            20 years ago, even sub-human levels looked extremely optimistic. The context has changed.

      • JKCalhoun11 hours ago |parent

        I agree, we've been moving the goal posts. I think that the Turing Test was the first casualty of LLM ascendency.

        But I also think it's natural to move the goal posts.

        We try to peer at the future and what would convince us of machine intelligence. Academia finally delivers and we have to revise what we mean by intelligence.

        If one, settling a pillow by her head,

        Should say: "That is not what I meant at all;

        That is not it, at all."

      • skywhopper12 hours ago |parent

        This is Wikipedia’s definition “[AGI] is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.”

        GPT-5 is nowhere close to this. What are you talking about?

        • techblueberry12 hours ago |parent

          ChatGPT wrote this, but it is basically the argument I’m making:

          1. Functional Definition of AGI

          If AGI is defined functionally — as a system that can perform most cognitive tasks a human can, across diverse domains, without retraining — then GPT-4/5 arguably qualifies:

          It can write code, poetry, academic papers, and legal briefs.

          It can reason through complex problems, explain them, and even teach new skills.

          It can adapt to new domains using only language (without retraining), which is analogous to human learning via reading or conversation.

          In this view, GPT-5 isn’t just a language model — it’s a general cognitive engine expressed through text.

          Again, I think the common argument is more religious than practical. Yes, I acknowledge this doesn’t meet the frontier definition of AGI, but that’s because it would be sad if it were the case, not because there’s any actual practical sense that we’ll get to the sci-fi definition. The view that ChatGPT is already performing most tasks reasonably, at or beyond the edge of human ability, is true.

          • tim3338 hours ago |parent

            People have different takes but the economically important point is when you can have AI do the jobs rather than having to hire humans. We are not there yet. GPT-5 is good at some things but not others.

            • techblueberry7 hours ago |parent

              That’s a good goal, but why is that “AGI”? Why is AGI a socio-political-economic metric and not a technical one, and if it is a socio-political-economic metric, then is it just fantasy? Why are we spending trillions of dollars on something we can’t define in technical terms?

  • al_borland12 hours ago

    Does this mean my boss will stop asking, “can we add AI to this?”

  • softwaredoug12 hours ago

    If an OpenAI bailout happened, I’m guessing it wouldn’t be in the context of bankruptcy restructuring (like GM in 2008 crisis). So it wouldn’t really solve anything, it would just be taxpayers buying high on a distressed asset

  • mixxit9 hours ago

    Does this mean I will stop being greeted by a different AI robot with a whimsical friendly name every time I log into a new site?

    • ponector9 hours ago |parent

      Hello mixxit! I’m PixelPuff, your cheerful AI companion for today, here to sprinkle a little whimsy while you explore Hacker News. Hope your feed is full of intriguing ideas and clever finds!

  • HPsquared13 hours ago

    I think everyone saw this coming, only a matter of when. As great as the technology is, it's hard to predict who will profit from it.

    • dinobones13 hours ago |parent

      Here’s another idea:

      We’ve had GPT2 since 2019, almost 6 years now. Even then, OpenAI was claiming it was too dangerous to release or whatever.

      It’s been 6 years since the path started. We’ve gone from hundreds of thousands -> millions -> billions -> tens of billions -> now possibly trillions in infrastructure cost.

      But the value created from it has not been proportional along the way. It’s lagging behind by a few orders of magnitude.

      The biggest value add of AI is that it can now help software engineers write some greenfield code +40% faster, and help people save 30 seconds on a Google search -> reading a website.

      This is valuable, but it’s not transformational.

      The value returned has to be a lot higher than that to justify these astronomical infrastructure costs, and I think people are realizing that they’re not materializing and don’t see a path to them materializing.

      • coliveira12 hours ago |parent

        US AI companies are fixated on low-value activities; that's the problem. Creating more garbage for the internet or summarizing text is useful, but not that fantastic or transformative. I have a new version of MS Word where the start screen suggests a bunch of BS topics it can generate for me. What is the benefit of this, other than the appearance that I'm doing real work? Most companies will be inundated by this nonsense, created by people who are now 5 to 10 times more "productive" because they use AI, and corporations will pretty much stop doing any real work.

        • stackskipton12 hours ago |parent

          This is already becoming a problem at my company with Gemini. Certain paper pushers are using LLMs to fluff up emails, so more people have to use LLMs to read them, with predictable hallucinations on either side resulting in missed deadlines and customer service problems.

          • noir_lord12 hours ago |parent

            An inattention arms race - sure it'll end amazingly with no societal harms.

      • tonyedgecombe12 hours ago |parent

        It seems exponential growth in spending only results in linear growth in capability.

    • PessimalDecimal12 hours ago |parent

      Value capture seems to be happening in the hardware companies (really company) right now. Like with CPUs in the late 90s when Intel was dominant and AMD was struggling.

    • mtoner2312 hours ago |parent

      A 4% drop, back to levels not seen since *checks watch* 2 weeks ago! Not much of a correction IMO.

  • ronbenton12 hours ago

    I see headlines like this and always wonder how they establish causation. How do they know AI skepticism is the cause of a sell off? Is this an assumption or is there reasonable evidence of causation?

    • loeg12 hours ago |parent

      They're guessing. You can sort of paint a story based on which stocks see bigger downturns.

    • expedition3211 hours ago |parent

      Stock market trading is being done by very smart computers (oh the irony) and nobody knows why those computers do what they do.

  • donohoe13 hours ago

    I've had a few people ask how they should shift their 401K mix to avoid the AI bubble. I honestly don't know what to tell them (aside from the fact I am not in any way a financial advisor). Everything seems exposed.

    • riffraff12 hours ago |parent

      Put options? Inverse NASDAQ ETFs? Value-oriented funds? Equal weighted funds rather than market cap weighted?

      I personally just keep investing in cheap total world market funds and let the market do its thing.

    • ashleyn12 hours ago |parent

      The AI bubble is also the same seven companies that have been making all the money for the past decade. The answer is: don't worry about it if you're buying the S&P 500. Just keep buying, let it go, and don't touch it. Preferably ever. But realistically, don't sell for as long as you can. That applies to any market.

    • tim3338 hours ago |parent

      Utility stocks and the like? All the stuff that's not in a bubble.

      Berkshire Hathaway last time was an anti bubble stock - it hit a low on the day the NASDAQ peaked in the dot com bubble.

    • trashface9 hours ago |parent

      Residential real estate trusts? Har har. Could do it now, but wasn't a great idea during the last big (housing) bubble.

  • giorgioz12 hours ago

    "$1 trillion in stock value has been wiped from several of Silicon Valley’s heaviest hitters, all of which are heavily enmeshed in generative AI. Oracle, Meta, Palantir, and Nvidia"

    I just checked Oracle, Palantir, and Nvidia, and they don't seem particularly down. Only Meta seems down, from $750 to $620, which is roughly a 17% drop, back to the value it had in April 2025 (a decline of about $277 billion).

    Is there any data supporting the article's claim of a $1T drop in stock value?

    • YuukiRey12 hours ago |parent

      Are we looking at different charts? What I see right now on a 5 day view is:

      - Nvidia -11%

      - Palantir -16%

      - Oracle -11%

      - Meta -5%

      With some very quick and extremely cursory napkin math I do get into the 800 billion range, which the original article mentioned. I guess the linked article rounded it up to make it more sensational.
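
      That napkin math can be reproduced in a short script. The pre-drop market caps below are rough, assumed round numbers (not figures quoted by either article), combined with the 5-day percentage declines listed above:

```python
# Rough, assumed pre-drop market caps in dollars (illustrative round
# numbers, not figures quoted by the article).
caps = {
    "Nvidia": 4.5e12,
    "Palantir": 0.45e12,
    "Oracle": 0.63e12,
    "Meta": 1.55e12,
}

# Approximate 5-day percentage declines from the comment above.
drops = {
    "Nvidia": 0.11,
    "Palantir": 0.16,
    "Oracle": 0.11,
    "Meta": 0.05,
}

# Implied aggregate market-cap decline across the four names.
total_decline = sum(caps[name] * drops[name] for name in caps)

print(f"~${total_decline / 1e9:.0f}B")  # lands in the high hundreds of billions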

  • primer4212 hours ago

    > And then there’s Microsoft, which—despite being one of the most powerful and prominent companies in Silicon Valley—seems to be having one of its biggest losing streaks ever. Bloomberg reported Friday that its stock had slumped 8.6 percent over eight days, a decline that evaporated some $350 billion in market valuation.

    I find it strange and upsetting when articles talk about the "evaporation" of "market valuation". Market value is already meaningless vapor - it's not like real money was created when the stock price went up, nor has anything of concrete value disappeared.

    • beardyw12 hours ago |parent

      To be pedantic, there is no such thing as real money in the current age. It is all conceptual.

    • doctaj11 hours ago |parent

      I found it weird that they say Microsoft is in Silicon Valley.

  • lapcat12 hours ago

    This headline is wildly inaccurate. $1 trillion in tech stocks were not "sold off." Rather, the collective market caps of some tech stocks dropped by $1 trillion.

    Market cap is mostly a useless number. It's the current stock price multiplied by the number of outstanding shares. But only a small % of shares are bought and sold in a given day, so the current stock price is mostly irrelevant to the shares that aren't moving.

    If you hold some stock, and the current stock price goes down, but you don't sell your stock, then you haven't lost any actual money. The so-called "value" of your stock may have dropped, but that's just a theoretical value. Unless you're desperate to sell now, you can wait out the downturn in price.
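
    The arithmetic here can be made concrete with a tiny sketch (the share count and prices are made-up illustration values, not any real company's figures):

```python
def market_cap(price: float, shares_outstanding: float) -> float:
    """Market cap is just the latest trade price times shares outstanding."""
    return price * shares_outstanding

SHARES = 2.5e9  # hypothetical shares outstanding

cap_before = market_cap(200.0, SHARES)  # $500B at $200/share
cap_after = market_cap(180.0, SHARES)   # $450B after a 10% price drop

# $50B of "value" evaporates even if only a sliver of the shares
# actually traded at the lower price; holders who didn't sell
# realized no loss at all.
paper_loss = cap_before - cap_after
print(f"paper loss: ${paper_loss / 1e9:.0f}B")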

    • PessimalDecimal12 hours ago |parent

      > so the current stock price is mostly irrelevant to the shares that aren't moving.

      If it moves enough, shares that aren't moving might become shares that are though. Unless a company's stock is all held by Die Hard True Believers who will HODL through the apocalypse and beyond, the market price can matter.

      We'd also have to run the same argument on the upside too. Does the current stock price matter to those who aren't selling when it goes 2x in a year?

      • lapcat12 hours ago |parent

        What apocalypse, exactly? The stock market has eventually recovered from every "crash." Occasionally a big company crashes and never recovers, e.g., Lehman Brothers, but I wouldn't expect that to happen to Amazon, Meta, Microsoft, or Oracle.

        I didn't say that stock price is totally irrelevant, but if you're investing for the long term, short-term fluctuations mostly shouldn't change your strategy.

        In any case, the headline is inaccurate. Unsold stock losing market value is not the same as stock sold off.

  • web3-is-a-scam13 hours ago

    More.

  • blindriver12 hours ago

    Is it a sell-off if people buy it right back the same day? This is just anti-AI nonsense trying to cause a crash.

  • NicoJuicy12 hours ago

    Why skeptical?

    No matter what, LLMs are here to stay. Companies are making huge investments so that they can get ahead early on.

    Will it need to become more efficient? Yes.

    But a lot of money for repetitive tasks is going to LLMs. Additionally, most companies are constrained by capacity at the moment.

    I really can't imagine what could revert this trend.

    Yes, body shops like Palantir are hugely overrated.

    But the big tech? No. They can carry those infra investments. Just curious who will come out on top.

  • j4513 hours ago

    Isn't there usually a stock sell off roughly every fall?

    • skylurk13 hours ago |parent

      We gotta pay tuition ;)

  • GreenWatermelon13 hours ago

    The prospect of this bubble finally popping fills me with great excitement!

  • tinyhouse12 hours ago

    Everyone uses AI today for pretty much everything but we're in a bubble... Yeah sure. Big tech with all time revenue and profits, with growing demand for compute quarter after quarter, but we're in a bubble ... Yeah sure.

    OpenAI loses billions, that's true. That doesn't mean we're in a bubble. They also make billions. If their losses continue, they will keep losing more and more control. Microsoft already owns 30% of OpenAI. Big tech companies have too much cash on their hands and they cannot acquire their competitors, so instead they invest money in them. Either way, it's called consolidation.

    • riffraff12 hours ago |parent

      Does openai make billions?

      Sam Altman said they have revenue, but didn't say how much, did he?

      We've heard people saying Google is making a profit on their AI offering, but I don't think anyone else has their infrastructure, with TPUs etc.

      • rvnx12 hours ago |parent

        OpenAI is now late in the game, way behind Claude and even Google Gemini (which used to be very bad but is now great, despite the occasional hallucination), so they may collapse under their own weight if they cannot find new investors to feed the monster.

        • tinyhouse11 hours ago |parent

          That's somewhat true for enterprise. OpenAI is still leading the consumer side by a large margin over Claude. I'm talking about adoption, not the models. Once they can monetize beyond subscriptions (ads, shopping, etc.) they will be OK. Most likely once their ads business starts growing, they will IPO.

      • tinyhouse12 hours ago |parent

        It's well known that they are doing around $10B in ARR. GCP was losing money for many years but now it's a cash cow. AI is following the same trajectory. Not all companies will sustain it, but that doesn't mean it's a bubble, just that we will see consolidation.

  • tropicalfruit13 hours ago

    AI was a convenient vessel for shadow QE these last few years

    Now, with rates falling, they can pivot the story - call it an AI bubble, let it crash

    then use the crash as justification for renewed, open money printing

    • walterbell12 hours ago |parent

      Thanks for the pointer (and first HN comment!) on shadow QE.

      July 2024, https://x.com/stealthqe4/status/1818782094316712148

      > We’ve all been wondering where all of this liquidity is coming from in the markets. Stealth QE was being done somehow. Now we have the answer! It’s all in the Treasury increased t-bill issuance. QE has now been replaced by ATI.

      https://hn.algolia.com/?query=%22shadow%20qe%22&type=all

  • cs70212 hours ago

    The OP's headline is not even wrong.

    Tech stock market capitalization declined by $1T.

    Every share of stock sold by one party was purchased by another party, as always.

    • thaumasiotes12 hours ago |parent

      For the market capitalization to fall, the price of shares has to fall.

      For the price of shares to fall, selling pressure in the market has to outweigh buying pressure. The fact that the price dropped is how we know this is a selloff and not a buyoff.

  • NoahZuniga12 hours ago

    You can only have a $1T tech stock sell off if $1T of tech stocks are bought.

    • dragonwriter12 hours ago |parent

      Based on the body of the article, the headline is using “$X tech selloff” as unintuitive shorthand for “loss of $X of market value in aggregate market capitalization of tech stocks”, not as a reference to trading volume.

      • Ekaros12 hours ago |parent

        So correspondingly there has been trillions in tech buying in past years?