Anthropic AI tool sparks selloff from software to broader market (bloomberg.com)
63 points by garbawarb 4 hours ago | 46 comments
  • XiS 3 hours ago

    https://archive.li/VyN2H

  • simianwords 2 hours ago

    I came across this company called OpenEvidence. They seem to be offering semantic search on medical research. Founded in 2021.

    How could it possibly keep up with LLM based search?

    • dnw 2 hours ago | parent

      It is a little more than semantic search. Their value prop is curation of trusted medical sources, plus network effects from selling directly to doctors.

      I believe frontier labs have no option but to go into verticals (because models are getting commoditized, and the capability overhang is real and hard to overcome at scale); however, they can only go into so many verticals.

      • simianwords 2 hours ago | parent

        > Their value prop is curation of trusted medical sources

        Interesting. Why wouldn't an LLM based search provide the same thing? Just ask it to "use only trusted sources".

        • tacoooooooo 2 hours ago | parent

          They're building a moat with data: their own datasets of trusted sources, curated by their own teams of physicians and researchers. They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs have this sort of data coming in, or this sort of focus on such a valuable niche.

          • simianwords an hour ago | parent

            > They're building their own datasets of trusted sources, using their own teams of physicians and researchers.

            Oh so they are not just helping in search but also in curating data.

            > They've got hundreds of thousands of physicians asking millions of questions everyday. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche

            I don't take this too seriously because lots of physicians use ChatGPT already.

            • some_random an hour ago | parent

              Lots of physicians use ChatGPT, but so do lots of non-physicians, and I suspect there's some value in knowing which are which.

        • otikik an hour ago | parent

          I don't think you can use an LLM for that, for the same reason you can't just ask it to "make the app secure and fast".

          • simianwords 37 minutes ago | parent

            This is completely incorrect. This is exactly what LLMs can do better.

        • palmotea 2 hours ago | parent

          > Why wouldn't an LLM based search provide the same thing? Just ask it to "use only trusted sources".

          Is that sarcasm?

          • simianwords an hour ago | parent

            why?

            • Rygian an hour ago | parent

              How does the LLM know which sources can be trusted?

              • simianwords an hour ago | parent

                Yeah, it can avoid blogspam as a source and prioritise research from more prestigious journals or with more citations. It will be smart enough to use some proxy.

                • palmotea 44 minutes ago | parent

                  You can also tell it to just not hallucinate, right? Problem solved.

                  I think what you'll end up with is a response that still relies on whatever random sources it likes, but attributes them to the "trusted sources" you asked for.

                  • simianwords 26 minutes ago | parent

                    You have an outdated view of how much it hallucinates.

                    • palmotea 9 minutes ago | parent

                      The point was: will telling it to not hallucinate make it stop hallucinating?

    • olliepro an hour ago | parent

      Much of the scientific medical literature is behind paywalls. They have tapped into that data source, whereas ChatGPT doesn't have access to it. I suspect that if the medical journals were to make a deal with OpenAI to open up access to their articles and data, OpenEvidence would have to rely on its existing customers and the stickiness of its product; in that circumstance, they'd be pretty screwed.

      For example, only 7% of pharmaceutical research is publicly accessible without paying. See https://pmc.ncbi.nlm.nih.gov/articles/PMC7048123/

      • simianwords 43 minutes ago | parent

        Do you think maybe ~10B USD should cover all of them, for both indexing and training? Seems highly valuable.

        Edit: seems like it is ~10M USD.

  • PostOnce 36 minutes ago

    Even if it turns out that AI isn't much more productive, people may still believe it is, and therefore stop valuing software companies.

    If that happens, some software companies will struggle to find funding and collapse, and people who might consider starting a software company will do something else, too.

    Ultimately that could mean less competition for the same pot of money.

    I wonder.

    • rubyfan 20 minutes ago | parent

      I left software about 10 years ago for this reason. I saw engineers being undervalued, management putting up barriers to productivity, and higher compensation possibilities in non-tech functions.

  • gip 3 hours ago

    I'm not really understanding why Thomson Reuters is at direct risk from AI. Providing good data streams will still be very valuable?

    • elemeno 2 hours ago | parent

      They’re one of the two big names in legal data: Thomson Reuters’ Westlaw and RELX’s LexisNexis. They’re not just search engines for law, but also hubs for information about how laws are being applied, with articles from their in-house lawyers (PSLs, professional support lawyers; most big law firms have them as well, performing much the same function) that summarise current case law so that lawyers don’t have to read through all the judgements themselves.

      If AI tooling starts to seriously chip away at those foundations then it puts a large chunk of their business at risk.

      • themgt 2 hours ago | parent

        The commodification of expertise writ large is a bit mind boggling to contemplate.

    • whitej125 2 hours ago | parent

      TR will not disappear. But their value to the market was "data + interface to said data" and that value prop is quickly eroding to "just the data".

      You can be a huge, profitable data-only company... but it's likely going to be smaller than a data+interface company. And so, shareholder value will follow accordingly.

      • palmotea 2 hours ago | parent

        Seems like they should hold tight to that data (and not license it for short-term profit), so customers have to use their interface to get at it.

    • yodon 3 hours ago | parent

      If customers start asking Claude first, before they ask Thomson Reuters, that's a big risk for the latter company.

      • gip 2 hours ago | parent

        Got it, thank you for the insight.

        The assumption is that Claude has access to a stream of fresh, curated data. Building that would be a different focus for Anthropic. Plus, Thomson Reuters could build an integration. Not totally convinced it's a major threat yet.

    • robotswantdata 3 hours ago | parent

      Huge legal tech business units

  • epicureanideal 3 hours ago

    Could this lead to more software products, more competition, and more software engineers employed at more companies?

    • fishpham 2 hours ago | parent

      I think the argument is that tools like Claude Code will cause more companies to just build solutions in-house rather than purchase from a vendor.

      • groceryheist 2 hours ago | parent

        This is correct. AI is a huge boon for open source, bespoke code, and end-user programming. It's death for business models that depend on proprietary code and products bloated with features only 5% of users use.

        • hugs 2 hours ago | parent

          Possibly also a boon for automated testing tools and infra designed for AI-driven coding.

    • danans an hour ago | parent

      > Could this lead to more software products, more competition, and more software engineers employed at more companies?

      No, it will just lead to the end of the Basic CRUD+forms software engineer, as nobody will pay anyone just for doing that.

      The world is relatively satisfied with "software products". Software - mostly LLM authored - will be just an enabler for other solutions in the real world.

      • falloutx 35 minutes ago | parent

        There are no pure CRUD engineers unless you are looking at freelance websites or Fiverr. Every tiny project becomes a behemoth of spaghetti code in the real world due to changing requirements.

        > The world is relatively satisfied with "software products".

        You could delete all websites except TikTok, YouTube, and PH, and 90% of internet users wouldn't even notice something is wrong with the internet. We don't even need LLMs if we can learn to live without terrible products.

    • garbawarb 2 hours ago | parent

      I kind of imagine more people going off and building their own companies.

      • DougN7 2 hours ago | parent

        I think so too. But because of code quality issues, and LLMs not handling the hard edge cases, my guess is most of those startups will be unable to scale in any way. It will be interesting to watch.

      • danans 42 minutes ago | parent

        Not if they don't have access to capital. Lacking that, they won't be building much of anything. And if there are a lot of people seeking capital, it gets much harder to secure.

        Capital also won't be awarded to people who don't have privileged/proprietary access to a market, or to non-public data or methods. Just being a good engineer with Claude Code isn't enough.

    • rishabhaiover 3 hours ago | parent

      Maybe eventually, but not in the near term.

    • guluarte an hour ago | parent

      I think companies will need to step up their game and build more competitive products: more features, fewer bugs, and better performance than what people can build themselves.

    • unyttigfjelltol 3 hours ago | parent

      It’s demonetizing process rent-seeking. AI can build whatever process you want, or some approximation of it.

  • roysting an hour ago

    Can this really be a kind of herding-stampede behavior over Cowork? It’s been out for several days now, and just all of a sudden today, all the traders got it into their little herd-animal heads that everyone should rush to the exits… after that equally sketchy silver and gold rug-pull-type action last week?

    Something seems quite off. Am I the only one?

    • retube an hour ago | parent

      Markets are not as efficient as the textbooks would have you believe. Investors typically rely on a fairly small set of analysts for market news and views, and it might take those guys a while to think about stuff, write a note, etc. The DeepSeek crash last year lagged by several days as well.

      • andai an hour ago | parent

        I'm out of the loop, but I thought there were sophisticated automated trading algorithms, where people pay to install microwave antennas just to get 1 ms lower latency, and that those systems are hooked up to run sentiment analysis on the news. Maybe the news is late?

    • arctic-true 41 minutes ago | parent

      It could also be that we have been in an economy-wide speculative bubble for a couple of years. Whispers of an AI bubble were a way to self-soothe and avoid the fact that we are in an everything bubble.

  • myth_drannon an hour ago

    Paypal fell 20% today.

    • quickthrowman an hour ago | parent

      PayPal reported an earnings miss and named a new CEO before market open today; that’s why it sold off.