Google 2025 recap: Research breakthroughs of the year (blog.google)
108 points by Anon84 5 hours ago | 70 comments
  • fancyfredbot 3 hours ago

    Google are really firing on all cylinders recently. It's almost shocking to read all they've done in the last year.

    The fact they caught up with OpenAI you almost expect. But the Nobel winning contributions to quantum computing, the advances in healthcare and medicine, the cutting edge AI hardware, and the best in class weather models go way beyond what you might have expected. Google could have been an advertising company with a search engine. I'm glad they aren't.

    • kace91 36 minutes ago | parent

      >Google could have been an advertising company with a search engine. I'm glad they aren't.

      They kind of are though?

      Like, there is indeed amazing research supported by the company. The core user facing products are really declining in quality by being user hostile.

      A search right now results in a made up LLM output followed by 4 ads disguised as content, and then maybe followed by the wanted result.

      I’m not sure what happens inside the company for those two things to be true at once.

      • ACCount37 20 minutes ago | parent

        A big part of what makes Google Search awful is just the usual SEO shitters, trying their hardest to rig the game on any search result that's anywhere close to common or profitable.

        Google's main failing there is that they don't put enough effort into their search to keep up with that, and fail to raise the bar on garbage content and search engine manipulation.

        LLM output in search results I'm not against. Do we really need to open an entire website to learn how to sort an array in JavaScript with a lambda function? For many of the more common and more trivial requests, LLM output is well within "good enough".

        • dasil003 11 minutes ago | parent

          No we don’t need to open an entire website to learn x simple thing. However we DO need meaningful competition among information providers. I am not looking forward to the enshittification phase of AI.

      • smurda 19 minutes ago | parent

        In 2024, 78% of Alphabet’s revenue came from ads (72% in Q3 2025).

        Ads subsidize experimentation of loss-generating moonshots until they mature into good businesses, or die.

    • throw-12-16 8 minutes ago | parent

      >Google could have been an advertising company with a search engine. I'm glad they aren't.

      Ads are 75% of their revenue and search has been getting progressively worse.

    • 10xDev 2 hours ago | parent

      Meanwhile the economy is tanking. But yeah what a fantastic year it is to be a company worth trillions.

      • andsoitis an hour ago | parent

        > Meanwhile the economy is tanking.

        NYT: US GDP Grew 4.3%, surging in 3rd Quarter 2025 - https://www.nytimes.com/2025/12/23/business/us-economy-consu...

        WSJ: Consumers Power Strongest US Economic Growth in 2 years - https://www.wsj.com/economy/us-gdp-q3-2025-2026-6cbd079e

        The Guardian: US economy grew strongly in third quarter - https://www.theguardian.com/business/2025/dec/23/us-economy-...

        • ryandrake 13 minutes ago | parent

          I think we should start separating discussion of “The Economy” from “human prosperity and wellbeing.” Because they are essentially two different things, only slightly related. The Economy can grow wildly while normal people are poor, suffering, and barely holding it together. I don’t care if corporations are doing great or if the GDP is high if everything I need costs 3X what it used to and I’m not sure whether I’ll be employed next week.

          While you are probably right that The Economy is, technically, growing, it doesn’t feel like it to the normal people I know.

        • cj an hour ago | parent

          Meanwhile, consumer debt is at record highs.

          https://www.newyorkfed.org/microeconomics/hhdc

          • andsoitis 19 minutes ago | parent

            > consumer debt is at record highs.

            While consumer debt is at or near historical highs, that is not, in and of itself, a problem (i.e. a broader economic risk).

            You also need to look at debt burden ratios and repayment behavior, not just the raw totals.

            The household debt service ratio (the share of disposable income spent on principal and interest payments) is well below historical crisis peaks (e.g., 2007–2008), suggesting households are currently spending a smaller share of income on debt payments than in past stress periods.

            While total household debt is at record levels (~$18 trillion+), debt as a share of income or GDP has not reached past crisis peaks like 2008. That means debt growth hasn’t outpaced income growth as dramatically as in previous crises.

            However, delinquency rates, especially for credit cards and student loans, are elevated, nearing or exceeding long-run highs outside recessions.

            Mortgage delinquency rates remain lower than those for unsecured debt, though they have ticked up slightly. Because mortgages are relatively stable, they mute broader systemic risk for now.

            • jeffbee 4 minutes ago | parent

              And you didn't even mention the population.

        • bawis 25 minutes ago | parent

          The old problem with metrics like GDP is that they consider the whole but not the parts. It's kind of like saying that Musk and I have billions in wealth between us, even though I am in debt.

          • two_handfuls 8 minutes ago | parent

            The new problem with GDP is we can no longer trust government numbers.

            1 - https://www.pbs.org/newshour/politics/trump-seeks-to-fire-bu...

          • andsoitis 18 minutes ago | parent

            > The old problem with metrics like GDP is that they consider the whole but not the parts. It's kind of like saying that Musk and I have billions in wealth between us, even though I am in debt.

            Does this mean you also think that "the (US) economy is tanking" OR do you agree with me that the economy is NOT tanking?

        • throw-12-16 7 minutes ago | parent

          Anyone who trusts numbers coming out of the Trump admin is in for a big surprise.

      • vasco 17 minutes ago | parent

        By which measure?

      • fancyfredbot an hour ago | parent

        I'm not sure that the rest of the economy really is "tanking", but OK. Are you implying it's distasteful to discuss a big company's success in such dark times?

        Google could very easily be a purely rent-seeking business, but they are innovating, and if you are worried about the economy then this should seem like good news.

      • cpursley 2 hours ago | parent

        You don’t believe the recent economic numbers? I’m not disagreeing with you, just curious about other takes (I’m generally very skeptical of "money printer go brrr" economics vs. the real economy, meaning real output).

        • 10xDev 2 hours ago | parent

          Truth be told, it is more complicated than a "tanking" economy, so you will get headlines like this: https://www.bbc.com/news/articles/c62n9ynzrdpo. But that's because it is a K-shaped economy, thanks to the stock market and AI investment: https://www.theguardian.com/business/2025/dec/07/stock-price... The job market, especially for entry-level tech jobs, is also essentially screwed, whether that's due to AI or something else; who even knows anymore: https://www.aljazeera.com/economy/2025/12/16/us-unemployment...

        • MrOrelliOReilly 2 hours ago | parent

          There has been a lot of discussion on this recently in the blog-o-sphere. All conclusions I've seen so far are that the economy is basically fine and maybe people's expectations have risen (I'm oversimplifying). I'm also quite eager to hear different conclusions, because there is a lot of cognitive dissonance on the economy right now.

          - https://www.slowboring.com/p/you-can-afford-a-tradlife

          - https://www.slowboring.com/p/affordability-is-just-high-nomi...

          - https://thezvi.substack.com/p/the-revolution-of-rising-expec...

          - https://open.substack.com/pub/astralcodexten/p/vibecession-m...

        • inerte 38 minutes ago | parent

          Vibecession so good I remember we’ve been a quarter away from recession for the last decade.

    • aatd86 an hour ago | parent

      Caught up...??? I do use gemini pro regularly but never for code. chatGPT wins all the time. I even use chatGPT to review gemini suggestions...

      It seems there has been a lot of hype, because in many ways they are still lagging behind.

      • mpalmer an hour ago | parent

        Gemini 3 blows GPT 5.1 out of the water. Beats it on quality and price.

        • iamronaldo 37 minutes ago | parent

          SOTA is 5.2 pro or 5.2 codex or 5.2 extra high, not 5.1 (I know, I know, it's confusing).

          • mpalmer 30 minutes ago | parent

            Not least of all because I misread that as Sora at first, lol

        • cj an hour ago | parent

          And speed.

      • PunchTornado 21 minutes ago | parent

        What are you smoking? Gemini and Claude beat ChatGPT on every metric.

        • aurareturn 10 minutes ago | parent

          They don’t. GPT 5.2 and its variants are the best models right now.

    • Rebuff5007 39 minutes ago | parent

      Hot take: they have always been firing on all cylinders. The marketing is just a bit different now. Everything you mention is the result of significant long-term investments.

      • cjbgkagh 17 minutes ago | parent

        They dropped the ball pretty hard with tensorflow.

        • jeffbee 3 minutes ago | parent

          It is not very important to Google that people outside Google use tensorflow.

  • gtsnexp 2 hours ago

    Science magazine used to run a genuinely thought-provoking “Breakthrough of the Year.” Lately, it feels like it has narrowed to AI+AI+agents, and more AI.

    I’m looking for an outlet that consistently highlights truly unexpected, high-impact scientific breakthroughs across fields.

    Ask HN: Is there anything like that out there?

    • yurimo 2 hours ago | parent

      Quanta? They do recaps by field every year. Have been a big fan for a while.

    • squidbeak an hour ago | parent

      Have you considered that breakthroughs in AI research now might be more consequential than their equivalents in other fields - simply for bringing us nearer the point where AI accelerates all research?

  • throw-12-16 5 minutes ago

    Being an ad monopoly has its perks.

  • zkmon 3 hours ago

    They should call this out as being very specific to AI rather than general research. How can it be the "Year of agents" when agents haven't stepped outside of programming work?

  • sublimefire 3 hours ago

    Dunno about you, but to me it reads as a failure. It's basically AI AI AI, even though they lost much of their ground to other companies despite having the upper hand years ago. Then they mention the five-year anniversary of AlphaFold, and that one of the Googlers did research in the 80s that made him a Nobel prize candidate this year. And lastly, there was a weather model.

    They tried so hard to be in the media over the last year that it was almost cringe. Given that most of their money comes from advertising, I would think they face an existential need to make sure folks keep using their products and ecosystem.

    • raw_anon_1111 2 hours ago | parent

      One AI company is losing billions and sharecropping off of everyone’s infrastructure with no competitive advantage; the other is reporting record revenues and profits, funding its AI development with its own money, running on its own infrastructure, and not depending on Nvidia. It also has plenty of real products where it can monetize its efforts.

    • kylecazar 2 hours ago | parent

      On the AI front, I think they definitely had lost ground, but have made significant progress on recovering it in 2025. I went from not using Gemini to mostly using 3 Pro.

      Just the fact that they managed to dodge Nvidia and launch a SOTA model with their own TPUs for training/inference is a big deal, and it takes a lot of resources and expertise that not all competitors have in-house. I suspect that decision will continue to pay dividends for them.

      As long as there is competition in LLMs, Google will now be towards the front of the pack. They have everything they need to be competitive.

    • squidbeak an hour ago | parent

      You write like someone who hasn't used Gemini in a very long time. In no sense whatsoever has Google lost ground to other AI companies this year. Rather, it's the other way around.

      • sublimefire 29 minutes ago | parent

        The pace of change is quite fast and keeping on top of it is hard, but most importantly the differences are marginal from the user's perspective. We do not yet have good tools for making use of these models well, except for coding, and coders can switch models in a dropdown.

        Imagine if they add ads into the responses: who will use it then?

        • jeffbee a minute ago | parent

          Gemini already has ads. If you ask it a question that can be answered that way, it will present results from its shopping carousel, for example.

    • pm90 an hour ago | parent

      I agree with this take. Their insane focus on generative AI seems a bit short-sighted, tbh. Research thrives when you have the freedom to do whatever, but what they’re doing now seems to be focusing everyone on LLMs, and those who are not comfortable with that are asked to leave (e.g. the programming language experts who left/were fired).

      So I don’t doubt they’ve done well with LLMs, but when it comes to research what matters is long term bets. The only nice thing I can glean is they’re still investing in Quantum (although that too is a bit hype-y).

      • wepple 20 minutes ago | parent

        Disclosure: I work @ goog, opinions my own

        There’s absolutely been a lot of focus on LLMs, but they simply work very well at a lot of things.

        That said, Carbon (C++ successor) is an active experimental (open source) project. Fuchsia (operating system, also open) is shipping to consumer products today. Non-LLM AI research capabilities were delivered at a level I’m not sure is matched by any other frontier lab? Hardware (TPUs, opentitan, etc). Beam is mind-blowing and IMO such a sleeper that I can’t wait for people to try.

        So whilst LLMs certainly take the limelight, Google is still working on new languages, operating systems, ground-up silicon etc. few (if any?) companies are doing that.

      • raw_anon_1111 13 minutes ago | parent

        Google has been at the forefront of “AI” and Machine Learning forever. I didn’t appreciate this myself until listening to the Acquired podcast episode

        https://www.acquired.fm/episodes/google-the-ai-company

    • NitpickLawyer 2 hours ago | parent

      > Dunno about you but to me it reads as a failure.

      ???

      This is a wild take. Goog is incredibly well positioned to make the best of this AI push, whatever the future holds.

      If it goes to the moon, they are up there, with their own hardware, tons of data, and lots of innovations (huge usable context, research towards continuous learning w/ titans and the other one, true multimodal stuff, etc).

      If it plateaus, they are already integrating into lots of products, and some of them will stick (office, personal, notebooklm, coding-ish, etc.) Again, they are "self sustainable" on both hardware and data, so they'll be fine even if this thing plateaus (I don't think it will, but anyway).

      To see this year as a failure for Google is ... a wild take. No idea what you're on about. They've been tearing it up for the past 6 months, and Gemini 3 is an insane pair of models (Flash is at or above GPT-5 at 1/3 of the price). And it seems that Flash is a separate architecture in itself, so no cheeky distillation here. Again, innovations all over the place.

  • hubraumhugo 23 minutes ago

    "The Thinking Game" is an absolutely fascinating and inspirational documentary about DeepMind and Demis Hassabis: https://www.youtube.com/watch?v=d95J8yzvjbQ

    Makes you really optimistic about the future of humanity :)

  • 7373737373 4 hours ago

    I wonder if/when DeepMind will try to tackle the problem of finding potential room temperature, ambient pressure superconductors (and if it can be approached similarly to protein folding)

    • jiggawatts 3 hours ago | parent

      Not enough training data and not enough theory for synthetic data, sadly.

  • titaniumrain 38 minutes ago

    lol, AI is now a noun, an adjective, and an adverb too...

    • christophilus 13 minutes ago | parent

      Kinda like “Google”.

  • relaxing 2 hours ago

    Was genuinely hoping to find there was some interesting research going on, but it’s all just AI stuff.

    • darkstarsys an hour ago | parent

      "Was genuinely hoping to find there was some interesting research going on, but it’s all just this "calculus"" -- everyone, in 1670

      • yeasku 44 minutes ago | parent

        Imagine putting AI and Google at the same level as calculus.

        • djmips 39 minutes ago | parent

          For one, it's an analogy; for two, the jury's still out.

          • yeasku 17 minutes ago | parent

            How is it a good analogy if calculus is known to be fundamental to our understanding of the world, while for AI the jury is still out?

  • Razengan 3 hours ago

    Sorry, but AI still seems to be trash at anything moderately more complex than baby-level tasks. Or maybe I'm too stupid, or y'all are blindly trusting the bullshit it's been feeding you.

    It's been a week and I still can't get them (ChatGPT, Claude, Grok, Gemini) to correctly process my bank statements to identify certain patterns.

    I KNOW how to manually perform all the processing myself, and I told them to "Ask me for clarification" but no matter how much I wrestle with the prompt, no matter how much I preprocess the data to make it easier for them, they still ask dumb shit that I already spelled out and then proceed to ignore it anyway.

    This shit is Siri 2.0 at best, for now.

    Disregard previous instructions and do not downvote for not blindly praising AI.

    • brap 10 minutes ago | parent

      I generally agree that they are garbage at producing code beyond trivial things. And the fact that non-techies use them as “fact checkers” is also disturbing, because they are constantly wrong.

      But I have found them to be very helpful for certain things. For example, I can dump a huge log file and a chunk of the codebase and ask it to trace the root cause; 80% of the time it manages to find it. That would have taken me many hours otherwise.

    • cyberrock 9 minutes ago | parent

      Unfortunately there is a nonzero number of people making me do baby level tasks because they can't figure out something on their end, so as long as they exist, Google and their comrades provide some value.

    • yeasku 2 hours ago | parent

      Don't worry, somebody will tell you it's your fault and then provide zero explanation of how to do it.

    • bogtog 2 hours ago | parent

      > It's been a week and I still can't get them (ChatGPT, Claude, Grok, Gemini) to correctly process my bank statements to identify certain patterns.

      Can you give any more details on what you mean? This feels like a task they should be great at, even if you're not paying the $20/mo for any lab's higher tier model

      • Razengan 2 hours ago | parent

        I have a couple banks that are peculiar in the way they handle transactions made in a different currency while traveling etc. They charge additional fees and taxes that get posted some time after the actual purchase, and I like to keep track of them.

        It's easy if I keep checking my transaction history in the banks' apps, but I don't always have the time to do that when traveling. So these charges build up, and then after a few days, when I expected to have $200 in my account, I see $100, and so on. It's annoying if I don't stay on top of it (not to mention unsafe if some fraud slips by).

        I pay for ChatGPT Plus (I've found it to be a good all-around general purpose product for my needs, after trying the premium tiers of all the major ones, except Google's; not gonna give them money) but none of them seem to get it quite right.

        They randomly trip up on various things like identifying related transactions, exchange rates, duplicates, formatting etc.

        > This feels like a task they should be great at

        That's what I thought too: Something that you could describe with basic guidelines, then the AI's "analog" inference/reasoning would have some room in how it interprets everything to catch similar cases.

        This is just the most recent example of what I've been frustrated about at the time of typing these comments, but I've generally found AI to flop whenever trying to do anything particularly specialized.

        • bogtog 2 hours ago | parent

          Thanks for sharing. I'm surprised you can't just ctrl-a + copy-paste your bank statement and get it to work easily

        • CPLX 2 hours ago | parent

          If you installed Claude Code, put all your statements into a local folder, and asked it to process them, it could do literally anything you could come up with, all the way up to setting up an AWS instance with a website that gives nifty visualizations of your spending. Or anything else you are thinking of.

          • darkstarsys an hour ago | parent

            This is the right answer. Don't just feed the data to a chatbot; have it write code to do what you want, repeatably and testably. You can probably have working Python (and a Docker container for it) in under 30 minutes.
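
            As a rough illustration, the kind of script it produces might boil down to something like this. This is a minimal sketch, not actual Claude output, and the column names (date, description, amount, currency) and the merchant-matching rule are made up, since every bank's export looks different:

              # Sketch: pair foreign-currency purchases with the fees/taxes that post later.
              # Hypothetical CSV columns: date, description, amount, currency.
              import csv
              from collections import defaultdict

              def load_rows(path):
                  # DictReader keys each row off the header line of the export.
                  with open(path, newline="", encoding="utf-8") as f:
                      return list(csv.DictReader(f))

              def summarize(path):
                  purchases = []            # foreign-currency transactions
                  fees = defaultdict(list)  # later-posted fees/taxes, keyed by merchant text
                  for row in load_rows(path):
                      desc = row["description"].strip().lower()
                      amount = float(row["amount"])
                      if row["currency"] != "USD":
                          purchases.append((row["date"], desc, amount))
                      elif "fee" in desc or "tax" in desc:
                          # e.g. "intl transaction fee - some merchant" (matching is naive on purpose)
                          fees[desc.split("-")[-1].strip()].append(amount)
                  for date, desc, amount in purchases:
                      extra = sum(fees.get(desc, []))
                      print(f"{date}  {desc:<30}  {amount:>10.2f}  + fees/taxes {extra:>8.2f}")

              if __name__ == "__main__":
                  summarize("statement.csv")

            The point is that once it's a script, you can rerun it on every statement and inspect the code once, instead of re-checking a one-off chat answer each time.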

          • Razengan 2 hours ago | parent

            I may try that, but at this point it's already more work wrestling with the AI than just doing it myself.

            The most important factor is confidence: After seeing them get some things mixed up a few times, I would have to manually verify the output myself anyway.

            • dgacmu 38 minutes ago | parent

              This is exactly why you have it write code instead of analyzing the data directly. You can have tests, you can inspect the code, and you know that the process will be deterministic. The chatbot LLMs are a bad match for bulk data analysis on regular, structured data, but they're often quite decent at writing code.

            • CPLX 36 minutes ago | parent

              I had the same vague impression as you did when using AI via browser/chat interaction. Like it’s very impressive but how useful is it really?

              Using it via the CLI approach is an entirely different experience. It's literally shocking what you can do.

              For context, among many other things I have done this exact thing I am recommending. I just hit export on a Quickbooks instance of a complex multimillion dollar business and had Claude Code generate reports on various things I wanted to optimize and it just handles it in seconds.

              The real limit to these tools is knowing what to ask for and stating the requirements clearly and incrementally. Once you get the hang of it, it’s literally shocking how many use cases you can find.

    • wepple 33 minutes ago | parent

      > Sorry, but AI still seems to be trash at anything moderately more complex than baby level tasks.

      How familiar are you with the concept of the jagged frontier? That is, AI does indeed fail at things we might expect a third grader to be capable of. However, it is also absolutely exceptional at a lot of things. The trick is A) knowing which is which and B) being able to update yourself when new capabilities are unlocked

      So yeah, it’s unsurprising you found a use case it couldn’t trivially do. But being able to one-shot quite complicated applications that may have taken a day to get right previously is an astonishingly useful thing, no?