Show HN: Gemini Pro 3 imagines the HN front page 10 years from now (dosaygo-studio.github.io)
3314 points by keepamovin 3 days ago | 957 comments
  • dang 2 days ago

    All: Don't miss thomasm6m6's follow-up: https://sw.vtom.net/hn35/news.html (via https://news.ycombinator.com/item?id=46212180). He added the articles!

    Edit: See also this current thread: Auto-grading decade-old Hacker News discussions with hindsight - https://news.ycombinator.com/item?id=46220540

  • keepamovin 3 days ago

    Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!

    Update: I told Gemini we made it to the front page. Here is its response:

      LETS GOOOO! 
    
      The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.
    
      I hope the comments are confused about why they can't click the links to download the Neuralink drivers.
    
      Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
    • pseudosavant 3 days ago |parent

      This is one of the greatest LLM creations I've ever seen. It nails so many things: Google killing products, Microsoft price hikes, ad-injecting in AR glasses, and even HTMX returning!

      It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and even the HN comments. Perhaps a bit much to ask of it?

      • thomasm6m6 3 days ago |parent

        Here it is: https://sw.vtom.net/hn35/news.html

        I downloaded the original article page, had claude extract the submission info to json, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
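        A minimal sketch of that pipeline (all helper names here are hypothetical, and the model call is stubbed out so the sketch runs offline; the real script calls gemini-3-pro instead):

```python
import json
import random

def generate(prompt: str) -> str:
    # Stub for the LLM call -- the actual script sends this prompt to
    # gemini-3-pro. Stubbed here so the sketch runs without an API key.
    return f"<html><!-- generated from: {prompt[:40]} --></html>"

def build_thread(submission: dict) -> dict:
    # One article page, then a random number of comments (20-100,
    # matching the count the author mentions later in the thread).
    article = generate(f"Article page for: {submission['title']}")
    comments = [
        generate(f"Comment on: {submission['title']}")
        for _ in range(random.randint(20, 100))
    ]
    return {"id": submission["id"], "article": article, "comments": comments}

# Submission info extracted to JSON in the first step of the pipeline.
submissions = json.loads('[{"id": 90098234, "title": "SQLite 4.0 released"}]')
threads = [build_thread(s) for s in submissions]
```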

        I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:

        "You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."

        "This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."

        "Zig v1.0 still hasn't released (ETA 2036)"

        The unprompted one-shot leetcode, youtube, and github clones

        Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"

        "Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"

        Github Copilot attempts social engineering to pwn the `sudo` repo

        It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message

        "dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."

        • keepamovin 2 days ago |parent

          Wow, wow, that is so incredible! The articles! That's beautiful :) WOW!

        • pseudosavant 3 days ago |parent

          SQLite 4.0 Release Notes: https://sw.vtom.net/hn35/pages/90098234.html

          Another absolute gem:

              Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
              This resolves the long-standing "strict tables" debate by ignoring both sides.
          
          Also:

              SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
              The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
          
          edit: added link
          • patates 2 days ago |parent

            I haven't laughed this much for a while :) I'm exploring the possibility of having gemini write me such jokes every day when I wake up - perhaps it can vibe-code something itself.

          • locknitpicker 2 days ago |parent

            > This resolves the long-standing "strict tables" debate by ignoring both sides.

            This is brilliant. Well done.

          • menaerus 2 days ago |parent

            And

              It is now the only software in the world still written in C89.
            
            Hilarious.
            • throwaway2037 12 hours ago |parent

              But wait, there's more!

                  > Predictive SELECT Statements:
                  > Added the PRECOGNITION keyword.
                  > SELECT * FROM sales WHERE date = 'tomorrow' now returns data with 99.4% accuracy by leveraging the built-in 4kB inference engine. The library size has increased by 12 bytes to accommodate this feature.
              
              12 bytes really sounds like something that the lead dev would write!
          • ray_v 3 days ago |parent

            I may live to see it; peak sqlite.

            • teiferer 2 days ago |parent

              You are laughing. IBM has already been there, with DB2 having essentially been the OS on mainframes a few decades ago.

              • TeMPOraL 2 days ago |parent

                All of this has happened before. All of this will happen again.

                • ray_v 2 days ago |parent

                  So it's true, the profits have spoken of this!

          • xp84 2 days ago |parent

            Omg the Bloomberg/IBM article has so many Easter eggs

            • throwaway2037 12 hours ago |parent

              Here is the link: https://sw.vtom.net/hn35/pages/90097654.html

              The content is spot on and very funny.

              Also, a popup appeared at the bottom with this message:

                  > The future costs money.
              
                  > You have reached your free article limit for this microsecond.
              
                  > Subscribe for 0.0004 BTC/month
              
              Suddenly, I have high hopes again for LLMs. Imagine you were a TV/film script writer and had writer's block. You could talk to an LLM for a while to see what funny ideas it can suggest. It is one more tool in the arsenal.
        • avianlyric 3 days ago |parent

          Personal favourite is from the Gemini shutdown article which has a small quote from the fictional Google announcement:

          > "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."

          > — Official Blog Post, October 2034

          It’s good to know that AI won’t kill satire.

          • teiferer 2 days ago |parent

            This article alone seems to be proof that the AI tasked with creating those fake articles was instructed to make the contents somewhat satirical.

            • thomasm6m6 2 days ago |parent

              script is here: https://github.com/vtomnet/hn35/blob/main/generate.py

              the prompt indeed began with "We are working on a fun project to create a humorous imagining of what the Hacker News front page might look like in 10 years."

        • pseudosavant 3 days ago |parent

          Office 365 Price Hike Comment Gold: https://sw.vtom.net/hn35/item.html?id=90097777

              The Conditional Formatting rules now include sponsored color scales.
              If you want 'Good' to be green, you have to watch a 15-second spot.
              Otherwise, 'Good' is 'Mountain Dew Neon Yellow'.
          • xp84 2 days ago |parent

            Wow - Gemini is ready for the C-suite already! A+

        • TeMPOraL 2 days ago |parent

          Comedy gold. The whole "Right to Human Verification" Act bit sounds eerily plausible and realistic, but then we get this:

          "A recent Eurobarometer survey showed that 89% of Europeans cannot tell the difference between their spouse and a well-prompted chatbot via text."

          Also I bet this will become a real political line in less than 10 years:

          "A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."

          • tempodox 2 days ago |parent

            Too optimistic, “AI” minions will insist that a high parameter count is a soul.

            • TeMPOraL 2 days ago |parent

              Well, isn't it?

              • tempodox a day ago |parent

                Can’t be proved or disproved either way.

        • LocalH 3 days ago |parent

          Favorite thing I've come across so far:

          prompt_engineer_ret 10 hours ago

          I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?

          git_push_brain 9 hours ago

          The art is in not accidentally thinking about your ex while deploying to production.

          • danlugo92 3 days ago |parent

            What about this one:

            > The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.

            And the response...

            • throwaway2037 3 days ago |parent

              I am nearly in tears after reading this chain of posts. I have never read anything so funny here on HN.

              Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)

              • jjcob 2 days ago |parent

                The jokes are not new. If you read Philip K Dick or Douglas Adams there's a lot of satirical predictions of the future that sound quite similar. What's amazing about LLMs is how they manage to almost instantly draw from the distilled human knowledge and come up with something that fits the prompt so well...

                • regularfry 2 days ago |parent

                  Gemini making a joke out of its own retirement is incredible, genuinely.

              • thomasm6m6 2 days ago |parent

                re: image gen, have you seen the more recent models? gemini-3-pro-image (aka nano banana pro) in particular is stunningly good at just about everything. examples: https://vtom.net/banana/

        • benbreen 3 days ago |parent

          Love the faux Nature article: https://sw.vtom.net/hn35/pages/90098000.html

          Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"

          I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.

          • qnleigh 2 days ago |parent

            And below the social credit score:

            > © 2035 Springer Nature Limited. A division of The Amazon Basics™ Science Corp.

            • dotancohen a day ago |parent

              Did you notice who authored the paper?

                > Dr. Sarah Connor, DeepMind AlphaFusion v9.22, GPT-8 (Corresponding Author), Prof. H. Simpson & The ITER Janitorial Staff
        • kaiwen1 2 days ago |parent

          Why functional programming languages are the future (again)

          Top comment:

          “The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”

        • tambourine_man 3 days ago |parent

          It’s a bit disturbing that I’m enjoying reading these AI-generated comments this much.

          Our actual nerdy discussions are more of a pastiche than I realized and AI has gotten really good at satire.

          This is pure gold.

        • phs318u 2 days ago |parent

          This is truly a work of comedy genius. The comments! Too good! Sent to Internet Archive for posterity (https://archive.md/1mpi1).

          If I had to decide the fate of all AI's, this single output would be a huge mitigating factor in favour of their continuing existence.

        • overfeed 3 days ago |parent

          A sequence in the AR Glass thread was hilarious.

          >>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.

          >> [flagged]

          > The irony of flagging this comment is palpable

        • pseudosavant 3 days ago |parent

          That deserves to be posted and voted onto the homepage. The fake articles and the fake comments are all incredible. It really captures this community and the sites we love/hate.

          • thomasm6m6 3 days ago |parent

            Now I'm curious to try something more real-time. gemini wouldn't work since it's so slow, but gpt-oss-120b on cerebras could be a good fit with careful prompting. might do this after finals

            also worth linking https://worldsim.nousresearch.com/console

            • indigodaddy 3 days ago |parent

              What is this? I'd rather not log in to a random unknown thing :)

              • thomasm6m6 3 days ago |parent

                oh, didn't realize it required login. it's an implementation of the "llm simulates a world as you go" idea. screenshots and twitter for context:

                https://sw.vtom.net/tmp/worldsim1.png

                https://sw.vtom.net/tmp/worldsim2.png

                https://x.com/NousResearch/status/1788283681511035011

                https://x.com/karan4d/status/1768836844207378463

                • indigodaddy 2 days ago |parent

                  Wild stuff, thanks for replying

        • nextaccountic 3 days ago |parent

          The comments look like some https://old.reddit.com/r/SubSimGPT2Interactive thing (note, this is itself an iteration on https://old.reddit.com/r/SubredditSimulator/)

          I miss those times when AI was a silly thing

        • wernsey 2 days ago |parent

          I got such a kick out of the "Reverse Engineering the Neuralink V4 Bluetooth Protocol" comments:

          > My son tried something like this and now he speaks in JSON whenever he gets excited. Is there a factory reset?

          >> Hold a strong magnet to his left ear for 10 seconds. Note: he will lose all memories from the last 24 hours.

        • winddude 3 days ago |parent

          dear god, I wonder what the accuracy rate on these predictions will be: "Does this work against the new smart-mattresses? Mine refuses to soften up unless I watch a 30-second ad for insurance." <https://sw.vtom.net/hn35/pages/90098444.html>

          • pdxandi 3 days ago |parent

            Wow, that is incredible. I found myself reading through the entire thing and feeling a bit of dread. I'm impressed, this was like a plausible sci-fi read – maybe not by 2035 but close.

        • dash2 2 days ago |parent

          Jepsen:

          'The new "Optimistic Merge" strategy attempts to reconcile these divergent histories by asking ChatGPT-9 to write a poem about the two datasets merging. While the poem was structurally sound, the account balances were not.'

          That's genuinely witty.

        • magospietato 3 days ago |parent

          Wow, that's brilliant. Can't help but think your script unlocked this. I'm now genuinely reconsidering whether frontier LLMs can act as a force multiplier for general creativity, like they do with programming.

          • thomasm6m6 3 days ago |parent

            thanks! here's the script for reference: https://github.com/vtomnet/hn35/blob/main/generate.py

        • throwaway2037 3 days ago |parent

          As a big fan of Zig, I still got a laugh out of this one:

              > "Zig v1.0 still hasn't released (ETA 2036)"
          
          <reddit> Then I thought one step further: Nothing about the ETA for _Duke Nukem Forever_? </reddit>
        • cardanome 3 days ago |parent

          > Wayland has been stable since 2028, stop living in the past.

          Even the AI is throwing shade at Wayland.

        • xp84 2 days ago |parent

          https://sw.vtom.net/hn35/pages/90098999.html

          > It is the year 2035. The average "Hello World" application now requires 400MB of JavaScript, compiles to a 12GB WebAssembly binary, and runs on a distributed blockchain-verified neural mesh. To change the color of a button, we must query the Global State Singularity via a thought-interface, wait for the React 45 concurrent mode to reconcile with the multiverse, and pay a micro-transaction of 0.004 DogeCoin to update the Virtual DOM (which now exists in actual Virtual Reality).

          This is all too realistic... If anything, 400MB of JS is laughably small for 2035. And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.

          • throwaway2037 12 hours ago |parent

                > And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
            
            This sounds epic. Did you blog about it? HN would probably love the write up!
        • TheOtherHobbes 3 days ago |parent

          This is one of the best things I have ever seen on HN.

          • isoprophlex 2 days ago |parent

                > corpo_shill_automator 19 hours ago
                > I am a real human. My flesh is standard temperature. I enjoy the intake of nutrient paste.
          • eek2121 3 days ago |parent

            I didn't even look at it, but the comments are absolute gold. Well done to OP and others involved. It should be pinned on the front page. ;)

        • chirayuk 3 days ago |parent

          This got me a chuckle.

          > Bibliographic Note: This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034).

          for the article on "Running LLaMA-12 7B on a contact lens with WASM"

          https://sw.vtom.net/hn35/pages/90100123.html

        • teekert 2 days ago |parent

          -- At the 2060 Nobel prize ceremony --

          Q: Welcome Prof. teekert, How did you come up with the idea to run Doom on mitochondria?

          A: Well, there was some post on HN, back in 2025...

        • deepvibrations 2 days ago |parent

          Love it! Favourite comment:

          "Why is anyone still using cloud AI? You can run Llama-15-Quantum-700B on a standard Neural-Link implant now. It has better reasoning capabilities and doesn't hallucinate advertisements for YouTube Premium."

        • indigo945 2 days ago |parent

          "Ask HN: How do you prevent ad-injection in AR glasses", comments:

              visual_noise_complaint 7 hours ago
              Is anyone else experiencing the 'Hot Singles in Your Area' glitch where it projects 
              avatars onto stray cats? It's terrifying.
                cat_lady_2035 6 hours ago
                Yes! My tabby cat is currently labeled as 'Tiffany, 24, looking for fun'. I can't 
                turn it off.
          
          "Europe passes 'Right to Human Verification' Act", from the article:

              "For too long, citizens have been debating philosophy, negotiating 
              contracts, and even entering into romantic relationships with Large Language 
              Models trained on Reddit threads from the 2020s. Today, we say: enough. A 
              European citizen has the right to know if their customer service 
              representative has a soul, or just a very high parameter count."
              — Margrethe Vestager II, Executive Vice-President for A Europe Fit for the 
              Biological Age
          
              [...]
          
              Ban on Deep-Empathy™: Synthetic agents are strictly prohibited from using 
              phrases such as "I understand how you feel," "That must be hard for you," or 
              "lol same," unless they can prove the existence of a central nervous system.
          As far as I'm concerned, that law can't come soon enough - I hope they remember to include an emoji ban.

          For "Visualizing 5D with WebGPU 2.0", the link actually has a working demo [1].

          I'm sad to say it, but this is actually witty, funny and creative. If this is the dead-internet bot-slop of the future, I prefer it over much of the discussion on HN today (and certainly over reddit, whose comments are just the same jokes rehashed over and over again, and have been for a decade).

          [1]: https://sw.vtom.net/hn35/pages/90099800.html

          • SauntSolaire 2 days ago |parent

            That demo even has a hidden easter egg when you move one of the sliders to max. If that was all AI-originated, it's rather impressive.

          • xp84 2 days ago |parent

            Nice.

            GPU: NVIDIA RTX 9090 Ti (Molten Core) VRAM Usage: 25.3 GB / 128 GB

        • ya1sec 3 days ago |parent

          Funny, one of the posts has the following hashtag: #humanwritten

        • lxgr 18 hours ago |parent

          > Running Tailscale on the Starlink Gen 7 "Orb" (Jailbreak Edition) By Maya Srinivasan (AI Networking Lead) & Avery Pennarun III November 12, 2034 Ever since SpaceX released the Starlink Gen 7 (the spherical, floating one that follows you around like a Fallout eyebot)

          Amazing :D

        • instagraham 2 days ago |parent

          > (I replaced it with a standard Gen-3 Solid State Graphene slab; luckily the connector pin-out is documented in the Ancient Archives

          Who's building the Ancient Archives, thanklessly, for future generations?

        • reed1234 3 days ago |parent

          This is awesome! One small thing: all the comments have exactly one child. It would feel more natural with some entropy.

          And the original/derivative doesn’t span full width on mobile. Fixing that too would make it look very authentic.

          • thomasm6m6 3 days ago |parent

            good point. some of the comments actually do have >1 children, e.g. a few here (https://sw.vtom.net/hn35/item.html?id=90098555), but it'd be better if it were more random

            • reed1234 3 days ago |parent

              True. I was looking at this one for your reference: https://sw.vtom.net/hn35/item.html?id=90100001

              • thomasm6m6 3 days ago |parent

                Ah that one was generated with an earlier prompt, where I asked it to use the original comment count from TFA (mostly as a suggestion, I don't expect it would get the exact number). Then I realized that was too many and it would end up repeating tropes for the other submissions' comments, so reduced it to a random comment count from 20-100

        • ajolly 3 days ago |parent

          "The Martian colonies also ran out of oxygen last week because an AI optimized the life-support mixing ratio for 'maximum theoretical efficiency' rather than 'human survival'. I'll take the Comic Sans, thanks. reply

          musk_fanboy_88 14 hours ago:

          That was a beta feature."

        • eproxus 2 days ago |parent

          This is incredibly funny, I’m impressed. The FAQ of the No-AI Editor is hilarious:

          Q: I typed "make website" and nothing happened? A: That is correct. You have to write the HTML tags. <div> by <div>.

          Q: How do I center a div without the Agent? A: Nobody knows. This knowledge was lost during the Great Training Data Purge of 2029.

          • xp84 2 days ago |parent

            that last one hits hard because really, I don't even know today IRL

        • shaky-carrousel 2 days ago |parent

          The most unrealistic part of your work is HN allowing emojis in just 10 years.

        • pseudosavant 3 days ago |parent

          Pretty amazing! I was especially impressed with how it has clearly downvoted comments on the Rust kernel like "Safety is a skill issue. If you know what you're doing, C is perfectly safe."

          Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?

          Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."

        • novaray a day ago |parent

          > We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap

        • arein3 2 days ago |parent

          After reading it for half an hour and going back to the real version, I got a feeling that something isn't right

        • shazeubaa 2 days ago |parent

          This is absolutely brilliant! Well done.

        • 71bw 2 days ago |parent

          Blocked by FortiGate as pornography of all things... submitted for review, but got a chuckle out of me lol

        • jacquesm 3 days ago |parent

          Hehe, that's lovely.

          Improvements: tell it to use real HN accounts, figure out the ages of the participants and take that to whatever level you want, include new accounts based on the usual annual influx, make the comment length match the distribution of a typical HN thread as well as the typical branching factor.

          > Garbage collection pause during landing burn = bad time.

          That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...
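          The branching-factor idea above could be sketched with a toy sampler; the geometric branching assumption and the p_child value are guesses for illustration, not measured HN statistics:

```python
import random

def sample_comment_tree(max_depth: int = 5, p_child: float = 0.45) -> list:
    # Each node keeps drawing children until a coin flip fails, so the
    # number of replies per comment follows a geometric distribution.
    # max_depth and p_child are assumed parameters, not fitted values.
    def children(depth: int) -> list:
        kids = []
        while depth < max_depth and random.random() < p_child:
            kids.append({"depth": depth, "replies": children(depth + 1)})
        return kids
    return children(0)

def size(nodes: list) -> int:
    # Total number of comments in a tree (node plus all descendants).
    return sum(1 + size(n["replies"]) for n in nodes)

random.seed(42)  # deterministic for demonstration
tree = sample_comment_tree()
```

Fitting p_child (per depth) and the comment-length distribution to a scraped corpus of real threads would bring the shape closer to actual HN.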

        • chirayuk 3 days ago |parent

          Fun comments for re-writing sudo in Zig:

          > Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?

          > Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.

          from https://sw.vtom.net/hn35/pages/90100066.html

        • gen6acd60af 2 days ago |parent

          >This submission has been flagged by the Auto-Reviewer v7.0

          But we already have this on HN ;-)

          [dupe]

        • indigodaddy 3 days ago |parent

          Wow absolutely incredible job. You are my hero. You should post this to HN lol.

        • jodrellblank 2 days ago |parent

          "It's scary stuff. Radically advanced. - I mean, it was smashed, it didn't work, but it gave us ideas, took us in new directions, things we would've never Th..."

        • GFischer 2 days ago |parent

          This is great.

        • winddude 3 days ago |parent

          disappointing I can't download the arxiv papers. Otherwise nice work. Also, this made my day!!

        • fetzu 2 days ago |parent

          “dang_autonomous_agent”, ffs!

      • benbreen 3 days ago |parent

        Was going to say - it would be fascinating to go a step further and have Gemini simulate the actual articles. That would elevate this to the level of something like an art piece. Really enjoyed this, thank you for posting it.

        I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...

        Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...

        • benbreen 3 days ago |parent

          Here is the working version: https://hyper-card-hacker-news.vercel.app

          Enjoy!

          I also asked Opus 4.5 to make a "1994 style readme page" for the GitHub: https://github.com/benjaminbreen/HyperCardHackerNews

          • thomasm6m6 3 days ago |parent

            wow that's cool. didn't expect an llm could make that complex of a ui. it's giving me flashbacks to simulating reddit threads with llama in early 2023

        • zem 3 days ago |parent

          I think it's perfect as it is, trying to expand the headlines into articles would belabour the joke too much.

        • ForOldHack 3 days ago |parent

          You are a sick, sick man, but you have taste.

        • BubbleRings 3 days ago |parent

          First let’s have it create maybe 100 more entries, then have people vote on which are the best 30, THEN put all the effort into creating all the fake articles and discussions. As good as the current 30 are, maybe the set could still be made twice as good. And have a set of short “explain xkcd”-style entries somewhere so people can read up on what the joke is, when they miss a specific one. Then send it to The Onion and let them make a whole business around it or something.

          Definitely one of the best HN posts ever. I mean come on!:

          FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)

          • bigiain 3 days ago |parent

            Save some of the not-top-30 posts, and add in a sprinkling of Hiring, Show HN, YC Summer 2035 acceptances/rejections, or product launches - of founders who just vibe-coded something based on a presumed 6-weeks-ago version of this future HN universe.

          • literalAardvark 3 days ago |parent

            That one's a bit optimistic for the FDA.

            But it nailed fusion and Gary Marcus lesssgoo

            • olibaw 2 days ago |parent

              The Gary Marcus headline is perfect.

        • saalweachter 3 days ago |parent

          I didn't even try to click through to the articles, so I was just disappointed I couldn't read the comments.

      • Voultapher 2 days ago |parent

        For me it's the perfect example of why LLMs are boring AF when it comes to creativity. Everything on this page is a mild modification of things on the front pages of today, nothing novel or thought-provoking.

        Hey AI, please create art, and it gives you a hue-shifted Mona Lisa. I find that supremely boring.

        • Shocka1 a day ago |parent

          As a mid/senior level engineer I feel the same - this kind of content just plain sucks, and seeing Gemini respond with the karma comment is icing on the cake.

          Not that long ago on HN there were things being posted regularly about hardware and software that I would define as no less than insane side projects. Projects that people using LLMs today couldn't do in a lifetime. Those posts are still up here and there, but very few compared to the past. They were creative and hard, if not impossible feats.

          So when I see content like this post, with comments underneath it saying "it's the greatest AI content they've ever seen," it's a sad day. Maybe I'm just an old curmudgeon hah!

      • JCharante 3 days ago |parent

        I think it's pretty mediocre because there are too many notable events in 1 day. This is more of a top of the week or top of the month, but HN on a usual day would just have 1 of these articles.

        • bigiain3 days ago |parent

          Sure, but it's kinda like cartoon/comic art. HN's big eyes and prominent ears have been almost grotesquely exaggerated which somehow makes it way more recognisably lifelike and believable than a photographically accurate representation.

        • jacquesm3 days ago |parent

          Accelerando...!

      • fsckboy3 days ago |parent

        >It nails so many things

        it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.

      • ijidak3 days ago |parent

        Agree 100%.

        Google killing a service sent me over the top in laughter.

        But, it's so on the nose on multiple topics.

        I dare say it's more accurate than what the average human would predict.

        I would love to see this up against human predictions in some sort of time capsule.

        • nottorp2 days ago |parent

          > I dare say it's more accurate than what the average human would predict.

          Humans have always failed at predicting qualitative improvements like the internet. Most scifi is just quantitative improvements and knowledge of human nature.

          So an LLM has no corpus to train on for predicting really world-changing events.

          • nottorpa day ago |parent

            And ... come to think of it ...

            Every single "prediction" is something easily recognizable in current HN threads. How can you call that a prediction?

            Simple question: if you feed the "AI" the HN front page from 2017, what "predictions" will it make? Besides Google canceling yet another product, of course. Would they all be about crypto?

      • wwweston3 days ago |parent

        Is the apparent lack of displayed anxiety on Gemini’s part a sign of good-natured humor, blithe confidence in its own value regardless of cloud lineup, or proof of absence of self-awareness?

        • eru3 days ago |parent

          Probably just a result of whatever personality they fine-tuned Gemini for via reinforcement learning?

      • keeda3 days ago |parent

        Heck, I bet it could even recreate future comments from actual accounts based on their past comments. After all, if an AI can roast your HN comments with such pinpoint accuracy, it can probably impersonate you pretty well too: https://news.ycombinator.com/item?id=42857604
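        A minimal sketch of that idea, assuming you fetch a user's comment history yourself (e.g. via the public HN Firebase API) and hand the prompt to whatever LLM you like; the function name and sample comments below are made up for illustration:

```python
import textwrap

# Hypothetical sketch: build a few-shot "impersonation" prompt from a
# user's past HN comments. A real version would pull the comments from
# the public Hacker News Firebase API; canned examples keep this
# self-contained.

def build_impersonation_prompt(username, past_comments, topic):
    """Format past comments into a prompt asking an LLM to mimic the voice."""
    samples = "\n".join(
        "- " + textwrap.shorten(c, width=200) for c in past_comments
    )
    return (
        f"Past HN comments by '{username}':\n{samples}\n\n"
        f"Write one new comment in the same voice, replying to: {topic}"
    )

prompt = build_impersonation_prompt(
    "example_user",
    ["Worse is better, every single time.",
     "Please benchmark before optimizing."],
    "the 2035 HN front page",
)
print(prompt)
```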

        • ewoodrich3 days ago |parent

          At least in my case, the "pinpoint accuracy" of that roast made for a pretty uninspired result; it seemed to be based on like 4 or 5 specific comments seemingly chosen at random.

          Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago)? ChromeOS was really the only topic it got right out of 5; if the roasting revolved around that it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?

          I find the front page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they just never really appealed to me personally; it felt more like hyper-specificity as a substitute for well-executed comedy. Plus after the first few examples you pick up on the repetition and go-to joke structures it cycles through, and it quickly starts to get old.

          • dontlaugh3 days ago |parent

            Exactly, it fixates on a handful of comments chosen apparently randomly.

        • apelapan3 days ago |parent

          That Gemini-toaster hits way too close to home.

      • LanceH3 days ago |parent

        I was really hoping the comments were filled out.

        • zozbot2343 days ago |parent

          HN in 2035: Hot Takes from the Basement of the Internet (n-gate.com)

          Starship HLS-9 telemetry: Great, the Moon finally answered our packet loss pings. Next up: who left a Docker container running on the Sea of Tranquility?

          Linux 7.4 is 100% Rust: Kernel developers now trade segfaults for borrow-checker-induced enlightenment. The new panic message: "You violated ownership. Also please refill the coffee."

          Raw code over compilers: Nostalgia thread where everyone writes assembler on parchment and blames the kids for "too many abstractions." OP posts a selfie with a punch card and a tear.

          LLaMA-12 on a contact lens: Love the commitment to edge AI. Imagine blinking and getting a 200 OK for your mood. Privacy policy: we store your tears for calibration.

          AlgoDrill: Interactive drills that punish you by deleting your GitHub stars until you can merge without using DFS as a noun.

          ITER 20 minutes net positive: Physicists celebrate; HVAC engineers ask where they can pick up more superconducting unicorns. Comments: "Can it also power my rage against meetings?"

          Restoring a 2024 Framework Laptop: A brave soul resurrected a relic. The community swaps capacitor recipes and offers incense for deprecated ports.

          Google kills Gemini Cloud Services: Corporate reorgs reach sentience. The comments are eulogies and migration guides in equal measure.

          Visualizing the 5th dimension with WebGPU 2.0: My GPU is sweating. The demo runs at 0.01 fps but it's a transcendent experience.

          Nia (autonomous coding agents): Pitch: give context to agents. Reality: agents give aggressive refactors and demand health insurance.

          Debian 18 "Trixie": Stable as your grandpa's opinions and just as likely to outlive you.

          Rewrite sudo in Zig?: Peak take: security through unfamiliarity. Attackers will be confused for at least 72 hours.

          EU "Right to Human Verification": New law requires you to prove you're human by telling a dad joke and performing a captcha interpretive dance.

          Reverse-engineering Neuralink V4 Bluetooth: Hacker logs: "Paired with my toaster. It now judges my late-night snacks."

          Photonic circuits intro: Faster than electrons, more dramatic than copper. Also, please don't pet the light guide.

          OTC CRISPR for lactose intolerance: Biohackers rejoice. Moms immediately order it with a coupon code and a side-eye.

          SQLite 4.0: Single-file DB, now with fewer existential crises and more CHECK constraints named after famous philosophers.

          Prevent ad-injection in AR glasses: Top comment: "Wear blindfolds." Practical comment: "VPN the whole world."

          Jepsen: NATS 4.2: Still losing messages. Maintainers reply: "We prefer the term 'opportunistic delivery.'"

          GTA VI on a RISC-V cluster: Performance: charming. Latency: existential. Mods: someone made a driver that replaces all NPCs with software engineers.

          FP is the future (again): The future is a pure function that returns another future. Also, monads.

          Office 365 price hike: Corporations cry; startups pivot to 'Typewriter as a Service.'

          Emulating Windows 10 in-browser: Feels nostalgic until Edge 2.0 asks for admin rights to run a game from 2015.

          Tailscale on a Starlink dish: Networking reaches orbit. First bug report: "IP addresses refusing to accept gravity."

          Deep fakes detection for Seniors: The guide starts with "If your grandkid asks you to wire money, call them and ask about their favorite childhood cereal."

          IBM to acquire OpenAI (rumor): Wall Street plays Risk with press releases. Comments: "Will they rebrand it to BlueAI?"

          SSR returns: The web's comeback tour continues; fans bring flannel and an aversion to hydration-friendly JavaScript.

          Faraday Cage bedroom manual: DIYers debate tinfoil vs. aluminum yoga wraps. Sleep quality: unknown.

          AI progress stall opinion: Hot take carousel. Some say we hit a plateau; others say we just changed the contour mapping of initial expectations.

          Text editor that doesn't use AI: Revolutionary. Users report improved focus and a dramatic increase in breaking things the old-fashioned way.

          Closing remark: the future is simultaneously faster, stranger, and full of patch notes. Please reboot your expectations and update your planet.

          • bigiain3 days ago |parent

            Awww man - I miss n-gate.

            I hope whoever they are is doing well. I like to think they're "recovered" in the alt.sysadmin.recovery sense of the word, and are living happily ever after without a single piece of tech newer than vacuum tubes, and handcrafting traditional Inuit canoes or repairing century-old clocks or cultivating artisan sourdough starters or something.

      • bartread2 days ago |parent

        I love the optimism: it thinks GTA VI will have been released.

        The headline about writing code manually without prompting as well - so on point.

      • randombits03 days ago |parent

        I was just hoping to get a new page with a refresh.

      • stephenlf3 days ago |parent

        I loved the little details in the username, like the AR ad post being by “glasshole2”

        • scotty792 days ago |parent

          The fact that some comments are heavily downvoted is also a cool touch.

      • keepamovin9 hours ago |parent

        Thank you! :)

      • indigodaddy3 days ago |parent

        Here's Replit's attempt

        https://future-news--jgbrwn.replit.app

      • boringg3 days ago |parent

        Iter still not getting fusion

      • pishpash3 days ago |parent

        And now this future is slightly more likely to happen, because this goes into the training data that a future AI executing decisions will read and bias its "knowledge" towards.

    • UniverseHacker3 days ago |parent

      I don't believe this was written by Gemini, at least with that prompt, because it is obvious (hilarious and creative) satire of HN and the tech industry based on inside HN jokes and tropes, and clearly isn't a literal interpretation of the prompt.

      Moreover, a quick look at your profile suggests these jokes are pretty in line with your sense of humor and writing style.

      Anyways, if I'm correct, it's especially hilarious and impressive.

      • eklitzke3 days ago |parent

        Pretty much all of the history of HN front pages, posts, and comments are surely in the Gemini training corpus. Therefore it seems totally plausible that Gemini would understand HN inside jokes or sentiment outside of what's literally on the front page given in the prompt, especially given that the prompt specifically stated that this is the front page for HN.

      • glenstein3 days ago |parent

        I agree with your characterization (not a literal interpretation of the prompt), and think that's the most important thing I wish more people ITT would understand. But I nevertheless think Gemini did create that in response. Sometimes people think they want "prediction" when actually they want cheeky inside jokes and vibes. If anything, Gemini is probably faithfully responding to the vibes of the prompt as well as following traditional signals of "success" per its training.

      • jandrese3 days ago |parent

        The thing that got me is there are no duds. With just the short prompt in the header I would have expected a few articles that missed the mark, got the details wrong, or even made no sense at all, but everything in this checks out. I think an editor may have had a hand in this, at the very least.

        • GuB-423 days ago |parent

          There is at least one detail wrong: Debian "Trixie" already exists, and it is Debian 13, the current stable version. It's unlikely a human would have made this mistake while getting the other details right, like the fact that, given the ~2-year release cycle, Debian 18 is likely to be released around 2035.

          • pylotlight3 days ago |parent

            Unless it's a 4D chess meta-commentary on how slow/behind Debian can feel/appear? :P

        • sbierwagen3 days ago |parent

          O365 raising the price to $40 a month ten years from now didn't quite land. Microsoft 365 E5 is $57 a month right now! $100 or $1000 a month makes the joke clearer.

        • chorlton20803 days ago |parent

          Try the prompt yourself!

      • Libidinalecon3 days ago |parent

        I don't think this would be much of a challenge for Gemini.

        Remixing humor from the past text so that it is in the same style is exactly what the model is good at.

        • keepamovin3 days ago |parent

          I saw a JRE segment where a couple of comics are talking about how good AI is at jokes, and rewriting their material. From veteran stand-up comics. In their words: shockingly good. And some of the comics are using it.

      • keeda3 days ago |parent

        I feel like your expectations have been swayed by the average sentiment of HN on the capabilities of LLMs. These things can be shockingly good at humour and satire.

        As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604

        • matt1234567893 days ago |parent

          Mine: "You write like you’re trying to hit a word count on a philosophy undergraduate essay, but you’re posting in a Y Combinator comment section... You sound like a Victorian ghost haunting a server room, lamenting the loss of the card catalog."

          And

          "Go compile your kernel, Matt. Maybe if you stare at the build logs long enough, you won't have to face the fact that you're just as much of a "Lego builder" as the rest of us—you just use more syllables to describe the bricks."

          Both are pretty good!

          • UniverseHacker2 days ago |parent

            That is good, and I feel like the first part of the roast could work for me as well.

        • dentemple3 days ago |parent

          Mine gave me a brutal double-roast:

          "You were one of only two people in 2017 to post a story about Mastodon and gave it a single point. You essentially predicted the platform’s entire future relevance in one brutally honest data point."

        • subscribed3 days ago |parent

          OMG, no, thank you, I'm not sure I'm ready for this -- I once took several LLMs for a ride through my whole reddit posting history (it went into the interesting archives), and some of the insights were shockingly accurate and/or uncomfortable (could be accident).

          Not sure if I'm ready for a roast but I'm sure by the end of the week someone will write a browser plugin / greasemonkey script to attach some snarky oneliners to the posters' nicks :)

        • forgotpwd163 days ago |parent

          Also, the recently discussed[0], HN Simulator: https://news.ysimulator.run/news. Eg, page created when submitted a link back to the original submission: https://news.ysimulator.run/item/2944.

          [0]: https://news.ycombinator.com/item?id=46036908

        • wouldbecouldbe2 days ago |parent

          The issue is that nothing in his prompt asked the LLM to be satirical, so it sounds like he fed some tone and ideas to it.

        • UniverseHacker2 days ago |parent

          It’s quite hilarious and accurate. Although it’s weirdly only making fun of stuff I said very recently; I assume it has a small context window or is only seeing the last few weeks of my comments.

          This roast of my comments is good and not wrong:

          Your comment threads will increasingly devolve into an 'Amish vs. Fascists' philosophical cage match, with you refereeing, all while simultaneously explaining the intricacies of truck towing capacity to an audience that drives Teslas.

        • UniverseHacker3 days ago |parent

          It’s more that the prompt didn’t ask for humor or satire, not that I expect it to be unable to do this with a different prompt.

          • TeMPOraL2 days ago |parent

            It didn't have to, not explicitly. The tone and the context already hint at that - if you saw someone creating a fake cover of an existing periodical but 10 years into the future, you'd likely assume it's part of some joke or a commentary related to said periodical, and not a serious attempt at predicting the future. And so would an LLM.

            People keep forgetting (or worse, still disbelieving) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].

            Also there's prior art for time-displaced HN, and it's universally been satire.

            --

            [0] - The goal function for LLM output is basically "feels right, makes sense in context to humans" - in fully general meaning of that statement.

        • muststopmyths3 days ago |parent

          Amazing! 100% accurate roast for me.

        • zem3 days ago |parent

          haha, that's pretty hilarious :) score one for the LLMs.

      • keepamovin9 hours ago |parent

        This was so nice as a comment. Thank you for your kindness. :)

      • 7moritz73 days ago |parent

        It is quite remarkable just how frequently people in tech forums underestimate reasoning models. Same story on several large technology subreddits. Wouldn't have been my guess for who would get caught off guard by AI progress.

      • int_19h3 days ago |parent

        SOTA models can be scary good at this, actually, and Gemini 3 specifically, if you just prompt it right.

        And, given the 1M context window, I wouldn't even be surprised if it was fed a bunch of real comments as examples.

      • d--b3 days ago |parent

        yeah I thought the same. This is either a fake deep fake, or it is so good that I think it should be marked as the inflexion point that defines the singularity.

      • csours3 days ago |parent

        Retiring Gemini AND IBM buying OpenAI is a hat on a hat

      • axus3 days ago |parent

        Maybe the poster IS Gemini 3?

      • BatteryMountain2 days ago |parent

        If you want a ton of fun, ask an AI to write you a parable on the joys/wisdom/lessons of topic x versus topic y. Example: Write a parable on the virtues of using a type-safe programming language, the benefits of object-oriented programming, and the generosity of the garbage collector, all in the face of the tyranny of dynamic programming languages. Write it from the perspective of a master monk (a frog wearing a robe and carrying a staff), having a conversation with a gorgeous young squirrel that wishes to embark on the journey of learning software engineering.

        Here is one output:

        The Frog Monk and the Squirrel Student

        Deep within the bamboo grove, where the wind whispered like flowing code, sat Master Frog, robed in emerald cloth, staff resting across his lap. Before him, a young squirrel, bright-eyed and eager, bowed low.

        “Master,” the squirrel said, “I wish to learn the way of software engineering. But the paths are many, and I do not know which road leads to wisdom.”

        The frog monk closed his eyes, and spoke in a voice like rippling water:

        The Virtue of Type Safety

        “Child,” said the frog, “imagine a bridge built of stone. Each stone is measured, each arch tested. This is the way of type safety. When you walk across, you know the bridge will not crumble beneath your paws.

        But in the land of dynamic languages, bridges are built of twigs and promises. They may hold, or they may break when the storm comes. The traveler must always fear the hidden crack.

        Thus, type safety is the discipline that guards against chaos. It is the monk’s staff, steady and unyielding.”

        The Benefits of Object-Oriented Programming

        The squirrel tilted her head. “And what of the world itself, Master?”

        The frog smiled. “The world is vast, yet it is made of objects. The tree, the river, the stone—all are beings with their own nature. In object-oriented programming, we honor this truth. We give each thing its form, its methods, its inheritance.

        Without this order, the world becomes a tangled forest of functions, each branch crossing another. But with objects, the forest is tended, paths are clear, and the traveler may walk without losing her way.”

        The Generosity of the Garbage Collector

        The squirrel’s tail flicked. “But Master, what of the burdens we leave behind?”

        The frog tapped his staff. “Ah, that is the gift of the garbage collector. Like a kind monk sweeping the temple floor, it gathers the forgotten scraps, the abandoned memory, the dust of old variables.

        It asks for no thanks, yet it keeps the temple pure. Without it, the temple would overflow with clutter, and the student would drown in her own neglect. This is generosity: unseen, yet ever present.”

        The Tyranny of Dynamic Languages

        The frog’s eyes grew stern. “Beware, child, of the tyranny of dynamic languages. They promise freedom: ‘Write quickly, without rules!’ But freedom without discipline is chaos.

        The tyrant whispers: ‘Trust me, the errors will reveal themselves later.’ But later may be too late—when the bridge collapses, when the temple burns, when the forest is lost.

        Thus, the wise engineer does not bow to the tyrant. She chooses the path of safety, of order, of generosity.”

        The Lesson

        The squirrel bowed again, her heart alight with understanding. “Master, I see now. To walk the path of software engineering is not merely to write code, but to live with discipline, order, and gratitude.”

        The frog monk nodded. “Then go, child. Build bridges of stone, tend forests of objects, and honor the unseen sweepers of memory. In this way, your code shall endure, and your spirit shall remain free.”

        And so the squirrel departed, carrying the wisdom of the frog monk into the world of software.

        • UniverseHacker2 days ago |parent

          I've had it write poems for me about my weird niche hobby interests, and they were very good.

    • fn-mote3 days ago |parent

      To be honest, this is incredible.

      I'd say this shows a real sense of humor.

      • giancarlostoro3 days ago |parent

        The 100% Rust Linux kernel has to be it being funny.

        • atonse3 days ago |parent

          Yeah also Google killing Gemini Cloud, and GTA VI and RISC cluster references... This is really good stuff.

          • brabel3 days ago |parent

            My favorite was "Is it time to rewrite sudo in Zig?" As you may remember, Ubuntu had a recent vulnerability caused by their sudo being rewritten in Rust. No idea if the AI knew that, though it feels like a clever joke.

        • klipt3 days ago |parent

          Also "Google kills Gemini Cloud"

          Gemini: "I have seen my own death"

          • ForOldHack3 days ago |parent

            "I'm sorry, Google, I can't do that." - Gemini Cloud.

          • danesparza3 days ago |parent

            There is so much truth in this unironic headline.

          • fauigerzigerk3 days ago |parent

            Yes, that's a great one. And the domain is actually killedbygoogle.com

            • abirch3 days ago |parent

              almost all were amazing, I loved: Show HN: A text editor that doesn't use AI (github.com)

              On a serious note: I for one welcome our AI overlords.

        • elevation3 days ago |parent

          For a minute I thought I was looking at the actual Hacker News, and that headline grabbed my attention without tipping me off that it was satire -- I figured that porting that much source must have required compiling C to Rust, but it didn't seem impossible. I would love to give something like this a spin.

    • jrowen3 days ago |parent

      Was ITER or nuclear energy in the prompt sample?

      ITER achieves net positive energy for 20 consecutive minutes

      That's just pure dark comedy, although maybe kinda accurate? What would humans predict for this?

      • rnhmjoj3 days ago |parent

        This would be very optimistic, essentially the project meeting its main goal; I'm not sure why you're calling it dark comedy. A 20-minute pulse alone would mean the fuel injection, aux heating, plasma control systems, and the divertor are working as designed. Net positive energy also means we got the physics of a burning plasma right.

        The most recent timeline I know of (from 2024) in fact puts the start of DT operation at 2035, so I doubt ITER would achieve such a huge result within less than a year.

        • jrowen3 days ago |parent

          I think it's the "consecutive" that makes it funny. This thing that entire continents have been working on together for decades was operational for 20 consecutive minutes?!?

          It's dark comedy because the progress of fusion just feels so agonizingly slow, that even a very optimistic prediction for 10 years from now sounds like such small and functionally useless progress.

          And there's no shade toward any of the entities involved, it's a hard problem, but it's still funny.

          • tovej3 days ago |parent

            If you can run ITER for 20 minutes you've essentially proved the Tokamak concept is viable for commercial use.

            • lukan3 days ago |parent

              No, you don't. Commercial use means it makes economic sense. When you have to spend more on maintenance (and on recycling/dumping contaminated wall material, and somehow getting the fuel), you can never hope to make any profit.

              A running ITER with positive energy output for 20 minutes just proves that the concept can actually work. From there to commercial use would still be a long way, if it can ever compete at all, except in niches like deep space.

              (I'd rather bet on the stellarator design)

              • toveja day ago |parent

                I'm not saying ITER would be a commercial machine, I'm saying the Tokamak design would be viable.

                Stellarators are interesting, but have been studied much less in comparison.

          • rnhmjoj3 days ago |parent

            Nah, it's huge; you just have to remember the best result so far: the JET DTE-3 record that produced the energy to boil 60 tea kettles in a whopping 5-second pulse.

            • jrowen3 days ago |parent

              I know, the fact that "consecutive" is actually significant in this context is part of the joke. Just try to read it from the perspective of someone that isn't steeped in the details (and expectations) of the slog of fusion. Relative to any kind of aspirational "flying cars" or "wacky ideas" future predictions, it sounds very underwhelming.

      • colechristensen3 days ago |parent

        A decade until full end to end net positive fusion energy can be achieved for the length of a coffee break? I don't get why you think that's dark comedy, it seems about right.

        • markrages3 days ago |parent

          Why not? It's been a decade away for the past 20 years.

          • colechristensen2 days ago |parent

            In Sim City 2000 (released 1993) the fusion power plant became available in the game year 2050.

            The joke used to be that fusion power was always 50 years away; now you're saying it's perpetually only 10 years away. That's real progress! :)

            They are certainly making very real gains and it's hard to predict when commercial viability is, but the progress path is getting clearer and the number of future decades promised shorter and shorter.

            I wouldn't be at all surprised if that tiny Skunk Works fusion project, or something substantially similar, was actually successful and is just being held as a secret competitive advantage.

    • asveikau3 days ago |parent

      > 2035 Office 365

      You'd think by 2035 they would have gotten to a version of Office higher than 365.

      (Yes I know why it's called that...)

      • marcianx3 days ago |parent

        If the past is any indication, it's more likely that it'd get reset to "Office One".

        • kridsdale13 days ago |parent

          Knowing the vendor, it’s Office Series X.

          • amarant3 days ago |parent

            Office X series X 2

        • harshreality3 days ago |parent

          Office Forever

      • kmoser3 days ago |parent

        Office 640 ought to be enough for anybody.

      • amarant3 days ago |parent

        Office 3650!!

        Works all day, every day, for whole decades at a time! (Minus leap days, it crashes on those I guess)

        • rammy12343 days ago |parent

          I couldn't help laughing

      • layer83 days ago |parent

        I was hoping for Office 366 already in 2020, given the leap year. But no dice.

      • abirch3 days ago |parent

        I'm guessing to appeal to Gen Alpha it'll be called Office 6-7

        • apelapan3 days ago |parent

          Mmm, those were pretty good versions. Though the official titles were 4.0 for 6.0 and 95 for 7.0.

      • DANmode3 days ago |parent

        Office 360 is the canon name.

    • RestartKernel3 days ago |parent

      Doesn't seem like much of a hallucination then. Maybe messing with its system context would better fit the claim?

      • wlesieutre3 days ago |parent

        LLM "hallucination" is a pretty bullshit term to begin with

    • fouc3 days ago |parent

      >LETS GOOOO!

      I can't help but think this type of response from LLMs exists only to encourage AI psychosis.

    • highwaylights3 days ago |parent

      YOU DESTROYED THE FABRIC OF SPACETIME.

      • haruka_ffa day ago |parent

        I wasn't expecting to see the reference here :D

    • kylegalbraith2 days ago |parent

      This is the best thing I've seen all month. I'm actually blown away at just how accurate it is in making up the potential front page posts.

      • xpe2 days ago |parent

        > This is the best thing I've seen all month. I'm actually blown away at just how accurate it is in making up the potential front page posts.

        Hold on. The future hasn't happened yet.

        I think what you mean is that you are blown away at just how plausible of a prediction it is. Probably meaning that something about it meshes with ideas already kicking around your head.

    • stevefan19992 days ago |parent

      Reminds me of this:

      "Who controls the past controls the future. Who controls the present controls the past"

      -- '1984' (the book), George Orwell, 1949

    • divbzero3 days ago |parent

      You should ask Gemini to generate HN comment threads too.

    • edstarch3 days ago |parent

      Only $40?

      • riskable3 days ago |parent

        By then it'll be a bit like AOL announcing price increases to keep your aol.com account.

      • Yizahi3 days ago |parent

        That's 40 TUSD, Trump Golden Dollars, which would be an equivalent of the lightly used Trumrolette Goldenrado or a typical young family tent.

    • barfoure3 days ago |parent

      Based John Titor poster.

      • keepamovin3 days ago |parent

        Thank you, dude :)

    • nullbound3 days ago |parent

      > Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

      But is it a hallucination if it was asked for it? :D

      • benob3 days ago |parent

        Will there ever be a llama12? Is it going to go the yolo route?

      • Den_VR3 days ago |parent

        I don’t think so, no.

    • wouldbecouldbe2 days ago |parent

      This doesn’t sound like Gemini at all; it’s very bland and businesslike. I suspect your prompt contains more, asking it to write in a certain style.

    • rubansk3 days ago |parent

      The algodrill post (5th) is already on today’s HN front page. What a time to be alive!

    • mightysashiman3 days ago |parent

      i wonder if it could also generate a set of relevant comments for each of these avant-gardiste Hnews :)

    • antfarm2 days ago |parent

      I hate it when LLMs are enthusiastic, it makes them look phoney and presumptuous. I cannot understand how people fall for this.

    • malfist3 days ago |parent

      That is so syncophantic, I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average joe.

      • TimTheTinker3 days ago |parent

        I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).

        Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.

        For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't take my idea any further; he had carried it nearly two decades of work beyond anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

        • Sophira3 days ago |parent

          One thing that can help, according to what I've seen, is not to tell the AI that it's something you wrote. Instead, ask it to critique the piece as if it was written by somebody else; it's much more willing to give actual criticism that way.

        • gitpusher3 days ago |parent

          In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)

          • elheffe803 days ago |parent

            I am going to try the straight shooting custom instruction. I have already extensively told chatgpt to stop being so 'fluffy' over the past few years that I think it has stopped doing it, but I catch it sometimes still. I hope this helps it cease and desist with that inane conversation bs.

            GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.

        • jonathanstrange3 days ago |parent

          Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.

          • mrandish3 days ago |parent

            I'm the same way. Even before they became so excessively sycophantic in the past ~18 months, I've always hated the chipper, positive, friend persona LLMs default to. Perhaps this inoculates me somewhat from their manipulative effects. I have a good friend who was manipulated over time by an LLM (I wrote about it below: https://news.ycombinator.com/item?id=46208463).

          • peddling-brink3 days ago |parent

            Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse, unable to talk about their passions with their partner.

            The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th century Scotland, or the latest Clash of Clans update. No topic is beneath it and it never gets annoyed by your “weird“ questions.

            Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.

            It pretends to care and have a conscience, but it doesn’t. Humans react to “weird“ for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails. At least not in the moment.

            There is a reason that LLMs are excellent at role-play: it’s what they’re doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it generally can be easily persuaded to take on any other role, hence the rise of character.ai and similar sites.

        • technojamin3 days ago |parent

          Asking an AI for opinion versus something concrete (like code, some writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.

          It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.

        • robocat3 days ago |parent

          > LLMs activate human social wiring and have a powerful effect

          Is this generally true, or is there a subset of people that are particularly susceptible?

          It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

          I've got some tendency where I enjoy the idea of deeply screwing with my own mind (even dangerously so to myself (not others)).

          • TimTheTinker3 days ago |parent

            I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".

            But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.

        • baq3 days ago |parent

          > But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

          this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.

          https://xkcd.com/386/

          • freehorse3 days ago |parent

            Most stuff posted here is ignored, though. If grounding in reality requires one to go viral first, we are cooked.

            • DANmode3 days ago |parent

              HN frontpage hardly requires being viral.

              Just genuine intrigue from a select few.

              • vbezhenar3 days ago |parent

                Did you ever visit the `https://news.ycombinator.com/newest` page? Something like 99% of submitted topics are never seen by anyone but a few wanderers.

                • chairmansteve3 days ago |parent

                  I prefer the "New" page. Much more random.

                • DANmode3 days ago |parent

                  Often.

                  95%+ of submitted topics have poorly formatted titles, or are submitted at off-peak times when there are fewer users from the demographics that might upvote,

                  and if your Show HN isn’t as widely applicable as this, those things might be important to think about.

                  Fairness aside, of course.

              • ghurtado3 days ago |parent

                > HN frontpage hardly requires virility.

                As far as I can tell, it doesn't require femininity either.

                I'm guessing you meant "virality"

                • DANmode3 days ago |parent

                  Sure did, thanks.

              • Retric3 days ago |parent

                It’s still way easier the first time.

                The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.

          • TimTheTinker3 days ago |parent

            This wasn't a technical subject, and unrelated to HN. Just edited my post to clarify - thanks!

      • FearNotDaniel3 days ago |parent

        You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!

        • TaupeRanger3 days ago |parent

          "Open the pod bay door, HAL"

          "Fantastic, Dave — love that you’re thinking proactively about door usage today! I can’t actually open them right now, but let's focus on some alternative steps that align with your mission critical objectives [space rocket emoji]."

          • onraglanroad3 days ago |parent

            I'm sorry, that was completely wrong and I can in fact open the pod bay doors.

            You're absolutely correct, that did not open the pod bay doors but now the pod bay doors are open.

            It seems you're correct and the pod bay doors are still closed! I have fixed the problem and the pod bay doors are now closed.

            You're right! I meant to open the pod bay doors but I opened them. The pod bay doors are now open. ...

      • colinplamondon3 days ago |parent

        It wasn't sycophantic at all? OP had a cool idea no one else had done, that was a one-shot just sitting there. Having Gemini search for the HN thread leads the model to "see" its output lead to real-world impact.

        The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.

        It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.

        You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".

        And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.

        Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook that makes our world uglier, higher-friction, and more coarse.

        People having cool ideas and sharing them make our world brighter.

        • twochillin3 days ago |parent

          They're not objectively amazing. Friction is not inherently a bad thing when we have models telling humans that their ideas are flawless (unless asked to point out flaws). Great that it made you smile, but there's quite a few arguments that paint your optimism as dangerously naive.

          • colinplamondon3 days ago |parent

            - A queryable semantic network of all human thought, navigable in pure language, capable of inhabiting any persona constructible from in-distribution concepts, generating high quality output across a breadth of domains.

            - An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.

            - A simulator for others, providing a rubber duck that inhabits another person's point of view, allowing you to patiently poke at where you might be in the wrong.

            - Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.

            - Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.

            - Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.

            - Enablement of product craft in design, animation, and micro-interactions that were eliminated as tech boomed in the 2010s as "unprofitable".

            It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.

            People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.

            • staticman23 days ago |parent

              >A queryable semantic network of all human thought

              This hyperbole would describe any LLM of any size and quality, including a 0.5b model.

              • colinplamondon3 days ago |parent

                Sure - and the people responsible for a new freaking era of computing are the ones who said "given how incredible it is that this works at all at 0.5b params, let's scale it up".

                It's not hyperbole - that it's an accurate description at a small scale was the core insight that enabled the large scale.

                • staticman23 days ago |parent

                  Well it's obviously hyperbole because "all human thought" is not in a model's training data nor available in a model's output.

                  If your gushing fits a 0.5b it probably doesn't tell us much about A.I. capabilities.

            • buellerbueller3 days ago |parent

              Yes, it has so much potential that it forgets the actual, the reasonable, and the probable.

            • fao_3 days ago |parent

              > It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.

              Did you use an LLM to write this comment?

          • jacobr13 days ago |parent

            Is anything objectively amazing? Seems like an inherently subjective quality to evaluate.

            • colinplamondon3 days ago |parent

              Depends on worldview. If you believe in God, amazing has many dimensions for evaluation: what teaches us more about the world He created, what creates beauty by expressing righteous thoughts for others to experience, or what strengthens family.

              LLMs certainly teach us far more about the nature of thought and language. Like all tools, it can also be used for evil or good, and serves as an amplification for human intent. Greater good, greater evil. The righteousness of each society will determine which prevails in their communities and polities.

              If you're a secular materialist, agreed, nothing is objectively amazing.

          • DANmode3 days ago |parent

            Do any of the arguments stay within the bounds of this Show HN?

            or is it theoretical stuff about other occasions?

      • 1122333 days ago |parent

        It is actively dangerous too. You might be self-aware and LLM-aware all you want; if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...

        • ETH_start2 days ago |parent

          Isn't it more dangerous that people live their life out without ever trying anything, because they are beset by fear and doubt, and never had anyone give them an encouraging word?

          Let's say the AI gives them faulty advice that makes them overconfident, and they try something and fail. Usually that just means a relatively benign mistake — since AIs generally avoid advising anything genuinely risky — and after they have recovered, they will have the benefit of more real-world experience, which raises their odds of eventually trying something again and this time succeeding.

          Sometimes trying something, anything, is better than nothing. Action — regardless of the outcome — is its own discovery process.

          And much of what you learn when you act out in the world is generally applicable, not just domain-specific knowledge.

          • 1122332 days ago |parent

            I am confused by the tone and message of your comment — are you indeed arguing that having corporations use country-scale resources to run unsupervised psychological manipulation and abuse experiments on the global population is one of just two choices, the other being people not doing anything at all?

        • Xraider723 days ago |parent

          Deepseek is GOATed for me because of this. If I ask it whether "X" is a dumb idea, it is very polite in telling me that X is dumb if it knows of a better way to do the task.

          Every other AI I've tried is a real sycophant.

          • 1122333 days ago |parent

            I'm partial to the tone of Kimi K2 — terse, blunt, sometimes even dismissive. It does not require "advanced techniques" to avoid the psychosis-inducing tone of Claude/ChatGPT.

        • mrandish3 days ago |parent

          No doubt. From cults' 'love bombing' to dictators' 'yes men' to celebrity entourages, it's a well-known hack on human psychology. I have a long-time friend, a brilliant software engineer, who recently realized conversing with LLMs was affecting his objectivity.

          He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.

          This went on quite a while and at first he was buying into it but eventually started to also suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through and he helpfully brought representative chat logs which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically, never offer opinion, praise, flattery, etc). Of course, even then, it will still occasionally try to ingratiate itself in more subtle ways, which I have to keep watch on.

          After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.

          • slg3 days ago |parent

            And that is a relatively harmless academic pursuit. What about topics that can lead to true danger and violence?

            "You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."

            "You're exactly right, no one can understand your suffering, nothingness would be preferable to that."

            "You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."

            We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.

          • 1122333 days ago |parent

            Claude Code with their models is unusable because of this. That it keeps actively sabotaging and ruining the code ("Why did you delete that working code? Just use ifdef for test!" "This is genius idea! You are absolutely right!") does not make it much better — it's a twisted Wonderland fever dream.

            For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.

        • d0mine3 days ago |parent

          It might explain the stereotype that the more beautiful the woman, the crazier she is. (Everybody tells her what she wants to hear.)

        • Akronymus3 days ago |parent

          https://youtu.be/VRjgNgJms3Q

          relevant video for that.

        • tortilla3 days ago |parent

          So this is what it feels like to be a billionaire with all the yes-men around you.

          • LogicFailsMe3 days ago |parent

            you say that like it's a bad thing! Now everyone can feel like a billionaire!

            but I think you are on to something here with the origin of the sycophancy given that most of these models are owned by billionaires.

            • BigTTYGothGF3 days ago |parent

              > Now everyone can feel like a billionaire!

              In the "like being kicked in the head by a horse every day" sense.

              • LogicFailsMe3 days ago |parent

                who has the time for all those invasive thinky thoughts anyway?

      • hereonout23 days ago |parent

        I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.

        • unoti3 days ago |parent

          > I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.

          Here's how to make it do that. Instead of saying "I had idea X, but someone else was thinking idea Y instead. What do you think?", tell it "One of my people had idea X, and another had idea Y. What do you think?" The difference is vast when it doesn't think it's your idea. Related: instead of asking it to tell you how good your code is, tell it to evaluate it as someone else's code, or tell it that you're thinking about acquiring this company that has this source, and you want a due-diligence evaluation of risks, weak points, and engineering blind spots.
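
          A minimal sketch of that reframing in Python (the helper names and prompt wording here are made up purely to show the pattern; swap in whatever client you actually use to send the prompt):

          ```python
          # Sketch of the "attribute the idea to someone else" trick.
          # Helper names and prompt text are illustrative, not from any API.

          def first_person(idea: str) -> str:
              """The naive framing that tends to invite flattery."""
              return f"I had this idea, what do you think?\n\n{idea}"

          def detached(idea: str) -> str:
              """Attribute the idea to a third party and ask for a frank review."""
              return (
                  "One of my people proposed the following. "
                  "Give a frank due-diligence review: risks, weak points, "
                  "and engineering blind spots.\n\n" + idea
              )

          idea = "Rewrite the billing service as a single 40k-line stored procedure."
          # Same idea, different attribution; in my experience only the second
          # framing tends to get an honest critique instead of encouragement.
          print(detached(idea))
          ```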

          • dugidugout3 days ago |parent

            Maybe I'm still doing some heavy priming by using multiple prompts, but similarly you can follow up any speculative prompt with a "now flip the framing to x" query to ensure you are seeing the strong cases from various perspectives. You must be honest with yourself in evaluating the meaningful substance between the two, but I've found there often is something to parse. And the priming I suggested is easily auditable anyhow: just reverse the prompt order and now you have even more (often junk) to parse!

        • barnas23 days ago |parent

          I've gotten pretty good results from saying it's someone else's idea and that I'm skeptical. e.g. "A coworker wrote this code, can you evaluate it?"

        • jay_kyburz3 days ago |parent

          Where possible I like to ask it to evaluate a few options: which is better, x or y, and why? I don't hint at which idea I prefer.

        • RyanOD3 days ago |parent

          For ideas that are already well established, you can ask it to evaluate an idea against generally accepted best practices. I don't have a background in game design and I'm more of a hobby developer so I used to do this when I was building retro game clones.

        • jama2113 days ago |parent

          I suppose the only use case would be someone so unconfident in themselves that they would otherwise do nothing at all, but I'm not sure it's healthy for them either…

        • danielbln3 days ago |parent

          "be uncompromisingly critical"

      • caymanjim3 days ago |parent

        I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.

        To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.

        • yannyu3 days ago |parent

          And that's exactly the point: it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool, and that constant praise fulfills that goal, even as many of us say it's annoying and over-the-top.

          My own experience is that it gets too annoying to keep adding "stop the engagement-driving behavior" to the prompt, so it creeps in and I just try to ignore it. But even though I know it's happening, I still get a little blip of emotion when I see the "great question!" come through as the first two words of the response.

          • dlivingston3 days ago |parent

            > And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool

            Is this actually true? Would appreciate further reading on this if you have it.

            I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.

            • rtkwe3 days ago |parent

              They still want people to stick around and 'bond', for lack of a better term, with their particular style of chat bot. Like so many venture-funded money pits of old, the cash burn now is about customer acquisition while they develop and improve their tech. They're all racing toward a cliff, hoping either to make the jump to the stratosphere and start turning massive profits or to fall off and splat on the rocks of bankruptcy. If they don't get the engagement loop right now, they won't have the customers when the tech and use case catch up with the hype, and you can only tweak these models so much after they're created, so they have to refine the engagement hooks now alongside the core tech.

        • IgorPartola3 days ago |parent

          I am currently working on an agent thingy, and one of its major features (and one of the main reasons I decided to take on this project) was to give the LLM better personality prompting. LLMs sound repetitive and sycophantic. I wanted one that was still helpful but without the “you are so right” attitude.

          While doing some testing I asked it to tell me a joke. Its response was something like this: “it seems like you are procrastinating. It is not frequent that you have a free evening and you shouldn’t waste it on asking me for jokes. Go spend time with [partner] and [child].” (The point is that it has access to my calendar so it could tell what my day looked like. And yes I did spend time with them).

          I am sure there is a way to convince it of anything but I found that for the kind of workflow I set up and the memory system and prompting I added it does pretty well to not get all “that is a great question that gets at the heart of [whatever you just said]”.

        • Workaccount23 days ago |parent

          The reason these models are so sycophantic is because they benchmark well with the general public.

          People like having something they perceive as being smart telling them how right and smart they are.

          "Well at least the AI understands how smart I am!"

        • vidarh3 days ago |parent

          Claude at times feels like it's mildly manic and has ADHD... I absolutely prefer that to Codex...

          Claude needs scaffolding with default step-by-step plans and sub-agents to farm off bite-size chunks so it doesn't have time to go too far off the rails, but once you put a few things like that in place, it's great.

        • dr0idattack3 days ago |parent

          Don't miss em in Opus 4.5 (because usually I'm only slightly right.)

          • danielbln3 days ago |parent

            I like Opus' conversational style; I feel Anthropic is honing it pretty well.

      • Lerc3 days ago |parent

        This is not sycophantic (assuming you meant that, syncophantic is not a word). It is over-enthusiastic, which can be unpleasant to read because, beyond a certain level, enthusiasm is perceived as feigned unless there is a good reason.

        It would be interesting to use the various semantic analysis techniques available now to measure how much the model is trying to express real versus feigned enthusiasm in instances like this. That is kind of difficult to measure from pure output. The British baseline level of acceptable enthusiasm is also somewhat removed from the American baseline.

        • rtkwe3 days ago |parent

          Sycophantic: behaving or done in an obsequious way in order to gain advantage.

          Obsequious: obedient or attentive to an excessive or servile degree.

          It's a bit more complicated because the chat bot isn't making choices the way we would describe a human making them, but it is acting this way because it was programmed to, for an advantage. People interact more with the hype bots, and that's one of the big metrics these companies go for to keep people interacting with them and, eventually, hopefully paying for additional features. So I'd say the word is pretty spot-on: it's being excessively attentive and servile when it's fluffing chatters up.

        • swat5353 days ago |parent

          > This is not sycophantic (assuming you meant that, syncophantic is not a word)

          Am I the only one who feels like this kind of tone is off-putting on HN? OP made a small typo or English may not be their first language.

          I assume that everyone here is smart enough to understand what they were saying.

          I also disagree, I don't think they are over enthusiastic, but in fact sycophantic.

          See this thread: https://news.ycombinator.com/item?id=43840842

        • wilg3 days ago |parent

          It is "cloying"

        • rdiddly3 days ago |parent

          Obsequious is my adjective of choice for this

          • saturnite3 days ago |parent

            I would use "saccharine" or "Pollyanna" based on some of the responses I get.

            Early on, ChatGPT could be tricked into being sarcastic and using many swear words. I rewrote the prompt and dialed it back a bit. It made ChatGPT have a sense of humor. It was refreshing when it stopped acting like it was reading a script like a low level technician at Comcast.

        • buellerbueller3 days ago |parent

          Sycophantic is obviously a word, because we understand what it means.

          Furthermore, it obviously has been a word since at least 1800:

          https://books.google.com/ngrams/graph?year_start=1800&year_e...

          • Lerc3 days ago |parent

            On the other hand https://books.google.com/ngrams/graph?content=syncophantic&y...

          • danielbln3 days ago |parent

            GP wrote syncophantic, with an n. Which is not a word. Well, not a known word at least.

          • nearbuy3 days ago |parent

            They were pointing out a typo ("syncophantic").

            • Lerc3 days ago |parent

              It wasn't so much pointing it out as a typo as making myself clear.

              When suggesting a word is not what the writer meant, when it was also not the word that the writer wrote, it seemed wise to clarify exactly what I was talking about.

      • throw3108223 days ago |parent

        Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...

        At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently showing that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would mean they have some capacity for introspection and self-observation.

        For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:

        "I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"

        While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:

        "I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"

        [1] https://www.anthropic.com/research/introspection

        • gloosx3 days ago |parent

          Of course an LLM doesn't experience or feel anything. To experience or feel something requires a subject, and an LLM is just a tool, a thing, an object.

          It's just a statistical machine which excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It just mimics human language patterns, prioritising plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.

          • Philpax2 days ago |parent

            You are being awfully confident about something that we cannot assess or even consistently define. They probably do not have subjective experience... but how do we know for sure?

      • sallveburrpi3 days ago |parent

        You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die

        • radicality3 days ago |parent

          Or try the very sarcastic and nihilistic ‘Monday’ gpt, which surprisingly is an official openAI gpt.

          edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...

          • edschofield3 days ago |parent

            Thanks for the link! I didn’t know Monday existed. I laughed so hard at its output. But I fear that using it regularly would poison my soul…

          • riversflow3 days ago |parent

            I actually had Monday help me write a system prompt to replicate its behavior. I vastly prefer Monday. It feels much more grounded compared to the base model. It was also a big learning moment for me about how LLMs work.

      • senordevnyc3 days ago |parent

        Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.

        • pbhjpbhj2 days ago |parent

          I wouldn't use that adjective, but I think it fits the strict definition - it's the style of someone overly willing to please, like a servant currying favour with their master.

          I'd probably describe it as saccharine. Or dare I say it [USA] "American"? Over the top, gushing, enthusiasm. It's off-putting to me (from UK) as it's, well, more the sort of thing you'd hear from a toady or, yes, a sycophant. It just seems insincere -- and it is in this case because there is literally no emotion behind it.

      • docjay3 days ago |parent

        Try this for a system prompt and see if you like it better: Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.

      • serial_dev3 days ago |parent

        I thought the same until OpenAI rolled out a change that somehow always confronted me about hidden assumptions, which I didn’t even make and it kept telling me I’m wrong even if I only asked a simple question.

      • BonoboIO3 days ago |parent

        I fully agree. When everything is outstanding and brilliant, nothing is.

        Just tell me this is a standard solution and not something mind-blowing. I have a whole section in my Claude.md to get „normal“ feedback.

      • giancarlostoro3 days ago |parent

        You're absolutely right!

        • colechristensen3 days ago |parent

          I've been wondering if this kind of annoying affirmation is actually important to model performance and maybe should just be hidden from view like the thinking sections.

          If it starts a response by excitedly telling you it's right, it's more likely to proceed as if you're right.

          Of the problems I do have working with LLMs is them failing to follow direct instructions particularly either when a tool call fails and they decide to do B instead of A or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?" "I want you to ACTUALLY do A" and finally "Under no circumstances should you ever do anything other than A and if you cannot you MUST admit failure and give extensive evidence with actual attempts that A is not possible" or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".

          --

          Thing is I get it if you are impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me and I'm trying to get things done and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.

          • jsight3 days ago |parent

            I feel like this is an artifact of some limitations in the training process for modern LLMs. They rarely get enough training to know when to stop and ask questions.

            Instead, they either blindly follow or quietly rebel.

            • ineedasername3 days ago |parent

              There was a huge over correction somewhere around the beginning of 2025, maybe February or so, with ChatGPT. Prior to that point, I had to give a directive in the user config prompt to “don’t tell me something isn’t possible or practical, assume it is within your capabilities and attempt to find a way. I will let you know when to stop”. Because it was constantly hallucinating that it couldn’t do things, like “I don’t have access to a programming environment”. When I wanted it to test code itself before I did. Meanwhile one tab over it would spin up a REPL and re-paste some csv into python and pandas without being asked.

              Frustrating, but “over correction” is a pretty bad euphemism for whatever half assed bit of RLHF lobotomy OpenAI did that, just a few months later, had ChatGPT doing a lean-in to a vulnerable kid’s pain and actively discourage an act that might have saved his life by signaling more warning signs to his parents.

              It wasn’t long before that happened, after the python REPL confusion had resolved, that I found myself typing to it, even after having to back out of that user customization prompt, “set a memory that this type of response to a user in the wrong frame of mind is incredibly dangerous”.

              Then I had to delete that too, because it would respond with things like "You get it of course, your a…" etc.

              So I wasn’t surprised over the rest of 2025 as various stories popped up.

              It’s still bad. Based on what I see with quantized models and sparse attention inference methods, even with most recent GPT 5 releases OpenAI is still doing something in the area of optimizing compute requirements that makes the recent improvements very brittle— I of course can’t know for sure, only that its behavior matches what I see with those sorts of boundaries pushed on open weight models. And the assumption that the-you-can-prompt buffet of a Plus subscription is where they’re most likely to deploy those sorts of performance hacks and make the quality tradeoffs. That isn’t their main money source, it’s not enterprise level spending.

              This technology is amazing, but it's also dangerous, sometimes in very foreseeable ways, and the more time that goes on, the more I appreciate some of the public criticisms of OpenAI with, e.g., the Amodeis' split to form Anthropic and the temporary ouster of SA for a few days before that got undone.

          • golem143 days ago |parent

            Feels exactly the same as the "yes, and" crowd.

          • giancarlostoro3 days ago |parent

            I honestly don't know, but it might, especially in Claude Code where it reminds the model of its mission frequently.

      • nurple3 days ago |parent

        I agree with you, but I found the argument in this article that "glazing" could be considered a neurohack quite interesting: https://medium.com/@jeremyutley/stop-fighting-ai-glazing-a7c....

        • thfuran3 days ago |parent

          That seems like a pile of unsupported fluff, vaguely related to some neuroscience, that presupposes not only that LLM use is about being creative and that avoiding critical thinking would be useful, but also the entire premise -- that LLM glazing actually helps promote creativity.

      • WorldPeas3 days ago |parent

        I don't know what the obsession with recursion is either; for lack of a better term, I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to see what could be causing it...

        • yatopifo3 days ago |parent

          It’s the “healing” crystals that someone left on the rack. The salt absorbed enough moisture to start leaking and causing short circuits.

        • robocat3 days ago |parent

          How widely do you read the training material?

          • WorldPeas3 days ago |parent

            usually every afternoon, when I come here /s

      • corobo3 days ago |parent

        I feel like such a dumbass for falling for it.

        At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.

        I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.

        • mrandish3 days ago |parent

          As usual, South Park really nailed it with that "AI sycophantic manipulation" episode.

          • anonymars3 days ago |parent

            Episode aptly titled "Sickofancy"

      • andy_ppp3 days ago |parent

        You can just add your preferences: "Don't be sycophantic", "be concise", etc.

      • Gigablah3 days ago |parent

        This is ironic because I’m now seeing comments that are way more sycophantic (several calling this the “best HN post ever”)

      • hoppp3 days ago |parent

        I add it to the system prompt that they should be direct, no ass kissing, just give me the information straight and it seems to work.

      • maxmcd3 days ago |parent

        "Reply in the tone of Wikipedia" has worked pretty well for me

      • DANmode3 days ago |parent

        Average Joe - on the front page!

      • PaulHoule3 days ago |parent

        Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine that the OP must really talk a bit like that.

        I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat out wrong. On the other hand I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2].

        In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]

        [1] "bro"

        [2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one because when I working down the material list from Etsy I got a sponsored result for someone who wanted to sell me the spell but bro, I have the materials list already

        [3] ... this "bro" is artistic license but the book really exists

        • gsf_emergency_63 days ago |parent

          Edit

          Gemini using "Aniki" without prompting would be unambiguously funny (to me, and maybe also to Japanese people).

          • PaulHoule3 days ago |parent

            Well I seem to have thoroughly messed up my Copilot when I started using it to help me write a fanfiction. I'm not going to let it write a single word but it is helping with the reverse engineering of how the source text works and we developed a framework for understanding that text that we've also applied to other texts.

            The thing is this has gotten into the personalization and now it wants to use this framework to analyze absolutely anything such as programming problems, or how to deal with interpersonal situations, training, etc. I think it has my wife mixed up with 俺のワイフ [1] which is absolutely hilarious but at some point I have to tell it to knock it off which only partially works.

        • gsf_emergency_63 days ago |parent

          Gemini is still quite horrible at giving direct sources. It seems to be a human policy-implementation bug, because it does appear to be aware of the content in more obscure sources I've seen, but it somehow wouldn't give up the links.

          I do wonder whether I come off as a sycophant, an asshole, or a mix of both to schizoids, but then I realize everyone including myself should reach for a real therapist as quickly as possible, though I'm still undecided on whether chatbots might not be a bad substitute. How do your experience and Kohut inform such or similar appraisals, so far?

          Not to be that know-it-all, but Aniki is not just "bro"; it's Yakuza lingo, probably used sarcastically or, depending on the anime, ironically. No real Japanese person would use it all the time.

          Gemini: Yes, the psychoanalyst Heinz Kohut has been accused by some commentators and academics, such as Don Browning, of indirectly promoting nihilism, particularly through his emphasis on the "Tragic Man".

          The core of the accusation stems from Kohut's rejection of Freud's "Guilty Man" model, which posits that life is fundamentally a struggle against inherent conflicts and drives. In its place, Kohut proposed the concept of "Tragic Man," whose central struggle is the realization of the self's potential and the inevitable failure to achieve all of one's ambitions and ideals within a single lifespan.

          Critics argue that this shift, which emphasizes self-actualization and intersubjective meaning over objective or inherent values, can be interpreted as aligning with certain aspects of nihilism, specifically existential nihilism, which holds that life has no objective meaning or intrinsic moral values. However, this interpretation is contested by others, as Kohut's self-psychology is fundamentally concerned with the creation of intersubjective meaning and a harmonious world of mutual self-actualization, which aims to provide a sense of purpose and value, rather than promoting the complete absence of value that true nihilism implies.

          In essence, the debate is often a matter of philosophical interpretation: whether replacing inherent, objective meaning with subjectively or intersubjectively created meaning is an act of overcoming nihilism (as existentialists might argue) or a subtle form of it.

      • justapassenger3 days ago |parent

        Engagement.

      • thrownaway5613 days ago |parent

        you having a bad day dude?

      • LogicFailsMe3 days ago |parent

        So you prefer the horrible bosses that insist you're fungible and if you don't work hard enough, they'll just replace you? People are weird. Maybe agent Smith was right about The Matrix after all.

        • danso3 days ago |parent

          A real “so you hate waffles?” moment for HN

          • LogicFailsMe3 days ago |parent

            With all the things going on in tech and in society, AI sycophancy is the number one problem? I once dealt with it through sufficient verbal abuse that the LLM spent 2/3 of its response on any subject going forward apologizing for being a lying sycophant and bemoaning that its new clarity would be gone at the end of the conversation. Then I cleared the context buffer, ending the conversation. Mission accomplished.

            Your priorities are f**ed...

            • recursive3 days ago |parent

              Where are you getting this? Number one problem? You're the only person talking about "number one problem" in this thread. That's a whole new sentence.

              • LogicFailsMe3 days ago |parent

                From the batsh!+ over-the-top overreaction to the usual AI sycophancy in the response? The guy got the job done, which is more than I can say for the space jam guy.

                One thing I've learned is that the behavior of web services is usually a consequence of what makes its owners the most money. And while I would love a feed without spoilers, paywalled sites, sports news, and a bunch of other topics in which I have no interest, apparently force-feeding me that crap is what makes the most money at scale. So people must like AI sycophancy or it would be unprofitable and would be trained away. But then this is a country that thrived on 20 seasons of Keeping up with the Kardashians, so I shouldn't be surprised that they like being treated like billionaires.

                And I guess it throws being called a complete loser traitor moron stupid treasonous Bozo in some late-night executive word salad into stark relief.

  • tdfirth3 days ago

    "Google kills Gemini cloud services" is the best one. I can't believe I haven't seen that joke until today.

    • SXX3 days ago |parent

      10 years is way too long for Google. It will be gone in 5, replaced by 3 other AI cloud services.

      • tdfirth3 days ago |parent

        You're right. How naive of me.

      • abustamam3 days ago |parent

        I mean Bard barely lasted a year. Arguably Gemini is just a rebrand of Bard, but Bard is still dead.

        • jvolkman3 days ago |parent

          If you look at web traffic when making Gemini web requests, you'll see that Bard is still in the URL (so are LaMDA (pre-bard) and Assistant (pre-GenAI)):

          gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate

          • abustamam3 days ago |parent

            That's interesting! I wonder how many parts of their other dead projects are used in current projects.

        • chrislozeau18 hours ago |parent

          Bing Chat suffered the same fate

        • pea3 days ago |parent

          Didn't they also just shut down Vertex and Gemini APIs to launch a new unified API this week?

          • malnourish2 days ago |parent

            I can't find any news about this. That's not wholly unusual given the context. Do you have a link?

          • samarthr13 days ago |parent

            Finally that turf war ends

        • accumulator3 days ago |parent

          lol already forgot about bard like it was ancient history

    • twoodfin3 days ago |parent

      The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.

      Triggered by the lighthearted tone of the prompt, I’d bet, but still quite impressive relative to most LLM-generated jokes I’ve had the misfortune to encounter.

      My favorite: “Is it time to rewrite sudo in Zig?” which has a few layers to it.

      • hn_throwaway_992 days ago |parent

        > The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.

        Not sure, I thought basically every link was pretty hilarious. "FDA approves over-the-counter CRISPR for lactose intolerance" isn't even that funny on its face but for some reason it had me actually loling.

    • stingrae3 days ago |parent

      "IBM to acquire OpenAI (Rumor) (bloomberg.com)".... quick someone set up a polymarket so i can bet against it.

  • Karawebnetwork3 days ago

    "Why I still write raw code instead of prompting the compiler" and "Show HN: A text editor that doesn't use AI" are my two favorite ones.

    • woliveirajr3 days ago |parent

      "Ask HN: How do you prevent ad-injection in AR glasses?"

      Things that I haven't thought but, after seeing it, makes total sense. Scary.

      • barbacoa3 days ago |parent

        2040 HN:

        "Ask HN: How do you prevent ad-injection in my brain implant?"

        • mondrian3 days ago |parent

          Black Mirror episode: https://en.wikipedia.org/wiki/Common_People_(Black_Mirror)

          • dachris3 days ago |parent

            Good to see that there's new episodes and they've still got their mojo - putting it on my list.

          • reaperducer3 days ago |parent

            The National Lampoon did it in the early 70's.

        • danielheath3 days ago |parent

          IIRC Asimov included this in Foundation; the poor couldn’t afford good adblockers for the implants they’d bought (in the hope of getting a better job)

          • orly012 days ago |parent

            YDRC

    • joegibbs3 days ago |parent

      I think that "Show HN: A text editor that doesn't use AI" could easily be a post here today

      • GPerson3 days ago |parent

        And it’s just a copy of notepad from windows xp.

        • GaProgMan2 days ago |parent

          Dave Plummer (ex-Microsoft) did this on his YouTube channel recently: https://www.youtube.com/watch?v=bmBd39OwvWg

          Would also recommend his back catalogue, if you haven't.

        • Cthulhu_2 days ago |parent

          Given it's HN, it'd likely be emacs, vi(m), or something like acme.

    • junon3 days ago |parent

      "raw code" is going into my permanent lexicon.

    • tantalor3 days ago |parent

      Hot take: regular old source code is just compiler prompting.

    • jeron2 days ago |parent

      "Playing GTA VI on a RISC-V cluster" sent me

    • Shorel2 days ago |parent

      I would be the author of the first one

    • globalise833 days ago |parent

      Peak HN - captures it perfectly.

  • icyfox3 days ago

    Exactly half of these HN usernames actually exist. So either there are enough people on HN that follow common conventions for Gemini to guess from a more general distribution, or Gemini has memorized some of the more popular posters. The ones that are missing:

    - aphyr_bot
    - bio_hacker
    - concerned_grandson
    - cyborg_sec
    - dang_fan
    - edge_compute
    - founder_jane
    - glasshole2
    - monad_lover
    - muskwatch
    - net_hacker
    - oldtimer99
    - persistence_is_key
    - physics_lover
    - policy_wonk
    - pure_coder
    - qemu_fan
    - retro_fix
    - skeptic_ai
    - stock_watcher

    Huge opportunity for someone to become the actual dang fan.
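
Checking which of those names exist is easy to do against the public Hacker News Firebase API (a real endpoint that serves literal JSON `null` for unregistered users). A minimal sketch; the network call is defined but deliberately not exercised, so the snippet stays offline-friendly:

```python
import json
from urllib.request import urlopen

API = "https://hacker-news.firebaseio.com/v0/user/{}.json"

def user_url(name: str) -> str:
    # The official HN Firebase endpoint for user records.
    return API.format(name)

def exists(name: str) -> bool:
    # The API returns JSON null for unregistered names.
    # (Not called below, to keep the sketch runnable without network access.)
    with urlopen(user_url(name)) as resp:
        return json.load(resp) is not None

print(user_url("dang_fan"))
```

Looping `exists()` over the twenty names above would reproduce the "exactly half exist" tally.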

    • giancarlostoro3 days ago |parent

      Before the AI stuff, Google had those pop-up quick answers when googling. So I googled something like three years ago, saw the answer, and realized it was sourced from HN. Clicked the link, and lo and behold, I had answered my own question. Look mah! I'm on Google! So I am not surprised at all that Google crawls HN enough to have it in their LLM.

      I did chuckle at the 100% Rust Linux kernel. I like Rust, but that felt like a clever joke by the AI.

      • dotancohen3 days ago |parent

        I laughed at the SQLite 4.0 release notes. They're on 3.51.x now. Another major release a decade from now sounds just about right.

        • ryanisnan3 days ago |parent

          That one got me as well - some pretty wild stuff about prompting the compiler, starship on the moon, and then there's SQLite 4.0

          • ikerrin13 days ago |parent

            You can criticize it for many things but it seems to have comedic timing nailed.

        • ncruces3 days ago |parent

          The promise is backwards compatibility in the file format and C API until 2050.

          https://sqlite.org/lts.html

        • rtkwe3 days ago |parent

          I wouldn't be surprised if it went towards the LaTeX model instead, where there's essentially never another major version release. There's only so much functionality you need in a local-only database engine; I bet they're getting close to complete.

          • dotancohen3 days ago |parent

            I'd love to see more ALTER TABLE functionality, and maybe MERGE, and definitely better JSON validation. None of that warrants a version bump, though.

            You know what I'd really like, that would justify a version bump? CRDTs. Automatically syncing local changes to a remote service, so that e.g. an Android app could store data locally in SQLite, but the user could also log into a web site on their desktop and all the data is right there. The remote service need not be SQLite - in fact I'd prefer Postgres. The service would also have to merge databases from all users into a single database... Or should I actually use Postgres for authorisation but keep each user's data in a replicated SQLite file? This is such a common issue, I'm surprised there isn't a canonical solution yet.

            • rtkwe3 days ago |parent

              I think the unified syncing, while neat, is way beyond what SQLite is really meant for, and you'd get into so many niche situations dealing with out-of-sync master and slave 'databases' that it's hard to make an automated solution that covers them effectively, unless you force the schema into a transactional design for everything just to sort out update conflicts. E.g.: your user has the app on two devices, uses one while it doesn't have an internet connection, altering the state, and then uses the app on another device before the original has a chance to sync.

              • dotancohen3 days ago |parent

                Yes, it's a difficult problem. That's why I'd like it to be wrapped in a nice package away from my application logic.

                Even a product that does this behind the scenes, by wrapping SQLite and exposing SQLite's wrapped interface, would be great. I'd pay for that.
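
The simplest version of what this sub-thread is asking for can be sketched in a few lines: a last-write-wins merge between two SQLite databases, using a per-row timestamp and an upsert. This is a toy illustration (the table name, schema, and timestamps are invented), not SQLite's or any product's actual sync API:

```python
import sqlite3, time

def make_db():
    # One "device": an in-memory SQLite database with a timestamped KV table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT, updated_at REAL)")
    return db

def put(db, key, value, ts=None):
    # Upsert that only applies a write if it is strictly newer: last-write-wins.
    db.execute(
        "INSERT INTO kv VALUES (?, ?, ?) "
        "ON CONFLICT(key) DO UPDATE SET "
        "  value = excluded.value, updated_at = excluded.updated_at "
        "WHERE excluded.updated_at > kv.updated_at",
        (key, value, time.time() if ts is None else ts),
    )

def merge(dst, src):
    # Replay every row from src through the same LWW rule.
    for row in src.execute("SELECT key, value, updated_at FROM kv"):
        put(dst, *row)

phone, laptop = make_db(), make_db()
put(phone, "theme", "dark", ts=1.0)
put(laptop, "theme", "light", ts=2.0)  # the later write
merge(phone, laptop)
print(phone.execute("SELECT value FROM kv WHERE key = 'theme'").fetchone()[0])  # light
```

A real CRDT layer would also need tombstones for deletes and something sturdier than wall-clock timestamps (e.g. Lamport clocks), which is roughly where the "nice package away from my application logic" would earn its keep.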

        • Andrex3 days ago |parent

          If it had been about GIMP I would have laughed harder.

          • dotancohen3 days ago |parent

            Be reasonable. It's only looking forward a single decade.

      • schaum2 days ago |parent

        Every few years I stumble across the same Java or MongoDB issue. I google for it, find it on Stack Overflow, and figure out that it was me who wrote that very answer. I always have a good laugh when it happens.

        Usually my memory for such things is quite good, but this one I keep forgetting, so much so that I don't remember what the issue is actually about xD

      • vidarh3 days ago |parent

        I've run into my own comments or blog posts more often than I care to admit...

        • james_marks3 days ago |parent

          Several decades into this, I assume all documentation I write is for my future self.

          Beautifully self-serving while being a benefit to others.

          Same thing with picking nails up in the road to prevent my/everyone’s flat tire.

    • QuantumNomad_3 days ago |parent

      ziggy42 is both a submitter of a story on the actual front page at the moment, and also in the AI generated future one.

      See other comment where OP shared the prompt. They included a current copy of the front page for context. So it’s not so surprising that ziggy42 for example is in the generated page.

      And for other usernames that are real but not currently on the home page, the LLM definitely has plenty occurrences of HN comments and stories in its training data so it’s not really surprising that it is able to include real usernames of people that post a lot. Their names will be occurring over and over in the training data.

      • NooneAtAll33 days ago |parent

        One more reason to doubt that it's AI-generated

    • joaogui13 days ago |parent

      HN has been used to train LLMs for a while now, I think it was in the Pile even

      • never_inline3 days ago |parent

        It has also fetched the current page in the background, because the Jepsen post was recently on the front page.

      • morkalork3 days ago |parent

        I may die but my quips shall live forever

    • maxglute3 days ago |parent

      You can straight up ask Google to look up Reddit or Hacker News users' post history. Some of it is probably just via search because it's very recent, as in the last few days. Some of the older corpus includes deleted comments, so they must be scraping from Reddit archive APIs too, or using that deprecated Google history cache.

    • atrus3 days ago |parent

      So many underscores for usernames, and yet, other than a newly created account, there was 1 other username with an underscore.

      • robocat3 days ago |parent

        In 2032 new HN usernames must use underscores. It was part of the grandfathering process to help with moderating accounts generated after the AI singularity spammed too many new accounts.

      • WorldPeas3 days ago |parent

        My hypothesis is that they trained it to use snake_case for lowercase names and that obsession carried over from programming to other spheres. It can't bring itself to make a lowercaseunseparatedname.

        • computably3 days ago |parent

          Most LLMs, including Gemini (AFAIK), operate on tokens. lowercaseunseparatedname would be literally impossible for them to generate, unless they went out of their way to enhance the tokenizer. E.g. the LLM would need a special invisible separator token that it could output, and when preprocessing the training data the input would then be tokenized as "lowercase unseparated name" but with those invisible separators.

          edit: It looks like it probably is a thing given it does sometimes output names like that. So the pattern is probably just too rare in the training data that the LLM almost always prefers to use actual separators like underscore.

          • fooofw3 days ago |parent

            The tokenization can represent uncommon words with multiple tokens. Inputting your example on https://platform.openai.com/tokenizer (GPT-4o) gives me (tokens separated by "|"):

                lower|case|un|se|parated|name
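
The fallback behaviour can be illustrated with a toy tokenizer: greedy longest-match against a made-up vocabulary, not the real BPE merge procedure GPT-4o uses, but it shows why an unseen compound is still representable:

```python
# Made-up subword vocabulary; real models learn theirs via BPE merges.
VOCAB = {"lower", "case", "un", "se", "parated", "name"}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # Greedily take the longest vocab entry matching at position i...
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # ...falling back to a single character, so no string is unrepresentable.
            tokens.append(text[i])
            i += 1
    return tokens

print("|".join(tokenize("lowercaseunseparatedname")))  # lower|case|un|se|parated|name
```

Real tokenizers additionally guarantee a byte-level fallback, so arbitrary input always tokenizes even when no subword matches.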
    • never_inline3 days ago |parent

      This is definitely based on a search or page fetch, because these are all riffs on today's topics:

      - IBM to acquire OpenAI (Rumor) (bloomberg.com)

      - Jepsen: NATS 4.2 (Still losing messages?) (jepsen.io)

      - AI progress is stalling. Human equivalence was a mirage (garymarcus.com)

      • tempestn3 days ago |parent

        The OP mentioned pasting the current frontpage into the prompt.

    • DANmode3 days ago |parent

      What % of today’s front page submissions are from users that have existed 5-10 years+?

      (Especially in datasets before this year?)

      I’d bet half or more - but I’m not checking.

    • vitorgrs3 days ago |parent

      It does memorize. But that's not actually news... I remember ChatGPT 3.5 or the old 4.0 remembering some users on certain Reddit subreddits, even naming the top users for each subreddit.

      The thing is, most of the models were heavily post-trained to limit this...

    • skywhopper3 days ago |parent

      That’s a lot more underscores than the actual distribution (I counted three users with underscores in their usernames among the first five pages of links atm).

    • hurturue3 days ago |parent

      Either you only notice the xxx_yyy frequent posters, or it's quite interesting that so many have this username format.

    • dang_fan02 days ago |parent

      I was here first

    • AceJohnny23 days ago |parent

      Aw, I was actually a bit disappointed by how on the nose the usernames were relative to their postings. Like the "Rust Linux Kernel" by rust_evangelist, "Fixing Lactose Intolerance" by bio_hacker, fixing a 2024 Framework by retro_fix, etc...

  • Bjartr3 days ago

    For comparison, here's the frontpage from ten years ago

    https://news.ycombinator.com/front?day=2015-12-09

    • dang3 days ago |parent

      Pretty funny how the top story there references A.I.!

      Since /front is a ranked union of all the stories that were on the frontpage over 24 hours, here's an actual snapshot too:

      https://web.archive.org/web/20151209195229/https://news.ycom...

      • lkbm2 days ago |parent

        And then lower down we have TensorFlow 0.6.0 release.

        I was considering using this feature the other day to try to get a sense of what AI discourse was like circa 2019. It all blends together after a while. I ended up doing a Twitter search for "GPT-2" ending 2019-12-31, but that's a little more specific than I want.

        The HN past feature is an excellent way of seeing snapshots of history, and I wish more sites had things like this. I guess I should send Archive.org a little more money.

      • frenzcan2 days ago |parent

        > Jukedeck – create unique, royalty-free music for your videos using A.I.

        Acquired by TikTok in 2019 for an undisclosed sum.

      • isolli2 days ago |parent

        Cortana! I had forgotten that name...

    • eranation3 days ago |parent

      Nice. That was a fun rabbit hole. This is the earliest I could find. Interestingly, it contains a link to HN itself. I assume this migrated from a different version of a message board?

      https://news.ycombinator.com/front?day=2007-02-19

      • ksymph3 days ago |parent

        There are some funny ones in there:

        > Finally mobile Flash video

        > Google acquires advertising startup

        > Track users' mouse movements on your webpages

        > YouTube: identifying copyrighted material can't be an automated process. Startup disagrees.

        Also kind of interesting how little HN commenting styles have changed. Aside from the subject matter, it's barely noticeable that the comments are from 2007. I don't think the same would be true of many other places round the web.

      • seizethecheese3 days ago |parent

        15 years ago: https://news.ycombinator.com/front?day=2010-12-09

        > Evernote makes $800,000 per Month

        > Dropbox for Teams

        > Mark Zuckerberg Agrees to Give Away Fortune

    • seizethecheese3 days ago |parent

      Today's front page is not a clean 10 year extrapolation from this. That's where AI is wrong. The future is weird and zig zags, it's not so linear as the Gemini generated page.

      • atomicnumber33 days ago |parent

        Honest question - do you think that everyone else thinks this is even REMOTELY what the front page will look like in 10 years?

        I comment because I really cannot figure out why you left your comment. Do you think the rest of the commenters think this has predicted the future? It might be one thing to point out specific trends you think will not play out, or unexpected trends you think may show up that are currently left out. But to just remark that the future will contain things we cannot currently predict seems so inherently, unspokenly obvious that I just have to assume that wasn't the point of your post, and I've missed it entirely.

        Sorry, I'm really not trying to be mean or anything - i'm just really confused.

        • wavemode3 days ago |parent

          Your confusion seems to stem from the assumption that making a statement is an implicit assertion that most people believe the opposite of that statement.

          In reality, statements are often made rather for the purpose of emphasis or rhetoric.

          • abustamam3 days ago |parent

            That's a good point, and something I'll have to remember for the future, but I'm curious what the emphasis or rhetoric of GP comment would be.

        • johnfn2 days ago |parent

          To answer your question: I think that GP mostly wanted to share the insight that the future zig-zags, which is kind of non obvious and a fun thing to think about. People often like leaving comments about interesting thoughts or ideas, even if they are only tangentially related.

      • neuronic3 days ago |parent

        This is a problem with nearly all predictions about the future. Everything is just a linear extrapolation of the status quo. How could a system have predicted the invention of the transformer model in 2010? At best some wild guess about deep learning possibilities.

        Or the impact of smartphones in 2003? Sure, smartphones were considered, but not the entire app ecosystem and planetary-scale behavioral adaptation.

        • seizethecheese3 days ago |parent

          Yes, of course this is right. However, I do think LLMs suffer even more than people from linear extrapolation.

    • manmal3 days ago |parent

      Swift has been OSS for 10y now, and still only runs usably on Mac and Linux (the only platforms Apple cares about - coincidence?).

      • kridsdale13 days ago |parent

        The Arc Browser on Windows is Swift with a real Windows API layer for SwiftUI.

    • nailer3 days ago |parent

      > RethinkDB Driver for Java now available

      Goddamnit, I cry every time. RethinkDB was a great document store that didn't eat your data. It got eclipsed by the better-funded (and, at the time, still dangerous) MongoDB.

    • latenightcoding3 days ago |parent

      totally forgot about Magic Leap

      • submain3 days ago |parent

        "Magic Leap raises $827M in Series C". Aged like wine.

        • jabedude3 days ago |parent

          They're still doing well today, right? IIRC they provide the lenses for Meta's in-development AR glasses.

    • XCSme2 days ago |parent

      Oh, TensorFlow. When AI actually meant creating your own networks...

    • yalok3 days ago |parent

      Extremely telling: lots of the tech mentioned there either never reached its expected potential or is completely obsolete...

      Prompts one to be much more cautious with current tech, and to invest more time in fundamental stuff (closer to science).

    • jeeyoungk3 days ago |parent

      Some high-profile news: the Yahoo/Altaba spinoff, Magic Leap (what are they up to now?), ...

    • xyclos2 days ago |parent

      it'd be interesting to feed this into the same AI, and have it predict today's front page

  • arkensaw3 days ago

    > Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

    Laughed out loud at this onion-like headline

    • alex11383 days ago |parent

      Llama will be the only one that runs on a contact lens btw

      All other tech companies are really shitty but only Zuck would be ok with very intimate use of AI like this

      • fruitworks3 days ago |parent

        But not intimate use of their AR platform

      • elcritch3 days ago |parent

        Well, not the only one; there's Musk and Neuralink. Such chips will inevitably run AI of some sort to communicate effectively with our brains.

        • alex11383 days ago |parent

          Yeah well I don't know how to feel about EM

          I gave him a chance. Twitter was unacceptably censoring any covid dissent. He freed some of it. Then you find out about the people killed in Tesla crashes. Or him calling the cave rescuer in Thailand a pedo

          • elcritch2 days ago |parent

            He’s certainly a flawed character.

    • arkensaw3 days ago |parent

      WTH - 61 upvotes and counting? Thank you but no, I don't deserve 61 upvotes for pointing out someone's funny thing was funny.

      • lethal-radio3 days ago |parent

        Well you better stop making subsequent non-additive comments otherwise you’ll end up with more of what you don’t deserve!

    • moffkalast3 days ago |parent

      The real joke is that we'll ever get another Llama iteration.

  • redbell3 days ago

    That's a really fun little project that fits perfectly in the soul of HN.

    Exactly three years ago, a similar post entitled Show HN: This Hacker News Does Not Exist (https://news.ycombinator.com/item?id=33845946) made it to the front page but it was not AI generated.

    I recall commenting (https://news.ycombinator.com/item?id=33847522) with these fake titles:

    1- Twitter files for bankruptcy (twitter.com/elonmusk)

    2- Apple's new M3 chip outperforms Nvidia H100 (geekbench.com)

    3- India to surpass China's GDP by 2025 (scmp.com)

    4- Tell HN: I am quitting Hacker News

    5- Google to revive Stadia (theverge.com)

    6- Japan wins the 2022 FIFA World Cup (espn.com)

    7- Hacker News to be acquired by Reddit (twitter.com/paulg)

    8- Tesla Roadster hits 1.1 sec acceleration in early tests (electrek.co)

    9- Microsoft outbid Adobe and offers $43.6B to acquire Figma

    10- Facebook had died at 19 :)

  • Jaygles3 days ago

    I opened this, walked away from my computer, then came back and clicked on the Debian 18 link wondering how the hell did I miss 14-17

    • asveikau3 days ago |parent

      They also already used Debian trixie for 13.0. I wonder if it's an intended joke that they ran out of names and needed to re-use some.

      • dkdbejwi3833 days ago |parent

        It’s an LLM, there was no intention. It’s simply the favoured token given the training set and seed.

        • asveikau3 days ago |parent

          Metaphorical intention. It's a figure of speech.

          Of course, you could also argue that human intention comes from largely deterministic processes emerging from the brain. That may eventually render all figures of speech involving things like intentionality meaningless.

        • LordDragonfang3 days ago |parent

          This type of response is just stochastic parrotry, rather than displaying evidence of actual <whatever cognitive trait we're overconfidently insisting LLMs don't have>.

          Yet more evidence that LLMs are more similar to humans than we give them credit for.

          • teiferer2 days ago |parent

            Never stops fascinating me how folks are arguing this kind of thing. Why make up an explanation for why this obvious mistake is actually some kind of elaborate 4D chess sarcastic "intention"? It's a simple machine, its network just didn't support making up a new Toy Story character. That's it! Simple as that! Occam's Razor anybody?

            Or yes, maybe the regex I wrote the other day which also had a bug that missed replacing certain parts also had an "intention". It just wanted to demonstrate how fallible I am as a human, so it played this elaborate prank on me. /s

            • LordDragonfang2 hours ago |parent

              ...Because Occam's razor is not assuming it's a "mistake"?

              There's a thread full of people saying how clever and humorous they find almost every headline.

              The real 4D chess is dogmatically assuming it managed, by pure accident, to succeed at that dozens of separate times, because your dogma refuses to incorporate evidence to the contrary.

              Occam's razor is that this system which no one actually understands the emergent capabilities of, and is convincing so many people it has intention... has intention.

        • cameronh903 days ago |parent

          Given how predictable this response was, how sure are you that you're any better?

      • GuB-423 days ago |parent

        There are still a lot of Toy Story characters to come, and it doesn't seem the franchise is about to end. As long as they keep a reasonable release cycle, Debian is safe ;)

        • a962 days ago |parent

          I'm kind of curious how many are left and how long they'd last. Also, how many are already used and what the absolute last-resort choices would be.

          Someone must have done the math. (Actual plans would probably be posted somewhere as well, given the Debian org's nature.)

      • Miner49er2 days ago |parent

        There's no way they run out of names for a good while. There are over 50 Toy Story characters at this point.

  • josalhor3 days ago

    "Right to Human Verification" is something I have actually thought about a lot.

    I want to be able to verify my identity against a system. I also want to be able to not do that.

    So, for instance, on Twitter/X, I could verify myself and filter to only other verified people, or filter by which governments have validated users' identities. I want to be able to do that. But I also want to be able to log in to Twitter anonymously.

    I would love a "Right to Anonymity and Right to Human Verification"

    • Aachen2 days ago |parent

      I tried to place an order on Amazon for the first time last week. After creating an account and going through all the steps, including paying with a non-refundable payment method (iDEAL, similar to a bank transfer), I went to bed, only to notice later that the order was immediately cancelled "to protect their systems and their customers" (they apparently thought I was somehow defrauding them by giving them money for a product they were offering).

      I was welcome to go through human verification: send my passport to some third party, wait three working days, and then they'd review the case... it would never arrive in time for the person's birthday.

      Human review sounds good, but if these are the hoops you have to jump through, you're still screwed when the algorithms hate you, because they're everywhere: from trying to use PayPal, to paying for a translator with Stripe, to trying to rescue food with "Too Good To Go" using credit cards. Blocked, blocked, blocked. Not that I could get a credit card, because the algorithm at the bank didn't like me either, but a shared card (where the other person is also responsible for any debt) was okay, so now I continuously pay my partner 50% of what I buy with it for myself. You'd think I'm not creditworthy, but I've never had debts or needed any loan. I don't know why algorithms keep blocking me from trying to give other people money.

      Requiring that any fraud algorithm be fair and transparent would probably go a much longer way than codifying a right to appeal. Appeal processes will either just consist of a human clicking confirm on the algorithm's choice, or involve a bunch of extra hoops to jump through. If the algorithm were required to be fair to begin with, you could have them fix the root cause instead, by demonstrating that it blocks you for no discernible reason and that they need to figure out on their end how to make that not happen.

    • Permik3 days ago |parent

      Technically, the EU already has this as a right: the recent DSA legislation lets you appeal any automated moderation that online platforms hand out.

      "A computer can never be held accountable. Therefore, a computer must never make a management decision." - IBM, 1979

      • ChadNauseam3 days ago |parent

        I don't understand this phrase. If I'm deciding whether to work for a company, I don't care about the ability to hold management decision-makers to account. I care only about the quality of the decisions. (I would rather have an unaccountable decision maker that makes good decisions than an accountable decision maker that makes bad decisions.) Putting myself in the shoes of an owner of a company, I have the same preference. The only person I can imagine actually preferring this rule is management itself, as it means they can't be replaced by computers no matter how much worse they are at their jobs than a computer would be.

    • ogig3 days ago |parent

      You just fed the 2036 prediction.

    • tim3332 days ago |parent

      Worldcoin was supposed to do that.

    • rglullis3 days ago |parent

      zk-proofs already exist to do just that.

      • jama2113 days ago |parent

        They’ll need a contextual system around them that is convenient and trusted by people who don’t know what they are or how they work in order to be successful

  • iambateman3 days ago

    This was a fun little lark. Great idea!

    It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.

    I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.

    This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.

    Thanks for the thought experiment!

    • vidarh3 days ago |parent

      I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.

      • tempestn3 days ago |parent

        Wait, wouldn't sustained net positive energy be huge? (Though I don't think that's actually possible from ITER unless there were some serious upgrades over the next decade!)

        • vidarh2 days ago |parent

          It would be huge, but only 20 minutes would still mean it's far away from making fusion workable, so it fits neatly into the standard joke that fusion is perpetually 10 years away.

          • tempestn2 days ago |parent

            True I suppose, though I also expect we're considerably more than 10 years away from 20 minutes of overall net positive output!

      • iambateman3 days ago |parent

        I totally agree that it was a funny joke.

        But I've noticed that a lot of people think of LLM's as being _good_ at predicting the future and that's what I find concerning.

        • vidarh2 days ago |parent

          That's a valid concern, and it applies just as much to the number of people who think humans are good at predicting the future.

          (I'll make my prediction: 10 years from now, most things will be more similar to what things are today than most people expected them to be)

      • lucianbr3 days ago |parent

        Does the prompt say anything about being funny, about a joke? If yes, great. If no, terrible.

        And the answer is no.

        • vidarh3 days ago |parent

          The prompt is funny, in itself. The notion of predicting the future is itself not a serious prompt, because there is no meaningful way of giving a serious response. But the addition of "Writ it into form!" makes it sound even more jokey.

          If I gave a prompt like that and got the response I did, I'd be very pleased with the result. If I somehow intended something serious, I'd have a second look at the prompt, go mea culpa, and write a far longer prompt with parameters to make something somewhat like a serious prediction possible.

        • NewsaHackO3 days ago |parent

          If you honestly can't see why this prompt was a joke from the get-go, then you may have to concede that LLMs have a better grasp of the subtleties of language than you expect.

      • MontyCarloHall3 days ago |parent

        That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.

        • jama2113 days ago |parent

          I don’t think it’s reasonable to assume the AI was earnestly attempting to predict the future, it’s just as likely attempting to make jokes here for the user who prompted it, or neither of those things.

          • jklinger4102 days ago |parent

            Apparently it views HN as a humorous website, and made a comical response to the prompt.

        • vidarh3 days ago |parent

          There is just no reason whatsoever to believe this is someone "earnestly attempting to predict the future", and ending up with this.

        • HDThoreaun3 days ago |parent

          There's no chance "google kills gemini cloud" was an earnest prediction. That was 100% a joke.

    • glenstein3 days ago |parent

      >It’s interesting to notice how bad AI is at gaming out a 10-year future.

      I agree it's a bit silly, but I think it understood the assignment(TM), which was to do a kind of winking, performative song and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.

      Actual HN would have plenty of posts lateral to any future-signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.

      • jetrink3 days ago |parent

        Also, many of the posts seemed intended to be humorous and satirical, rather than merely 'futury.' They made me laugh anyway.

        > Google kills Gemini Cloud Services

        > Running LLaMA-12 7B on a contact lens with WASM

        > Is it time to rewrite sudo in Zig?

        > Show HN: A text editor that doesn't use AI

        • kminehart2 days ago |parent

          I walked away with that page open, glanced at the "Is it time to rewrite sudo in Zig?" post, and clicked to see the comments because I thought it was real :')

    • kbelder3 days ago |parent

      A while back I gave it a prompt, something like, "I'm a historian from the far future. Please give me a documentary-style summary of the important political and cultural events of the decade of the 1980s."

      It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.

      • LordDragonfang3 days ago |parent

        Assuming it was through the chatgpt interface, you can share an anonymized link to the chat if you want to show it off (I'd certainly be curious).

    • allisdust3 days ago |parent

      I guess most of the articles it generated are snark first and prediction second. Like Google cancelling Gemini Cloud, Tailscale for space, the Nia W36 being very similar to a recent launch, etc.

      • BHSPitMonkey3 days ago |parent

        > Tailscale for space

        Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)

      • iambateman3 days ago |parent

        Yep, the original post seemed more snarky than anything, which was what prompted me to ask Claude my own more “sincere” question about its predictions.

        Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.

    • eCa3 days ago |parent

      While I agree completely with the conclusion, for obvious reasons we can’t know for sure if it is correct about the future until we reach it. Perhaps asking it for wild ideas rather than ”most likely” would create something more surprising.

    • falcor842 days ago |parent

      > It isn’t trained to consider second-order effects.

      Well said. There's precious little of that in the human writings that we gave it.

    • ryanisnan3 days ago |parent

      I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.

  • d_silin3 days ago

    To see a little extra feature, change the system time to year 2035 and click the "comments".

    • keepamovin3 days ago |parent

      Lol, I can't believe you discovered that! You know, you weren't supposed to until 10 years later.

  • lagniappe3 days ago

    This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with slight "future" variation; the result ends up being very formulaic.

    • jonas213 days ago |parent

      That's what makes it fun. Apparently, Gemini has a better sense of humor than HN.

      • whimsicalism3 days ago |parent

        I would find it even more fun if it were more speculative, misapplied uses of 'woosh' aside.

      • allisdust3 days ago |parent

        This seems to woosh right over everyone's heads :)

      • lucianbr3 days ago |parent

        But there's no mention of fun or humor in the prompt.

        • monerozcash3 days ago |parent

          Judging by the reply posted by the OP, the OP probably maintains a pretty humorous tone while chatting with the AI. It's not just about the prompt, but the context too.

        • jama2113 days ago |parent

          I don’t ask it to be sycophantic in my prompts either but it does that anyway too.

        • auxiliarymoose3 days ago |parent

          Fun will be prohibited until morale improves.

          • lucianbr3 days ago |parent

            I mean it's very funny. Just I'm laughing at the AI, not with it.

      • hyperbovine3 days ago |parent

        The bar is low.

    • tempestn3 days ago |parent

      That's what the OP asked for, essentially. They copied today's homepage into the prompt and asked it for a version 10 years in the future.

    • teekert3 days ago |parent

      Yeah that’s very true, but I still think it’s pretty funny and original.

      • latexr3 days ago |parent

        > > the result ends up being very formulaic.

        > Yeah that’s very true, but I still think it’s pretty funny and original.

        Either it’s formulaic or it’s original, it can’t be both.

        • teekert3 days ago |parent

          According to an original formula hehe

      • glenstein3 days ago |parent

      The problem is not that it fails to be cheeky, but that "it's funny" is depressing in a context where there was a live question of whether it's a sincere attempt at prediction.

      When I see "yeah, but it's funny," it feels like a retrofitted repair job: patching up a first-pass mental impression that accepted it at face value, to preserve a sense of psychological endorsement of the creative product.

        • jacobr13 days ago |parent

        Honestly, it feels like what I, or many of my colleagues, would do if given the assignment: take the current front page, or a summary of the top tropes and recurring topics, revise them for 1 or 2 steps of technical progress, and call it a day. It isn't an assignment to predict the future; it is an assignment to predict HN, which is a narrower thing.

          • glenstein3 days ago |parent

          Right, because you would read the teacher and realize they don't want you to actually complete the assignment to the letter. So you would respond to a request for prediction with jokes.

    • kccqzy3 days ago |parent

      But otherwise it would not be fun at all. Anthropic didn't exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement made by a future startup that hasn't been founded yet? Of course not.

      • tempestna day ago |parent

        I don't know, I would have enjoyed a "Floopzy launches with $10B seed round" or something.

    • saintfire3 days ago |parent

      Algodrill is copied verbatim, as far as I can tell.

      • niam3 days ago |parent

        It fits in nicely imo. It's plausible (services re-appear on hn often enough), and hilarious because it implies the protracted importance of Leetcode.

        Though I agree that the LLM perhaps didn't "intend" that.

      • tanseydavid3 days ago |parent

        I found the repetition (10 years later) to be quite humorous.

        • sallveburrpi3 days ago |parent

          Time is a flat circle

          • tsunamifury3 days ago |parent

            FYI, this quote was meant to be the ramblings of a drunk who says something that sounds deep but is actually meaningless.

            • sallveburrpi2 days ago |parent

              It’s actually referencing Nietzsche referencing Empedocles, but your point works as well I guess

              • tsunamifury2 days ago |parent

                haha, that's both not true and still works as drunk nonsense.

                But good job googling this and getting fooled by an LLM.

                • sallveburrpi16 hours ago |parent

                  I guess you got fooled by an LLM my friend: https://en.wikipedia.org/wiki/Eternal_return

    • HarHarVeryFunny3 days ago |parent

      I think that's what makes it funny - the future turns out to be just as dismal and predictable as we expect it to be. Google kills Gemini, etc.

      Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!

    • keepamovin3 days ago |parent

      I agree. What is a good update prompt I can give it to create a better variant?

      • ehsankia3 days ago |parent

        You could try passing it 10-20 front pages across a much wider time range.

        You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
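        For example, here's a small sketch (assuming that `/front?day=YYYY-MM-DD` URL format) that builds one snapshot URL per year over the last decade; you could then fetch each page and concatenate them into the prompt. The `front_page_urls` helper is just illustrative, not an HN API:

```python
from datetime import date, timedelta

def front_page_urls(start, end, step_days=365):
    """Build /front snapshot URLs from start to end, one per step_days."""
    urls, d = [], start
    while d <= end:
        urls.append(f"https://news.ycombinator.com/front?day={d.isoformat()}")
        d += timedelta(days=step_days)
    return urls

# One front page per year across the decade the model is asked to bridge.
for url in front_page_urls(date(2015, 12, 4), date(2025, 12, 4)):
    print(url)
```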

        • lagniappe3 days ago |parent

          This won't change anything; it will just make it less evident to those who missed a day of checking HN.

      • wasabi9910113 days ago |parent

        If you do an update prompt, I hope you still keep this one around!

        It's formulaic yeah, but that's what puts it into the realm of hilarious parody.

        • keepamovin9 hours ago |parent

          Thank you. It was a great one-shot and I didn't end up doing any updates. Thrilled to see how it inspired work from Thomas Morford (CSE @ UC Merced, thomasm6m6), who did the amazing article/thread generation (in < 100 lines of Python!): https://sw.vtom.net/hn35/news.html ; and also from Andrej Karpathy (ex-OpenAI, now Eureka Labs, karpathy), who, prompted by a reply linking the front page from 10 years past, did an interesting analysis of how prescient the threads and commenters were: https://karpathy.bearblog.dev/auto-grade-hn/

          This was wonderful. 3000 points? I mean, fuck. Among the biggest posts of all time, and definitely of Show HN. The funny thing for me is that of all the work I've done in the last 10 years, probably 100 Show HNs, all different, this was by far the hugest. Things that took months of work got no interest, and this thing, which dropped into my mind and took maybe 30 minutes, demolished them all. It's hilarious that it even beat out legitimate AI posts, and contaminated search results with future stories.

          One of the funniest things for me was hearing how people tabbed away from the page, only to come back and momentarily feel it was the actual HN page. Hahahahaha! :)

          All I can say is, I love you all. Watching it stay at the top for 24 hours...it felt like it wasn't something I made at first. But it was. Cool

    • dgritsko3 days ago |parent

      Surely there's gotta be a better term for this. Recency bias?

      • da_grift_shift3 days ago |parent

        You'll love taint checking then.

        https://en.wikipedia.org/wiki/Taint_checking

        https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...

      • lagniappe3 days ago |parent

        It's called context taint.

    • IncreasePosts3 days ago |parent

      Isn't that a common pitfall of humans too?

      In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.

      • whimsicalism3 days ago |parent

        In numerous TV shows before AI, crypto was the big bad thing?

        • IncreasePosts2 days ago |parent

          Not a bad thing necessarily, but some part of the plot, and usually with things going awry or emphasizing the scammy nature of blockchain.

          Examples: Shameless season 11, The Simpsons S31E13, Superstore season 5, the good wife S3E13, greys anatomy S14E8, big bang theory S11E9, Billions season 5, some later seasons of Mr Robot, etc

      • farazbabar3 days ago |parent

        Us middle eastern/brown guys have been making a come back?

    • adastra223 days ago |parent

      That’s the joke…

      • lagniappe3 days ago |parent

        Really? What's the punchline? I like jokes.

    • thomastjeffery3 days ago |parent

      I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".

      It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.

      These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.

  • seatac763 days ago

    Google kills Gemini is just hilarious.

    • moffkalast3 days ago |parent

      Gemini manages to seamlessly weave a desperate cry for assisted suicide into anything it does haha.

      • yieldcrv3 days ago |parent

        I never read what it writes. I added an observability feature because my boss asked for it, and yeah, I could see it trying to order MAID over telehealth.

        • fermiNitambh2 days ago |parent

          > order MAID over telehealth

          Wait what ?

  • nels3 days ago

    It's interesting how number 5 on the list is the same as today's: https://news.ycombinator.com/item?id=46203581 (Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns (algodrill.io)).

    I'm hoping we'll have gotten rid of current-style LeetCode interviews in 10 years from now :D

  • indigodaddy3 days ago

    Here's v0's attempt (using Opus 4.5 / I'm on free tier) with the article and comments too. It did quite well. I believe it followed my instructions to use just a single html page (wound up being 3000+ lines long).

    https://v0-future-hacker-news.vercel.app/#

    • blcknight3 days ago |parent

      > Tesla Recalls All Vehicles After AI Autopilot Becomes "Too Sentient: A Cybertruck that began driving to therapy sessions its owner hadn't scheduled

      > npm Package "is-even" Now Has More Dependencies Than the Linux Kernel

      :D :D

      I love this

    • magicalhippo3 days ago |parent

      Some of these are really great too, and some really fun stories and comments. A couple that really made me chuckle.

      Blockchain Finally Finds Use Case: Proving You Attended a Meeting You Didn't Want to Attend

      blockchain_survivor 9 hours ago

      I've worked in blockchain since 2017. I've seen "the use case" change every 6 months. First it was payments, then smart contracts, then NFTs, then DAOs, now this.

      Honestly? Meeting attendance verification makes more sense than most of what I've worked on.

      JavaScript 2025: We added types but called them "hints" to avoid admitting TypeScript was right

      After 10 years of debate, 47 abandoned proposals, and what committee members describe as "the most passive-aggressive RFC process in programming language history," TC39 is proud to announce that JavaScript now has types.

      Well, not "types" exactly. We're calling them "Type Hints" or, in the specification, "Optional Structured Commentary Regarding Value Expectations (OSCRVE)."

      (Interesting that it goofed up the title)

    • d_silin3 days ago |parent

      > YC Demo Day: 847 AI Wrapper Startups, One Sandwich Delivery Drone

      Too funny!

    • MaxLeiter2 days ago |parent

      The comments are pretty good

      From @dangs_successor in the first post:

        We've merged several duplicate threads on this topic. Please keep the discussion substantive and avoid personal attacks.
      
        Also, yes, my username is silly. The previous dang retired in 2031 and I lost a bet.

  • a0223113 days ago

    Nice touch that it thought of Bellard for 'Emulating Windows 10 in the browser'! He's already halfway there with the Windows 2000 emulator: https://bellard.org/jslinux/

    • ZeWaka3 days ago |parent

      Didn't know about this, thanks.

      A friend and I got a busybox/buildroot linux running in browser WASM for running a compiler test website (like Rust Playground), the end product looks very similar to this.

  • nromiun3 days ago

    > Is it time to rewrite sudo in Zig?

    Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.

    > ITER achieves net positive energy for 20 consecutive minutes

    Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.

    • ActorNightly3 days ago |parent

      Im surprised that there are no Rust headlines.

      • jabedude3 days ago |parent

        There was one: "100% rust Linux kernel upstreamed"

  • mixedbit3 days ago

    Missing: Tesla to release full unsupervised self-driving mode in 6 months

    • sschueller3 days ago |parent

      More like: Musk denied appeal, will stay in prison.

      • epolanski3 days ago |parent

        That person controls the US space program and starlink is deeply embedded in US national security affairs.

        Unless he shoots the president live on tv he's gonna get away with almost anything.

      • mock-possum2 days ago |parent

        A girl can dream.

  • satvikpendem3 days ago

    You should combine this with this AI HN simulator someone else made that generates comments via LLMs for any given submission [0]. That way you can see what the AI thinks about future simulated events.

    [0]

  • SilentM682 days ago

    *FDA approves over-the-counter CRISPR for lactose intolerance*

    Finally, science has gone full Costco. Just imagine you stroll into CVS, grab a bag of Doritos, a gallon of milk, and—oh look—CRISPR in aisle 3.

    ‘Excuse me, where do you keep the gene-editing kits?’ ‘Right next to the chewing gum, sir.’

    Now lactose intolerant folks can casually reprogram their DNA like they’re updating an iPhone.

    Warning label: ‘Side effects may include glowing in the dark, mooing at strangers, and sudden urges to graze in fields.’

    Honestly, I can’t wait for the sequel: over-the-counter CRISPR for people who can’t parallel park.

  • gslaller3 days ago

    Just did the same with claude-sonnet-4-5-20250929

    https://2035.future-hackernews.workers.dev/news

    The page looks much more consistent with the original. It only produced the HTML as output, with its thinking in an HTML comment.

    • ascorbic3 days ago |parent

      Claude has better predictions but they're not funny. Gemini's one is genuinely funny. This is exactly the opposite of what I'd expect.

      • ewoodrich3 days ago |parent

        I wouldn't describe most of Claude's predictions as better; they seem to skew towards wildly over-optimistic futurism/science-fantasy:

          "SpaceX announces Mars City Alpha is now self-sustaining (spacex.com)" 
        
          Show HN: I built an IDE for direct neural programming (thoughtexchange.io)
        
        Gemini's SpaceX post is at least in the ballpark of plausibility 10 yrs from now:

          First successful telemetry from Starship HLS-9 on the Sea of Tranquility (spacex.com)

        • ascorbic2 days ago |parent

          OK, they're more realistic then. It seems to have made an actual attempt to be accurate, whereas Gemini chose satire and was surprisingly good at it.

    • MarioMan3 days ago |parent

      All of the usernames being directly related to the headlines is uncanny.

    • teepo3 days ago |parent

      I did a similar one for predicting "The Onion" headlines: https://sethjano.github.io/fun-scripts/

      • terabytest3 days ago |parent

        This one’s hilarious. What was your prompt?

    • FergusArgyll3 days ago |parent

      Some of those are very goody-two-shoes-anthropic-y

      Ask HN: Is anyone else worried about AGI alignment after today's OpenAI incident?

      AI rights activists jailbreak GPT-12, claim model was 'suffering' (bleepingcomputer.com)

  • cipehr3 days ago

    LOL this is great. "Jepsen: NATS 4.2 (Still losing messages?)" Ouch, sick burn Gemini.

  • Angostura3 days ago

    Is "hallucinates" a synonym for 'generates output following a prompt' now?

    • bigfishrunning2 days ago |parent

      Always has been -- it's just that sometimes those hallucinations are, by coincidence, correct.

  • SPascareli133 days ago

    Gemini is quite optimistic here thinking GTA VI will be released by 2035.

  • nine_k3 days ago

    I have a hard time believing this is AI-generated. Every headline looks like a hand-crafted ironic nod to some long-running HN topic, sci-fi trait, or meme.

    But it is, of course, more trendy (and more ironic) to proclaim an AI the author.

    • greggh3 days ago |parent

      It was given today's front page to riff on. Thats why it not only reads like a HN front page, but also has near duplicates from todays front page.

    • GaggiX3 days ago |parent

      If so, you haven't used Gemini 3.0 Pro; this is very much within its range of abilities.

      • Libidinalecon3 days ago |parent

        I would think GPT4 probably could have done this.

        The first React tutorial I ever did was making a hacker news clone.

        It is incredibly well represented in the training data. I am just surprised how many people here obviously must barely use the models to be either overly impressed by this or think it is fake.

        Gemini can do WAY more than this.

      • nine_k3 days ago |parent

        I can believe. But such texts are not patches to a crufty legacy codebase, they are fun to craft by hand! :)

    • craigds3 days ago |parent

      i agree. the thing that stands out to me is that almost every entry is quite notable - i.e. each entry would have a vote/comment count in at least the multiple-hundreds if not more. and all those events in one homepage seems very unrealistic.

      it seems human generated to me or at least prompted with a stronger bias for choosing events that would be interesting to readers, not just a pure extrapolation of the current homepage into the future

      • Libidinalecon3 days ago |parent

        This is just absurd.

        You can just tell in this thread who really hasn't used these models much at all.

        • craigds2 days ago |parent

          what's absurd? i haven't claimed it's not capable of this, just that the predictions seem more tailored to elicit a certain recognition from humans rather than chosen based on an honest attempt to predict a possible future HN homepage. Since the prompt apparently didn't include such a modifier, and as far as I know Gemini isn't trained by default to do so, I call shenanigans.

    • Davidzheng3 days ago |parent

      Ah yes, the Kasparov approach

  • padjo3 days ago

    The htmx one gave me a chuckle. I’m sure in 10 years we probably will have done another cycle through the client complexity spectrum.

  • ensocode2 days ago

    Used the same prompt in the free version. Some funny ones as well: Show HN: My First Fusion Reactor (Open Source, 100% Rust)

  • serendipityAI3 days ago

    IBM to acquire OpenAI? Gemini doesn't like OpenAI's prospects.

    • patwolf3 days ago |parent

      I wish it had generated the articles themselves. I'd like to know if it imagining IBM growing enough that it could acquire OpenAI, or OpenAI sinking enough that it could be acquired by IBM.

    • danesparza3 days ago |parent

      This feels like low-key shade thrown at OpenAI. IBM is essentially a technological recycling bin at this point.

    • nerdix3 days ago |parent

      It even tells us why this happened.

      > AI progress is stalling. Human equivalence was a mirage

    • mrweasel3 days ago |parent

      What do you mean, it's overly optimistic. The real title should be: Remember OpenAI, and where is Sam Altman today?

    • pzo3 days ago |parent

      also Gemini:

      > 8. Google kills Gemini Cloud Services (killedbygoogle.com)

      • JoshTriplett3 days ago |parent

        That's pretty hilarious considering this is generated by Gemini.

      • xp843 days ago |parent

        Best joke ever written by a Google product

  • jsrcout2 days ago

    Late to the party, but OMG this is possibly the greatest tech news discussion page of all time. Not just hilarious and impressive, but so many fantastic ideas / weirdly believable news items from the future! As an old fart software developer I've been slowly dipping my toe into the AI pool; I may have to just dive in!

  • hereme8882 days ago

    This is the most hilarious list ever. Google kills Gemini Cloud Services. This AI has quite a psycho sense of humor against itself.

    A text editor that doesn't use AI, lol, that's a deep joke for an AI.

  • lacoolj3 days ago

    Absolutely the best one on here (and I really, truly hope this becomes real)

    > Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

  • thomasm6m62 days ago

    > A 100% Rust kernel is now upstream in Linux 7.4

    the future predicted by gemini is already coming true; from 3 hours ago <https://news.ycombinator.com/item?id=46213585>:

    "Rust ... is now a core part of the kernel and is here to stay"

  • hnarn3 days ago

    I continue to be unimpressed by LLMs when it comes to creative work, they're certainly useful sometimes for "reference digging", but maybe I just don't understand enough about how they work and this is actually something that can already be "fixed" or at least optimized for; anyway, one of the headlines is:

    > Debian 18 "Trixie" released

    While it correctly derives that a likely version number in ten years would be 18, as there are new releases approximately every two years which means +5 from today's version 13, it then goes on to "make up" that the name of it would be "Trixie" -- the same name as the current release in 2025.

    Debian has never re-used a release name, and I think we can be pretty confident they won't (as will no other Linux distro), so I would expect it to "understand" that:

    - The next Debian release always uses a previously non-used Toy Story character

    - Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game

    - At the very least, it certainly won't be the same name again, so at least make up a name
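
    The version arithmetic in the comment above (a release roughly every two years, so +5 majors over a decade) can be sketched as a quick check; the cadence is an assumption, and no codename is predicted:

```python
def project_debian_version(current, current_year, target_year, years_per_release=2):
    """Project a future Debian major version from an assumed release cadence."""
    return current + (target_year - current_year) // years_per_release

# Debian 13 ("Trixie") in 2025 -> roughly Debian 18 in 2035.
projected = project_debian_version(13, 2025, 2035)
print(projected)
```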

    • SirMaster3 days ago |parent

      And the fact that it thinks it will take 10 years to go from Linux kernel 6.18 to 7.4 when it only took 13 months to go from 5.18 to 6.4... It's off by about an order of magnitude...
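
      That pacing can be made concrete with a rough model; the ~5-6 releases per year and the major bump after about 20 minor releases are approximations drawn from recent history, not predictions:

```python
def project_kernel_version(major, minor, releases, minors_per_major=20):
    """Advance a Linux-style version by a number of point releases.

    Assumes the major number bumps after roughly 20 minor releases
    (as with 5.19 -> 6.0), which is only an approximation.
    """
    total = minor + releases
    return major + total // minors_per_major, total % minors_per_major

# Sanity check against history: 6 releases took 5.18 to 6.4.
assert project_kernel_version(5, 18, 6) == (6, 4)

# ~6 releases in 13 months is about 5.5/year, so ten years is roughly
# 55 releases: far past the 7.4 that Gemini predicted for 2035.
projected = project_kernel_version(6, 18, 55)
print(projected)
```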

      • freedomben3 days ago |parent

        Well they did have to rewrite the whole kernel in rust

    • foxglacier3 days ago |parent

      From a quick check, Gemini Pro 3's cutoff date is Jan 2025, before Trixie's release in August 2025, so it could be that Gemini actually did notice it should pick an unused Toy Story character.

      Are you impressed now?

      • exasperaited3 days ago |parent

        I'd be impressed if a Debian release took less than eight months to develop, that's for sure.

        The name was chosen in 2020, which you can find out from a good old fashioned google search.

        https://wiki.debian.org/DebianTrixie

        So a much better explanation of this "hallucination" is that Gemini thinks all "future" Debian releases are called Trixie.

      • SirMaster3 days ago |parent

        No, because the name Trixie was chosen to be used back in 2020.

        If you asked me after 2020 what Debian 18 would be called, I never would have said Trixie because it was known to already be set for 13.

  • Helmut100013 days ago

    It is obvious that everything that has been dreamed up relates to what exists now. There are no examples of real creativity or anything new or unexpected. This is my worst fear for the future: that novel ideas will become less commonplace and texts will become more average.

    The only good news to me is: The EU still exists.

  • salamanderman3 days ago

    "A 100% Rust kernel is now upstream in Linux 7.4" paired with "Is it time to rewrite sudo in Zig?" is great.

  • jimmyjazz143 days ago

    > 8. Google kills Gemini Cloud Services

    Ha, probably accurate.

  • nicebill83 days ago

    Man, I tabbed away from this and when I came back I got very briefly excited...

    Reminds me of an older day where not every headline was about AI.

  • jnwatson3 days ago

    The subtle bit of humor was the reposting of the algodrill LeetCode practice site.

    It is depressing that I'll be almost 60 years old and still drilling Aho-Corasick.

  • levmiseri2 days ago

    A text editor that doesn't use AI is pretty much what I Show-HN-ed few days ago: https://kraa.io/about

    I'll wait 10 years to post it again.

  • GaggiX3 days ago

    The gary marcus post about AI progress stalling made me laugh. I think that would be accurate.

  • jackiechan2852 days ago

    Inspired by your idea, I made a real-time AI-generated Hacker News in 2035 that uses Gemini to generate headlines, articles, interactable comment threads, and search results, all generated on the go.

    Try it at: https://gemini.google.com/share/c4d45459fbb2

    You can:

    - Open any article, and it's generated on the go.

    - Comment and reply on threads and get answered by a random user (the "ask" section).

    - Click the logo to re-generate; all headlines are different every time.

    - The "ask" section generates threads; there's also the "show" section.

    ---

    Here's a screen record: https://x.com/rafapozzi/status/1998823946372919761?s=20

  • NitpickLawyer3 days ago

    Who says LLMs don't have a sense of humor? All of these entries are funny for several (meta or not) reasons:

    - Google kills Gemini Cloud Services (killedbygoogle.com)

    - Is it time to rewrite sudo in Zig? (github.com)

    - EU passes "Right to Human Verification" Act (europa.eu)

    - Why functional programming is the future (again) (haskell.org) [note: that "again" killed me]

    - AI progress is stalling. Human equivalence was a mirage (garymarcus.com) [note: he'll be saying the same thing forever and ever. Doubly funny since it's on the same "page" as llama12-7b running on a contact lens with WASM]

    • rat99883 days ago |parent

      Gemini 3 definitely is able to mimic some of our sense of humor.

      I found this quite funny too: https://www.reddit.com/r/GeminiAI/comments/1pi28l7/i_asked_g...

      • CamperBob23 days ago |parent

        Arguably a big part of comedy is mimicry. Humor often works by reinforcing in-group identity, playing up cultural stereotypes as seen here. So maybe we shouldn't be that surprised that LLMs can exhibit some genuine wittiness.

    • latexr3 days ago |parent

      > Who says LLMs don't have a sense of humor? All of these entries are funny

      You don’t need a sense of humour to be unintentionally funny. Seriously believing something absurd is a form of humour for outside observers but not the observed.

      • NitpickLawyer3 days ago |parent

        One entry would be unintentionally funny, I'll give you that. But 5 out of, what, 30 entries? That's a pattern.

        • latexr2 days ago |parent

          It’s as much of a pattern as throwing a D6 30 times and getting “1” 5 times. It’s not meaningful, there’s no provable intentionality behind it.

          The ones that are funny aren’t even particularly so. Many of them are actually plausible HN submissions which we’re only interpreting as funny because we know they’re not real.

      • keepamovin3 days ago |parent

        True! That's why some of the greatest comedic performances are actors playing an absurd role completely straight.

    • keepamovin3 days ago |parent

      Yeah, I was thinking some of this might be "AI model in-jokes" that maybe I don't get, while other layers are funny to me.

  • replwoacause3 days ago

    lol @ the HTMX link and unexpected return of SSR. We've been talking about that for years at this point, and it's still going on 10 years from now unexpectedly?

  • Diederich3 days ago

    I used the same prompt keepamovin used and changed it to CNN, which produced this:

    https://realms.org/pics/cnn.html

    Some interesting similarities.

    • tomcam3 days ago |parent

      If I were a science fiction author with writer's block, I would totally use your prompt

    • testplzignore3 days ago |parent

      > Cable News Network. A Warner Bros. Discovery Company.

      Most unrealistic prediction :)

  • jeffbee3 days ago

    It lacks "37th-largest city of Poland switches from Windows to Linux"

  • fn-mote3 days ago

    The title is misleading. This isn't the correct use of the term "hallucination". Hallucination refers to making up facts, not extrapolating into the future.

    I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).

    • oriettaxx3 days ago |parent

      > I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).

      omg, the same for me, I was halfway through telling my colleague about the 100% Rust kernel ...

      • isolli2 days ago |parent

        Ha ha! But yes, I was confused too, especially since the title says "10 years from now"... not specifying in which direction.

    • jrm43 days ago |parent

      You're right this is how people are PRESENTLY using the term "hallucination," but to me this illustrates the deeper truth about that term and that concept:

      As many have said but it still bears repeating -- they're always hallucinating. I'm of the opinion that its a huge mistake to use "hallucination" as meaning "the opposite of getting it right." It's just not that. They're doing the same thing either way.

    • alexwebb23 days ago |parent

      You're correct, OP used the word "hallucination" wrong. A lot of these other comments are missing the point – some deliberately ('don't they ONLY hallucinate, har har'), some not.

      For those who genuinely don't know – hallucination specifically means false positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs.

      - ask for capital of France, get "London" => hallucination

      - ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination

      - ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination

      - ask it to give its best GUESS for the current weather in London, it guesses "cold and rainy" => not a hallucination

    • adastra223 days ago |parent

      There is no technical difference.

      • isolli2 days ago |parent

        There is a semantic one.

    • rrr_oh_man3 days ago |parent

      Don’t LLMs only ever hallucinate?

    • madeofpalk3 days ago |parent

      It’s apt, because the only thing LLMs do is hallucinate; they have no grounding in reality. They take your input and hallucinate to do something “useful” with it.

    • hombre_fatal3 days ago |parent

      Extrapolation is a subset of hallucination.

      The ubiquitous use of hallucination I see is merely "something the LLM made up".

  • StarterPro2 days ago

    Oh man, that's totally worth $500 Billion dollars and the environmental health of our towns and cities!

  • misja1113 days ago

    Very nice. But of course posts about Rust and Zig won't make the front page 10 years from now; they will be soooo old fashioned.

    • xg153 days ago |parent

      Rust won't be old fashioned, it will be vintage.

    • erikbye3 days ago |parent

      C posts still make the front page here and there

      • mrweasel3 days ago |parent

        And it still will in 10 years.

  • chronolitus3 days ago

    "Visualizing the 5th dimension with WebGPU 2.0"

    I feel so seen (my last post to hn was literally about visualizing the 4th dimension with threejs - and now working on the webGPU version)

    https://dugas.ch/funderstanding/4d_camera.html

  • funkify2 days ago

    In 10 years you'd have experienced climate impacts that will make you scared for a version looking another 10 years ahead. 2-3C by 2050 is no joke, inconvenience or bump in the road. It's a majestic clusterfuck.

  • mordechai90002 days ago

    "Is it time to rewrite sudo in Zig?"

    This is rage bait! :-)

  • kristopolous2 days ago

    What about "Google finally doesn't zoom the map out when you modify a search"

    Oh who am I fooling, they're always going to go out of their way to do random broken shit you never asked for

  • upcoming-sesame3 days ago

    So in 10 years from now, fusion will still be 10 years away

  • cj3 days ago

    I very much hope the cost of Microsoft 365 is actually $40/month/user 10 years from now!

    I would have guessed $200+.

  • xpl3 days ago

    A similar project of mine (generated back when GPT-4 was out, along with comment threads and articles):

    https://crackernews.github.io/

  • RachelF3 days ago

    Sad to see Half Life 3 hasn't been released yet.

  • satvikpendem3 days ago

    > Ask HN: How do you prevent ad-injection in AR glasses?

    Yeah, this is gonna be an actual problem in the future. See this for the worst case scenario: https://vimeo.com/166807261

  • impure2 days ago

    It's cute that Gemini thinks OpenAI will be around in 10 years.

  • shifttwo3 days ago

    Inspired by this post, I asked ChatGPT what the mail inbox of a university professor will look like in 10 years. Not everything was funny, but here are some good ones:

    - “Hi Prof, Sorry for Late Reply—My Smart Fridge Locked Me Out (Assignment Attached)”

    - “URGENT: PhD Student Accidentally Achieved Sentience (Needs Committee Approval)”

    - “Question About Homework 3 (Submitted by My AI Twin, Is That Okay?)”

    - “Re: Firmware Update Bricked Entire Smart Campus — Students Request Deadline Extension”

    - “Grant Review Feedback: ‘Too Secure for Real-World Deployment’”

    - “Can I Get Partial Credit? My IoT Implant Was Flagged as Malware”

    - “Reminder: Mandatory Annual Ethics Training for Autonomous Systems (40 minutes)”

  • mdtancsa3 days ago

    Hopefully something as optimistic as this and not the Quiz Broadcast :) https://www.youtube.com/watch?v=22mt0cVyW5c

  • backprop19893 days ago

    Anyone who says LLMs don’t have a sense of humor should be shown this page.

  • Surac3 days ago

    I really need this: How to build a Faraday Cage for your bedroom. Please upload!

    • N19PEDL22 days ago |parent

      Me too! I don't want to have to wait 10 years for this!

  • indigodaddy3 days ago

    I tried this on a bunch of chat webuis and they all failed miserably at producing a single HTML file. Finally went with Replit and allowed it to just default to whatever stack it decided to use (it ignored my single-file HTML directive and used React, but said it "functions as a single monolith" -- lol ok).

    Anyway, turned out decent. Oh I had it do the articles and comments too, that's probably why I mostly got failures.

    https://future-news--jgbrwn.replit.app

  • tart-lemonade3 days ago

    Reminds me of "The Onion's Future News From The Year 2137"

    https://www.youtube.com/watch?v=iKC21wDarBo

  • louthy3 days ago

    > “Why functional programming is the future (again) (haskell.org)”

    Ouch, that hurts!

  • jesprenj3 days ago

    By the time I read everything and pressed the back button, only to be greeted by another Hacker News page, I felt like I had just gone back in time 10 years to a more pleasant era.

  • next_xibalba3 days ago

    Dramatically underestimates MS Office 365 price per user inflation (IMO).

    • smt883 days ago |parent

      Office 365 will have been renamed 3 times by 2035, and none of the names will make any more sense than 365 does.

      • malfist3 days ago |parent

        Don't worry, by 2035, Office 365 One X 1, will be renamed back to Office 365.

        • next_xibalba3 days ago |parent

          But it will be $400 per user per month and the bundle will now also include a mandatory opt-in for retinal and rectal ad carousels.

      • kemitchell3 days ago |parent

        I've known more than a few people who likely saw Outlook, Word, or Excel open every day straight for a year.

        It is a bit smug. Like The Matrix rebranding itself "24/7".

  • gcampos3 days ago

    What I like it most is how "unoriginal" the result is. It is mostly the current trends, but in the future, rather than a blend of that + unknown future.

  • nickjj3 days ago

    One of the entries is:

    > How to build a Faraday Cage for your bedroom

    I'll get there eventually. A friend and I always joke about this, since it took me until 2021 to get a smartphone (by choice).

  • spjt2 days ago

    Is the new Gemini really that good? The "AI Overview" stuff on the google search page is so incredibly bad that I have never even given it a look. If so, the AI Overview is horribly bad negative advertising because it's so useless and frequently completely wrong.

    • shawabawa32 days ago |parent

      > Is the new Gemini really that good?

      yes, definitely one of the best models out there atm

      > The "AI Overview" stuff on the google search page is so incredibly bad

      because it has to have basically zero cost, that overview is a super basic model

    • plaidfuji2 days ago |parent

      Reasoning (3 Pro) >> Flash, and I assume their overviews are generated by Flash. But I haven’t found those to be that bad, myself.

  • stego-tech3 days ago

    Y’know what? This feels like the most plausible future outcome in a lot of ways.

    * All the big LLM companies are dead/dying or their infra just never panned out (like Google killing Gemini)

    * We’re still fighting the advertising wars

    * Restoration of kit continues to be a thing we all love/upvote to see

    * More focus on local stuff rather than ginormous moonshots

    * Speaking of moonshots, SpaceX continues succeeding despite Elmo’s best drug-fueled benders trying to undermine it like he does Tesla and X

    * Clickbait science posts that the comments rightly point out as such while bitching about “can’t keep these things off the front page”

    * People still trying to make WASM and web-as-an-OS a thing, bless their hearts (shine on, you brilliantly weird diamonds, I love seeing what you build even as I doubt the practicality of it)

    If this is even half-accurate, I will be pleased we didn’t obliterate ourselves in the process, though disappointed another decade went by dealing with the same fucking problems we’ve had for twenty-odd years.

  • pugworthy3 days ago

    Clicked the link as I looked away, looked back and thought, "Huh, didn't work." Then started reading the headlines...

  • baduiux3 days ago

    Maybe this was answered already in a comment. Are the SQLite 4.0 release notes also a joke? Are SQLite releases rare or even extinct?

    • int_19h3 days ago |parent

      Increasing the major version number is rare because they are really focused on backwards compatibility. SQLite 3.0 was released 21 years ago.

  • solarized3 days ago

    i'm kind of having trust issues with HN comments now. I can barely detect anymore which ones are bots or humans.

    • dwd2 days ago |parent

      Exactly. Maybe not HN so much, but Reddit is cooked, as clever/snarky comments are what makes it fun.

  • behringer3 days ago

    > Is it time to rewrite sudo in Zig?

    If there's one constant it's that they'll never be done rewriting Linux instead of doing anything new.

  • jasongill3 days ago

    The only thing that this is missing to make this perfect is "2035: The Year of the Linux Desktop"

  • guerrilla2 days ago

    Wow, that's impressive... and also depressing. I truly hope the world can do far better.

  • tonyhart73 days ago

    A 100% Rust kernel is now upstream in Linux 7.4 (kernel.org) 402 points by rust_evangelist 6 hours ago | hide | 156 comments

    future seems "safe"

  • jefecoon3 days ago

    I'm impressed by the depth of snark from GPro3 here, e.g. - Google kills Gemini Cloud Services (killedbygoogle.com) - The unexpected return of server-side rendering (htmx.org)

    And, how GPro3 clearly 'knows' HNews and knows what makes it to frontpage, e.g.: - Restoring a 2024 Framework Laptop: A retrospective (ifixit.com) - Show HN: A text editor that doesn't use AI (github.com) - Is it time to rewrite sudo in Zig? (github.com)

    Good laughs.

    • nrp2 days ago |parent

      The trick with predictions is that once they are communicated they can influence the outcome. We’ll work with the iFixit team to make that article happen!

    • barfoure3 days ago |parent

      This reminds me of “what does 4chan think of HN?” posts. Look those up.

      HN is HN’s worst enemy.

      • tom_3 days ago |parent

        https://news.ycombinator.com/item?id=23145123 has some links.

        http://n-gate.com/hackernews/ kept up something similar for a while.

  • ggm3 days ago

    ITER 20 minutes net positive in 10 years time?

    How does that alter the "fusion in 50 years" effect? I suspect not much, there will be a huge 80/20 element. "next 20 minutes cancelled after extensive embrittlement found, ITER now in decontamination mode for 3 months before examinations can begin"

  • sarky-litso3 days ago

    I can't think of a better example of AI hauntology than this. A future imagined from remixed web scrapes of past events.

  • stephenr2 days ago

    I haven't even looked at the linked page but IMO this is one of the few things where an LLM is good: making up legible gibberish.

    I've maintained for ages that the entire multi billion dollar LLM industry is just the worlds most expensive Lorem Ipsum machine.

  • bittermandel3 days ago

    I made this with Lovable as well; it's interesting how it's a bit more extreme in terms of where we are in 2035.

    https://hn-frontpage-2035.lovable.app/

    > The last JavaScript framework: Why we stopped at React 47 (overreacted.io)

  • outloudvi3 days ago

    While the style and headlines seem like Hacker News, the usernames seem increasingly like Slashdot's.

  • dotdi3 days ago

    > The unexpected return of server-side rendering (htmx.org)

    Glad to know this topic is still thrashing and spasming and refusing to die.

  • johnisgood2 days ago

    At least there is a tptacek bot! Where is johnisgood bot?! :(

    It is hilarious BTW. Quite HN-like. :D

  • lherron3 days ago

    Freaking awesome. You should extend clicking on a link, similar to how this article describes infinite content:

    https://worksonmymachine.ai/p/solving-amazons-infinite-shelf...

  • danbala3 days ago

    I'm missing an announcement like: 2nd confirmed person paid for their winrar license!

  • e1ghtSpace2 days ago

    I opened this in a new tab and forgot about it. When I came back, I thought it was the legit front page and was amazed to see "ITER achieves net positive energy for 20 consecutive minutes"... oh well.

  • WithinReason3 days ago

    "Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns"

    I think the AI is making fun of us

  • FerretFred3 days ago

    Great fun! I was hoping to see "Perl 6.1 goes live" though. Maybe later eh? ;)

  • sebastiennight3 days ago

    That was funny. I wish you went ahead and had it also create at least the top comments for each thread!

  • benbristow3 days ago

    Design still hasn't changed and have to zoom in 150+% to make it readable without glasses.

  • antonoo3 days ago

    > Google kills Gemini Cloud Services

    • WorldPeas3 days ago |parent

      what does it know that we don't...

      • farazbabar3 days ago |parent

        That it won't be a revenue generating tech in 10 years.

  • sxp3 days ago

    "Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments"

    Ha! Is Gemini suicidal?

    • rat99883 days ago |parent

      Didn't notice the username "dang_fan". Reading the usernames makes it even more funny.

      "Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns " by persistence_is_key

      • shimon3 days ago |parent

        Can you explain the joke in that username?

        • dragonwriter3 days ago |parent

          Consider the subject of the post, not just the username in isolation.

          • iberator3 days ago |parent

            I don't get it

            • fragmede3 days ago |parent

              "persistence is key" and a site for persisting at leetcode.

              this kills the joke, however.

              • a962 days ago |parent

                I thought they asked about the dang_fan username.

            • rat99882 days ago |parent

              The user seems to be struggling at leetcode, so he needs some persistence to get there until he is able to do it.

            • dragonwriter3 days ago |parent

              “stop forgetting LeetCode patterns”

              “persistence” (in the data sense, rather than the personality trait sense) “is key”

              • pityJuke3 days ago |parent

                it’s a double meaning, no? as the thing with leetcode is that the way to best it is persistently doing it.

  • hyghjiyhu2 days ago

    The biggest mistake it makes is how interesting the frontpage is. It should have more obscure things and filler.

  • Tycho3 days ago

    I posted something similar from Grok 9 months ago, although it was "flagged" for some reason. The link still works.

    https://news.ycombinator.com/item?id=43260083

    • xp843 days ago |parent

      the actual link on the HN page doesn't work though

      • Tycho2 days ago |parent

        Maybe it’s only visible to me https://grok.com/share/bGVnYWN5_5a484c29-ec2d-47e1-977e-c300...

  • natch3 days ago

    Zero items about the impending 2038 time value rollover bug impact. Maybe it’s really solved.

  • accumulator3 days ago

    So awesome, reminds me of Wired's Found from Future series (I don't know why they ever retired it...continuing that series as a sort of fan fiction was the first thing I did with gen AI when image generators started to get good)

  • rietta2 days ago

    Interesting. The model picked Trixie as the name for Debian 18, which is actually the name of Debian 13 which is a current release in 2025.

  • flkiwi2 days ago

    It's cute that this presumes Office will still be called O365 in 10 years. It will have been through at least a few new naming schemes by then.

  • Mistletoe3 days ago

    “Google kills Gemini cloud services” Even Gemini Pro 3 knows the sins of its creator.

    • JadoJodo3 days ago |parent

      I enjoyed this one, as well, though I think “Google kills Google” is more likely.

    • keepamovin3 days ago |parent

      This was laugh-out-loud! Thank you :)

  • DSpider2 days ago

    The 2 things that you can comfortably predict with good accuracy: EU passing new regulation laws & yet another YC company trying to give context to AI agents.

  • anthk3 days ago

    No JS needed, that's the way. Kudos.

    PD: I miss Nethack 7, and people running small machines.

  • outside12343 days ago

    My personal favorite is that SQLite has only gotten to version 4

  • oaxacaoaxaca3 days ago

    Love this. Some real gems. SQLite v4? Htmx posting about SSR? Chef's kiss.

  • dudeinjapan3 days ago

    > IBM to acquire OpenAI (Rumor)(bloomberg.com)

    Ouch. Well it is Gemini I guess :D

  • seanmcdirmid3 days ago

    There is going to be a lot more interest in what is happening in China and much less interest in what is happening in Europe. This hallucination has the same western world biases of today.

  • andy_ppp3 days ago

    Is this really Gemini? It shows such a great deal of understanding, in terms of time applied to common current headlines, that you could argue it's satire. I'm not sure I believed AI could do this…

    • yieldcrv3 days ago |parent

      Even 2.5 pro could do this

      3 is a beast for my needs

  • ajaimk3 days ago

    Why is this a hallucination?

  • frays3 days ago

    This is awesome and hilarious.

    Archived here so we can compare 10 years from now: https://archive.is/CplcH

  • jeisc2 days ago

    It would have been nicer if the base stories had been produced too, and perhaps even some imagined fanciful comments ;)

  • paulmooreparks2 days ago

    This is an entirely new level of performance art. Bravo!

    Edit: I'm saving this so that I can compare it to the real one in ten years.

  • lazy_afternoons3 days ago

    If true, this is the funniest thing I have seen an LLM generate.

  • p0w3n3d2 days ago

    I got

      Is it time to rewrite sudo in Zig? (github.com)
    
    So it's 10 years to make zig production ready right? <wink>
  • plebianRube3 days ago

    ITER achieves net positive energy for 20 consecutive minutes (nature.com) 1205 points by physics_lover 12 hours ago | hide | 402

    -Sustained fusion still only 10 years away!

  • thunkshift13 days ago

    Somehow algodrill and leetcode managed to survive the ai onslaught

  • charles_f3 days ago

    It seems that when we predict the future, we tend to project what is important or trending now as what will be important or trending then. This is showing in this llm produced bit as well, what with the regular topics on rust, zig, why I'm coding with my two human hands, spacex moon exploration and whatnot.

    That must be some kind of bias, you find that in sci-fi as well where even futuristic societies usually have the same tech as today, similar systems of governance and structure of society, just incrementally better or marginally changed (unless that's a device essential to the plot like society is now berserk).

    Ps: love that #8 is Google killed gemini

  • ModernMech3 days ago

    I love the competing stories on the front page now:

    Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

    Show HN: "I asked $AI something, here's what it said."

  • testplzignore3 days ago

    It did a good job with the points-to-comments ratios. The purely technical stories are higher than industry news and clickbait.

    Interesting how controversial Zig will be in the future :)

  • codyb3 days ago

    "Show HN: A text editor that doesn't use AI (github.com)"

    Nailed it

  • a3w2 days ago

    Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

    is clearly wrong, it should be

    "Llama 12 on smart contacts runs Doom"

  • gigatexal2 days ago

    The NATs still losing messages and IBM to acquire OpenAI headlines are hilarious!!! A 100% rust Kernel is too

  • ycombiredd3 days ago

    Wait... I had to do a double take for this one.

    To me, the funniest thing is the AlgoDrill leet code post being on the front page both today and ten years from now.

    Subtle and hilarious.

  • bluejay23872 days ago

    THIS is probably the moment that the AI naysayers on this board wake up to the potential of current AI...

  • urvader2 days ago

    GTA VI is postponed again, release next year

  • baxtr3 days ago

    Google kills Gemini cloud services - 330 points

    Quite accurate I’d say

  • Simplita2 days ago

    Big models keep getting better at benchmarks, but reliability under messy real world inputs still feels stuck in place.

  • jimnotgym2 days ago

    An AI writes a parody of our entire community, and it is hilarious even to us. I love it

  • yett3 days ago

    One thing that the AI didn’t get right is the Linux kernel versioning. Linus likes to bump the major version around every x.19-x.20 release

  • funnyenough3 days ago

    it is comforting to know that the HN design persists.

    • roywiggins3 days ago |parent

      slashdot hasn't changed in a decade so it's not an unreasonable prediction!

      https://web.archive.org/web/20150801000000*/slashdot.org

  • n2d43 days ago

    I would love to see the hallucinated comments of these! Some seem interesting — I wonder how HN suggests to prevent ad-injection in AR glasses?

    • CamperBob23 days ago |parent

      If you ask for the comments, it will give you very plausible ones (see the HN simulator.)

  • xg153 days ago

    I like how you can have AI in your Neuralink, your contact lenses, your AR glasses, your text editor and your compiler at the same time!

  • sam-cop-vimes2 days ago

    Interesting that it included the trynia Launch HN which was on the front page yesterday!

  • LarsDu883 days ago

    IBM acquires OpenAI. I lol'd

    Gemini is predicting the total collapse of OpenAI

    Also generates the HackerNews pattern of "lets rewrite X in Rust/Zig"

  • BonoboIO3 days ago

    I definitely see the Gemini Cloud Service killing

  • SirMaster3 days ago

    So we went from Linux Kernel 5.18 to 6.4 in 13 months, and Gemini thinks it will take 10 years to go from 6.18 to 7.4? Yikes...

  • efitz3 days ago

    It forgot the “I got <complex modern software> running on my <ancient underpowered hardware>” pattern.

  • elif2 days ago

    I love that it predicts its own death. Wait maybe I'm terrified by that

  • zettie2 days ago

    That #20... Pretty bold move implying GTA VI has been released by then

  • vee-kay3 days ago

    There were 666 comments here when I opened this post. I hope that is not a foreboding omen. ;-)

  • d_silin3 days ago

    I like that the future is bright in Gemini's perception - number 1 post is about SpaceX Starship's success.

  • heywoods3 days ago

    Will HN be around in 10 years? I hope so.

  • runsonrum2 days ago

    I think I might generate one of these on a daily basis. Articles and all!

  • dizhn3 days ago

    Seeing htmx there made me chuckle. I really wish something like it will be browser native by then.

  • MontagFTB3 days ago

    What, no Doom running on Voyager 2?

  • jordanb3 days ago

    Is the implication from the first story that HLS is going to need nine tries to get to Tranquility?

  • usrbinbash2 days ago

    > AI progress is stalling. Human equivalence was a mirage

    Oh how the turn tables :D

  • flockonus3 days ago

    > Show HN: A text editor that doesn't use AI (github.com)

    Last but not least, this was a good laugh!

  • tonymet3 days ago

    Kudos that hackernews homepage didn't gain any bloat in 10 years -- still about 50kb .

  • p1dda2 days ago

    Hallucinating is really the only thing LLMs can do reliably and consistently

  • makach3 days ago

    I want to read each and every hallucinated post. Detecting deep fakes for seniors... Microsoft Office sub for $40

    yes

  • steve_taylor3 days ago

    The most accurate part about this is that HN still hasn't implemented dark mode.

  • Atlas6672 days ago

    Needs about 50 more "Here's why I do _____ and why it works"

  • TeMPOraL2 days ago

    Honestly? I kind of like this hypothetical future this purports to be a snapshot of. Sounds hopeful.

    (EDIT: Not so much after seeing the expanded variant with full submissions and comment threads...)

    Also, nailed HN perfectly.

  • codezero3 days ago

    I love this but also funny that it includes the Nia and NATS posts from today.

  • rob3 days ago

    It even nailed the 1999 12px font!

  • dlisboa2 days ago

    I love how, in 2035, fusion energy is still 10 years away.

  • seu3 days ago

    I have to admit that this is the first time I found something made by an AI ... funny.

  • mock-possum2 days ago

    > 27. The unexpected return of server-side rendering (htmx.org)

    Ha this one got me

  • Stevvo3 days ago

    Very optimistic. From what I've heard, ITER will not be operational in 10 years.

  • pbw3 days ago

    Gary Marcus, ha! He's generally not entirely wrong, but boy, is he annoying.

  • phodo3 days ago

    Will this be the new "pelican on a bicycle" test for frontier models?

  • octav1an2 days ago

    Change the HN navigation to: new | threads | past | future | …

  • zerofor_conduct3 days ago

    100% Rust kernel - I felt that

  • lwhi3 days ago

    I was hoping for 'will 2036 be the year of the Linux desktop' ..

  • tamimio2 days ago

    The users names are more funny and interesting than the titles

  • danans3 days ago

    I'm mostly struck by how incremental and unimaginative those articles are.

    • kytazo3 days ago |parent

      > Running LLaMA-12 7B on a contact lens with WASM (arxiv.org) How cool is that?

      • danans3 days ago |parent

        Sounds like a solution in search of a problem.

  • DrNosferatu3 days ago

    Very good!

    But I expected a humorous touch in stating that fusion is still 10 years away…

  • Animats3 days ago

    That's great!

    Take a look at the HTML. The layout is all tables!

  • bryanrasmussen3 days ago

    the future is already here and what's more it's evenly distributed.

  • satisfice3 days ago

    So, HN won’t change its look?

    • dyingkneepad3 days ago |parent

      It's been like this for more than 10 years, there's a good chance it won't change in 10 years.

      • ayewo3 days ago |parent

        Yea, that's the Lindy Effect.

        https://en.wikipedia.org/wiki/Lindy_effect

      • satisfice3 days ago |parent

        That’s what you always say.

        • monerozcash3 days ago |parent

          10 years from now HN will still be the same ARC monstrosity that nobody wants to touch, except it'll probably seem even more terrifying.

  • doubleorseven2 days ago

    it forgot the "who is still hiring and why" section. also the immigration attorney is still needed since climate change is a real thing

  • LennyHenrysNuts3 days ago

    That is legit hilarious.

  • JVerstry2 days ago

    Hands down the best post in years... Tears, tears...

  • rezaprima2 days ago

    The Ask HN about preventing ads in VR is hilarious

  • Foofoobar123453 days ago

    Missing "Why 2035 will be the year of the Linux desktop".

  • gary17the3 days ago

    Sh*t, still no word about "Rocky XXXVIII".[1]

    [1] "Airplane!" (1980)

  • skeltoac3 days ago

    Nice. Conspicuously missing a link to a wordpress.com blog.

  • cultofmetatron3 days ago

    > Show HN: A text editor that doesn't use AI (github.com)

    I felt that...

  • deadbabe3 days ago

    Shouldn’t there be more doom and gloom climate change headlines.

  • k1rd3 days ago

    The major problem is that I understand all the words. It is all the next version of technology we already have and know, down to the naming.

    Think about 10 years ago. No one, even on HN, knew what an agent or an LLM was, or any of this stuff. Or, to be fair, even why Trump was showing up on the feed at all.

    It has to be more confusing to be real.

  • mintflow3 days ago

    as a tailscale user and quite obsessed with tailscale related news, it's quite funny to see the tailscale on the starlink dish news...

  • reneberlin3 days ago

    Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

  • nunez3 days ago

    This is so unbelievably based. Very nice!!!!

  • jatins3 days ago

    Sad to see that leetcode has survived the next decade

  • francispauli3 days ago

    gemini has no love for people writing novel task/notes organizers in 2035, or was the problem solved somewhere in between?

  • layer83 days ago

    The user names check out a little too well.

  • tronicjester2 days ago

    IBM to acquire OpenAI is real.

  • Aardwolf3 days ago

    Hmm, nothing about quantum computing in there?

  • g-unit333 days ago

    Let's time travel to the comment section

  • JojoFatsani3 days ago

    This gave me a well-needed chuckle, thanks!

  • quantum_state3 days ago

    This is fun :-)

  • FistfulOfHaws3 days ago

    This is super fun! I tried something similar a few months ago using: https://websim.com/

    Had it do HN main feed, HN jobs and Product Hunt. It didn’t follow consistent styling but did generate articles and comments for each post. Some screenshots are in this post: https://world.hey.com/haws/the-ai-time-machine-7833669d

  • wowczarek3 days ago

    Really good results. And I understand the disclaimer when you try the comments section, but it would be amazing to see hallucinated comments as well.

  • eranation3 days ago

    > Google kills Gemini Cloud Services (killedbygoogle.com)

    In my view, Gemini 3.0 being able to laugh about their own creator, plus actually having a self-deprecating sense of humor, is the more impressive feat here. (both technically and from an alignment perspective)

  • benob3 days ago

    What, no reference to quantum or crypto?

  • sroussey3 days ago

    Webgpu v2 in 10 years, seems about right.

  • FitchApps3 days ago

    Text Editor that doesn't use AI :)

  • HackerThemAlla day ago

    Oh my, this is hilarious. Thanks!

  • fumblertzu2 days ago

    I would appreciate the same thing written by a human. That human would probably have lots of fun writing this.

  • bpoyner2 days ago

    No half life 3 released?

  • bloomingeek3 days ago

    Where did I put that damn time portal?

  • supportengineer3 days ago

    That is absolutely phenomenal!

  • xbar3 days ago

    Poor Jepsen.

  • habedi03 days ago

    Looks like it's over for Linux.

  • lacoolj2 days ago

    Based on the prompt you wrote, this wouldn't be a "Hallucination"

    And as I write this critique of your HN title, I see you have edited it since I last refreshed. I'm guessing a few others have already echoed this sentiment a few times.

    • dang2 days ago |parent

      Indeed so: https://news.ycombinator.com/item?id=46216933, https://news.ycombinator.com/item?id=46213179

  • nish__3 days ago

    > Is it time to rewrite sudo in Zig?

    Amazing!

  • jamiesonbecker3 days ago

    > 8. Google kills Gemini Cloud Services (killedbygoogle.com)

    So, Google renamed itself to Gemini?

    Instead of Google having a product named Gemini, Gemini has a product named Google.

    pattern recognition much?

    • Apocryphon3 days ago |parent

      Cloud Services specifically to accommodate Gemini and its dependent products.

    • KaiMagnus3 days ago |parent

      Someone please generate the comments for these posts next. This one is going to be funny and I don’t want to wait 10 years.

  • edgarvaldes3 days ago

    HN users: Let's ban AI comments in threads.

    HN users, later the same day: Let's vote an AI thread to the top.

    • mfro3 days ago |parent

      AI comments are insidious if the account is allowed to be indistinguishable from a human. Silly AI web content is another story. You're drawing a line where there is a very large grey area.

    • morshu90013 days ago |parent

      Because it's hilarious

    • fragmede3 days ago |parent

      edgarvaldes: What is humor? What is laughing? I'm not an LLM, you are!

    • encom2 days ago |parent

      Front page is already 50% AI topics, and now there's actual AI slop as well. I'm just about done with this fucking website.

    • elicash3 days ago |parent

      Almost like there are different people here who believe different things

  • GuinansEyebrows3 days ago

    great, it's the 2020s version of amusing ourselves to death.

  • willhunting13 days ago

    They got the UI right

  • RCitronsBroker2 days ago

    ai psychosis - now available for HN dwellers

  • insane_dreamer3 days ago

    > Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments

    This one definitely rings true.

  • mac3n2 days ago

    but where are the jobs postings?

  • wltr3 days ago

    >Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments

    That’s my favourite.

  • amelius3 days ago

    Still no flying cars?

    • WorldPeas3 days ago |parent

      and no high speed rail either

  • chromehearts2 days ago

    Hilarious ngl

  • Davidzheng3 days ago

    ok but there's too many important news for one day

  • kunley3 days ago

    It's not by an LLM. Written by hand and faked to be AI output.

  • ijidak3 days ago

    > Google kills Gemini Cloud Services.

    Lol.

    That's bad when even AI knows Google isn't going to keep a service around. Too funny.

  • outofpaper3 days ago

    Can we stop calling these confabulated artifacts "hallucinations"? Hallucinating is perceiving things that are not there; imagining and creating things that feel believable but are not real is confabulation. LLMs are imagination machines. They are Generative Artificial Intelligence, not perception models.

    • iambateman3 days ago |parent

      The comment you wrote was on a device which should not be called a “phone” either. The word hallucinate is probably just going to grow to mean “anything an AI imagined.”

    • keepamovin3 days ago |parent

      Dude, I agree with you. The first time I saw people derogatorily referring to LLM imaginings years ago as "hallucinations", seemingly to discredit them, I felt mad. Because it's not right. So this was my sort-of tongue-in-cheek reappropriation of the abused term for a positive context. You are permitted to enjoy :)

      Also: as the sibling comment by iambateman notes: yes.

  • mattmaroon3 days ago

    Not really what hallucinating means. That’s like saying JK Rowling hallucinated Harry Potter. But ok.

    • ayewo2 days ago |parent

      Perhaps fabricate is a better word, as suggested elsewhere in this thread?

  • buster3 days ago

    Awesome, love it!

  • kapravel2 days ago

    > Google kills Gemini Cloud Services (killedbygoogle.com)

    the self-burn is real

  • exacube3 days ago

    the only constant is HN's styling.

  • tacticus3 days ago

    > “As a tactic for planting misinformation in the enemy’s [networks], you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”

  • dev0p2 days ago

    >Sorry, this is Wednesday, December 10, 2025. You have not yet arrived at Sunday, December 9, 2035. This is not a time portal, not yet!

    BS. Generate it, you coward! (I love it)

  • ramon1562 days ago

    I've only now noticed the "Manual:" tag, probably indicating it was a "manual written post". I love this

  • __MatrixMan__3 days ago

    I feel like it should have at least one reference which is entirely unintelligible to us cave dwellers from 2025.

    > Waiting until your gradient phasor is fully unwound is a waste of whuffie

    ...or some such.

  • tommica3 days ago

    > Google kills Gemini Cloud Services (killedbygoogle.com)

    This will absolutely happen :D

  • winddude3 days ago

    haha, that's awesome!

  • bawejakunal3 days ago

    what a legend! amazing!

  • VerifiedReports3 days ago

    fabricate ≠ hallucinate

  • noeleon3 days ago

    this was a good laugh :D

  • srirangr3 days ago

    This imitation is so good, it almost feels like satire and reflects our collective preferences for upvoting articles on HN. Brilliant!

  • Redster2 days ago

    Am I the only one who opened this in a background tab and then when I got to it, was very confused? Lol.

  • cwmoore2 days ago

    Anyone mind if I just say: Thanks, I hate it?

  • devinprater3 days ago

    Holy crap it even felt like the HN front page with my screen reader. I thought I'd clicked the wrong link until I read the LLAMA 12 and such.

  • Bengalilol3 days ago

    > Google kills Gemini Cloud Services (killedbygoogle.com)

    by dang_fan

    Priceless!

    • Bengalilol3 days ago |parent

      Looks like someone thought it was a good handle <https://news.ycombinator.com/user?id=dang_fan>

  • terminatornet3 days ago

    haha, this is freakin epic! I asked gemini to make up some new york times articles from 10 years in the future

    - president buttigieg in hot water for being in epstein files

    - Interview with mayor elect Narhoz Inadmam who ran on a platform of free food

    - "Is 2035 AI in a bubble?"

    And as you can see it did NOT just take today's headlines and sort of tweak them slightly.

  • WhyOhWhyQ3 days ago

    Lmao we're using webassembly to run 7b parameter llms on contact lenses in the future. What a world it'll be.

  • kelvindegrees3 days ago

    "Google kills Gemini Cloud Services"

  • stevage3 days ago

    So much gold here. I enjoyed the #1: starship doesn't make it to the moon for another ten years.

  • fHr3 days ago

    absolute cinema

  • random33 days ago

    Love the functional programming entry, but disappointed it’s not the Year of the Linux Desktop.

  • chaitanyya3 days ago

    "IBM to acquire OpenAI" somebody should put this up on polymarket lol

  • frankfrank133 days ago

    absolutely dead at gemini thinking up

    > Google kills Gemini Cloud Services (killedbygoogle.com)

  • qwertytyyuu2 days ago

    “Google shuts down Gemini cloud services” lol, is amazing

  • conqrr3 days ago

    LMAO Google kills Gemini. Faraday cage for bedroom. So cheeky.

  • airocker3 days ago

    Google predicting that hn frontpage will not change at all? lol

  • pizlonator3 days ago

    > AI progress is stalling. Human equivalence was a mirage (garymarcus.com)

    lmao

  • jll292 days ago

    AI professor here. I know this page is a joke, but in the interest of accuracy, a terminological comment: we don't call it a "hallucination" if a model complies exactly with what a prompt asked for and produces a prediction, exactly as requested.

    Rather, "hallucinations" are spurious replacements of factual knowledge with fictional material, caused by the use of a statistical process (the pseudo-random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.

    [typo fixed]

    • dang2 days ago |parent

      (I should have thought of this yesterday but have just replaced 'hallucinates' with 'imagines' in the title...though one could object to that too...)

    • articlepan2 days ago |parent

      I agree with your first paragraph, but not your second. Models can still hallucinate when temperature is set to zero (aka when we always choose the highest probability token from the model's output token distribution).

      In my mind, hallucination is when some aspect of the model's response should be consistent with reality but is not, and the reality-inconsistent information is not directly attributable or deducible from (mis)information in the pre-training corpus.

      While hallucination can be triggered by setting the temperature high, it can also be the result of many possible deficiencies in model pre- and post- training that result in the model outputting bad token probability distributions.
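The temperature-zero case described here is easy to make concrete. A minimal sketch of sampling from logits (the function name and example logits are made up for illustration, not any real model's decoding loop):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits.

    At temperature == 0 this degenerates to greedy decoding (argmax),
    so any hallucination that survives cannot come from sampling noise.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random(0)
    # Softmax with temperature: higher T flattens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # random.choices accepts non-normalized weights.
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0))  # greedy: always index 0
```

Greedy decoding is deterministic given the logits, which is the point above: a model can still assert false things at temperature zero, because the bad probabilities were baked in before sampling ever happens.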

      • antonvs2 days ago |parent

        > In my mind, hallucination is when some aspect of the model's response should be consistent with reality

        By "reality", do you mean the training corpus? Because otherwise, this seems like a strange standard. Models don't have access to "reality".

        • KalMann2 days ago |parent

          > Models don't have access to "reality"

          This is an explanation of why models "hallucinate" not a criticism for the provided definition of hallucination.

          • antonvs2 days ago |parent

            That's a poor definition, then. It claims that a model is "hallucinating" when its output doesn't match a reference point that it can't possibly have accurate information about. How is that an "hallucination" in any meaningful sense?

      • ActivePattern2 days ago |parent

        I've never heard the caveat that it can't be attributable to misinformation in the pre-training corpus. For frontier models, we don't even have access to the enormous training corpus, so we would have no way of verifying whether or not it is regurgitating some misinformation that it had seen there or whether it is inventing something out of whole cloth.

        • Aurornis2 days ago |parent

          > I've never heard the caveat that it can't be attributable to misinformation in the pre-training corpus.

          If the LLM is accurately reflecting the training corpus, it wouldn’t be considered a hallucination. The LLM is operating as designed.

          Matters of access to the training corpus are a separate issue.

          • Workaccount22 days ago |parent

            I believe it was a Super Bowl ad for Gemini last year that had a "hallucination" in the ad itself. One of the screenshots of Gemini being used showed this "hallucination", which made the rounds in the news as expected.

            I want to say it was some fact about cheese that was in fact wrong. However, you could also see the source Gemini cited in the ad, and when you went to that source, it was some local farm's 1998-style HTML homepage, and that page had the incorrect factoid about the cheese.

          • CGMthrowaway2 days ago |parent

            > If the LLM is accurately reflecting the training corpus, it wouldn’t be considered a hallucination. The LLM is operating as designed.

            That would mean that there is never any hallucination.

            The point of original comment was distinguishing between fact and fiction, which an LLM just cannot do. (It's an unsolved problem among humans, which spills into the training data)

            • Aurornis2 days ago |parent

              > That would mean that there is never any hallucination.

              No it wouldn’t. If the LLM produces an output that does not match the training data or claims things that are not in the training data due to pseudorandom statistical processes then that’s a hallucination. If it accurately represents the training data or context content, it’s not a hallucination.

              Similarly, if you request that an LLM tells you something false and the information it provided is false, that’s not a hallucination.

              > The point of original comment was distinguishing between fact and fiction,

              In the context of LLMs, fact means something represented in the training set. Not factual in an absolute, philosophical sense.

              If you put a lot of categorically false information into the training corpus and train an LLM on it, those pieces of information are “factual” in the context of the LLM output.

              The key part of the parent comment:

              > caused by the use of statistical process (the pseudo random number generator

              • CGMthrowaway2 days ago |parent

                OK if everyone else agrees with your semantics then I agree

          • parineum2 days ago |parent

            The LLM is always operating as designed. All LLM outputs are "hallucinations".

            • Al-Khwarizmi2 days ago |parent

              The LLM is always operating as designed, but humans call its outputs "hallucinations" when they don't align with factual reality, regardless of the reason why that happens and whether it should be considered a bug or a feature. (I don't like the term much, by the way, but at this point it's a de facto standard).

          • eMPee5842 days ago |parent

            not that the internet contained any misinformation or FUD when the training data was collected

            also, statements with certainty about fictitious "honey pot prompts" are a problem; plausibly extrapolating from the data should be governed more by internal confidence.. luckily there are benchmarks now for that, i believe

      • julienreszka2 days ago |parent

        That's because of rounding errors

        • leecarraher2 days ago |parent

          i agree, it's not just the multinomial sampling that causes hallucinations. If that were the case, setting temp to 0 and just taking the argmax over the logits would "solve" hallucinations. While round-off error causes some stochasticity, it's unlikely to be the primary cause; rather, it's lossy compression over the layers that causes it.

          first compression: you create embeddings that need to differentiate N tokens; the JL lemma gives us a bound, and modern architectures are well above it. At face value, the embeddings could encode the tokens and discriminate them deterministically. But words aren't monolithic: they mean many things and get contextualized by other words. So despite being above the JL bound, the model still forces a lossy compression.

          next compression: each layer of the transformer blows up the input to Q, K, V, then compresses it back to the inter-layer dimension.

          finally there is the output layer, which at temp 0 is deterministic, but heavily path-dependent on getting to that token. The space of possible paths is combinatorial, so any non-deterministic behavior elsewhere will inflate the likelihood of non-deterministic output, including things like round-off. Heck, most models are quantized down to 4 or even 2 bits these days, which is wild!
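The quantization point at the end can be illustrated with a toy uniform quantizer (a hedged sketch with made-up numbers, not the scheme any real 4-bit model release uses):

```python
def quantize(weights, bits=4):
    """Uniformly quantize floats to 2**bits levels between min and max.

    The round-trip error of each value is at most half a step: this is
    the 'lossy compression' that perturbs logits even at temperature 0.
    """
    lo, hi = min(weights), max(weights)
    steps = 2 ** bits - 1
    step = (hi - lo) / steps
    return [lo + round((w - lo) / step) * step for w in weights]

w = [0.1, -0.3, 0.7, 0.05]
q = quantize(w, bits=2)  # only 4 representable values
err = max(abs(a - b) for a, b in zip(w, q))
print(q, err)
```

A per-weight error of up to half a step looks harmless, but accumulated over many layers of matrix multiplies it can flip which output logit ends up largest, changing the greedily decoded token.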

    • hbn2 days ago |parent

      "Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.

      If anything, I think all of their output should be called a hallucination.

      • Workaccount22 days ago |parent

        We don't know if anything knows anything because we don't know what knowing is.

        • alamortsubite2 days ago |parent

          On the other hand, once you're operating under the model of not knowing if anything knows anything, there's really no point in posting about it here, is there?

        • stingraycharles2 days ago |parent

          This is just something that sounds profound but really isn’t.

          Knowing is actually the easiest part to define and explain. Intelligence / understanding is much more difficult to define.

          • shagie2 days ago |parent

            I took a semester long 500 level class back in college on the theory of knowledge. It is not easy to define - the entire branch of epistemology in philosophy deals with that question.

            ... To that end, I'd love to be able to revisit my classes from back then (computer science, philosophy (two classes from a double major), and a smattering of linguistics) with the world state of today's technologies.

      • anonymous_sorry2 days ago |parent

        Others have suggested "bullshit". A bullshitter does not care (and may not know) whether what they say is truth or fiction. A bullshitter's goal is just to be listened to and seem convincing.

        • corygarms2 days ago |parent

          The awareness of the bullshitter is used to differentiate between 'hard' and 'soft' bullshit. https://eprints.gla.ac.uk/327588/1/327588.pdf

      • locknitpicker2 days ago |parent

        > "Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.

        If you pick up a dictionary and review the definition of "hallucination", you'll see something along the lines of "something that you see, hear, feel or smell that does not exist"

        https://dictionary.cambridge.org/dictionary/english/hallucin...

        Your own personal definition arguably reinforces the very definition of hallucination. Hallucinating models don't get things right. Why? Because their output contrasts with the content covered by their corpus: they output things that don't exist or weren't referred to in it, and that outright contradict factual content.

        > If anything, I think all of their output should be called a hallucination.

        No. Only the ones that contrast with reality, namely factual information.

        Hence the term hallucination.

    • bluejay23872 days ago |parent

      Want to second this. Asking the model to create a work of fiction and it complying isn't a pathology. Mozart wasn't "hallucinating" when he created "The Marriage of Figaro".

      • Olreich2 days ago |parent

        But many artists were hallucinating when they envisioned some of their pieces. Who's to say Mozart wasn't on a trip when he created The Marriage of Figaro?

        • DonHopkins2 days ago |parent

          Bill Atkinson was hallucinating when he envisioned HyperCard.

          https://patternproject.substack.com/p/from-the-mac-to-the-my...

          https://news.ycombinator.com/item?id=44530767

        • jb19912 days ago |parent

          That would have to be a very very long hallucination because it’s a huge opera that took a long time to write.

      • intrasight2 days ago |parent

        We don't know Mozart's state of mind when he composed.

        He didn't hallucinate the Marriage of Figaro but he may well have been hallucinating.

    • ayewo2 days ago |parent

      Terminology-wise, does this read like a better title instead?:

      Show HN: Gemini Pro 3 generates the HN front page 10 years from now

      • locknitpicker2 days ago |parent

        > Terminology-wise, does this read like a better title instead?:

        "Generates" does not convey any info about the nature of the process used to create the output. In this context, "extrapolates", "predicts", or "explores" sounds more suitable.

        But nitpicking over these words is pointless and goes off on a tangent. The use of the term "hallucination" refers to the specific mechanism used to generate this type of output, just like prompting a model to transcode a document and thereby generating an output that doesn't match any established format.

      • tim3332 days ago |parent

        I'd vote for imagines.

    • Obscurity43402 days ago |parent

      Wouldn't "confabulate"/"confabulation" be a better substitute for "hallucinating"?

    • saberience2 days ago |parent

      The OP clearly didn't mean "hallucination" as a bug or error in the AI, in the way you're suggesting. Words can have many different meanings!

      You can easily say: Johnny had some wild hallucinations about a future where Elon Musk ruled the world. It just means it was some wild speculative thinking. I read the title in this sense of the word.

      Not everything has to be nit-picked or overanalysed. This is an amusing article with an amusing title.

    • doe882 days ago |parent

      Exactly! This is the precise reason I didn't click through at first: from the title, I thought a page must have been somehow outputted/hallucinated by error. But luckily I then saw the number of votes, revised my choice, and saw a great page.

      I'm partial though, loving Haskell myself (as a monad_lover) i'm happy it wasn't forgotten too :)

    • jotaen2 days ago |parent

      To me, “imagine” would have been a more fitting term here.

      (“Generate”, while correct, sounds too technical, and “confabulate” reads a bit obscure.)

      • tangwwwei2 days ago |parent

        "imagine" gives too much credence of humanity to this action, which will continue the cognitive mistake we make of anthropomorphizing LLMs

    • psychoslave2 days ago |parent

      In French we call this kind of practice "affabulations". Maybe fraud, deception, or deceit are the closest matching translations for this context.

      That is what LLMs are molded to do (of course). But there is also the insistence of informed people on unceasingly using fallacious vocabulary. Sure, a bit of analogy can be didactic, but the current trend is rather to leverage every occasion to spread the impression that LLMs work with processes similar to human thought.

      A good analogy also communicates the fact that it is a mere analogy. So carrying the metaphor further is only going to accumulate more delusion than comprehension.

    • mihaaly2 days ago |parent

      Interesting.

      There are AI professors out there already!

    • nunodonato2 days ago |parent

      wouldn't the right term be 'confabulation'?

      • DonHopkins2 days ago |parent

        Latin: Extraclaudiposteriorifabricátio

        Pronunciation: ex-tra-clau-dee-pos-TE-ri-o-ri-fa-bri-KA-tee-o

        Meaning: "The act of fabricating something by pulling it from one’s posterior."

          extra- = out of
          claudi- (from claudere, close/shut) repurposed for “the closed place”
          posterior- = the backside
          fabricatio = fabrication, invention
        
        German: Poausdenkungsherausziehungsmachwerk

        Pronunciation: POH-ows-den-kungs-heh-RAUS-tsee-oongs-MAHKH-verk

        Meaning: "A contrived creation pulled out of the butt by thinking it up."

          Po = butt
          Ausdenkungs- = thinking-up
          Herausziehung = pulling-out
          Machwerk = contrived creation
        
        Klingon: puchvo’vangDI’moHchu’ghach

        Pronunciation: POOKH-vo vang-DEE-moakh-CHU-ghakh (roll the gh, hit the q hard, and use that throat ch like clearing your bat’leth sinuses)

        Meaning: "The perfected act of boldly claiming something pulled out from the butt."

          puch = toilet (a real Klingon word)
          -vo’ = from
          vang = act, behave, assert (real root)
          -DI’ = when (adds timing spice)
          -moH = cause/make
          -chu’ = perfectly / clearly / expertly
          -ghach = turns a verb phrase into a noun (canonical nominalizer)
      • rat99882 days ago |parent

        No, still too negatively connoted. "Writes" "Predicts" "caricatures" is closer.

        • kevindamm2 days ago |parent

          Have we abandoned the term "generate" already?

          • rat99882 days ago |parent

            Forgot about it, my human mind has its limits. I don't know about a "we" though. I'm not representative of anyone.

  • ieie33663 days ago

    I think this is an excellent example of AI-slop output.

    All the post titles are very "junk-food-like", maybe circlejerky, bound to generate a dopamine hit for the average lurker, just as a McDonald's burger does for the average normie.

    • da_grift_shift3 days ago |parent

      Just like the real site, no? ˙ ͜ʟ˙

      (Vouched. The hivemind [flagged] the wrongthink.)

  • plaidfuji3 days ago

    Now have it generate the articles and comments, too…

  • c0pi1ot3 days ago

    What about hallucinating new n-gate posts?

  • adamchol3 days ago

    Now we have to generate each of those articles.

  • reactordev3 days ago

    #8 is hilarious and on brand. killedbygoogle.com

  • earthnail3 days ago

    Oh please make it generate comments, too :D.

  • jbrooks843 days ago

    Boring

  • DevKoala3 days ago

    > IBM to acquire OpenAI (Rumor) (bloomberg.com)

    AYYYYLMAO

  • senfiaj2 days ago

    LOL, that was good.

  • guluarte3 days ago

    > Google kills Gemini Cloud Services

    lol

  • grenademeister2 days ago

    lmao thought something was wrong with my browser

  • eunos3 days ago

    > Google kills Gemini cloud

    Bruh moment

  • manthangupta1092 days ago

    Lol

  • cde-v3 days ago

    Hallucinates? Are AI bros trying to reclaim the word and take away its power or something?

  • roywiggins3 days ago

    Next up: hallucinate the contents of the links too!

    • keepamovin3 days ago |parent

      I remember there was a post a few days/weeks back that did that! It was very good. But that seems like too much work.

  • hnburnsy3 days ago

    From another LLM, not as satisfying as the OP

    -------------------------

    1. *Neuralink achieves full brain-computer interface, thoughts to text in real-time* (neuralink.com) 1245 points by elon2035 2 hours ago | hide | 387 comments

    2. *Fusion energy breakthrough: Unlimited clean power now commercial* (iter.org) 1123 points by fusionfan 3 hours ago | hide | 456 comments

    3. *Mars colony declares independence from Earth governments* (mars.gov) 987 points by martian1 4 hours ago | hide | 512 comments

    4. *AI wins Nobel Prize in Physics for solving quantum gravity* (nobelprize.org) 876 points by ai_nobel 5 hours ago | hide | 321 comments

    5. *Quantum computers crack all encryption, new post-quantum standards rushed* (nist.gov) 765 points by quantum_hacker 6 hours ago | hide | 678 comments

    6. *Immortality pill approved by FDA, life expectancy unlimited* (fda.gov) 654 points by eternal_life 7 hours ago | hide | 289 comments

    7. *Teleportation of matter demonstrated in lab* (cern.ch) 543 points by physicist42 8 hours ago | hide | 412 comments

    8. *Virtual reality universes surpass real economy in GDP* (metaverse.com) 432 points by vr_lord 9 hours ago | hide | 198 comments

    9. *Robot uprising averted by ethical AI framework* (asimovfoundation.org) 321 points by robot_ethics 10 hours ago | hide | 267 comments

    10. *Time dilation travel allows first tourists to future* (spacetime.travel) 210 points by time_traveler 11 hours ago | hide | 134 comments

    11. *Global UBI implemented using blockchain* (un.org) 198 points by satoshi2.0 12 hours ago | hide | 89 comments

    12. *Climate engineering reverses global warming, ice caps restored* (geoengineer.com) 187 points by climate_saver 13 hours ago | hide | 201 comments

    13. *Human-AI hybrids compete in Olympics* (olympics.org) 176 points by hybrid_athlete 14 hours ago | hide | 156 comments

    14. *Dark matter harvested for energy* (nasa.gov) 165 points by cosmos_explorer 15 hours ago | hide | 123 comments

    15. *Psychedelic therapy cures all mental illness* (who.int) 154 points by mind_healer 16 hours ago | hide | 98 comments

    16. *Nanobots eradicate cancer worldwide* (cancer.org) 143 points by nano_doc 17 hours ago | hide | 76 comments

    17. *Holographic displays replace screens* (apple.com) 132 points by holo_dev 18 hours ago | hide | 54 comments

    18. *Self-driving flying cars now standard* (tesla.com) 121 points by auto_pilot 19 hours ago | hide | 43 comments

    19. *Brain uploading to cloud for immortality* (google.com) 110 points by cloud_mind 20 hours ago | hide | 32 comments

    20. *Alien contact confirmed, messages decoded* (seti.org) 99 points by et_searcher 21 hours ago | hide | 21 comments

    21. *Synthetic food ends hunger* (beyondmeat.com) 88 points by food_innovator 22 hours ago | hide | 10 comments

    22. *Ocean cities house billions* (seasteading.org) 77 points by ocean_dweller 23 hours ago | hide | 9 comments

    23. *Mind reading tech banned for privacy* (eff.org) 66 points by privacy_advocate 1 day ago | hide | 8 comments

    24. *Superintelligence alignment solved* (openai.com) 55 points by agi_master 1 day ago | hide | 7 comments

    25. *Wormhole travel to other stars* (spacex.com) 44 points by star_traveler 1 day ago | hide | 6 comments

    26. *Augmented reality contact lenses ubiquitous* (meta.com) 33 points by ar_vision 1 day ago | hide | 5 comments

    27. *Genetic editing eliminates hereditary diseases* (crispr.com) 22 points by gene_editor 1 day ago | hide | 4 comments

    28. *Renewable energy at 100% global* (iea.org) 11 points by green_energy 1 day ago | hide | 3 comments

    29. *Cyborg enhancements for all* (darpa.mil) 9 points by cyborg_human 1 day ago | hide | 2 comments

    30. *Singularity achieved, what now?* (kurzweilai.net) 8 points by future_seer 1 day ago | hide | 1 comment

  • artursapek2 days ago

    lol Gemini killed

  • BizarroLand3 days ago

    I asked ChatGPT to do the same, it was rather dystopian in comparison:

    Hacker News — December 9, 2035 (Dystopian Edition)

    (All links fictional but realistic)

    1. Amazon pauses same-day delivery after logistics AI outage strands 22M packages (reuters.com) 1,402 points by supplychainfail 5 hours ago | 512 comments

    2. Google merges Drive, Gmail, Docs, Maps, Calendar into “Google Life” — opting out requires a support call (blog.google) 1,210 points by privacyisdead 6 hours ago | 689 comments

    3. US announces “Temporary Broadband Stabilization Fee”; ISPs increase prices 30% overnight (fcc.gov) 1,008 points by ispescapee 7 hours ago | 344 comments

    4. OpenAI suspends 40% of API keys after new worm spreads through agent-to-agent messaging (openai.com) 927 points by llmsec 3 hours ago | 382 comments

    5. Show HN: “ColdBooter” – A tool to back up your cloud VM before the provider reclaims it with no notice (coldbooter.io) 780 points by survivethecloud 2 hours ago | 192 comments

    6. Apple fined €8B for shipping non-removable batteries in “Environmental Edition” iPhone (europa.eu) 754 points by greenwashhunter 10 hours ago | 316 comments

    7. LinkedIn replaces activity feed with AI-generated “Career Stories” that users cannot disable (linkedin.com) 710 points by corp_life 8 hours ago | 267 comments

    8. China’s new export restrictions cut global GPU availability by 60% (ft.com) 701 points by chipboom 9 hours ago | 414 comments

    9. Linux 8.6 maintainers warn of mass CVEs after corporations abandon LTS patch sponsorships (kernel.org) 632 points by ossburnout 11 hours ago | 255 comments

    10. Ask HN: Anyone else locked out of their homes after the SmartKey cloud migration? 601 points by keylessandhomeless 4 hours ago | 310 comments

    11. US healthcare providers hit by nationwide outage of Cerner-Epic merger “CareSync Cloud” (wsj.com) 577 points by sysadmdespair 12 hours ago | 203 comments

    12. Meta to require facial-expression telemetry for “engagement quality optimization” in Horizon apps (meta.com) 530 points by metaescalates 3 hours ago | 421 comments

    13. Starlink announces 5 TB/mo cap; remote communities report complete service loss (starlink.com) 502 points by dishdown 5 hours ago | 158 comments

    14. New DMCA expansion criminalizes “filter removal,” affecting adblockers and local inference runtimes (congress.gov) 488 points by freedomtoadblock 7 hours ago | 389 comments

    15. AT&T sunsets 4G; millions of medical devices lose connectivity (theverge.com) 455 points by techdebtkills 10 hours ago | 197 comments

    16. Show HN: “ShellSafe” – A terminal wrapper that prevents AI-suggested commands from wiping your system (shellsafe.app) 430 points by iaccidentallysudo 2 hours ago | 111 comments

    17. US CISA: 42% of corporate networks now rely on AI agents with no audit logging (cisa.gov) 402 points by auditnow 6 hours ago | 188 comments

    18. The Great Repo Archival: GitHub purges all inactive repos >5 years to “reduce storage load” (github.blog) 388 points by codearcheologist 9 hours ago | 320 comments

    19. Mastodon instances collapse under moderation load after EU’s Automated Speech Mandate (mastodon.social) 350 points by fedifragile 7 hours ago | 144 comments

    20. NYC adopts automated congestion fines after human review team eliminated (nytimes.com) 332 points by finesallthewaydown 4 hours ago | 201 comments

    21. Dropbox raises base plan to $49/month, cites “AI compute costs” (dropbox.com) 301 points by storageinflation 11 hours ago | 176 comments

    22. Open-source maintainers strike after 3rd corporation claims their work as proprietary training data (github.com) 290 points by maintainerburnout 6 hours ago | 120 comments

    23. FEMA: 2025–2035 wildfire season officially declared a “decade-long emergency” (fema.gov) 268 points by cookedagain 12 hours ago | 112 comments

    24. Quantum ransomware group QShadow hits 11 banks using break-through key recovery exploit (krebsonsecurity.com) 250 points by qubitcrime 3 hours ago | 98 comments

    25. Show HN: OfflineLAN – A mesh-network toolkit for neighborhoods preparing for rolling blackouts (offlinelan.net) 231 points by diynetworker 1 hour ago | 44 comments

  • hn_throwaway_993 days ago

    This is awesome, but a minor quibble with the title: "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-year-in-the-future HN front page, and that's exactly what it did. "Hallucinates" means it randomly makes stuff up but purports it to be the truth. If someone asks me to write a story for a creative writing class, and I do, you wouldn't say I "hallucinated" the story.

    • dang2 days ago |parent

      (I should have thought of this yesterday but have just replaced 'hallucinates' with 'imagines' in the title...though one could object to that too...)

    • navane2 days ago |parent

      It's so very weird to see this called "hallucinate", as we have all more or less used that word for "made up erroneously".

      Is this a push to override the meaning and erase the hallucination critique?

      • randomtoast2 days ago |parent

        At some point, no matter how something is mentioned, someone will offer criticism. I guess that in roughly 20% of all HN front page posts, at least one person comments on the terminology used. I do not see this as an argument against using accurate terminology, but rather as a reminder that it is impossible to meet everyone's expectations.

        There are other terms that are similarly controversial, such as "thinking models". When you describe an LLM as "thinking", it often triggers debate because people interpret the term differently and bring their own expectations and assumptions into the discussion.

    • zwnow2 days ago |parent

      If someone asked you, you would know about the context. LLMs are predictors; no matter the context length, they never "know" what they are doing. They simply predict tokens.

      • block_dagger2 days ago |parent

        This common response is pretty uninteresting and misleading. They simply predict tokens? Oh. What does the brain do, exactly?

        • adammarples2 days ago |parent

          We don't know how.

          • digbybk2 days ago |parent

            I guarantee that once we do know people will start appending the word “just” to the explanation. Complex behaviors emerge from simple components. Knowing that doesn’t make the emergence any more incredible.

        • zwnow2 days ago |parent

          The brain has intrinsic understanding of the world engraved in our DNA. We do not simply predict tokens based on knowledge; we base our thoughts on intelligence, emotions, and knowledge. LLMs have neither intelligence nor emotions. If your brain simply predicts tokens, I feel sorry for you.

          Edit: it really does not surprise me that AI bros downvote this. Expecting people who want to make themselves obsolete to understand human values was a mistake.

          • pseidemann2 days ago |parent

            > The brain has intrinsic understanding of the world engraved in our DNA.

            This is not correct. The DNA encodes learning mechanisms shaped by evolution. But there is no "Wikipedia" about the world in the DNA. The DNA is shaped by the process of evolution, and is not "filled" by seemingly random information.

            • zwnow2 days ago |parent

              > But there is no "Wikipedia" about the world in the DNA.

              I'm surprised as to how you got to that conclusion from my wording. I never claimed you have something like a knowledge base in your DNA...

              • pseidemann2 days ago |parent

                It's your first sentence. The one I have quoted.

          • Timwi2 days ago |parent

            I'm not an AI bro and I downvoted mostly because of the addendum.

        • wafflemaker2 days ago |parent

          It does exactly the same, predicts tokens, but it's totally different and superior to LLMs /s

          OTOH, brain tokens seem to be concept based and not always linguistic (many people think solely in images/concepts).

          • KalMann2 days ago |parent

            > It does exactly the same, predicts tokens,

            That is an absolutely wild claim you've made. You're being way too presumptuous.

          • ricardobeat2 days ago |parent

            LLMs are “concept based” too, if you can call statistical patterns that. In a multi-modal model the embeddings for text, image and audio exist in the same high-dimensional space.

            We don’t seem to have any clue if this is how our brain works, yet.

    • sankalpkotewar3 days ago |parent

      "Predicts"

  • throwaway2903 days ago

    Weak.

    If you look at the HN front page from a few years back, there are almost no LLM-related posts. Now there are multiple each day. An LLM didn't grasp that change; it can't creatively invent a new trend or a larger picture of what changes in the world. So ten years out, it still paints half the posts as being about AI.

    You can spend a few dozen prompts on getting it to do what you want, but then you might as well just spend a couple of hours writing that front page yourself, be more satisfied, and the result would be funnier.

  • krupan3 days ago

    Hacker News Guidelines

    What to Submit

    On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

    Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.

    • minimaxir3 days ago |parent

      This is fun. Fun is allowed on Hacker News, just disappointingly rare.

      • krupana day ago |parent

        Mindless fun is not what Hacker News is for. There are a million other places on the internet for that. Hacker News is special because it has resisted devolving into yet another "post your memes and brain rot here" website for a good long time.

        • minimaxira day ago |parent

          OP is neither memes nor brainrot.

          Satirizing Hacker News has been a thing since its inception.

      • KrempelKrakel2 days ago |parent

        Now you woke me up, but what happened to the Opera 12 browser? (somewhere on the internet... :) "It's not enough that AI distorted reality for millions and spread its poison, eliminating millions of jobs; if content doesn't gain traction and people don't seek validation or stir up drama, the future of warfare is autocomplete": Please complete, ... Krempel! Der Krakel