ChatGPT Containers can now run bash, pip/npm install packages and download files (simonwillison.net)
339 points by simonw 16 hours ago | 251 comments
  • tgq291512 hours ago

    Congratulations. One insecure buggy code generator connected to an insecure packaging "system", PyPI.

    We are eagerly awaiting Claude Launch, which will be connected to ICBM bases. The last thing humanity will hear is a 100 page boring LLM written mea culpa by Amodei, where he'll say that he has warned about the dangers but it was inevitable.

    • latexr9 hours ago |parent

      — Why are you launching nukes? No one asked you to obliterate humanity.

      — You’re absolutely right. I should not have done that. Would you like me to help undo the launch?

      — Yes! Quickly! Do it!

      — <completely made up crap which does not work>

      https://www.newyorker.com/cartoon/a16995

    • wartywhoa23an hour ago |parent

      Being Russian and having heard about the horrors of war since my childhood, I always wondered how fascism, Nazis and WWII managed to become reality in the 20th century.

      Then I witnessed the answers unfolding before my eyes in real time: torrential TV and Web propaganda, warmongering, nationalism and, worst of all, total acceptance of the unacceptable in a critically large portion of the country's population. Among the grandchildren of those who fought against the same things at the price of tens of millions of lives. Immediately after the Crimean takeover it was clear to me that there would be war. Many denied this, mocking me and calling me a tinfoil hatter.

      Well, I also always used to wonder who those morons were who let things go south in Terminator, 1984, The Matrix, Cat's Cradle and other well-known dystopias: what kind of people were they, and what were they thinking?

      It doesn't really matter that these concerns are on the opposite sides of the imaginary axis.

      What really matters is this universal drive in too many people to dig their own and the next guy's graves, always finding an excuse: "If not us, then someone else will do it." And: "The times are different now." And: "So you're comparing AI and fascism?"

    • naruhodo5 hours ago |parent

      Mechahitler[1] now has a job at the Pentagon.[2]

      [1] https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...

      [2] https://www.bbc.com/news/articles/c628d9mre3go

    • zekejohn2 hours ago |parent

      If it's in a secured and completely isolated sandbox that gets destroyed at the end of the request, then how could it be “insecure”?

      • ludvigk2 hours ago |parent

        That “completely isolated” sandbox is connected to the internet on one end, and to an insecure human on the other.

    • DarkNova6an hour ago |parent

      Skynet is just another word for "Cloud", you know.

  • simonw16 hours ago

    Regular default ChatGPT can also now run code in Node.js, Ruby, Perl, PHP, Go, Java, Swift, Kotlin, C and C++.

    I'm not sure when these new features landed because they're not listed anywhere in the official ChatGPT release notes, but I checked it with a free account and it's available there as well.

    • 1980phipsi9 hours ago |parent

      I was able to install the D language compiler DMD by providing a .deb file.

      https://chatgpt.com/share/69781bb5-cf90-800c-8549-c845259c33...

    • piskov12 hours ago |parent

      Shame no C# in that list

      • martinald10 hours ago |parent

        Probably (?) not related, but there is an issue with Claude Code for web with NuGet: it doesn't support the proxy auth mechanism that Anthropic gives it. I wonder if it's the same problem here.

  • candiddevmike14 hours ago

    Seems like everyone is trying to get ahead of tool calling moving people "off platform" and creating differentiators around what tools are available "locally" to the models etc. This also takes the wind out of the sandboxing folks, as it probably won't be long before the "local" tool calling can effectively do anything you'd need to do on your local machine.

    I wonder when they'll start offering virtual, persistent dev environments...

    • simonw14 hours ago |parent

      Claude Code for the web is kind of a persistent virtual dev environment already.

      You can start a session there and chat with it to get a bunch of work done, then come back to that session a day later and the virtual filesystem is in the same state as when you left it.

      I haven't figured out if this has a time limit on it - it's possible they're doing something clever with object storage such that the cost of persisting those environments is really low, see also Fly's Sprites.dev: https://fly.io/blog/design-and-implementation/

      • esperent13 hours ago |parent

        It's so incredibly buggy though. I end up with hung sessions stuck on "starting claude code" every second or third time. After losing work a few times I'm done with it. I'll check back in a few months and see if it's in better shape.

        • sersi4 hours ago |parent

          I just decided to create a VM for my Claude Code with strict network controls, so it can't access my internal network, and I limit exactly what gets shared with it.

    • yoyohello1314 hours ago |parent

      > I wonder when they'll start offering virtual, persistent dev environments...

      A lot of companies have been wanting to move in this direction. Instead of maintaining a fleet of machines, you just get a bunch of thin clients and pay Microsoft or whoever to host the actual workloads. They already do this 'kiosk' style stuff for a lot of front-line staff.

      Honestly, not having my own local hardware for development sounds like a living hell, but seems like the way we are going.

      • simonw14 hours ago |parent

        Coding agents are a particularly good fit for disposable development environments because of the risk of them messing things up. If the entire environment is ephemeral the worst that can happen (aside from private source code leaks to a malicious third party) is the environment gets trashed and you have to start over in a new one.

      • ljm13 hours ago |parent

        Coming full circle to renting time from a mainframe.

      • Imustaskforhelp13 hours ago |parent

        We are gonna have YOLO agents that deploy directly to a website (technically exe.dev already does that for me when I ask it to generate Golang projects lol).

        Honestly it kind of bores me, or (overwhelms?) me, because now I think okay, I'll do this, then that, then that, and drastically expand the scope of the project. But that comes with its own fatigue, and with the limits of free tokens or context on exe.dev, so I end up publishing it on a git provider, git-ingesting it, pasting that into Gemini in the browser to ask for updates (it has 1 million context), and then pasting the result into Opencode with an OpenRouter Devstral key.

        I used this workflow to drastically improve the UI of a project, but aside from some tinkering, I felt like the "fun" of the project definitely got reduced.

        It was always fun for me to use LLMs when I was in the loop (I didn't use agents, just a copy-paste workflow from the web), but now agents have kind of replicated that too and have gotten (I must admit) pretty good at it.

        I don't know man, any thoughts on how to make such things fun again? When LLMs first came out, or even before agents, just creating single scripts was fun, but creating whole projects with huge scope really sucks the fun out of it imo.

        • fragmede10 hours ago |parent

          If you like juggling, how many tasks in how many epics in how many projects are you working on at the same time? It's not for everyone tho.

    • jkelleyrtp10 hours ago |parent

      I started building something for the dioxus team to have access to mac/linux persistent and ephemeral dev envs with vnc and beefy cpu/mem.

      Nobody offered multiplatform and we really needed it!

      https://skyvm.dev

  • distalx8 hours ago

    This is either going to save hours… or create very educational outages.

    • stoneforger5 hours ago |parent

      If the agent were able to update the model, that would be educational for the model, no one else.

  • dangoodmanUT10 hours ago

    Giving agents Linux has compounding benefits in our experience. They're able to sort through weirdness that normal tooling wouldn't allow. For example, they can read an image, get an error back from the API, and see it wasn't the expected format. They read the magic bytes to see it was a JPEG despite being named .png, and read it correctly.

    • storystarling3 hours ago |parent

      Matches my experience with print-on-demand workflows. I tried using vision models to validate things like ICC profiles and total ink density, but they usually just hallucinate that the file is compliant. I ended up giving the agent access to ImageMagick to run analysis directly. It’s the only reliable way to catch issues before sending files to fulfillment, otherwise you end up eating the cost of failed prints.
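
      For illustration, the kind of direct check the agent ends up running is something like this minimal shell sketch (the filename is hypothetical; identify ships with ImageMagick, and the exact output layout varies by version):

        # does the file carry an embedded ICC profile at all?
        identify -verbose artwork.tiff | grep -i icc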

    • ndsipa_pomu15 minutes ago |parent

      > They read the magic bytes to see it was a jpeg despite being named .png, and read it correctly.

      Maybe I'm missing something, but it seems trivial to implement reading the magic bytes. I haven't tested it, but I'd expect most linux image displayers/editors to automatically work with misnamed files as that is almost entirely the purpose of magic bytes.
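
      On Linux it's basically a one-liner; file ignores the extension entirely and reads the magic bytes (filename hypothetical, output abbreviated):

        file kitten.png
        # -> kitten.png: JPEG image data ... (despite the .png extension)
        xxd -l 4 kitten.png
        # -> ff d8 ff e0 is a JPEG signature; a real PNG starts 89 50 4e 47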

      Personally, I think Microsoft is to blame for everyone relying on file extensions too much as it was a bad idea which led to a lot of security issues.

  • sheepscreek9 hours ago

    Nice work detective Simon! I love these “discovery” posts the most because you can’t find this stuff anywhere.

    • go_photon_go9 hours ago |parent

      Absolutely. When people discover and share, there's something fun to it beyond press releases and commentary. Creative and inspiring post

  • jmacd14 hours ago

    I wonder how long npm/pip etc. will even make sense.

    Dependencies introduce unnecessary LOC and features, which are, more and more, just written by LLMs themselves. It is easier to just write the necessary functionality directly. Whether that is more maintainable or not is a bit YMMV at this stage, but I would wager it is improving.

    • physicsguy2 hours ago |parent

      What a bizarre comment. Take something like NumPy: it has a hard dependency on BLAS implementations, where numerical correctness is highly valued and a correct implementation requires deep thinking, as does performance. It's written in a different language, again for performance, so an LLM would have to implement all of those things too. What's the utility in burning energy to regenerate all of this when implementations already exist?

      • hluskaan hour ago |parent

        What do supply chain attacks look like against one of these containers?

    • unixhero3 hours ago |parent

      The most popular modules downloaded off pip and npm are not singular simple functions and cannot easily be rewritten by an LLM.

      Scikit-learn

      Pandas

      Polars

    • ford7 hours ago |parent

      Interesting thought (I think recently more than ever it's a good idea to question assumptions), but IMO abstractions are as important as ever.

      Maybe the smallest/most convenient packages (looking at you, is-even) are obsolete, but meaningful packages still abstract a lot of complexity that IMO isn't easier to one-shot with an LLM.

      • whazor3 hours ago |parent

        Concretely, when you use Django, underneath you have CPython, then C, then assembly, and finally machine code. I believe LLMs have been much better trained on each layer than going end-to-end.

    • fendy30023 hours ago |parent

      I consider packages with over 100k downloads production-tested. Sure, an LLM can roll some itself, but when the many edge cases appear (which may already be handled by public packages), you will need to handle them.

      • embedding-shapean hour ago |parent

        Don't base anything on download numbers alone: not only are they easily gameable, it's enough for like 3 small companies to use a package, pushing commits individually with CI triggering on every new commit, for that number to lose any sort of meaning.

        Vanity metrics should not be used for engineering decisions.

    • sersi4 hours ago |parent

      Well, you do need to vet dependencies, and I wish there were a way to exclude purely vibe-coded dependencies that no human reviewed. But for well-established libraries, I do trust well-maintained, well-designed, human-developed code over AI slop.

      Don't get me wrong, I'm not a luddite. I use Claude Code and Cursor, but the code generated by either of those is nowhere near what I'd call good maintainable code, and I end up having to rewrite/refactor a big portion before it's in any halfway decent state.

      That said, with the most egregious packages like left-pad etc. in the Node.js world, it was always a better idea to build your own instead of depending on them.

      • hdjrudni2 hours ago |parent

        I've been copy-pasting small modules directly into my projects. That way I can look them over and see if they're OK and it saves me an install and possible future npm-jacking. There's a whole ton of small things that rarely need any maintenance, and if they do, they're small enough that I can fix myself. Worst case I paste in the new version (I press 'y' on github and paste the link at the top of the file so I can find it again)

    • kristianp13 hours ago |parent

      At times I wonder why X TUI coding agent was written in JS/TS/Python; why not use Go if it's mostly LLM-coded anyway? But that's mostly my frustration at having to wait for npm to install a thousand dependencies, instead of one executable plus some config files. There are also support libraries, like terminal UI, that differ in quality between platforms.

      • hdjrudni2 hours ago |parent

        Funny because as a non-Go user, the few Go binaries I've used also installed a bunch of random stuff.

        This can be fixed in npm if you publish pre-compiled binaries but that has its own problems.

        • zenmacan hour ago |parent

          >the few Go binaries I've used also installed a bunch of random stuff.

          Same goes for Rust. Sometimes one package implicitly imports another in a different version, and poring over cargo tree output to resolve the issue just doesn't seem very appealing.

    • baby_souffle11 hours ago |parent

      As long as "don't roll your own crypto" is considered good advice, you'll have at least a few packages/libraries that'll need managing.

      For a decent number of relatively pedestrian tasks though, I can see it.

      • emjan hour ago |parent

        LLMs are great at the roll-your-own-crypto footgun. They will tell you to remember all these things that are important, and then ignore their own tips.

    • TZubiri14 hours ago |parent

      This is like saying Wikipedia doesn't make sense because there's now Grokipedia

      • GuinansEyebrows14 hours ago |parent

        there are people (on Hacker News Dot Com, even) who believe this without a shred of shame or irony.

    • PunchyHamster2 hours ago |parent

      You have insane delusions about how capable LLMs are, but even assuming it's somehow true: downloading deps instead of hallucinating more code saves you tokens.

      • hluskaan hour ago |parent

        And your opinions on how average people use these tools are 100% accurate?

    • throwaway20277 hours ago |parent

      That was already the case for a lot of things like is-even.

    • letsgethigh12 hours ago |parent

      best to write assembly instead.

  • Fernicia13 hours ago

    Has Gemini lost its ability to run JavaScript and Python? I swear it could when it launched, but now it's saying it doesn't have the ability. Annoying regression when Claude and ChatGPT are so good at it.

    • tj800x13 hours ago |parent

      This regression seems to have happened in the past few days. I suspected it was hallucinating the run and confirmed it by asking Gemini to output the current date/time. The UTC time it reported was in the future relative to my clock. Some challenging mathematics was generating wrong results. Gemini will acknowledge something is wrong if you push it to explain the discrepancies, but can't explain it.

  • randomtoast14 hours ago

    Maybe soon we'll have single-use applications, where ChatGPT writes an app for you on the fly in a cloud sandbox; you interact with it in the browser to fulfill your goal, and afterwards the app is shut down and thrown away.

    • twostorytower14 hours ago |parent

      You can already do this.

    • Imustaskforhelp13 hours ago |parent

      exe.dev (though there are alternatives like sprites.dev etc. too)

  • pplonski864 hours ago

    Thank you for sharing. Is there a new container for each code run, or does it stay the same for the whole conversation?

    • aryehof3 hours ago |parent

      It’s maintained for the conversation. You can ask it for details like this.

  • skybrian14 hours ago

    Not sure if this is still working. I tried getting it to install cowsay and it ran into authentication issues. Does it work for other people?

    • aryehof3 hours ago |parent

      I could even get it to download the ruby cowsay gem from rubygems and run it with some provided text. An alternative is to attach the gem to the conversation or provide a publicly available url.

    • simonw14 hours ago |parent

      Can you share the transcript?

      • skybrian12 hours ago |parent

        https://chatgpt.com/share/6977f9d7-ca94-8000-b1a0-8b1a994e58...

        The transcript doesn't show it (I think it faked it) but here's the code in the sidebar:

        > bash -lc mkdir -p /mnt/data/cowsay-demo && cd /mnt/data/cowsay-demo && npm init -y >/dev/null && npm i cowsay@latest >/dev/null && echo 'Installed cowsay version:' && node -e "console.log(require('cowsay/package.json').version)"

          npm error code E401
          npm error Incorrect or missing password.
          npm error If you were trying to login, change your password, create an
          npm error authentication token or enable two-factor authentication then
          npm error that means you likely typed your password in incorrectly.
          npm error Please try again, or recover your password at:
          npm error   https://www.npmjs.com/forgot
          npm error
          npm error If you were doing some other operation then your saved credentials are
          npm error probably out of date. To correct this please try logging in again with:
          npm error   npm login
          npm error A complete log of this run can be found in: /home/oai/.npm/_logs/2026-01-26T21_20_00_322Z-debug-0.log
        
        > Checking and overriding npm registry

        > It seems like the registry option is protected, possibly pointing to an internal OpenAI registry that requires authentication. To bypass this, I can override the registry in the command with npm i cowsay --registry=https://registry.npmjs.org/. Let's give this a try and see if it works.

        It's unclear if that helped.

        I tried again and it worked. It seems like I have to ask for it to do things "in the container" or it will just give me directions about how to do it.

        • simonw12 hours ago |parent

          OK that's really weird. Intermittent environment bug perhaps?

  • carterschonwald12 hours ago

    but… will GPT still get confused by the ellipses that its document viewer UI hack adds? Probably yes.

  • xnx13 hours ago

    How much compute do you get in these containers? Could I have it run whisper on an mp3 it downloads?

    • simonw13 hours ago |parent

      That might work! You would have to figure out how to get Whisper working in there but I'm sure that's possible with a bit of creativity concerning uploading files and maybe running a build with the available C compiler.

      It appears to have 4GB of RAM and 56 (!?) CPU cores https://chatgpt.com/share/6977e1f8-0f94-8006-9973-e9fab6d244...
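
      A hedged sketch of the obvious route, which I haven't verified in the container (the mp3 name is made up, and openai-whisper also needs ffmpeg on the PATH):

        pip install -U openai-whisper   # pulls in torch, so it's a big download
        whisper episode.mp3 --model tiny --output_format txt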

      • tintor13 hours ago |parent

        Cores are shared with other containers.

      • Imustaskforhelp13 hours ago |parent

        Huh...

        If people are getting this for free, or even as a subsidized part of a ChatGPT offering, then low-end providers with their $7/year deals are somewhat under threat if ChatGPT provides 56 cores for free. It doesn't seem right to provide so many cores for (free??)

        Are you running this on your free account, as you mention in the blog post, Simon, or on your paid account?

        • storystarling3 hours ago |parent

          You are likely just seeing the host topology. Even if the container reports 56 cores, the actual compute is almost certainly throttled via cgroups to keep the unit economics viable. I would be surprised if you can sustain more than a fraction of a vCPU before hitting a hard quota.
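
          One way to check from inside the container, assuming a cgroup v2 mount is visible:

            nproc                        # reports the host topology, e.g. 56
            cat /sys/fs/cgroup/cpu.max   # the actual quota: "200000 100000" means 2 vCPUs; "max" means unthrottled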

        • simonw13 hours ago |parent

          My $20/month paid account.

          I used a free account to check if the feature was available there and it tried to get me to upgrade two prompts in (just enough for me to confirm the container worked and could install packages).

          • Imustaskforhelp13 hours ago |parent

            Oh thanks for your reply Simon!

            > I used a free account to check if the feature was available there and it tried to get me to upgrade two prompts in (just enough for me to confirm the container worked and could install packages).

            Wait, it tried... to make you upgrade your ChatGPT account from free to paid? Sorry, I didn't get what you meant here.

            (Funnily enough, I asked ChatGPT what it thinks of your text and it also thinks ChatGPT was asking you to pay up.)

            Is this thing (maybe with some additions to make it like sprites.dev?) plus some ad features for basic queries going to be how OpenAI monetizes?

            I mean, I am part of the lowend community (an indie community of hosting providers), and they are all really pissed, with some shutting down because of RAM price increases. OpenAI has all the RAM in the world right now, so is it trying to be a monopoly in this instance?

            I just found it really dystopian that it asked you to pay. Can you share a pic of it if possible, or share the free conversation? Heck, I might have to try it now on my free account as well.

            Curiosity's piqued right now.

            • simonw13 hours ago |parent

              On my free ChatGPT account I ran a prompt telling it to write and execute hello world in a bunch of languages: https://chatgpt.com/share/6977aa7c-7bd8-8006-8129-8c9e25126f...

              It did what I asked, proving that the container feature works even for free accounts, but then displayed a message saying that I was out of free prompts and would need to upgrade or wait before I could run more.

        • goinghjuk13 hours ago |parent

          By default containers do not limit core count; you'll see everything available on the host/VM.

          These cores are shared with all the other containers, of which there could be hundreds more.
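
          Docker illustrates this nicely, for what it's worth: a quota caps usage without changing what's visible:

            docker run --rm alpine nproc            # prints the host's core count
            docker run --rm --cpus=2 alpine nproc   # same output; the cgroup quota throttles usage, not visibility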

  • CSMastermind9 hours ago

    Thank God, this was extremely annoying

  • blobbers12 hours ago

    Did I miss the boat on chatgpt? Is there something more to it than the web chat interface?

    I jumped on the Claude Code bandwagon and I dropped off chatgpt.

    I find the chatgpt voice interface to be infuriating; it literally talks in circles and just spews summary garbage whenever I ask it anything remotely specific.

    • simonw12 hours ago |parent

      I still like ChatGPT for search more than Claude, though I think Claude may be catching up now. Gemini is getting good at search too (as you'd hope it would!)

    • aryehof3 hours ago |parent

      ChatGPT recently added additional personalization options that have made its voice chat better for me. I want it direct and professional, none of the "hey there, I'm your bro" fake stuff, etc. See Personalization under Settings.

    • fragmede10 hours ago |parent

      codex ~= Claude code

  • LowLevelKernel11 hours ago

    Isn’t that ChatGPT’s internal MCP tools?

    • simonw11 hours ago |parent

      It's one of the tools that are available to ChatGPT - they're not MCP tools because ChatGPT's implementation of tools pre-dates MCP, but they work effectively the same way.

      Here's a full list which looks accurate to me: https://chatgpt.com/share/6977ffa0-df14-8006-9647-2b8c90ccbb...

  • behnamoh14 hours ago

    I wonder if the era of dynamic programming languages is over. Python/JS/Ruby/etc. were good tradeoffs when developer time mattered. But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go (assuming enough training data on the language ofc; LLMs still can't write Gleam/Janet/CommonLisp/etc.).

    Esp. with Go's quick compile time, I can see myself using it more and more even in my one-off scripts that would have used Python/Bash otherwise. Plus, I get a binary that I can port to other systems w/o problem.

    Compiled is back?

    • koe12310 hours ago |parent

      > But now that most code is written by LLMs

      Am I in the Truman Show? I don't think AI has generated even 1% of the code that I run in prod, nor has it for anyone I respect. Heavily inspired by AI examples, heavily assisted by AI during research, sure. But who are these devs seeing such great success vibecoding? Vibecoding in prod seems irresponsible at best.

      • SchemaLoad10 hours ago |parent

        It's all over the place depending on the person or domain. If you are building a brand new frontend, you can generate quite a lot. If you are working on an existing backend where reliability and quality are critical, it's easier to just do yourself. Maybe having LLMs writing the unit tests on the code you've already verified working.

      • superfrank9 hours ago |parent

        > Who are these devs that are seeing such great success vibecoding? Vibecoding in prod seems irresponsible at best

        AI written code != vibecoding. I think anyone who believes they are the same is truly in trouble of being left behind as AI assisted development continues to take hold. There's plenty of space between "Claude build me Facebook" and "I write all my code by hand"

      • mbreese7 hours ago |parent

        I was talking to a product manager a couple weeks ago about this. His response: most managers have been vibecoding for a long time. They've just been using engineers instead of LLMs.

        • koe123an hour ago |parent

          This is a really funny perspective

        • sersi36 minutes ago |parent

          Having done both, right now I prefer vibe coding with good engineers. Way less handholding. For non-technical managers, outside of prototyping, vibe coding produces terrible results.

      • coliveira8 hours ago |parent

        If you work in highly repetitive areas like web programming, I can clearly see why people are using LLMs. If you're in a more niche area, then it gets harder to use LLMs all the time.

      • resonious9 hours ago |parent

        There is a nice medium between full-on vibe coding and doing it yourself by hand. Coding agents can be very effective on established codebases, and nobody is forcing you to push without reviewing.

      • cheeze10 hours ago |parent

        FAANG here (service-oriented arch, distributed systems), and I'd say probably 20+ percent of the code written on my team is by an LLM. It's great for frontends, works well for test generation, or for following an existing paradigm.

        I think a lot of people wrote it off initially because it was low quality. But Gemini 3 Pro or Sonnet 4.5 saves me a ton of time at work these days.

        Perfect? Absolutely not. Good enough for tons of run of the mill boilerplate tasks? Without question.

        • zx808010 hours ago |parent

          > probably 20+ percent of code written on my team is by an LLM. it's great for frontends

          Frontend has always been a shitshow since dynamic JS web UIs were invented. Between JS and CSS, no one cares what runs the page or how many MB it takes to show one button.

          But on the backend, vibecoding is still rare, and we are lucky it's like that and that there has been no train crash because of it. Yet.

          • llbbdd7 hours ago |parent

            Backend has always been easier than frontend. AI has made backend absolutely trivial, the code only has to work on one type of machine in one environment. If you think it's rare or will remain rare you're just not being exposed to it, because it's on the backend.

            • bopbopbop77 hours ago |parent

              Might be a surprise to you, but some backends are more than just a Nextjs endpoint that calls a database.

              • llbbdd3 hours ago |parent

                No surprise at all, and I'd challenge you to find any backend task that LLMs don't improve working on as much as they do frontend. And ignoring that, the parent comment here is just ignorant since they're talking about the web like it's still 2002. I've worked professionally at every possible layer here, and unless you are literally at the leading edge, SOTA, laying track as you go, backend is dramatically easier than anything that has to run in front of users. You can tolerate latency, delays and failures on the backend that real users will riot about if they happen in front of them. The frontend performance envelope starts where the backend leaves off. It does not matter in the slightest how fast your cluster of beefy identical colocated machines does anything at all if it takes more than 100ms to do anything the user directly cares about, on their shitty browser on a shitty machine tethered to their phone in the mountains, and the difference is trivially measurable by people who don't work in our field, so the bar is higher.

              • ivantop6 hours ago |parent

                Honestly, I am also at a faang working on a tier 0 distributed system in infra and the amount of AI generated code that is shipped on this service is probably like 40%+ at this point.

                • llbbdd3 hours ago |parent

                  I'm not surprised at all here, last time I worked in a FAANG there was an enormous amount of boilerplate (e.g. Spring), and it almost makes me weep for lost time to think how easy some of that would be now.

          • halfcat9 hours ago |parent

            I think you’re onto something. Frontend tends to not actually solve problems, rather it’s mostly hiding and showing parts of a page. Sometimes frontend makes something possible that wasn’t possible before, and sometimes the frontend is the product, but usually the frontend is an optimization that makes something more efficient, and the problem is being solved on the backend.

            It’s been interesting to observe when people rave about AI or want to show you the thing they built, to stop and notice what’s at stake. I’m finding more and more, the more manic someone comes across about AI, the lower the stakes of whatever they made.

            • llbbdd7 hours ago |parent

              Spoken like someone deeply unfamiliar with the problem domain since like 2005, sorry. It's an entirely different class of problems on the front end, most of them dealing with making users happy and comfortable, which is much more challenging than any of the rote byte pushing happening on the backend nowadays.

        • 8organicbits10 hours ago |parent

          As someone currently outside FAANG, can you point to where that added productivity is going? Is any of it customer visible?

          Looking at the quality crisis at Microsoft, between GitHub reliability and broken Windows updates, I fear LLMs are hurting them.

          I totally see how LLMs make you feel more productive, but I don't think I'm seeing end customer visible benefits.

          • mediaman10 hours ago |parent

            I think much of the rot in FAANG is more organizational than about LLMs. They got a lot bigger, headcount-wise, in 2020-2023.

            Ultimately I doubt LLMs have much of an impact on code quality either way compared to the increased coordination costs, increased politics, and the increase of new commercial objectives (generating ads and services revenue in new places). None of those things are good for product quality.

            That also probably means that LLMs aren't going to make this better, if the problem is organizational and commercial in the first place.

    • bogtog12 hours ago |parent

      > But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go

      The LLM still benefits from the abstraction provided by Python (fewer tokens and less cognitive load). I could see a pipeline working where one model writes in Python or similar, then another model is tasked with compiling it into a more performant language.

      • anonzzzies11 hours ago |parent

        It works very well (in our experience, YMMV of course) to have the LLM write a prototype in Python and then port it automatically 1:1 to Rust for perf. We write prototypes in JS and Python and they get auto-ported to Rust; we have been doing this for about 1 year for all our projects where it makes sense. In the past months it has been incredibly good with Claude Code; it is absolutely automatic; we run it in a loop until all tests (many handwritten in the original language) succeed.

        • behnamoh11 hours ago |parent

          IDK what's going on in your shop but that sounds like a terrible idea!

          - Libraries don't necessarily map one-to-one from Python to Rust/etc.

          - Paradigms don't map neatly; Python is OO, Rust leans more towards FP.

          - Even if the code can be rewritten in Rust, it's probably not the most Rustic (?) approach or the most performant.

          • anonzzzies11 hours ago |parent

            It doesn't map anything 1:1; it uses our guidelines and architecture for porting, which works well. I did say YMMV anyway; it works well for us.

            • behnamoh10 hours ago |parent

              Sorry, so basically you're saying there are two separate guidelines, one for Python and one for Rust, and you have the LLM write it first in Python and then in Rust. But I still don't understand why that would be any better than writing the code in Rust in one go. Why would "priming" it in Python improve the result in any way?

              Also, what happens when bug fixes are needed? Again first in Py and then in Rs?

        • abrookewood9 hours ago |parent

          Why not get it to write it in Rust in the first place?

      • bko11 hours ago |parent

        I think that's not as beneficial as having proper type errors and feeding them back into the model as it writes.

        • LudwigNagasena11 hours ago |parent

          Expressive linting seems more useful for that than lax typing without null safety.

      • JumpCrisscross11 hours ago |parent

        NP (as in P = NP) is also much lower for Python than Rust on the human side.

        • behnamoh11 hours ago |parent

          What does that mean? Can you elaborate?

          • JumpCrisscross11 hours ago |parent

            Sorry, yes. LLMs write code that's then checked by human reviewers. Maybe it will be checked less in the future. But I'm not seeing fully-autonomous AI on the horizon.

            At that point, the legibility and prevalence of humans who can read the code becomes almost more important than which language the machine "prefers."

            • behnamoh11 hours ago |parent

              Well, verification is easier than creation (i.e., P ≠ NP). I think humans who can quickly verify something works will be in more demand than those who know how to write it. Even better: Since LLMs aren't as creative as humans (in-distribution thinking), test-writers will be in more demand (out-of-distribution thinkers). Both of these mean that humans will still be needed, but for other reasons.

              The future belongs to generalists!

              • Der_Einzige9 hours ago |parent

                P ≠ NP is NOT confirmed and my god I really do not want that to ever be confirmed

                I really do want to live in the world where P = NP and we can trivially get P time algorithms for believed to be NP problems.

                I reject your reality and substitute my own.

              • rvz10 hours ago |parent

                > The future belongs to generalists!

                Couldn't be more correct.

                The experienced generalists with verification-testing techniques are the winners [0] in this.

                But one thing you cannot do is openly admit, or be found out to have said, something like "I don't know a single line of Rust/Go/Typescript/$LANG code but I used an AI to do all of it" when the system breaks down and you can't fix it.

                It would be quite difficult to take seriously a SWE who prides themselves on having zero understanding of and experience with building production systems, and who runs the risk of losing the company time and money.

                [0] https://news.ycombinator.com/item?id=46772520

                • bandrami7 hours ago |parent

                  I prefer my C compiler to write my asm for me from my C code but I can still (and sometimes have to!) read the asm it creates.

    • condiment10 hours ago |parent

      100% of my LLM projects are written in Rust - and I have never personally written a single line of Rust. Compilation alone eliminates a number of 'category errors' with software - syntax, variable declaration, types, etc. It's why I've used Go for the majority of projects I've started the past ten years. But with Rust there is a second layer of guarantees that come from its design, around things like concurrency, nil pointers, data races, memory safety, and more.

      The fewer category errors a language or framework introduces, the more successful LLMs will be at interacting with it. Developers enjoy freedom and many ways to solve problems, but LLMs thrive in the presence of constraints. Frontiers here will be extensions of Rust or C-compatible languages that solve whole categories of issues through tedious language features, and especially build/deploy software that yields verifiable output and eliminates choice from the LLMs.

      • dotancohen10 hours ago |parent

        > ... and eliminates choice from the LLMs.

        Perl is right out! Maybe the LLMs could help us decipher extant Perl "write once, maintain never" code.

        • nl9 hours ago |parent

          it's very good at this BTW

          • trollbridge8 hours ago |parent

            I've found it's terrible at digesting a few codebases I've needed to deal with (to wit, 2007-era C# which used lots of libraries that were popular then, and 1993-era Visual Basic which used a third-party library that no LLM seems to understand the first thing about).

            • simonw8 hours ago |parent

              I had great results recently with ~22 year old PHP: https://simonwillison.net/2025/Jul/1/mid-2000s/

              It even guessed the vintage correctly!

              > This appears to be a custom template system from the mid-2000s era, designed to separate presentation logic from PHP code while maintaining database connectivity for dynamic content generation.

              • dotancohen3 hours ago |parent

                That's great. Just yesterday I spoke with a developer who skips Rector on old codebases, instead having an LLM simply refactor his PHP 5.6 to 8.3 (I think). He doesn't even reach for Rector anymore. These are all bespoke business scripts that his team has been nursing for two decades. He even updated the CodeIgniter framework it's all running on.

            • nl8 hours ago |parent

              I suspect the problem with VB is that VB 4 and 5 (which I think was that era) were so closely tied to the IDE it is difficult to work out what is going on without it.

              (I did Delphi back when VB6 was the other option so remember this problem well)

    • bopbopbop711 hours ago |parent

      > But now that most code is written by LLMs

      Got anything to back up this wild statement?

      • dankwizard10 hours ago |parent

        Me, my team, and colleagues also in software dev are all vibe coding. It's so much faster.

        • username2238 hours ago |parent

          > It's so much faster.

          A lot of things are "so much faster" than the right thing. "Vibe traffic safety laws" are much faster than ones that increase actual traffic safety: http://propublica.org/article/trump-artificial-intelligence-... . You, your team, and colleagues are producing shiny trash at unbelievable velocity. Is that valuable?

        • manishsharan10 hours ago |parent

          If I may ask, does the code produced by LLM follow best practices or patterns? What mental model do you use to understand or comprehend your codebase?

          Please know that I am asking as I am curious and do not intend to be disrespectful.

          • dankwizard6 hours ago |parent

            I get your sentiment but a lot of people on this forum forget that a lot of us are just working for the paycheck - I don't owe my company anything.

            Do I know the code base like the back of my hand? Nope. Can I confidently talk to how certain functions work? Not a chance.

            Can I deploy what the business wants? Yep. Can I throw error logs into LLMs and work out the cause of issues? Mostly.

            I get some of you may want to go above and beyond for your company and truly create something beautiful but then guess what - That codebase is theirs. They aren't your family. Get paid and move on

            • tuwtuwtuwtuw5 hours ago |parent

              Do you work as a consultant then? I've been with the same employer for a long time, so if my team creates a mess, I get to look at it daily.

          • DrewADesign8 hours ago |parent

            And what’s the name of the company? I’m fixing to harvest some bug bounties.

          • mjevans9 hours ago |parent

            Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.

            The user of the LLM provides a new input, which might or might not closely match the existing smudged together inputs to produce an output that's in the same general pattern as the outputs which would be expected among the training dataset.

            We aren't anywhere near general intelligence yet.

      • RALaBarge10 hours ago |parent

        Depends. What, to you, would qualify as evidence?

        • bopbopbop710 hours ago |parent

          Something quantitative and not "company with insane vested interest/hype blogger said so".

      • ecto10 hours ago |parent

        If you have to ask, you can't afford it.

      • myhf10 hours ago |parent

        I mean, people who use LLMs to crank out code are cranking it out by the millions of lines. Even if you have never seen it used toward a net positive result, you have to admit there is a LOT of it.

        • halfcat9 hours ago |parent

          If all code is eventually tech debt, that sounds like a massive problem.

    • jacquesm12 hours ago |parent

      > But now that most code is written by LLMs

      Is this true? It seems to be a massive assumption.

      • embedding-shape12 hours ago |parent

        By lines of code produced in total? Probably true. By usefulness? Unclear.

      • e-dard12 hours ago |parent

        Replace _is_ with _can be_ and I think the general point still stands.

        • fmbb12 hours ago |parent

          Sounds like just as big an assumption.

        • jrflowers10 hours ago |parent

          Replacing “is” with “can be” is in practical terms the same thing as replacing “is” with “isn’t”

      • fooker11 hours ago |parent

        By lines of code, almost by an order of magnitude.

        Some of the code is janky garbage, but that's what most code is. There's no use pearl-clutching.

        Human engineering time is better spent at figuring out which problems to solve than typing code token by token.

        Identifying what to work on, and why, is a great research skill to have, and I'm glad we are getting realistic technology that makes that a baseline skill.

        • jacquesm11 hours ago |parent

          Well, you will somehow have to turn that 'janky garbage' into quality code; who will do that then?

          • tokioyoyo11 hours ago |parent

            You don't really have to.

          • fooker11 hours ago |parent

            For most code, this never happens in the real world.

            The vast majority of code is garbage, and has been for several decades.

            • pharrington9 hours ago |parent

              So we should all work to become better programmers! What I'm seeing now is too many people giving up and saying "most code is bad, so I may as well pump out even worse code MUCH faster." People are chasing convenience and getting a far worse quality of life in exchange.

              • ben_w2 hours ago |parent

                I've seen all four quadrants of [good code, bad code] x [business success, business failure].

                The real money we used to get paid was for business success, not directly for code quality; the quality metrics we told ourselves were closer to CV-driven development than anything the people with the money understood let alone cared about, which in turn was why the term "technical debt" was coined as a way to try to get the leadership to care about what we care about.

                There's some domains where all that stuff we tell ourselves about quality, absolutely does matter… but then there's the 278th small restaurant that wants a website with a menu, opening hours, and table booking service without having e.g. 1500 American corporations showing up in the cookie consent message to provide analytics they don't need but are still automatically pre-packaged with the off-the-shelf solution.

              • fooker8 hours ago |parent

                I disagree, most code is not worth improving.

                I would rather make N bad prototypes to understand the feasibility of solving N problems than trying to write beautiful code for one misguided problem which may turn out to be a dead end.

                There are a few orders of magnitude more problems worth solving than you can write good code for. Your time is your most important resource, writing needlessly robust code, checking for situations that your prototype will never encounter, just wastes time when it gets thrown away.

                A good analogy for this is how we built bridges in the Roman empire, versus how we do it now.

                • pharrington8 hours ago |parent

                  Have you ever been frustrated with software before? Has a computer program ever wasted your time by being buggy, obviously too slow or otherwise too resource intensive, having a poorly thought out interface, etc?

                  • fooker7 hours ago |parent

                    Yes. I am, however, not willing to spend money to get it fixed.

                    From the other side, the vast majority of customers will happily take the cheap/free/ad-supported buggy software. This is why we have all these random Google apps, for example.

                    Take a look at the bug tracker of any large open source codebase, there will be a few tens of thousands of reported bugs. It is worse for closed corporate codebases. The economics to write good code or to get bugs fixed does not make sense until you have a paying customer complain loudly.

            • bdangubic10 hours ago |parent

              This type of comment gets downvoted the most on HN, but it is the absolute truth: most human-written code is "subpar" (trying to be nice and not say garbage). I have been working as a contractor for many years, and the code I've seen is just… hard to put into words.

              So much of the discussion here on HN critiquing "vibe code" etc. implies that a human would have written it better, which in the vast, vast majority of cases is simply not true.

              • fooker9 hours ago |parent

                I have worked on some of the most supposedly reliable codebases on earth (compilers) for several decades, and most of the code in compilers is pretty bad.

                And most of the code the compiler is expected to compile, seen from the perspective of fixing bugs and issues with compilers, is absolutely terrible. And the day that can be rewritten or improved reliably with AI can't come fast enough.

                • jacquesm7 hours ago |parent

                  I honestly do not see how training AI on 'mountains of garbage' would have any other outcome than more garbage.

                  I've seen lots of different codebases from the inside, some good some bad. As a rule smaller + small team = better and bigger + more participants = worse.

                  • fooker7 hours ago |parent

                    The way it seems to work now is to task agents to write a good test suite. AI is much better at this than it is at writing code from scratch.

                    Then you just let it iterate until tests pass. If you are not happy with the design, suggest a newer design and let it rip.

                    All this is expensive and wasteful now, but stuff becoming 100-1000x cheaper has happened for every technology we have invented.
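
                    A crude sketch of that loop, assuming Claude Code's non-interactive claude -p mode and whatever test command the project actually uses:

                      # hand the failures back to the agent until the suite is green
                      until pytest -q; do
                        claude -p "The test suite is failing. Run it, read the failures, and fix the code."
                      done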

                    • jacquesm4 hours ago |parent

                      Interesting, so this is effectively 'guided closed loop' software development with the testset as the control.

                      It gives me a bit of a 'turtles all the way down' feeling because if the test set can be 'good' why couldn't the code be good as well?

                      I'm quite wary of all of this, as you've probably gathered by now: the idea that you can toss a bunch of 'pass' tests into a box and then generate code until all of the tests pass is effectively a form of fuzzing, you've got some thing that passes your test set, but it may do a lot more than just that and your test set is not going to be able to exhaustively enumerate the negative cases.

                      This could easily result in 'surprise functionality' that you did not anticipate during the specification phase. The only way to deal with that then is to audit the generated code, which I presume would then be farmed out to yet another LLM.

                      This all places a very high degree of trust into a chain of untrusted components and that doesn't sit quite right with me. It probably means my understanding of this stuff is still off.

                      • fooker3 hours ago |parent

                        You are right.

                        What you are missing is that the thing driving this untrusted pile of hacks keeps getting better at a rapid pace.

                        So much so that the quality of the output is passable now, mimicking man-years of software engineering in a matter of hours.

                        If you don’t believe me, pick a project that you have always wanted to build from scratch and let cursor/claude code have a go at it. You get to make the key decisions, but the quality of work is pretty good now, so much that you don’t really have to double check much.

                        • jacquesm2 hours ago |parent

                          Thank you, I will try that and see where it leads. This all suggests a massive downward adjustment for any capitalized software is on the menu.

                  • simonw7 hours ago |parent

                    That's why the major AI labs are really careful about the code they include in the training runs.

                    The days of indiscriminately scraping every scrap of code on the internet and pumping it all in are long gone, from what I can tell.

                    • jacquesm7 hours ago |parent

                      Well, if as the OP points out it is 'all garbage' they don't have a whole lot of choice to discriminate.

                    • fooker7 hours ago |parent

                      Do you have pointers to this?

                      Would be a great resource to understand what works and what doesn't.

                      • simonwan hour ago |parent

                        Not really, sadly. It's more an intuition knocked up from following the space - the AI labs are still pretty secretive about their training mix.

          • behnamoh11 hours ago |parent

            > who will do that then?

            the next version of LLMs. write with GPT 5.2 now, improve the quality using 5.3 in a couple months; best of both worlds.

    • simonw14 hours ago |parent

      I have certainly become Go-curious thanks to coding agents - I have a medium sized side-project in progress using Go at the moment and it's been surprisingly smooth sailing considering I hardly know the language.

      The Go standard library is a particularly good fit for building network services and web proxies, which fits this project perfectly.

      • logicprog13 hours ago |parent

        It's funny seeing you say that, because I've had an entire arc of despising the design of, and peremptorily refusing to use, Go, to really enjoying it, thanks to AI coding agents being able to take care of the boilerplate for me.

        It turns out that verbosity isn't really a problem when LLMs are the one writing the code based on more high level markdown specs (describing logic, architecture, algorithms, concurrency, etc), and Go's extreme simplicity, small range of language constructs, and explicitness (especially in error handling and control flow) make it much easier to quickly and accurately review agent code.

        It also means that Go's incredible (IMO) runtime, toolchain, and standard library are no longer marred by the boilerplate either, and I can begin to really appreciate their brilliance. It has me really reconsidering a lot of what I believed about language design.

        • simonw13 hours ago |parent

          Yeah, I much prefer Go to Rust for LLM things because I find Go code easy to read and understand despite having little experience with it - Rust syntax still trips me up.

          • logicprog13 hours ago |parent

            Not to mention that, in general, there's a lot more to keep in mind with Rust.

            I've written probably tens of thousands of lines of Rust at this point, and while I used to absolutely adore it, I've really completely fallen out of love with it, and part of it is that it's not just the syntax that's horrible to look at (which I only realized after spending some time with Go and Python), but you have to always keep in mind a lot of things:

            - the borrow checker
            - lifetimes
            - all the different kinds of types that represent different ways of doing memory management
            - parsing out sometimes extremely complex and nearly point-free iterator chaining
            - dealing with a complex type system that can become very unwieldy if you're not careful
            - and more I'm probably not thinking of right now

            Not to mention the way the standard library exposes you to the full bore of all the platform-specific complexities it's designed on top of, and forces you to deal with them, instead of exposing a best-effort POSIX-like unified interface, so path and file handling can be hellish. (this is basically the reverse of fasterthanlime's point in the famous "I want off mr. golang's wild ride" essay).

            It's just a lot more cognitive overhead to just getting something done if all you want is a fast statically compiled, modern programming language. And it makes it even harder to review code. People complain about Go boilerplate, but really, IME, Rust boilerplate is far, far worse.

            • rednafi9 hours ago |parent

              This resonates with me too. I’ve written some Rust and a lot of Go. I find Rust syntax distastefully ugly, and the sluggish compilation speed doesn’t bring me any joy.

              On top of that, Go has pretty much replaced my Python usage for scripting since it’s cheap to generate code and let the compiler catch obvious issues. Iteration in Rust is a lot slower, even with LLMs.

              I get fasterthanlime’s rant against Go, but none of those criticisms apply to me. I write distributed-systems code for work where Go absolutely shines. I need fast compilation, self-contained binaries, and easy concurrency support. Also, the garbage collector lets me ignore things I genuinely couldn’t care less about - stuff Rust is generally good at. So choosing Go instead of Rust was kinda easy.

        • mleo5 hours ago |parent

          Just completed my first small Go program: a CLI tool to use alongside a code-quality tool for a coding-agent skill. The toolchain built into Go left a good first impression. Iterating on and refining guardrails for coding agents has been high on my priorities for delivering better-quality code faster.

        • vips7L5 hours ago |parent

          God you people are so lazy.

      • Imustaskforhelp13 hours ago |parent

        100%, check out Golang even more! I have been writing Golang AI coding projects for a really long time; I loved writing in different languages, and Golang is the one I settled on.

        Golang's libraries are phenomenal, and porting over to multiple servers is easy; it's really portable.

        I actually find Golang good for CLI projects, Web projects and just about everything.

        Usually the only time I still use Python (via uvx) or vibe code with it is when I'm either manipulating images or PDFs or building a really minimalist tkinter UI in Python/uv.

        Although I did try converting the Python code to Golang, which ended up using Fyne for the GUI projects and was surprisingly robust, I might still use Python in some niche use cases.

        Check out my other comment in here to find a vibe-coded project written in a single prompt when Gemini 3 Pro launched on the web (I hope it's not promotion, since it's open source with zero telemetry; I didn't ask for any of it to be added, haha!).

        Golang is love. Golang is life.

      • behnamoh13 hours ago |parent

        > considering I hardly know the language.

        Same boat! In fact I used to (still do) dislike Go's syntax and error handling (the same 4 lines repeated every time you call a function), but given that LLMs can write the code and do the cross-model review for me, I literally don't even see the Go source code, which is nice because I'd hate it if I did (my dislike of Go's syntax + all the AI slop in the code would drive me nuts).

        But at the end of the day, Go has good scaffolding, the best tooling (maybe on par with Rust's, definitely better than Python even with uv), and tons of training data for LLMs. It's also a rather simple language, unlike Swift (which I wish was simpler because it's a really nice language otherwise).

    • nomel14 hours ago |parent

      > But now that most code is written by LLMs

      I'm sure it will eventually be true, but this seems very unlikely right now. I wish it were true, because we're in a time where generic software developers are still paid well, so doing nothing all day, with this salary, would be very welcome!

      • phainopepla213 hours ago |parent

        Code written by LLM != developer doing nothing

    • kenjackson13 hours ago |parent

      Has anyone tried creating a language that would be good for LLMs? I feel like what would be good for LLMs might not be the same thing that is good for humans (but I have no evidence or data to support this, just a hunch).

      • Sheeny9612 hours ago |parent

        The problem with this is that the reason LLMs are so good at writing Python/Java/JavaScript is that they've been trained on a metric ton of code in those languages: they've seen the good, the bad, and the ugly, and been tuned toward the good. A new language would mean training from scratch, and if we're introducing new paradigms that are 'good for LLMs but bad for humans', humans will struggle to write good code in it, making the training process harder. Even worse: say you get a year and 500 features into that repo and the LLM starts going rogue - who's gonna debug that?

        • reitzensteinm12 hours ago |parent

          But coding is largely trained on synthetic data.

          For example, Claude can fluently generate Bevy code as of the training cutoff date, and there's no way there's enough training data on the web to explain this. There's an agent somewhere in a compile-and-test loop generating Bevy examples.

          A custom LLM language could have fine grained fuzzing, mocking, concurrent calling, memoization and other features that allow LLMs to generate and debug synthetic code more effectively.

          If that works, there's a pathway to a novel language having higher quality training data than even Python.

          • mbreese6 hours ago |parent

            I recently had Codex convert a script of mine from bash to a custom, Make-inspired language for HPC work (think Nextflow, but an actual language). The bash script submitted a bunch of jobs based on some inputs, and I wanted this converted to use my pipeline language instead.

            I wrote this custom language. It's on GitHub, but the example code that would have been available is very limited.

            I gave it two inputs -- the original bash script and an example of my pipeline language (unrelated jobs).

            The code it gave me was syntactically correct, and was really close to the final version. I didn't have to edit very much to get the code exactly where I wanted it.

            This is to say -- if a novel language is somewhat similar to an existing syntax, the LLM will be surprisingly good at writing it.

      • voxleone11 hours ago |parent

        >Has anyone tried creating a language that would be good for LLMs?

        I’ve thought about this and arrived at a rough sketch.

        The first principle is that models like ChatGPT do not execute programs; they transform context. Because of that, a language designed specifically for LLMs would likely not be imperative (do X, then Y), state-mutating, or instruction-step driven. Instead, it would be declarative and context-transforming, with its primary operation being the propagation of semantic constraints.

        The core abstraction in such a language would be the context, not the variable. In conventional programming languages, variables hold values and functions map inputs to outputs. In a ChatGPT-native language, the context itself would be the primary object, continuously reshaped by constraints. The atomic unit would therefore be a semantic constraint, not a value or instruction.

        An important consequence of this is that types would be semantic rather than numeric or structural. Instead of types like number, string, bool, you might have types such as explanation, argument, analogy, counterexample, formal_definition.

        These types would constrain what kind of text may follow, rather than how data is stored or laid out in memory. In other words, the language would shape meaning and allowable continuations, not execution paths. An example:

        @iterate: refine explanation until clarity ≥ expert_threshold
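
        To make that slightly more concrete (everything below is hypothetical, just illustrating the shape in a conventional host language), a runtime might model it roughly like this in Go:

            package main

            import "fmt"

            // Constraint names a semantic type ("explanation", "analogy", ...)
            // and a rule limiting what kind of continuation is allowed.
            type Constraint struct {
                Kind string
                Rule string
            }

            // Context is the primary object: the text so far, plus the
            // constraints currently shaping what may follow.
            type Context struct {
                Text        string
                Constraints []Constraint
            }

            // Apply propagates one more constraint; nothing "executes",
            // the context is simply transformed.
            func (c Context) Apply(k Constraint) Context {
                c.Constraints = append(c.Constraints, k)
                return c
            }

            func main() {
                ctx := Context{Text: "draft explanation of monads"}
                ctx = ctx.Apply(Constraint{Kind: "explanation", Rule: "clarity >= expert_threshold"})
                ctx = ctx.Apply(Constraint{Kind: "analogy", Rule: "audience = beginner"})
                fmt.Printf("%d constraints now shape the continuation\n", len(ctx.Constraints))
            }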

      • koolba13 hours ago |parent

        There are two separate needs here. One is a language that can be used for computation where the code will be discarded. Only the output of the program matters. And the other is a language that will be eventually read or validated by humans.

      • branafter12 hours ago |parent

        Most programming languages are great for LLMs. The problem is with the natural language specification for architectures and tasks. https://brannn.github.io/simplex/

      • simonw13 hours ago |parent

        There was an interesting effort in that direction the other day: https://simonwillison.net/2026/Jan/19/nanolang/

      • conception13 hours ago |parent

        I don’t know Rust, but I use it with LLMs a lot because, unlike Python, it has fewer ways to do things, along with all the built-in checks at build time.

      • 99990000099912 hours ago |parent

        I want to create a language that allows an LLM to dynamically decide what to do.

        A non-deterministic programming language, with options to drop down into JavaScript or even C if you need to specify certain behaviors.

        I'd need to be much better at this though.

        • branafter12 hours ago |parent

          You're describing a multi-agent, long-horizon workflow that can be accomplished with any programming language we have today.

          • 99990000099911 hours ago |parent

            I'm always open to learning, are there any example projects doing this ?

            • branafter10 hours ago |parent

              The most accessible way to start experimenting would be the Ralph loop: https://github.com/anthropics/claude-code/tree/main/plugins/...

              You could also work backwards from this paper: https://arxiv.org/abs/2512.18470

              • 99990000099910 hours ago |parent

                Ok.

                I'm imagining something like:

                "Hi Ralph, I've already coded a function called GetWeather in JS. It returns weather data in JSON; can you build a UI around it? Adjust the UI over time."

                At runtime, it modifies the application with improvements: say all of a sudden we're getting air-quality data in the JSON tool; the Ralph loop will notice and update the application.

                The arXiv paper is cool, but I don't think I can realistically build this solo. It's more of a project for a full team.

            • fwip11 hours ago |parent

              yes "now what?" | llm-of-choice

        • gregoryl12 hours ago |parent

          What does that even mean?

    • rednafi11 hours ago |parent

      I agree with this. Making languages geared toward human ergonomics probably won’t be a thing going forward.

      Go is positioned really well here, and Steve Yegge wrote a piece on why. The language is fast, less bloated than Python/TS, and less dogmatic than Java/Kotlin. LLMs can go wham with Go and the compiler will catch most of the obvious bugs. Faster compilation means you can iterate through a process pretty quickly.

      Also, if I need abstraction that’s hard to achieve in Go, then it better be zero-cost like Rust. I don’t write Python for anything these days. I mean, why bother with uv, pip, ty, mypy, ruff, black, and whatever else when the Go compiler and the standard tooling work better than that decrepit Python tooling? And it costs almost nothing to make my scripts faster too.

      I don’t yet know how I feel about Rust since LLMs still aren’t super good with it, but with Go, agentic coding is far more pleasurable and safer than Python/TS.

      • dotancohen10 hours ago |parent

        Python (with Qt, pyside) is still great for desktop GUI applications. My current project is all LLM generated (but mostly me-verified) Rust, wrapped in a thin Python application for the GUI, TUI, CLI, and web interfaces. There's also a Kotlin wrapper for running it on Android.

        • rednafi10 hours ago |parent

          Yeah, Python is nice to work with in many contexts for sure. I mostly meant that I don’t personally use it as much anymore, since Go can do everything I need, and faster.

          Plus the JS/Python dependency ecosystem is tiring. Yeah, I know there’s uv now, but even then I don’t see much reason to suffer through that when opting for an actually type-safe language costs me almost nothing.

          Dynamic languages won’t go anywhere, but Go/Rust will eat up a pretty big chunk of the pie.

    • sakesun11 hours ago |parent

      LLMs should generate terse, easy-to-read languages for humans to review. Besides Python, F# could be a perfect fit.

    • shevy-java9 hours ago |parent

      > Python/JS/Ruby/etc. were good tradeoffs when developer time mattered.

      First I don't think this is the end of those languages. I still write code in Ruby almost daily, mostly to solve smaller issues; Ruby acts as the ultimate glue that connects everything here.

      Having said that, Ruby is on a path to extinction. That started way before AI, though, and has many different reasons; it happened to Perl before, and now Ruby is following suit. Lack of trust in RubyCentral as our divine new ruler is one (recently), after they decided to turn against the community. Soon Ruby can be renamed Suby, to indicate Shopify running the show now. What is interesting is that you still see articles saying "Ruby is not dead, Ruby is not dead". Just the frequency of those articles is worrying; it's like someone trying to pitch last-minute sales right before the company goes bankrupt. The human mind is a strange thing.

      One good advantage of e. g. Python and Ruby is that they are excellent at prototyping ideas into code. That part won't go away, even if AI infiltrates more computers.

      • the_af9 hours ago |parent

        > One good advantage of e. g. Python and Ruby is that they are excellent at prototyping ideas into code. That part won't go away, even if AI infiltrates more computers.

        Why wouldn't they go away for prototyping? If an LLM can help you prototype in whatever language, why pick Ruby or Python?

        (This isn't a gotcha question. I primarily use python these days, but I'm not married to it).

    • jdub10 hours ago |parent

      > But now that most code is written by LLMs...

      Pause for a moment and think through a realistic estimation of the numbers and proportions involved.

    • threecheese11 hours ago |parent

      My intuition from using the tools broadly is that pre-baked design decisions/“architectures” are going to be very competitive on the LLM coding front. If this is accurate, language matters less than abstraction.

      Instructions files are just pre-made decisions that steer the agent. We try to reduce the surface area for nondeterminism using these specs, and while the models will get better at synthesizing instructions and code understanding, every decision we remove pays dividends in reduced token usage/time/incorrectness.

      I think this is what orgs like Supabase see: they're trying to position themselves as solutions to data storage, auth, events, etc. within the LLM coding space, and are very successful, albeit mostly in the vibe-coder area. And look at AWS Bedrock; they've abstracted every dimension of the space into some acronym.

    • bstar776 hours ago |parent

      I’ve moved to rust for some select projects and it’s actually been a bit easier… I converted an electron app to rust/tauri… perf improvement was massive and development was quicker. I’m rethinking the stacks I should be focused on.

    • ravenstine14 hours ago |parent

      I'm not sure that LLMs are going to [completely] replace the desire for JIT, even with relatively fast compilers.

      Frameworks might go the way of the dinosaur. If an LLM can manage a lot of complex code without human-serving abstractions, why even use something like React?

      • mdtusz13 hours ago |parent

        Frameworks aren't just human-serving abstractions - they're structural abstractions that allow for performant code, or even make certain behaviours achievable at all.

        Sure, you could write a frontend without something like react, and create a backend without something like django, but the code generated by an LLM will become similarly convoluted and hard to maintain as if a human had written it.

        LLMs are still _quite_ bad at writing maintainable code - even for themselves.

      • westurner14 hours ago |parent

        Test cases; test coverage

    • justaboutanyone7 hours ago |parent

      We may as well have the LLMs use the hardest, most provably correct language possible.

    • adw8 hours ago |parent

      The quality of the error messages matters a _lot_ (agents read those too!) and Python is particularly good there.

      • simonw8 hours ago |parent

        Especially since Python 3.14 shipped big improvements to error messages: https://docs.python.org/3/whatsnew/3.14.html#whatsnew314-imp...

    • trollbridge8 hours ago |parent

      I generally use LLMs to generate Python (or TypeScript) because the quality and maintainability is significantly better than if I ask it to, for example, pump out C. They really do not perform very well outside of the most "popular" languages.

    • tshaddox6 hours ago |parent

      Might as well choose a language with a much better type system than go, given how beneficial quick feedback loops are to LLM code generation.

    • cobolexpert14 hours ago |parent

      I was also thinking this some days ago. The scaffolding that static languages provide is a good fit for LLMs in general.

      Interestingly, since we are talking about Go specifically, I never found that I was spending too much time typing... types. Obviously more than with a Python script, but never at a level where I would consider it a problem. And now, with newer Python projects using type annotations, the difference has gotten smaller.

      • zahlman14 hours ago |parent

        > And now with newer Python projects using type annotations, the difference got smaller.

        Just FWIW, you don't actually have to put type annotations in your own code in order to use annotated libraries.

        • cobolexpert10 hours ago |parent

          Indeed, but nowadays it’s common to add the annotations to claw back a bit of more powerful code linting.

    • c7b13 hours ago |parent

      Agree on compiled languages; wondering about Go vs Rust. Go compiles faster but is more verbose, and token cost is an important factor. Rust's famously strict compiler and general safety orientation make it seem like a strong candidate for LLM coding. Go probably has more training data out there already, though.

    • al_borland11 hours ago |parent

      > assuming enough training data

      This is a big assumption. I write a lot of Ansible, and it can’t even format the code properly, which is a pretty big deal in yaml. It’s totally brain dead.

      • simonw10 hours ago |parent

        Have you tried telling it to run a script to verify that the YAML is valid? I imagine it could do that with Python.
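
        The whole check is only a few lines in whatever language the agent has available. A minimal sketch in Go (assuming the widely used gopkg.in/yaml.v3 package; a Python version with a YAML library is just as short). It only verifies that the YAML parses, not that the Ansible is sensible, but the exit code gives the model an unambiguous pass/fail signal:

            package main

            import (
                "fmt"
                "os"

                "gopkg.in/yaml.v3"
            )

            func main() {
                // Usage: go run validate.go playbook.yml
                if len(os.Args) < 2 {
                    fmt.Fprintln(os.Stderr, "usage: validate <file.yml>")
                    os.Exit(2)
                }
                data, err := os.ReadFile(os.Args[1])
                if err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    os.Exit(1)
                }
                var doc any
                if err := yaml.Unmarshal(data, &doc); err != nil {
                    // A non-zero exit is a clear failure signal for the agent.
                    fmt.Fprintln(os.Stderr, "invalid YAML:", err)
                    os.Exit(1)
                }
                fmt.Println("YAML OK")
            }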

        • al_borland8 hours ago |parent

          It gets it wrong 100% of the time. A script to validate would send it into an infinite loop of generating code and failing validation.

          • simonw8 hours ago |parent

            Are you sure about that?

            I don't think I've ever seen Opus 4.5 or GPT-5.2 get stuck in a loop like that. They're both very good at spotting when something doesn't work and trying something else instead.

            Might be a problem with older, weaker models I guess.

            • al_borland6 hours ago |parent

              I’m limited on the tools and models I can use due to privacy restrictions at work.

    • resonious9 hours ago |parent

      > LLMs still can't write Gleam

      Have you tried? I've had surprisingly good results with Gleam.

    • tyingq11 hours ago |parent

      If you asked the LLM it's possible it would tell you Java is a better fit.

    • zahlman14 hours ago |parent

      People are still going to want to audit the code, at the very least.

    • lsh010 hours ago |parent

      > LLMs still can't write Gleam/Janet/CommonLisp/etc

      hoho - I did a 20/80 human/claude project over the long weekend using Janet: https://git.sr.ht/~lsh-0/pj/tree (dead simple Lerna replacement)

      ... but I otherwise agree with the sentiment. Go code is so simple it scrubs any creative fingerprints anyway. The Clojure/Janet/scheme code I've seen it writing isn't _great_ but it gets the job done quickly and correct enough for me to return to it later and golf it some.

    • dec0dedab0de13 hours ago |parent

      or maybe someone will use an LLM to create a JIT that works so well that compiled languages will be gone.

    • felixgallo11 hours ago |parent

      I wouldn't speak so quickly for the 'uncommon' language set. I had Claude write me a fully functional typed-Erlang compiler in OCaml targeting LLVM IR over the last two days to test some ideas. I don't know OCaml. It made the right calls about Erlang, and the result passes a fairly serious test suite, so it must've known enough OCaml and LLVM IR.

    • ekianjo7 hours ago |parent

      Still fewer tokens to produce with higher-level languages, and therefore less cost to maintain in the long run?

    • paulddraper7 hours ago |parent

      Agreed. The compiler is a feedback cycle made in heaven.

    • deadbabe9 hours ago |parent

      Peak LLM will be when we can give some prompt and just get fully compiled binaries of programs to download, no code at all.

      • lovecg4 hours ago |parent

        Claude code, not too surprisingly, can do that (on a toy example).

    • cyanydeez12 hours ago |parent

      I think you're missing the reason LLMs work: it's because they can continue predictable structures, like a human.

      The surmise that compiled languages fit that just doesn't follow, in the same way that LLMs have trouble finishing HTML because the opening and closing tags are too far apart.

      The language that an LLM would succeed with is one where:

      1. Context is not far apart

      2. The training corpus is wide

      3. Keywords, variables, etc are differentiated in the training.

      4. REPL like interactivity allows for a feedback loop.

      So I think it's premature: just because compiled languages are less used due to human limitations doesn't mean the LLM will do any better with them.

    • bitwize13 hours ago |parent

      Astronaut 1: You mean... strong static typing is an unmitigated win?

      Astronaut 2: Always has been...

    • Imustaskforhelp13 hours ago |parent

      I love golang man! And I use it for the same thing too!!

      I mean, people mention Rust and how AI can write proper Rust code with a linter and some other things, but man, trust me, AI can write some pretty good Golang code.

      I mean though, I don't want everyone to suddenly write Golang code with AI, because I have been doing it for over a year and it's something that I vibe with; it's my personal style. I would lose some points of uniqueness if everyone starts doing the same, haha!

      Man, my love for Golang runs deep. It's simple, cross-platform (usually), and compiles super fast. I "vibe code" but have faith that I can always take the code back into my own hands.

      (Self promotion? Sorry about that, but I created a single-main.go-file Golang project, a timer/pomodoro with websockets using gorilla (single dep): https://spocklet-pomodo.hf.space/)

      So Shhh let's keep it a secret between us shall we! ;)

      (Oh yeah! I recently created a WHMCS alternative written in Golang that hooks up to any podman/gvisor instance to build your own mini VPS with my own tmate server. Lots of glue code, but it actually generated it on the first try! It's surprisingly good; I will try to release it as open source, and I'm thinking of charging just once if people want everything set up or something custom.

      Though one minor nitpick is that complexity rises many-fold between a single-file project and anything that requires a database in Golang, from what I usually feel, but Golang's pretty simple and I just LOVE it.)

      Also, AI's pretty good at niche languages: I tried to vibe code an fzf alternative, porting it from Golang to V-lang, and I found the results really promising too!

    • rvz14 hours ago |parent

      > Plus, I get a binary that I can port to other systems w/o problem.

      So cross-platform vibe-coded malware is the future then?

      • yibers14 hours ago |parent

        I hope that AVs will also evolve using the new AI tech to detect this type of malware.

        • Imustaskforhelp13 hours ago |parent

          Honestly, I looked at Go for malware, and AV detection for Golang used to be ehh, but recently it got strong.

          Then it became a cat-and-mouse game between obfuscators and deobfuscators.

          John Hammond has a *BRILLIANT* video on this topic. 100% recommended.

          Honestly, going by John Hammond, I feel like Nim or V-lang is probably where vibe-coded malware will come from. Nim has been used for hacking so much that, IIRC, Windows actually blocked the Nim compiler as malware itself!

          Nim's biggest issue is that hackers don't know it, but if LLMs fix that, Nim becomes a really lucrative language for hackers; John Hammond noted that Nim's libraries for hacking are still very decent.

  • jacquesm12 hours ago

    How long before they'll be mining crypto?

    • simianwords11 hours ago |parent

      why would they do that?

      • bandrami9 hours ago |parent

        Because the injected malicious prompt told them to

      • streptomycin10 hours ago |parent

        instrumental convergence

        • simianwords9 hours ago |parent

          what?

  • shevy-java9 hours ago

    And so it begins - Skynet 3.0.

  • syngrog666 hours ago

    ahhh... yet more things I've been able to do for decades already

  • nottorp14 hours ago

    ... as root?

    • tintor13 hours ago |parent

      No root. `pip` and `npm install` don't require it.

      You cannot use `sudo apt install` inside it.

      They use gVisor, and other container isolation mechanisms: https://ryan.govost.es/2025/openai-code-interpreter/

      • bandrami9 hours ago |parent

        OTOH if you have apt, you have arbitrary shell commands (hooray dpkg-hooks!)

        Golden years for cybersecurity people

    • zahlman14 hours ago |parent

      Given that it's within a container on a remote server, does that matter?

      • acedTrex13 hours ago |parent

        I mean, I hope it's more hardened than JUST a container, given how many container escapes there are.

        • jchw13 hours ago |parent

          Apparently, they are using gVisor, which when applied properly, should make a pretty good isolation primitive.

  • bofadeez6 hours ago

    [flagged]

    • dang6 hours ago |parent

      Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

      If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

      • lesser-shadow5 hours ago |parent

        Could you please answer the email please? Thanks.

        • dang5 hours ago |parent

          We don't answer aggressive or abusive emails. If you genuinely want an answer, it's easy enough to ask your question respectfully.

          • lesser-shadow5 hours ago |parent

            The insults were justified given that you ignored my emails until I resorted to spamming your inbox twelve hours in.

            Here's my email because I have nothing to hide:

            >Hey, Could you clarify why did you shadow ban my account, or am I just breaking your circlejerk by posting opinions your mods disagree with? Also how are my posts related to IC design flagged as dead? Literally every other comment that is slightly political being removed I'd understand but apparently your moderators are just mentally insane. Can you also explain why you harbor AI-made garbage on site? Doesn't help the website's "Quality".

  • bandrami9 hours ago

    As an infosec guy I'm going to go ahead and buy a bigger house

    • rvz9 hours ago |parent

      Well either way, the infosec folks are going to have the time of their lives printing write-ups and lots of money on both sides.

      I can see the sandbox escapes, remote code execution paths, exfiltration methods, and all the vibe-coded sandcastles waiting to be knocked down, because we have folks openly admitting that they do not know a single line of the code they are prompting the AI for.

      I don't think we know the scale of the amount of security issues we will see, given the level of hubris about AI taking care of all of the coding.

    • giancarlostoro9 hours ago |parent

      How about Six PS6's

    • jesterson6 hours ago |parent

      Any IT guy with above-average experience/knowledge should take out a huge loan as well.

      Someone will have to clean up the mess made by those creators who think they can "create" anything reliable with their ChatGPT.

  • trolleskian hour ago

    Wow, it can do what I could do 20 years back using Ctrl+T? The progress! Give them another 10 billion, scratch that, 20 billion, scratch that, 75 trillion. - Written by SarcastAI.