AI coding agents are removing programming language barriers (railsatscale.com)
147 points by Bogdanp 6 days ago | 184 comments
  • behnamoh6 days ago

    Counterpoint: AI makes mainstream languages (for which a lot of data exists in the training set) even more popular, because those are the languages it knows best (i.e., has the lowest error rate in), regardless of whether they are typed (in fact, many are dynamic, like Python, JS, and Ruby).

    The end result? Non-mainstream languages don't get much easier to get into, because the average Joe isn't already proficient enough in them to catch the AI's bugs.

    People often forget the bitter lesson of machine learning, which plagues transformer models as well.

    • bluetomcat6 days ago |parent

      It’s good at matching patterns. If you can frame your problem so that it fits an existing pattern, good for you. It can show you good idiomatic code in small snippets. The more unusual and involved your problem is, the less useful it is. It cannot reason about the abstract moving parts in a way the human brain can.

      • carlmr6 days ago |parent

        >It cannot reason about the abstract moving parts in a way the human brain can.

        Just found three race conditions in 100 lines of code. From the UTF-8 emojis in the comments I'm quite certain it was AI generated. The "locking" was just abandoning the work if another thread had started something; the "locking" mechanism also had TOCTOU (time-of-check to time-of-use) issues; and the "locking" didn't actually lock concurrent access to the resource that needed it. A sketch of the pattern is below.
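
        To make that failure mode concrete, here is a minimal Go sketch (hypothetical code, not the original) of the flag-based "locking" described: the check and the set are two separate steps, so it is both a TOCTOU race and not a real lock at all.

            package main

            import (
                "fmt"
                "sync"
            )

            var (
                busy     bool       // the flag-based "locking" described above
                mu       sync.Mutex // what the code actually needed
                resource []int
            )

            // Racy: checking the flag and setting it are separate steps, so two
            // goroutines can both observe busy == false and proceed (time of
            // check vs. time of use). The shared slice is never protected.
            func brokenTryWork(item int) {
                if busy { // time of check
                    return // "abandon the work if another thread started something"
                }
                busy = true // time of use: another goroutine may have won the race
                resource = append(resource, item)
                busy = false
            }

            // Correct: the mutex serializes access to the shared resource.
            func safeWork(item int) {
                mu.Lock()
                defer mu.Unlock()
                resource = append(resource, item)
            }

            func main() {
                var wg sync.WaitGroup
                for i := 0; i < 100; i++ {
                    wg.Add(1)
                    go func(n int) {
                        defer wg.Done()
                        brokenTryWork(n) // go run -race flags this immediately
                    }(i)
                }
                wg.Wait()
                fmt.Println("items stored:", len(resource)) // often fewer than 100
            }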

        • nyarlathotep_5 days ago |parent

          > UTF-8 emojis in the comments

          This is one of the "here be demons" type signatures of the LLM code generation age, along with comments like

              // define the payload
              struct payload {};

        • bluetomcat6 days ago |parent

          Yes, that was my point. Regardless of the programming language, LLMs are glorified pattern matchers. A React/Node/MongoDB address book application exposes many such patterns and they are internalised by the LLM. Even complex code like a B-tree in C++ forms a pattern because it has been done many times. Ask it to generate some hybrid form of a B-tree with specific requirements, and it will quickly get lost.

          • hombre_fatal6 days ago |parent

            "Glorified pattern matching" does so much work for the claim that it becomes meaningless.

            I've copied thousands of lines of complex code into an LLM, asking it to find complex problems like race conditions, and it has found them (and other unsolicited bugs) that nobody was able to find themselves.

            Oh it just pattern matched against the general concept of race conditions to find them in complex code it's never seen before / it's just autocomplete, what's the big deal? At that level, humans are glorified pattern matchers too and the distinction is meaningless.

            • nyrikki6 days ago |parent

              LLMs are good at needle-in-a-haystack problems, specifically when they have examples in the corpus.

              The counterpoint is that LLMs can't find a missing line in a poem even when they are given the original.

              PAC learning is basically existential quantification, and it has the same limits.

              But being a tool that can find a needle is not the same as finding all the needles, or even reliably finding a specific one.

              Being a general programming agent requires much more than just finding a needle.

              • hombre_fatal6 days ago |parent

                > The counterpoint is that LLMs can't find a missing line in a poem even when they are given the original.

                True, but describing a limitation of the tech can't be used to make the sort of large dismissals we see people make wrt LLMs.

                The human brain has all sorts of limitations like horrible memory (super confident about wrong details) and catastrophic susceptibility to logical fallacies.

                • mckn1ght6 days ago |parent

                  > super confident about wrong details

                  Have you not had this issue with LLMs? Because I have. Even with the latest models.

                  I think someone upthread was making an attempt at

                  > describing a limitation of the tech

                  but you keep swatting them down. I didn’t see their comments as a wholesale dismissal of AI. They just said they aren’t great at sufficiently complex tasks. That’s my experience as well. You’re just disagreeing on what “sufficiently” and “complex” mean, exactly.

            • unshavedyak5 days ago |parent

              > humans are glorified pattern matchers too and the distinction is meaningless.

              I'm still convinced that this is true. The more advances we make in "AI", the more I expect we'll discover that we're not as creative and unique as we think we are.

              • kakapo56725 days ago |parent

                I suspect you're right. The more I work with AI, the more clear is the trajectory.

                Humans generally have a very high opinion of themselves and their supposedly unique creative skills. They are not eager to have this illusion punctured.

              • bigfishrunning5 days ago |parent

                maybe you aren't...

                • unshavedyak5 days ago |parent

                  Whether or not we have free will is not a novel question. I simply come down on the side of us being more deterministic than we realize: our experiences and current hormonal state shape our output drastically.

                  Even our memories are mutable. Normal, healthy adults will, with full confidence, recite memories or facts "learned" just moments ago that are entirely fictional.

            • Workaccount26 days ago |parent

              Humans can't be glorified pattern matchers because they recognize that they aren't.[1]

              [1]https://ai.vixra.org/pdf/2506.0065v1.pdf

              The paper is satire, but it's a pretty funny read.

            • Isamu5 days ago |parent

              LLMs should definitely be used for brute-force searches, especially of branching spaces. Use them for what they do best.

              "Pattern matching" is thought of as linear, but LLMs are doing something more complex; they should be appreciated as such.

            • 0points6 days ago |parent

              > it has found them (and other unsolicited bugs) that nobody was able to find themselves.

              How did you evaluate this? I'd be interested in seeing the results.

              I am specifically interested in the number of false issues found by the LLM, and examples of those.

              • hombre_fatal6 days ago |parent

                Well, how do you verify any bug? You listen to someone's explanation of the bug and double check the code. You look at their solution pitch. Ideally you write a test that verifies the bug and again the solution.

                There are false positives, and they mostly come from the LLM missing relevant context like a detail about the priors or database schema. The iterative nature of an LLM convo means you can add context as needed and ratchet into real bugs.

                But the false positives involve the exact same cycle you do when you're looking for bugs yourself. You look at the haystack and you have suspicions about where the needles might be, and you verify.

                • 0points6 days ago |parent

                  > Well, how do you verify any bug?

                  You do or you don't.

                  Recently we've seen many "security researchers" doing exactly this with LLMs [1]

                  1: https://www.theregister.com/2025/05/07/curl_ai_bug_reports/

                  Not suggesting you are doing any of that, just curious what's going on and how you are finding it useful.

                  > But the false positives involve the exact same cycle you do when you're looking for bugs yourself.

                  In my 35 years of programming I never went just "looking for bugs".

                  I have a bug and I track it down. That's it.

                  Sounds like your experience is similar to using deterministic static code analyzers, but more expensive, more time-consuming, more ambiguous, and prone to hallucinating non-issues.

                  And that you didn't get a report to save and share.

                  So is it saving you any time or money yet?

                  • hombre_fatal6 days ago |parent

                    Oh, I go bug hunting all the time in sensitive software. It's the basis of test synthesis as well. Which tests should you write? Maybe you could liken that to considering where the needles will be in the haystack: you have to think ahead.

                    It's hard, time-consuming, meandering work to do this on a system, and it's what you might have to pay expensive consultants to do for you, but it's also how you beat an expensive bug to the punch.

                    An LLM helps me run all sorts of considerations on a system that I didn't think of myself, but that process is no different than what it looks like when I verify the system myself. I have all sorts of suspicions that turn into dead ends because I can't know what problems a complex system is already hardened against.

                    What exactly stops two in-flight transfers from double-spending? What about when X? And when Y? And what if Z? I have these sorts of thoughts all day.

                    I can sense a little vinegar at the end of your comment. Presumably something here annoys you?

                    • 0points6 days ago |parent

                      > I can sense a little vinegar at the end of your comment. Presumably something here annoys you?

                      Thanks for your responses.

                      Really sorry about the vinegar; it wasn't intentional. I may have some such personality disorder, idk. I'm blunt, and my communication skills aren't great.

                      • hombre_fatal4 days ago |parent

                        It's ok, I do worse things on HN.

                        My vice is when someone writes a comment where I have a different opinion than them, and their comment makes me think of my own thoughts on the subject.

                        But since I'm responding to them, I feel like it's some sort of debate/argument even though in reality I'm just adding my two cents.

      • nzach5 days ago |parent

        > It can show you good idiomatic code in small snippets.

        That's not really true for things that are changing a lot. I had a terrible experience the last time I tried to use Zig, for example: the code it generated was an amalgamation of two or three different versions of the language.

        I've even hit this same style of problem in Go, where the LLM sometimes generates a for loop in the "old style" (pre Go 1.22); see the sketch below.

        In the end, LLMs are a great tool if you know what needs to be done; otherwise they will trip you up.
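
        The sketch below (my guess at the kind of mismatch meant here; the code itself is made up) shows the old and new Go idioms side by side. Go 1.22 made loop variables per-iteration and allowed ranging over integers, so pre-1.22 habits stand out in modern code:

            package main

            import "fmt"

            func main() {
                dirs := []string{"a", "b", "c"}

                // Pre-1.22 habits an LLM may reproduce: a C-style counter loop,
                // plus copying the loop variable (dir := dir) so that closures
                // and goroutines don't all share one variable.
                for i := 0; i < len(dirs); i++ {
                    dir := dirs[i]
                    fmt.Println("old style:", dir)
                }

                // Go 1.22+: loop variables are per-iteration, and you can
                // range over an integer directly.
                for i := range len(dirs) {
                    fmt.Println("new style:", dirs[i])
                }
            }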

      • practice96 days ago |parent

        Humans cannot reason about code at scale either, unless you add scaffolding like diagrams, maps, and so on.

        Things that most teams don't do, or half-ass.

        • samrus6 days ago |parent

          It's not scaffolding if the intelligence itself is adding it. Humans can make their own diagrams and maps to help themselves; LLM agents need humans to scaffold for them. That's the setup for the bitter lesson.

    • minebreaker6 days ago |parent

      From what I can tell, LLMs tend to hallucinate more with minor languages than with popular ones. I'm saying this as a Scala dev. I suspect most discussions about LLM usefulness depend on the language in use. Maybe it's useful for JS devs.

      • philipkglass5 days ago |parent

        I write primarily Scala too. The frontier models seem reasonably good at it to me. One approach I sometimes use is to ask for it to write something complicated in Python, using only the standard library, and then once I have the Python refined I ask for a translation into Scala. This trick may not be applicable to your problems, but it works for me when I find an interesting algorithm in a paper and I'd like to use it but there is no existing Java/Scala implementation.

        I use AI assistance to generate code and review code, but I haven't had success trying to use it to update a substantial existing code base in Scala. I have tried using both Claude Code and Cursor and the handful of times I tried there were so many oversights and mistakes that resolving the mess was more effort than doing it manually. I'll probably try again after the next big model releases.

        Current frontier models have been the least useful to me when I'm asking them to review performance-critical code inside Scala. These are bits of my code base that I have written to use a lot of mutable variables, mutable data structures, and imperative logic to avoid allocations and other performance impediments. I only use this style for functions highlighted by profiling. I can ask for a code review and even include the reasoning for the unusual style in the docstring, but Claude and Gemini have still tried to nudge me back to much slower standard Scala idioms.

        • dearilos5 days ago |parent

          The problem with the tools you're using is that they're not built for code review.

          I'm building one that lets you write and enforce your own rules so you don't get the typical slop.

          Email in profile if you'd like to try it; I can send you a link.

      • noosphr6 days ago |parent

        It's more useful for Python devs, since pretty much all ML code is Python wrappers around C++.

      • dr-detroit5 days ago |parent

        [dead]

    • rm_-rf_slash5 days ago |parent

      Cursor and Claude Code were the asskicking I needed to finally get on the typescript bandwagon.

      Strong typing drastically reduces hallucinations and wtf bugs that slip through code review.

      So it’ll probably be the strongly typed languages that receive the proportionally greatest boost in popularity from LLM-assisted coding.

      • theshrike795 days ago |parent

        This is why I like Go for vibe programming.

        goimports makes everything look the same, the compiler is a nitpicky asshole that won’t let the program even compile if there is an unused variable etc.
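
        For instance, a trivial sketch (not from the thread) of the kind of thing the Go compiler rejects outright rather than warning about:

            package main

            import "fmt"

            func main() {
                leftover := 42 // without the next line: "declared and not used: leftover"
                _ = leftover   // must use the value or explicitly discard it
                fmt.Println("hello")
            }

        That hard failure forces LLM-generated dead code to be dealt with immediately instead of silently accumulating.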

        • nzach5 days ago |parent

          > won’t let the program even compile if there is an unused variable

          That is a really big advantage in the AI era. LLMs are pretty bad at identifying what is and what isn't relevant in the context.

          For developers this decision is pretty annoying, but it makes sense if you are using LLMs.

          • theshrike795 days ago |parent

            Yep, that's why I like strict tooling with LLMs (and actually real people as well, but that's a different conversation :D)

            When you have a standard build process that runs go vet, go test, golangci-lint, and goimports and then compiles the code, you can order the LLM to run it as the last step every time.

            This way at the very least the shit it produces is well-formed and passes the tests :)

            Otherwise they tend to just leave stuff hanging like "this erroring test is unrelated to the current task, let's just not run it" - ffs you just broke it, it passed perfectly before you started messing with the codebase =)

    • hiAndrewQuinn5 days ago |parent

      Most people who work in non-mainstream languages are, to some extent, making a statement. They care more about X than mere "popularity". (Sometimes X is money, hence why I still have Anki flashcards in rotation on OCaml, Intersystems Cache and Powershell.)

      If they do want "popularity" then the counter-counter-point is that it should be easier to get than ever. Just have one proficient person write a lot of idiomatic, relatively isolatable code, and then have an AI generate terabytes upon terabytes of public domain licensed variations and combinations on that code. If you make programming in the small a breeze, people will flock to your language, and then they can discover how to program in the large with it on their own time.

    • rapind6 days ago |parent

      I'm having a good time with Claude and Elm. The correctness seems to help a lot. I mean, it still goes wonky sometimes, but I assume that's the case with everyone.

    • RedNifre6 days ago |parent

      I'm not sure. I have a custom config format that combines a CSV schema with processing instructions, which I use for bank CSVs, and Claude was able to generate a perfect config for a new bank based only on one existing config plus its CSV, and the new bank's CSV.

      I'm optimistic that most new programming languages will only need a few "real" programmers to write a small amount of example code for the AI training to get started.

      • 0points6 days ago |parent

        > I'm optimistic that most new programming languages will only need a few "real" programmers to write a small amount of example code for the AI training to get started.

        CSV is not a complex format.

        Why do you reach this conclusion from toying with CSV?

        And why would you trust an LLM for economic planning?

        • theshrike795 days ago |parent

          It's not "economic planning", it's creating a CSV parser/converter.

          When the code is done, it's not like the LLM can secretly go flip columns at random.

    • greener_grass6 days ago |parent

      More people who are not traditionally programmers are now writing code with AI assistance (great!) but this crowd seems unlikely to pick up Clojure, Haskell, OCaml etc... so I agree this is a development in favor of mainstream languages.

      • __loam6 days ago |parent

        IMO there's been a big disconnect between people who view code as work product and those who view it as a liability/maintenance burden. AI is going to cause an explosion in the production of code; I'm not sure it will have the same effect on long-term maintenance, and I don't think rewriting the whole thing with AI again is a solution.

      • badgersnake6 days ago |parent

        And they don't understand it. So they get something that kinda half works, and then they're screwed.

      • lonelyasacloud6 days ago |parent

        Not sure.

        Even for small projects, the optimisation criteria are different if the human's role in the equation shifts from authoring to primarily reviewing.

    • kybernetikos5 days ago |parent

      Yes, I worry that we're in for an age of stagnation, where people are hesitant to adopt radically new languages or libraries or frameworks because the models will all be bad at them, and that disadvantage will swamp any benefit you might get from adopting an improved language/library/framework.

      Alternatively every new release will have to come with an MCP for its documentation and any other aspects that might make it easier for an LLM to talk about it and use it accurately.

    • jongjong6 days ago |parent

      Can confirm, you can do some good vibe coding with JavaScript (or TypeScript) and Claude Code. I once vibe coded a test suite for a complex OAuth token expiry issue while working on someone else's TypeScript code.

      Also, I had created a custom Node.js/JavaScript BaaS platform with custom Web Components and wanted to build apps with it. I gave it the documentation as an attachment and, surprisingly, it was able to modify an existing app to add entire new features. This app had multiple pages and Claude just knew where to make the changes. I was building a kind of marketplace app. One time it implemented the review/rating feature in the wrong place, and I told it "This rating feature is meant for buyers to review sellers, not for sellers to review buyers" and it fixed it exactly right.

      I think my second experience (plain JavaScript) was much more impressive and was essentially frictionless. I can't remember it making a single major mistake. I think only once it forgot to add the listener to handle the click event to highlight when a star icon was clicked but it fixed it perfectly when I mentioned this. With TypeScript, it sometimes got confused; I had to help it a lot more because I was trying to mock some functions; the fact that the TypeScript source code is separate from the build code created some confusion and it was struggling to grep the codebase at times. Though I guess the code was also more complicated and spread out over more files. My JavaScript web components are intended to be low-code so it's much more succinct.

    • golergka6 days ago |parent

      Recently I wrote a significant amount of Zig for the first time in my life, thanks to Claude Code. Is Zig a mainstream language yet?

      • ACCount366 days ago |parent

        It's not too obscure, but it's about the point where some coding LLMs get weak.

        Zig changes a lot, so LLMs reference outdated data, or no data at all, and resort to making a lot of 50%-confidence guesses.

      • 0x000xca0xfe6 days ago |parent

        Interesting. My experience learning Zig was that Claude was really bad at the language itself, to the point that it wrote obvious syntax errors and I had to touch up almost everything.

        With Rust, OTOH, Claude feels like a great teacher.

        • golergka6 days ago |parent

          Syntax and type errors get instantly picked up by the type checker and corrected, and as long as those failures stay in context, the LLM doesn't make the same mistakes again. Not something I ever have to pay attention to.

    • echelon6 days ago |parent

      AI seems pretty good at Rust, so I don't know. What sort of obscure languages are we talking about here?

      • behnamoh6 days ago |parent

        Haskell, Lisps (especially the most Common one!), Gleam or any other Erlang-wrapper like Elixir, Smalltalk, etc.

        • josevalim6 days ago |parent

          Phoenix.new is a good example of a coding agent that can fully bootstrap realtime Elixir apps using Phoenix LiveView: https://phoenix.new/

          I also use coding agents with Elixir daily without issues.

          • ModernMech6 days ago |parent

            See this is what kills me about these things. They say they built this system that will build apps for you, yet they advertise it using a website that chews through my CPU and GPU. All this page does is embed a YouTube video, why is my laptop's fan going full blast? And I'm supposed to trust the code that emanates from their oracle coding agent? What are we doing here people??

          • arrowsmith6 days ago |parent

            Yes, Claude 4 is very good at Elixir.

      • smackeyacky6 days ago |parent

        It's really struggling on old stuff like VB.NET, but C# is mostly fine.

      • mrheosuper6 days ago |parent

        Rust is far from obscure.

        Some HDLs should fit the bill: VHDL, Verilog, or SystemC.

        • vitorsr5 days ago |parent

          I taught Digital Design this semester; all models output nonsensical VHDL. The only exception is reciting "canonical" components available in the technical and scientific literature (e.g., [1]).

          [1] https://docs.amd.com/r/en-US/ug901-vivado-synthesis/Flip-Flo...

      • apwell236 days ago |parent

        Same. I'm developing a bunch of Neovim plugins, and I haven't had any background in Neovim or Lua.

      • m00dy6 days ago |parent

        Rust is the absolute winner of the LLM era.

        • danielbln6 days ago |parent

          By what metric? I still see vastly more Python and Typescript being generated, and hell, even more golang. I suppose we are all in our own language bubbles a bit.

          • ModernMech6 days ago |parent

            Python code generated by an LLM is like a landmine: it may compile, but there could be runtime errors lurking that will only detonate when the code is executed at some undetermined point in the future.

            Rust code has the property that if it compiles, it usually works. True there are still runtime errors that can occur in Rust, but they're less likely going to be due to LLM hallucinations, which would be caught at compile time.

            • danielbln6 days ago |parent

              I mean, that is true for any interpreted language. That's why we have type checkers, LSPs, tests, and so on. Still not bulletproof, but also not the complete time bomb some commenters make it out to be. Hallucinations are not an issue in my day-to-day; stupid architecture decisions and overly defensive coding practices, those more so.

              • ModernMech6 days ago |parent

                Right, that's why good language design is still relevant in 2025: type checking only saves you if the language design and ecosystem are amenable to type checking. If the LLM can leverage typing information to yield better results, then languages with more type annotations throughout the code and ecosystem will extract more value from LLMs in the long term.

              • bigfishrunning5 days ago |parent

                > overly defensive coding practices

                Can you elaborate a bit here? In my experience, most code I come into contact with isn't nearly defensive enough. Is AI-generated code more defensive than the median?

          • m00dy6 days ago |parent

            I don't have hard data to back it up, but LLMs make writing code super easy now. If the code compiles, you've basically filtered out the hallucinations. That's why writing in Python or TypeScript feels kind of pointless. Rust gives you memory safety with no garbage collector and just overall makes more sense; it's way better than Go. Honestly, choosing anything other than Rust feels like a risky gamble at this point.

            • spacechild16 days ago |parent

              Rust only really makes sense in settings where you would have otherwise used C or C++, i.e. you need the best possible performance and/or you can't afford garbage collection. Otherwise just use Go, Java or C#. There is no gamble with picking any of these.

              • echelon6 days ago |parent

                Rust is fantastic for writing HTTP servers, microservices, and desktop applications.

                OpenAI uses Rust for their service development as do a lot of other big companies.

                It's a lot like Python/Flask, or even a bit like Go. It's incredibly easy to author [1] and deploy, and it runs super fast with no GC spikes or tuning. Super predictable five nines.

                Desktop apps sing when written in Rust. A lot of AI powered desktop apps are being written in Rust now.

                If you're going to reach for Go or Java (gRPC or Jetty or something) or Python/Flask, Rust is a super viable alternative. It takes the same amount of time to author, and will likely be far more defect free since the language encourages writing in a less error prone way and checks for all kinds of errors. Google did a study on this [2,3].

                [1] 99.9% of the time you never hit the borrow checker/lifetimes when writing server code as it's linear request scoped logic. You get amazing error handling syntax and ergonomics and automatic cleanup of everything. You also have powerful threading and async tools if you need your service to do work on the side, and those check for correctness.

                [2] "When we've rewritten systems from Go into Rust, we've found that it takes about the same size team about the same amount of time to build it," said Bergstrom. "That is, there's no loss in productivity when moving from Go to Rust. And the interesting thing is we do see some benefits from it. So we see reduced memory usage in the services that we've moved from Go ... and we see a decreased defect rate over time in those services that have been rewritten in Rust – so increasing correctness." https://www.theregister.com/2024/03/31/rust_google_c/

                [3] https://news.ycombinator.com/item?id=39861993

                • spacechild15 days ago |parent

                  > Rust is fantastic for writing HTTP servers, microservices, and desktop applications.

                  In which world is Rust fantastic for writing desktop applications? Where are the mature Rust UI frameworks?

                  > Desktop apps sing when written in Rust.

                  What does this even mean?

                  > A lot of AI powered desktop apps are being written in Rust now.

                  For example? And what do you mean by "AI powered desktop apps"?

                  • echelon5 days ago |parent

                    > In which world is Rust fantastic for writing desktop applications? Where are the mature Rust UI frameworks?

                    Rust has come a remarkably long way [1] !

                    It's better than any language that isn't C/C++. We have bindings to all of the major UI toolkits, plus a lot of native toolkits.

                    You can also use Electron/Tauri for Javascript, or Dioxus for something more performant. Egui is also really nice for dev tools.

                    > What does this even mean?

                    Rewrite It In Rust metrics tend to tell good stories.

                    Developer blogs (links escaping me right now) show positive performance gains for lots of consumer desktop software written in whole or in parts using Rust. Discord, Slack, lots of other apps are starting to replace under-performing components in Rust.

                    > For example? And what do you mean by "AI powered desktop apps"?

                    Stuff like Zed [2] and the open source Photoshop-killer I'm working on (user-guided volumetric rendering, real-time instructive Figma). The creator of egui works on Rerun [3], which does industrial/spatial visualization. Etc, etc.

                    [1] https://areweguiyet.com/

                    [2] https://zed.dev/

                    [3] https://rerun.io/

                    • spacechild15 days ago |parent

                      > [1] https://areweguiyet.com/

                      The only mature cross-platform UI frameworks I see in this list are written in C and C++ :) And of all the various Rust bindings for Qt, only a single one ("ritual") offers bindings to the Widgets API. Not only are these bindings unsafe, the project has also been abandoned. https://github.com/KDAB/cxx-qt/?tab=readme-ov-file#compariso.... Of course, this is not surprising, given that the API surface of Qt6 is huge and difficult (or impossible?) to safely wrap in Rust.

                      (The reality is that Rust's ownership model makes it rather awkward to use for traditional UI frameworks with object trees.)

                      Tauri looks neat, but the actual UI is a webview, so it's not everyone's cup of tea. (Certainly not mine.)

                      As you said, egui is cool for simple tools, but not suitable for complex desktop applications.

                      Yes, Rust has gotten quite a few desktop UI frameworks, but I don't see any industry standards yet. Don't forget that Qt has been in development for 30 years now! Rust has a lot of catching up to do.

              • m00dy6 days ago |parent

                If you use an LLM with C or C++, stuff like pointer arithmetic or downcasting can be tricky. The code might compile just fine, but you could run into problems at runtime. That's why Rust is the only way...

                • spacechild15 days ago |parent

                  > The code might compile just fine, but you could run into problems at runtime.

                  Obviously, this can also happen with Rust, but certainly less so than with C or C++, I'll give you that. But how is Rust the only way when there's also Java, Go, C#, Swift, Kotlin, etc.?

            • yahoozoo6 days ago |parent

              Does nobody write business logic in Rust? All you ever hear is "if it compiles, it works", but you can write a compiling Rust program that says "1 + 1 = 3". Surely an LLM can still hallucinate.

              • m00dy6 days ago |parent

                You also write unit tests, which is something baked into the Rust std toolchain.
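
                Sketching both comments together, in Go rather than Rust for consistency with the other examples in this thread (Go's toolchain likewise bakes testing in via go test; all names here are made up): the hallucinated "1 + 1 = 3" compiles cleanly, and it's the test that catches it.

                    // adder.go
                    package adder

                    // Add compiles fine despite being wrong ("1 + 1 = 3").
                    func Add(a, b int) int { return a + b + 1 }

                    // adder_test.go (a separate file in the same package)
                    package adder

                    import "testing"

                    // go test fails here even though the compiler was happy:
                    //   Add(1, 1) = 3, want 2
                    func TestAdd(t *testing.T) {
                        if got := Add(1, 1); got != 2 {
                            t.Fatalf("Add(1, 1) = %d, want 2", got)
                        }
                    }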

      • bugglebeetle6 days ago |parent

        I’m blown away by how good Gemini Pro 2.5 is with Rust. Claude I’ve found somewhat disappointing, although it can do focused edits okay. Haven’t tried any of the o-series models.

    • arrowsmith6 days ago |parent

      Ehhhh, a year ago I'd have agreed with you — LLMs were noticeably worse with Elixir than with bigger langs.

      But I'm not noticing that anymore, at least with Elixir. The gap has closed; Claude 4 and Gemini 2.5 both write it excellently.

      Otoh, if you wanted to create an entirely new programming language in 2025, you might be shit outta luck.

      • drbojingle5 days ago |parent

        Unless the new language was a superset of an existing language. I think a stricter version of Python or even TypeScript could be doable.

        • arrowsmith5 days ago |parent

          Good point

      • rglover5 days ago |parent

        > Otoh, if you wanted to create an entirely new programming language in 2025, you might be shit outta luck.

        This just made me really sad. That effectively means that we'll plateau indefinitely as a civilization (not just on programming languages, but anything where the LLM can create an artificial Lindy effect).

      • bigfishrunning5 days ago |parent

        > Otoh, if you wanted to create an entirely new programming language in 2025, you might be shit outta luck.

        Or, do what you want, and don't worry what language people who have no interest in programming use.

  • dogleash5 days ago

    I wonder how much it's changing the learning curve vs just making the experience more comfortable.

    >For someone who spent a decade as a “Ruby developer,” becoming a multi-language developer in less than a year feels revolutionary.

    Revolutionary? They've snitched on themselves that they have no frame of reference to make that claim. It would have taken "less than a year" with or without AI. They just spent 10 years not trying.

    Everyone's first language learning experience is also learning to program. Learning a new language once you have years of professional programming practice is completely different.

    • thegeomaster5 days ago |parent

      Same here. Reading the article, I could not really relate to the experience of being a single-language developer for 10 years.

      In my early days, I identified strongly with my chosen programming language, but people way more experienced than me taught me that a programming language is a tool, and that this approach is akin to saying "well, I don't know about those pliers, I am a hammerer."

      My personal feeling from working across a wide range of programming languages is that it expands your horizons in a massive (and hard to describe) way, and I'm happy that I did this.

      • mathgeek5 days ago |parent

        Good analogy IMHO. Knowing whether a given language is a tool vs a toolbox is important.

      • ryandv5 days ago |parent

        The idiosyncrasies of Ruby, like Perl and JavaScript, lead to a certain kind of brain damage that make it difficult to build correct mental models of computing that can then generalize to other languages.

        • jonhohle5 days ago |parent

          Unless you’re writing instructions for a Turing machine the impedance mismatch between the real world and “computing” is always going to have idiosyncrasies. You don’t have to like a language to understand its design goals and trade offs. There are some very popular languages with constraints or designs that I feel are absurd, redundant, or counterproductive but I cannot think of a (mainstream) language where I haven’t seen someone much smarter than me do amazing things.

          The language I consider the lamest, biggest impediment to learning computer science is used by some of the smartest people on the planet to build amazing things.

        • pryelluw5 days ago |parent

            Although I disagree with your opinion (and who cares?), your comment reminded me of Wimp Lo.

          https://m.youtube.com/watch?v=d696t3yALAY

          • ryandv5 days ago |parent

            This is profoundly racist.

            • pryelluw5 days ago |parent

              Thank you, but I wasn’t going for racist.

                What you may have missed, from the perspective of your vertically scaled horse, is that you compared learning certain models to a mental disability. It makes calling my comment racist similar to the whole pot/kettle thing.

              However, I do appreciate reading about such opinions because it offers a peek into the elitism that surrounds programming languages.

                Also, as a person from a non-traditional and non-privileged background, I'm a little unsure about how to proceed. Shall we cut our losses and move on?

              • skvmb5 days ago |parent

                  > Thank you

                  LMAOOOOO I love the response, it's similar to "Thanks for noticing"

                  B A S E D

            • taskforcegemini5 days ago |parent

                How is that video racist? Calling it racist appears to be much more racist.

        • nazgulsenpai5 days ago |parent

          From my own personal experience I'd add Visual Basic to that list.

          • thegeomaster5 days ago |parent

            I never understood the hate. Beyond the stranger syntax, it's not terribly different from a language such as Pascal. It's an old imperative language without too much magic (beyond strange syntax sugar).

          • hattmall5 days ago |parent

            Terrible language, excellent tools.

        • WatermelonApe5 days ago |parent

          [dead]

    • A4ET8a8uTh0_v25 days ago |parent

      << It would have taken "less than a year" with or without AI. They just spent 10 years not trying.

      I suppose we can mark this statement as technically true. I can only attest to my experience using o4 for Python mini-projects (popular, so lots of functional code to train on).

      The thing I found is that, without it, all the interesting little curveballs I encountered would likely have thrown a serious wrench into the process (yesterday, it was Unraid's specific way of handling VM XML). All of a sudden I'm not just learning how to program, but learning how qemu actually works, and it's a lot more seamless than having to explore it 'on my own'. That little detour took half a day when all was said and done. There was another little detour with Docker (again, Unraid-specific issues), but all was overcome, because now I had 4o to guide me.

      It is scary, because it can work, and work well (even when correcting for randomness). FWIW, my first language was BASIC, way back when.

      • skydhash5 days ago |parent

        Lots of people just went the traditional way of learning things from first principles. So you don't suddenly learn Docker; you learn how virtualization works. And that's easy because you already know how computer hardware works and its relation to the OS. And the networking course was done years ago, so you have no issue talking about bridges and routing. It's an incremental way of learning, and before realizing it, you're studying distributed algorithms.

        • A4ET8a8uTh0_v25 days ago |parent

          Eh, it works in the abstract, when you are intentional about your long-term learning path, but I tend to (and here I think a lot of people are the same way) be more reactive and less intentional, which in practice means that if I run into a problem I don't enroll in a course; I do what I can with the resources available. It is a different approach, and both have uses.

          Incremental is obviously the ideal version, especially from a long-term perspective if the plan for it is decent, but it is simply not always as useful in the real world.

          Not to search very far for an example: I can no longer spend more than a day pursuing random threads (or intentional ones, for that matter).

          I guess what I am saying is: learning from first principles is a good idea if you can do it that way. And it didn't work for the Docker example: you learn how things should work, but when playing in the real world, you quickly find out there are interesting edge cases, exceptions, and issues galore. How things should work only gets you so far.

          • skydhash4 days ago |parent

            My philosophy is something like GTD, where you have tasks, projects, and areas of responsibility. Tasks are the here and now, akin to the snippets of information you have to digest.

            Projects have a more long-term objective; what's important is the consistency and alignment of the individual tasks. In learning terms, a project may be a book, a library's docs, some codebase. The most essential aspect is that they have an end condition.

            Areas are just things that you have to do or take care of; the end condition is not fully set. In learning terms, these are my interests, like drawing or computer tech. As long as something interesting pops up, I will consume it.

            It's very rare for me to have to learn stuff without notice. Most of it falls under an objective or something I've been pursuing for a while.

    • KronisLV5 days ago |parent

      > They just spent 10 years not trying.

      Being in the zone of proximal development will do that: https://en.wikipedia.org/wiki/Zone_of_proximal_development

      We shouldn't dismiss high friction problems.

  • deterministican hour ago

    I know quite a few programming languages well enough to write production software using them. It didn't take long to learn.

    The hardest to learn for me was Haskell. Because of how different the thinking behind the language is. But it was great to learn even though I don't use it daily.

    I highly recommend learning different programming languages. It's a great way to get exposed to different ways of thinking about how software should be developed.

  • Maro6 days ago

    This is great, and I think this is the right way to use AI: treat it as a pair programming partner and learn from it. As the human learns and becomes better at both programming and the domain in question (eg. a Ruby JIT compiler), the role of the AI partner shifts: at the beginning it's explaining basic concepts and generating/validating smaller snippets of code; in later stages the conversations focus on advanced topics and the AI is used to generate larger portions of code, which now the human is more confident to review to spot bugs.

  • cultofmetatron6 days ago

    I think AI will push programming languages in the direction of stronger Hindley-Milner type checking. Haskell is brutally hard to learn, but with enough of a data set to learn from, it's the perfect target language for a coding agent: it's high level, can be formally verified using well known algos, and a language server could easily be connected to the AI agent via some MCP interface.

    • js86 days ago |parent

      I wish, but the opposite seems to be coming: Haskell will have less support from coding AIs than mainstream languages.

      I think people who care about FP should think about what is appealing about coding in natural language and what is missing from programming in strongly typed FP languages such as Haskell and Lean. (After all, what attracted me to Haskell compared to Python was that the typechecking is relatively cheap thanks to type inference.)

      I believe that natural language in coding has allure because it can express the outcome in a fuzzy manner. I can "handwave" certain parts and the machine fills them out. I further believe that, to make this work well with formal languages, we will need some kind of fuzzy logic in which we specify the programs. (I particularly favor certain strong logics based on MTL, but that aside.) Unfortunately, this line of research seems to have been pretty much abandoned in AI in favor of NNs.

    • Paradigma116 days ago |parent

      I used an LSP MCP tool with an LLM and was so far a bit underwhelmed. The problem is that LSP is designed for human consumption, and LLMs have different constraints.

      LLMs don't use the LSP exploratively to learn the API; you just give it to them as context or an MCP tool. LLMs are really good at pattern matching and won't make type errors as long as the type structure and constructs are simple.

      If they are not simple, it's not a given that the LLM can solve the problem, or that the user can understand the solution.

    • agent2815 days ago |parent

      IMO, Haskell is less helpful for an LLM because of its advanced language features. The LLM is reasoning about the language textually. Since Haskell is very terse, the LLM would need a very strong model of how the language works.

      I think languages with more minimal features and really good compile time errors would work well with LLMs. In particular, I've heard multiple people say how good LLMs are at generating Go.

      Personally, I like languages with type inference so this wouldn't be my preference.

    • seanmcdirmid6 days ago |parent

      We might see wider adoption of dependently typed languages like Agda. But the limited corpus might become the limiting factor; I'm not sure how well knowledge transfers as the languages get more different.

      • ipnon6 days ago |parent

        It's getting cheaper and cheaper to generate corpora by the day, and Agda has the advantage of being verifiable like Lean. So you can simulate large amounts of programs and feed these back into the model. I think this is a major reason why we're seeing remarkable improvements in formal sciences like the recent IMO golds, and yet LLMs are still struggling to generate aesthetically pleasing and consistent CSS. Imagine a high schooler who can win an IMO gold medal but can't center a div!

        • andrewflnr6 days ago |parent

          It seems like "generating" a corpus in that situation is more like a search process guided by prompts and more critically the type checker, rather than a straight generation process right? You need some base reality or you'll still just have garbage in, garbage out.

    • tsimionescu6 days ago |parent

      > can be formally verified using well known algos

      Is there any large formally verified project written in Haskell? The most well known ones are C (seL4 microkernel) and Coq+OCaml (CompCert verified C compiler).

      • aetherspawn6 days ago |parent

        Well, Haskell has GADTs, newtype wrappers, and type interfaces, which can be (and often are) used to implement formal verification using metaprogramming, so I get the point he was making.

        You pretty much don’t need to plug another language into Haskell to be satisfied about certain conditions if the types are designed correctly.

        • tsimionescu6 days ago |parent

          Those can all encode only very simplistic semantics of the code. You need either a model checker or dependent types to actually verify any kind of interesting semantics (such as "this sort function returns the numbers in sorted order", or "this monad obeys the monad laws"). GADTs, newtypes, and type interfaces are not significantly more powerful than what you'd get in, say, a Java program in terms of encoding semantics into your types.

          Now, I believe GHC also has support for dependent types, but the question stands: are there any major Haskell projects that actually use all of these features to formally verify their semantics? Is any part of the Haskell standard library formally verified, for example?

          And yes, I do understand that type checking is a kind of formal verification, so in some sense even a C program is "formally verified", since the compiler ensures that you can't assign a float to an int. But I'm specifically asking about formal verification of higher level semantics - sorting, monad laws, proving some tree is balanced, etc.

      • gizmo6865 days ago |parent

        seL4 has a Haskell implementation. It only runs under a simulator (as opposed to on real hardware). Their main proof strategy is to prove the Haskell version correct; then prove the C version matches the Haskell version.

        • tsimionescu5 days ago |parent

          I admit that I was not aware of their use of Haskell, so thanks for bringing this up! It motivated me to go and actually read (parts of) their paper [0].

          > Their main proof strategy is to prove the Haskell version correct; then prove the C version matches the Haskell version.

            However, I don't think this is right. By my understanding of the paper, the Haskell implementation isn't proven to be correct; it's more of a prototyping tool. The Haskell version is used as an intermediate representation that can be automatically translated into Isabelle/HOL and quickly proven to have various properties. If the proofs don't work at this level, it's easy to change, re-translate, and re-check. To make it easily translatable to Isabelle/HOL, they forego many Haskell features, such as typeclasses or making use of laziness. The Haskell program is also designed to match the eventual C implementation, so it uses C-like data types, pointers, etc., not native Haskell data structures, nor does it make use of the Haskell GC.

          The proofs achieved at this level apply to the auto-generated Isabelle/HOL translation of the Haskell code. The Haskell runtime is not modeled here in any way, and as such the proofs do not hold for the Haskell executable code that you can run on the simulator.

          The Haskell prototype was then translated manually to C, trying to keep as close to a 1:1 translation as possible, while still allowing various micro-optimizations. Then, the exact C code was manually translated back to Isabelle/HOL, where they also had a complete model of the semantics of every C statement that they used (they use a subset of C, with the biggest missing feature being references to local variables - you can't use &x if x is a local variable in their C code). Isabelle/HOL was then used to check that this manually written implementation implemented the abstract specification with all of the proven guarantees.

            So, the Haskell part was ultimately a convenience, not a strictly necessary part of the proof (though the whole thing would probably not have been feasible without it). It was a decent middle ground between being possible for the kernel devs to define the code in, and for the formal verification guys to translate to Isabelle and verify. If they had tried to write the C code directly, the cost of translating any change to the C code back to Isabelle for verification would have been too high. Conversely, having the kernel devs learn enough Isabelle to define the prototype code directly in Isabelle would have been too complicated as well; and I believe the Isabelle code could not be easily executed for the more regular debugging done during development, the way the Haskell code could be run on the simulator.

          [0] https://www.sigops.org/s/conferences/sosp/2009/papers/klein-...

  • ChrisMarshallNY6 days ago

    > AI as a Complementary Pairing Partner

    That's how I've been using it.

    I treat it as a partner that has a "wide and shallow" initial base, but the ability to "dive deep," when I need it. Basically, I do a "shallow triage," to figure out what I want to focus on, then I ask it to "dive deep," on my chosen topic.

    I haven't been using it to learn new languages, but I have been using it to learn new concepts and techniques.

    Right now, I'm reading up on implementing a webauthn backend and passkey integration in my app. It's been instrumental, and it's coming along great. I hadn't had any previous experience, and it's helping me to learn.

    I should note that it has given me wrong examples; notably, it assumed a deprecated dependency version, which I had to debug and figure out a fix for. That was actually a good thing, as it helped me learn the "ins and outs" a bit better.

    I'm still not convinced that I'd let AI just go ahead and ship an application from scratch, without any intervention on my part. It often makes mistakes; not serious ones, but ones that would be bad, if they shipped.

  • iparaskev6 days ago

    > The real breakthrough came when I stopped thinking of AI as a code generator and started treating it as a pairing partner with complementary skills.

    I think this is the most important thing mentioned in the post. For the AI to actually help you with languages you don't know, you have to question its solutions. I have noticed that asking questions like "why are we doing it like this?" and "what will happen in the x, y, z scenario?" really helps.

    • solids6 days ago |parent

      My experience is that each question I ask or point I make produces an answer that validates my thinking. After two or three iterations in a row in this style I end up distrusting everything.

      • iparaskev6 days ago |parent

        This is a good point. Lately I have been experimenting with phrasing the question in a way that makes it believe I prefer what I am suggesting, while the truth is that I don't.

        For example:

        - I implement something.
        - Then I ask it to review my code and suggest alternatives, where it will likely say my solution is the best.
        - Then I say something like "Isn't the other approach better for __reason__?", where the approach might not even be something it suggested.

        And it seems that sometimes it gives me some valid points.

      • samrus6 days ago |parent

        This is very true. Constant insecurity for me. One thing that helps a little is asking it to search for sources to back up what it's saying, but Claude has hallucinated those as well. Perplexity seems to be good at staying true to sources, but idk how good it is at coding itself.

      • skydhash5 days ago |parent

        Which is why I read books and articles instead. The information inside them is isolated from my experience. The LLM experience is like your reflection in a deformed mirror talking back to you.

      • tietjens6 days ago |parent

        Yes, this. It's the biggest problem and danger in my daily work with LLMs, and my entire working method with them is shaped around it. Instead of asking for answers or solutions, I give the model a line of thought or logical chain, then ask it to continue down the path, forcing it to keep explaining its reasoning while I interject and continue to introduce uncertainty. Suspicion is one of the most valuable things I need to make any progress. In the end it's a lot of work, and very much reading and reasoning.

    • danielbln6 days ago |parent

      In addition, I frequently tell it to ask clarifying questions. Those often reveal gaps in understanding, or just plain misunderstandings, that you can then nip in the bud before it has generated a million tokens.

  • AstroBen6 days ago

    I've been diving into a Swift codebase (a new language to me) over the last week, and AI has been incredibly helpful in answering my questions and speeding up my learning.

    But meaningfully contributing to a complex project without the skills? Not a chance I'd put my name on the contributions it makes. I know how many mistakes these tools make in the languages I know well; they also make them in the ones I don't. Only now I can't review the output.

    • balder19915 days ago |parent

      Yeah, I don’t understand how people feel confident with that at all.

      Whenever I dive into a programming language I don’t know, I realize the amount of stuff I need to get used to before I feel confident reviewing code in it.

      Also supposedly language barriers are smaller than ever, yet WhatsApp is killing its desktop app on Windows in favor of using the Web-based version.

      • itsafarqueue5 days ago |parent

        Because sixty percent of the time it works every time.

  • thefz6 days ago

    I wanted to test Gemini's code generation, so I asked it for a bash script iterating through an array of directory names and executing a command for each one.

    It got it wrong. The command was constructed outside of the for loop and never updated inside it, effectively making it useless (see the sketch below for the shape the logic needed).

    Luckily I know bash, so I spotted it immediately. But I wonder how AI is "removing programming language barriers" when someone who does not know the output language cannot spot even such a glaring issue.
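
    For comparison, here is the shape the logic needs to take in any language: a rough Go equivalent (hypothetical, not what Gemini produced) in which the command is constructed inside the loop, which is exactly what the generated bash failed to do.

        package main

        import (
            "fmt"
            "os/exec"
        )

        func main() {
            dirs := []string{"alpha", "beta", "gamma"}
            for _, dir := range dirs {
                // Build the command inside the loop so each iteration gets a
                // fresh one; the buggy script built it once, outside the loop,
                // so every iteration ran the same stale command.
                cmd := exec.Command("ls", "-l")
                cmd.Dir = dir
                out, err := cmd.CombinedOutput()
                if err != nil {
                    fmt.Printf("%s: %v\n", dir, err)
                    continue
                }
                fmt.Printf("==> %s\n%s", dir, out)
            }
        }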

    • theshrike795 days ago |parent

      That's a bash problem, not an LLM problem.

      I’ve done similar tools with Go and haven’t had any issues.

      One of them goes through a tree of dirs, finds zip files with images, unzips them, benchmarks which image format is the most compact, and creates a new zip with that format.

      Got it right the first go (hehe pun)

    • jiggawatts6 days ago |parent

      I think this is more of a criticism of bash than of Gemini.

      Also, I reject out of hand any criticism of an AI that specifies only the brand ("ChatGPT") and not the specific model version, settings, etc...

      So, for example, Gemini 2.5 Pro (Jan 2025) with temperature 0.15 prompted with: "Write a short, idiomatic bash script that takes a file containing a list of directory names, changes to each one, runs a command, and then returns to the original working directory."

      Spits out this:

          #!/bin/bash
      
          # A script to run a command in a list of directories.
      
          # Exit if the wrong number of arguments are provided.
          if [[ $# -lt 2 ]]; then
            echo "Usage: $0 <file_with_dirs> <command> [args...]" >&2
            exit 1
          fi
      
          DIR_FILE="$1"
          shift # The rest of the arguments ("$@") are now the command.
      
          # Ensure the directory list file exists and is readable.
          if [[ ! -r "$DIR_FILE" ]]; then
            echo "Error: Cannot read file '$DIR_FILE'" >&2
            exit 1
          fi
      
          # Read the file line by line.
          while IFS= read -r dir; do
            # Skip empty lines or non-existent directories.
            if [[ -n "$dir" && -d "$dir" ]]; then
              echo "==> Entering '$dir'..."
              # Run command in a subshell to keep the 'cd' local.
              # '&&' ensures the command only runs if 'cd' succeeds.
              (cd "$dir" && "$@")
              echo "<== Finished with '$dir'."
            else
              echo "--> Skipping invalid directory: '$dir'" >&2
            fi
          done < "$DIR_FILE"
      
          echo "All done."
      
      That worked for me, but I was testing it in WSL 2 where I got a gibberish error... which was because I edited the file in Windows Notepad and the line endings were confusing bash. Gemini helpfully told me how to fix that too!

      Something I found amusing, which again is a criticism of bash rather than of the AI, is that this fails to process the last line if it isn't terminated with a \n character.
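
      Both issues are easy to harden against, though. A minimal sketch (the sed -i form assumes GNU sed):

          # Strip Windows CR line endings up front (dos2unix works too).
          sed -i 's/\r$//' "$DIR_FILE"

          # The '|| [[ -n "$dir" ]]' clause also processes a final line
          # that lacks a trailing newline.
          while IFS= read -r dir || [[ -n "$dir" ]]; do
            [[ -d "$dir" ]] && (cd "$dir" && "$@")
          done < "$DIR_FILE"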

      PS: This is almost a one-liner in PowerShell, and works with or without the final terminator character:

          Push-Location
          Get-Content dirs.txt | cd -PassThru | Foreach-Object { echo "Hello from: $pwd" }
          Pop-Location
      
      Gemini also helped me code-golf this down to:

          pushd;gc dirs.txt|%{cd $_;"Hello from: $pwd"};popd
      • thefz6 days ago |parent

        > I think this is more of a criticism of bash than of Gemini.

        I can write correct bash; Gemini in this instance could not.

        > Also, I out-of-hand reject any criticism of an AI that specifies only the brand ("ChatGPT") and not the specific model version

        Honestly I don't care, I opened the browser and typed my query just like anyone would.

        > PS: This is almost a one-liner in PowerShell, and

        Wonder how this is related to "I asked Gemini to generate a script and it was severely bugged"

        • jiggawatts6 days ago |parent

          > typed my query just like anyone would.

          Yes, well... are you "anyone", or an IT professional? Are you using the computer like my mother, or like someone that knows how LLMs work?

          This is a very substantial difference. There's just no way "anyone" is going to get useful code out of LLMs as they are now, in most circumstances.

          However, I've seen IT professionals (not necessarily developers!) get a lot of utility out of them, but only after switching to specific models in "API playgrounds" or some similarly controlled environment.

          • thefz6 days ago |parent

            > Yes, well... are you "anyone", or an IT professional? Are you using the computer like my mother, or like someone that knows how LLMs work?

            I have more than 15 years of programming experience. I do not trust the output of LLMs a single bit. This just proved my point. I honestly don't care if I used the "wrong" model or the "wrong" query, which was already quite descriptive of what I wanted anyway.

            No need to get super defensive, you can keep spending your time playing code golf with Gemini if you want. My experience just corroborates what I already thought; code generation is imprecise and error prone.

            • jiggawatts5 days ago |parent

              > I honestly don't care if I used the "wrong" model or the "wrong" query

              If you used the wrong SQL query, would you expect the right answer?

              If you used the wrong database, would you expect your app to work well?

              • thefz5 days ago |parent

                Not even remotely comparable

      • oneshtein6 days ago |parent

          for dir in $(cat dirs.txt); do ( cd "$dir"; echo "Hello from $(pwd)" ); done
        • lucianbr6 days ago |parent

          Unbelievable how long and convoluted the other answer is, and that it is presented as proof that the AI provided a good solution.

          • jiggawatts5 days ago |parent

            I asked for a "script". Asking for a one-liner gets you exactly that, with no input validation, comments, etc...

            Fundamentally, bash is just... verbose.

            I.e.: Here's the same task implemented in two scripting languages:

            PowerShell is 5 lines of code: https://learn.microsoft.com/en-us/azure/virtual-machines/ena...

            Bash is several pages: https://learn.microsoft.com/en-us/azure/virtual-machines/ena...

            • skydhash5 days ago |parent

              That's more a reflection of the environment than of the scripting language. On a mostly bare Linux (Debian netinstall or Alpine), you are left with loads of text to parse. But as soon as that script becomes unwieldy, the next option is to write an actual program. Windows can afford to do that, because there are only a few versions out there. But there are lots of different Linux installations, and even the kernel is not guaranteed to be vanilla. So you either write a script like this, or you go find programs that can help you out.

        • thefz5 days ago |parent

          Out of curiosity, isn't () spawning a subshell?
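
          If so, the cd only affects the parenthesized group, something like:

              pwd                # /home/me
              ( cd /tmp; pwd )   # /tmp
              pwd                # still /home/me, the parent shell is unchanged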

  • Pamar6 days ago

    Am I the only one that remembers how Microsoft tried to convince everyone to adopt .Net because this way you could have teams where one member could use J#, another use Fortran.Net (or whatever the name was) and old chaps could still contribute by writing Cobol# and everything would just magically work together and you would quadruple productivity just by leveraging the untapped pool of #Intercal talent out there?

    • mikert896 days ago |parent

      Wish I could go back to a time when I believed stuff like this

  • globular-toast6 days ago

    We learn natural languages by listening and trying things to see what responses we get. Some people have tried to learn programming the same way too. They'd just randomly try stuff, see if it compiles, then see if it gives what they were expecting when they run it. I've seen it with my own eyes. These are the worst programmers in existence.

    I fear that this LLM stuff is turning this up to 11. Now you're not even just doing trial and error with the compiler, it's trial and error with the LLM, and you don't even understand its output. Writing C or assembly without fully reasoning about what's going on is going to be a really bad time... No, the LLM does not have a working model of computer memory, it's a language model, that's it.

  • iLoveOncall6 days ago

    I don't think I've ever seen an experienced software engineer struggling to adapt to a new language.

    I have worked in many, many languages in the past and I've always found it incredibly easy to switch, to the point where you're able to contribute right away and be efficient after a few hours.

    I recently had to do some updates on a Kotlin project, having never used it (and not used Java in a few years either), and there was absolutely no barrier.

    • xnorswap6 days ago |parent

      I've seen plenty struggle.

      They don't struggle to write code, but they struggle to write idiomatic code.

      An experienced programmer introduced to a new language will write lots of code that works, but in a style idiomatic to their favoured language, be that C, C++, Rust, Python, etc.

      Every language has its quirks, and mastery is less about being able to write a for loop or any given algorithm in any given language, but more about knowing which things you should write, and which things you should be using the standard libraries for.

      I've literally seen C# consultants waste time writing their favourite .NET LINQ methods into a javascript library, so they can .Select(), .Where(), etc, rather than using .filter, .map, etc.

      Likewise I've seen people coming from C struggle to be as productive as they ought to be in C#, because they'd rather write a bunch of static methods with loops than get to grips with LINQ.

      Fully understanding a runtime (or compiler behaviour for AOT languages) and what is or isn't in standard libraries isn't something that can be mastered in a few hours.

    • RedNifre6 days ago |parent

      Bash might not be difficult, but it is very annoying, so I'm happy that the AI edits my scripts for me.

      • thefz6 days ago |parent

        > Bash might not be difficult, but it is very annoying

        Just shellcheck the hell out of it until it passes all tests.
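
        For example, it flags the word-splitting loop from upthread (one of its diagnostics, quoted from memory; exact output may vary):

            $ shellcheck loop.sh

            In loop.sh line 1:
            for dir in $(cat dirs.txt); do ( cd "$dir"; echo "Hello from $(pwd)" ); done
                       ^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.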

  • karmasimida6 days ago

    AI has basically removed my fear with regards to programming languages.

    It almost never misses on explaining how certain syntax works.

  • snovymgodym5 days ago

    I don't know, have there really been meaningful "language barriers" between mainstream programming languages for the last 20 years or so?

    If I think about the 10 most common languages being used for application code right now, something like 80% of them can be described as "C or ALGOL's syntax with call-by-reference and automatic memory management". If you program for a living, I feel like you can switch between any of these without much effort. They're so similar to one another at a fundamental level, and there's lots of convergence going on as they adopt features from one another.

    Sure, doing C or C++ for the first time can be hard if you've never had to think about memory lifecycles before, but even that isn't so crazy if you're working with established patterns, pay attention to warnings, and use an aggressive linter.

    For languages with actual learning curves, they're just not as widely used (e.g. Rust, Ada SPARK, lisp, forth, ML).

  • lvl1555 days ago

    AI tools make it so much easier to shift gears between two or more languages. Before this year, it would take me at least a week to adjust going from Python to Rust to TS. Now, AI will just fill in the gaps and I know enough to recognize poor AI patterns.

  • jug4 days ago

    It’s kind of true. The amount of direct training material per language still matters quite a bit. They’re best at the giants, JavaScript and Python, but the thinner training data starts to show with Rust, and in particular with languages like Zig. So if you ask it to rewrite a Python app in C it’ll probably do better than if you ask it to write it in D, Zig, Nim, Crystal, what have you. Unfortunately I haven’t been able to find the excellent chart that benchmarked coding performance by language.

  • SubiculumCode6 days ago

    Seems like it would make people more averse... the variability of AI expertise by language is pretty large.

    • Paradigma116 days ago |parent

      LLMs learn and apply patterns. You can always give some source code examples and language docs as context, and it will apply those patterns, adapted, to the new language.

      Context windows are pretty large (Gemini 2.5 Pro is the largest, at 1 million tokens, ~750k words), so it does not really matter.

    • MattGaiser6 days ago |parent

      It just needs to be better than the human would be, for less effort. It does not need to be great.

    • karmasimida6 days ago |parent

      Let me just say it this way.

      AI is a much better, and therefore in some cases worse, language lawyer than humans could ever be.

  • rvz6 days ago

    This is why the most seasoned of engineers will be employed back to clean up the mess that these AI agents and vibe-coders have created.

    I suggest that the author properly read up on the technicals of these compiled languages before becoming fully dependent on an AI bot which, by his own admission, can lead both him and the chatbot in the wrong direction.

    Each of these languages has its own semantics, and they differ completely from one another; especially compiled languages like C/C++ and Rust versus Ruby and JavaScript (yuck).

  • sunrunner6 days ago

    What about the part of programming and software development that relies on programmatic/systemic thinking? How much is the language syntax itself part of any 'program' solution?

  • sillycube6 days ago

    Yes, I tried porting 200 lines of JS to Rust, with the features staying the same. One prompt to Claude 4.0 Sonnet and it was done. Worked perfectly.

    I still spent a few days studying Rust to grasp the basics.

  • graynk6 days ago

    Get back to me once you successfully write a Vulkan app with LLMs

    • Archit3ch6 days ago |parent

      Will Smith asks, "Can a robot write a Vulkan app?"

      The robot responds, "Can you?"

      • graynk6 days ago |parent

        I can not :(

  • Ozzie_osman6 days ago

    Agree. My team and I were just discussing that the biggest productivity unlock from AI in the dev workflow is that it enables people to more easily break out of their box. If you're an expert backend developer, you may not see huge lift when you write backend code. But when you need to do work on infrastructure or front-end, you can now much more easily unblock yourself. This unlocks a lot of productivity, and frankly, makes the work a lot more enjoyable.

  • physicsguy6 days ago

    I've noticed this at work, where I use Python frameworks like Flask/FastAPI/Django alongside Go. Go has the standard library handlers, but within that people are much less likely to follow specific patterns, and there are various composable bits available as add-ons.

    If you ask an LLM to generate a Go handler for a REST endpoint, it often does something a bit out of step with the rest of the code base. If I do it in Python, it's more idiomatic.

  • alentred6 days ago

    I wonder, are some programming languages more suitable for AI coding agents (or rather, LLMs) than others? For example, are syntax-heavy languages at a disadvantage? Is being verbose a good thing or a bad thing?

    P.S. Maybe we will finally see M-expressions for Lisp developed some day? :)

  • elzbardico4 days ago

    I shudder thinking of people who never wrote C, vibecoding C.

  • dearilos6 days ago

    AI coding agents help you solve the problem faster

    AI code review helps you catch issues you've forgotten about and eliminates the repetitive work

    These tools are helping developers create quality software - not replace them

    • kwancloudli4 days ago |parent

      See if you have the same thoughts after a year.

  • andrewstuart6 days ago

    I’ve been enjoying doing a bunch of assembly language programming - something I never had the experience of or capability to learn to competence or time to learn previously.

  • nikolayasdf1236 days ago

    true. doing pair programming with AI for the last 10 months, I got my skills from zero to sufficient proficiency (not expert yet) in a totally new language, Swift. the entry barrier is much lower now. researching advanced topics is much faster. typing code (unit tests, etc.) is much faster. code review is automated. it indeed makes the barrier for new languages and tools lower.

    • iLoveOncall6 days ago |parent

      I would expect anyone to get proficient in Swift after 10 months of using it, with or without AI...

      If AI had really a multiplying factor here, I'd expect you to BE an expert.

    • dearilos6 days ago |parent

      how'd you get your code review automated?

      • bigfishrunning5 days ago |parent

        Easy! By not doing any, and trusting the AI when it says "LGTM"

        • nikolayasdf1235 days ago |parent

          you would be surprised how many complex and non-complex bugs it finds all the time. and that is in addition to typo checks. super reliable. very useful bugs. almost 100% of the time I say "oh that's nice. saved me lots of time testing and debugging this. thanks!"

          sometimes it recommends high-level architectural changes.

          sometimes it detects "inconsistent" things that step out of line with non-trivial patterns. saved me tons of time in K8S and other large config YAMLs, and in just imperative code too (structures there repeat often too)

          and about the correctness of suggestions: they are obvious once you see them. tests pass by definition (if you have any before Copilot kicks in), so whatever Copilot suggests likely already passes the tests, yet still doesn't sit right with it. its intuition is pretty remarkable.

        • nikolayasdf1235 days ago |parent

          just check here: https://github.com/nikolaydubina/go-instrument/pull/53/agent...

          the amount of QA, real-world bash runs, exploratory Go programs, code tinkering, research, and manipulation is outstanding

          this poor thing did an incredible amount of QA and experimentation work to get it right. and it did! a very complex task.

          it required only nudging in the right high-level direction

      • nikolayasdf1235 days ago |parent

        GitHub Copilot

  • 0points6 days ago

    Hyperbole: AI isn't even trained on most programming languages.

    Compare it yourself by letting it generate JS/Python or something it trained a lot on, versus something more esoteric, like Brainfuck.

    And even in a common language, you'll hit brick walls when the LLM confuses different versions of the library you are using, or whatever.

    I had issues with getting AI generated rust code to even compile.

    It's simple: the less mainstream the language, the less exposure in the training set, and the worse the output.

  • dogleash5 days ago

    >We can start contributing meaningfully from day one, learning what we need as we go.

    Can you though? Or is it just not bad enough for your coworkers to bother telling you how bad it is?

    I use AIs daily. But that doesn't mean I don't get mad when I'm reviewing a coworker's work and have to fight whatever bullshit an AI convinced them of. I can't just brush it off as AI nonsense because 1) it might be their honest attempt at work without AI and 2) if it is AI, they've already proven they don't know how to improve it.

    • bitpush5 days ago |parent

      > Can you though?

      Take a look at this Haskell program that an LLM wrote. I do not write Haskell, but I can review the code just fine and say that it is doing what I want.

        -- Simple multiplication function
        multiply :: Num a => a -> a -> a
        multiply x y = x * y
      
        -- Main function for running the program
        main :: IO ()
        main = do
            putStrLn "Enter the first number:"
            input1 <- getLine
            putStrLn "Enter the second number:"
            input2 <- getLine
            
            let num1 = read input1 :: Double
            let num2 = read input2 :: Double
            let result = multiply num1 num2
            
            putStrLn $ "Result: " ++ show num1 ++ " * " ++ show num2 ++ " = " ++ show result
      
      
      If I had to write this by myself, it'd have taken at least 20 mins. First I'd have to learn how the main function is set up, how type definitions work, what putStrLn is, how to read input, how to define a multiply function, etc.

      It really is like an NP problem, come to think of it: verifying a solution is much easier than producing one.

      • skydhash5 days ago |parent

        > First I have to be learn how main function is setup, how type definitions work, what putStrLn is, how to get an input, how to define a multiple function etc etc.

        So did you learn all of that from just glancing at this snippet?

  • bgwalter6 days ago

    Yet again the key point of the article is "[AI is] encouraged at Shopify". Without fail all of these articles are externally driven to some extent.


  • kaptainscarlet6 days ago

    I was thinking the same the other day. No need for high-level languages anymore. AI, assuming it will get better and replace human coders, has eliminated the labour constraint. The death of Moore's law will no longer be a problem as performance gains are realised in software. The days of bloated Electron apps are finally behind us.

    • balder19915 days ago |parent

      Yet WhatsApp is killing its Windows desktop app in favor of the web-based version.