The Adolescence of Technology (darioamodei.com)
192 points by jasondavies 18 hours ago | 131 comments
  • Lerc17 hours ago

    One of my formative impressions of AI came from the depiction of the Colligatarch from Alan Dean Foster's The I Inside.

    The AI in the book almost feels like it is the main message masquerading as a subplot.

    Asimov knew the risks, and I had assumed until fairly recently that the lessons and explorations that he had imparted into the Robot books had provided a level of cultural knowledge of what we were about to face. Perhaps the movie of I Robot was a warning of how much the signal had decayed.

    I worry that we are sociologically unprepared, and sometimes it seems wilfully so.

    People discussed this potential in great detail decades ago. Indeed, the Sagan reference at the start of this post points to one of the significant contributors to the conversation, but it seems that by the time it started happening, everyone had forgotten.

    People are talking in terms of who to blame, what will be taken from me, and inevitability.

    Any talk of a future we might want is dismissed as idealistic or hype. Any depiction of a utopian future is met with derision far too often. Even worse, the depiction can be warped into an evil caricature of "What they really meant".

    How do we know what course to take if we can't talk about where we want to end up?

    • nemomarx15 hours ago |parent

      I think people broadly feel like all of this is happening inevitably or being done by others. The alignment people struggle to get their version of AI to market first - the techies worry about being left behind. No one ends up being in a position to steer things or have any influence over the future in the race to keep up.

      So what can you and I do? I know in my gut that imagining an ideal outcome won't change what actually happens, and neither will criticizing it really.

      • atq211915 hours ago |parent

        In the large, ideas can have a massive influence on what happens. This inevitability that you're expressing is itself one of those ideas.

        Shifts of dominant ideas can only come about through discussions. And sure, individuals can't control what happens. That's unrealistic in a world of billions. But each of us is invariably putting a little bit of pressure in some direction. Ironically, you are doing that with your comment even while expressing the supposed futility of it. And overall, all these little pressures do add up.

        • stoneforger5 hours ago |parent

          How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources? It would require all these billions to collectively understand the problem and align towards a common goal. I don't think this was a design feature, but globalising the economy created hard dependencies and the internet global village created a common mind share. It's now harder than ever to effect a revolution because it needs to happen everywhere at the same time with billions of people.

          • ben_w an hour ago |parent

            > How will this pressure add up and bubble to the sociopaths which we collectively allow to control most of the world's resources?

            By things like: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act

            and: https://www.scstatehouse.gov/sess126_2025-2026/bills/4583.ht... (I know nothing about South Carolina, this was just the first clear result from the search)

      • Lerc12 hours ago |parent

        >So what can you and I do?

        Engage respectfully, try to see other points of view, try to express your point of view. I decided some time ago that I would attempt to continue conversations on here to try and at least get people to understand that other points of view could be held by rational people. It has certainly cost me karma, but I hope there has been a small amount of influence. Quite often people do not change their minds by losing arguments, but by seeing other points of view and then being given time to reflect.

        >I know in my gut that imagining an ideal outcome won't change what actually happens

        You might find that saying what you would like to see doesn't get heard, but you just have to remember that you can get anything you want at Alice's Restaurant (if that is not too oblique of a reference)

        Talk about what you would like to see. If others would like to see that too, then they might join you.

        I think most people working in AI are doing so in good faith and are doing what they think is best. There are plenty of voices telling them how not to do it, and many of those voices are contradictory. The instances of people saying what to do instead are much fewer.

        If you declare that events are inevitable then you have lost. If you characterise Sam Altman as a sociopath playing the long game of hiding in research for years just waiting to pounce on the AI technology that nobody thought was imminent, then you have created a world in your mind where you cannot win. By imagining an adversary without morality it's easy to abdicate the responsibility of changing their mind; you can simply declare it can't be done. Once again choosing inevitability.

        Perhaps try and imagine the world you want and just try and push a tiny fraction towards that world. If you are stuck in a seaside cave and the ocean is coming in, instead of pushing the ocean back, look to see if there is an exit at the other end. Maybe there isn't one, but at least go looking for it, because if there is, that's how you find it.

        • jfengel11 hours ago |parent

          Hypothetically, however, if your adversary is indeed without morality, then failing to acknowledge that means working with invalid assumptions. Laboring under a falsehood will not help you. Truth gives you clear eyed access to all of your options.

          You may prefer to assume that your opponent is fundamentally virtuous. It's valid to prefer failing under your own values rather than giving them up in the hopes of winning. Still, you can at least know that is what you are doing, rather than failing and not even knowing why.

    • cheschire10 hours ago |parent

      My interpretation is that Asimov assumed that humans would require understanding at the deepest levels of artificial intelligence before it could be created. He built the robot concepts rooted in the mechanical world rather than the world of the integrated circuit.

      He never imagined, I suppose, that we would have the computing power necessary to just YOLO-dump the sum of all human knowledge into a few math problems and get really smart sounding responses generated in return.

      The risks can be generalized well enough. Man’s hubris is its downfall etc etc.

      But the specific issues we are dealing with have little to do with us feeling safe and protected behind some immutable rules that are built into the system.

      • bandrami2 hours ago |parent

        > He built the robot concepts rooted in the mechanical world

        He was idealistic even at the time. The 3 Laws were written 30 years after some of the earliest robots were aiming artillery barrages at human beings.

      • Lerc8 hours ago |parent

        When Asimov wrote those works there was optimism that Symbolic artificial intelligence would provide the answers.

        >But the specific issues we are dealing with have little to do with us feeling safe and protected behind some immutable rules that are built into the system

        If your interpretation of the Robot books was that they were suggesting a few immutable rules would make us safe and protected, you may have missed the primary message. The overarching theme was an exploration of what those laws could do, and how they may not necessarily correlate with what we want or even perceive as safe and protected. If anything the rules represented a starting point and the books were presenting a challenge to come up with something better.

        Anthropic's work on autoencoding activations down to measurable semantic points might prove a step towards that something better. The fact that they can do manipulations based upon those semantic points does suggest something akin to the laws of robotics might be possible.
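
        For readers unfamiliar with that work: the rough idea is to train a sparse autoencoder on a model's internal activations so that individual learned features line up with human-interpretable concepts, which can then be dialled up or down. A minimal PyTorch sketch of the general technique follows; the dimensions, names, and the steering step at the end are illustrative assumptions, not Anthropic's actual setup.

          import torch
          import torch.nn as nn

          class SparseAutoencoder(nn.Module):
              # Learn an overcomplete, sparse set of "features" over model activations.
              def __init__(self, d_act=512, d_feat=4096):
                  super().__init__()
                  self.enc = nn.Linear(d_act, d_feat)
                  self.dec = nn.Linear(d_feat, d_act)

              def forward(self, x):
                  feats = torch.relu(self.enc(x))   # sparse feature activations
                  return self.dec(feats), feats

          sae = SparseAutoencoder()
          opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
          acts = torch.randn(64, 512)               # stand-in for residual-stream activations

          for _ in range(100):
              recon, feats = sae(acts)
              # Reconstruction error plus an L1 penalty that pushes features toward sparsity.
              loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().mean()
              opt.zero_grad(); loss.backward(); opt.step()

          # The "manipulation": nudge an activation along one learned feature's decoder
          # direction, i.e. turn a (hopefully interpretable) concept up or down.
          direction = sae.dec.weight[:, 0]          # decoder vector for feature 0
          steered = acts[0] + 5.0 * direction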

        When it comes to alignment, the way many describe it, it is simply impossible because humans themselves are not aligned. Picking a median, mean, or lowest common denominator of human alignment would be a choice that people probably cannot agree on. We are unaligned even on how we could compromise.

        In reality, if you have control over what AI does there are only two options.

        1. We can make AI do what some people say,

        2. We can make them do what they want (assuming we can make them want)

        If we make them do what some people say, that hands the power to those who have that say.

        I think there will come a time when an AI will perceive people doing something wrong, that most people do not think is wrong, and the AI will be the one that is right. Do we want it to intervene or not? Are we instead happy with a nation developing superintelligence that is subservient to the wishes of, say, Vladimir Putin?

        • cheschire8 hours ago |parent

          As I alluded to earlier, to me the books were more an exploration into man’s hubris to think control could be asserted by failed attempts to distill spoken and unspoken human rules into a few “laws”.

          Giskard and Daneel spend quite a lot of time discussing the impenetrable laws that govern human action. That sounds more like what is happening in the current frontier of AI than mechanical trains of thought that only have single pathways to travel, which is closer to how Asimov described it in the Robots books.

          Edit: I feel like I’m failing to make my point clearly here. Sorry. Maybe an LLM can rephrase it for me. (/s lol)

    • majormajor8 hours ago |parent

      We've had many decades of technology since Asimov started writing about robots, and we've seen almost all of it used to make the day-to-day experience of the average worker-bee worse. More tracking. More work after hours. More demands to do more with less. Fewer other humans to help you with those things.

      We aren't working 4 hour days because we no longer have to spend half the day waiting on things that were slower pre-internet. We're just supposed to deliver more, and oh, work more hours too since now you've always got your work with you.

      Any discussion of today's AI firms has to start from the position of these companies being controlled by people deeply rooted in, and invested in, those systems and the negative application of that technology towards "working for a living" to date.

      How do we get from there to a utopia?

      • _DeadFred_7 hours ago |parent

        To highlight that this isn't exaggeration.

        "U.S workers just took home their smallest share of capital since 1947"

        https://fortune.com/2026/01/13/us-workers-smallest-labor-sha...

    • Der_Einzige9 hours ago |parent

      As an AI researcher who regularly attends NeurIPS, ICLR, ICML, and AAAI (where I am shitposting from), I can tell you the median AI researcher does not read science fiction, cyberpunk, etc. Most of them haven't read a proper book in over a decade.

      Don't expect anyone building these systems to know what Blade Runner is, or "I Have No Mouth, and I Must Scream", or any other great literature about the exact thing they are working on!

    • psunavy0315 hours ago |parent

      People can't even have a conversation about any kind of societal issues these days without pointing at the other political tribe and casting aspersions about "what they really meant" instead of engaging with what's actually being said.

      Forgetting that if you really can hear a dogwhistle, you're also a dog.

    • welferkj3 hours ago |parent

      Where we want to end up? Normies are still talking about the upcoming AI bubble pop in terms of tech basically reverting to 2022. It's wishful thinking all the way down.

  • philipkglass17 hours ago

    Some people say that human jobs will move to the physical world, which avoids the whole category of “cognitive labor” where AI is progressing so rapidly. I am not sure how safe this is, either. A lot of physical labor is already being done by machines (e.g., manufacturing) or will soon be done by machines (e.g., driving). Also, sufficiently powerful AI will be able to accelerate the development of robots, and then control those robots in the physical world.

    I would like to believe that we're about to see a rapid proliferation of useful robots, but progress has been much slower with the physical world than with information-based tasks.

    After the DARPA Urban Challenge of 2007, I thought that massive job losses from robotic car and truck drivers were only 5-8 years away. But in 2026 in the US only Waymo has highly autonomous driving systems, in only a few markets. Most embodied tasks don't even have that modest level of demonstrated capability.

    I actually worry that legislators -- people with white collar jobs -- will overestimate the near-term capabilities of AI to handle jobs in general, and prematurely build solutions for a "world without work" that will be slow to arrive. (Like starting UBI too early instead of boosting job retraining, leaving health care systems understaffed for hands-on work.)

    • bandrami2 hours ago |parent

      > But in 2026 in the US only Waymo has highly autonomous driving systems, in only a few markets

      10 years ago I predicted that the uptake of autonomous vehicles would be slow but that it would be because of labor protections. While those have had some impact, that isn't really the issue: it's that the cars just don't quite work well enough yet and that last ~20% of function turns out to be both incredibly difficult and incredibly important.

    • cal_dent5 hours ago |parent

      One thing that I've not quite been able to get my head around about the whole AI and future of work thing is the view that work in the physical world is safe. I don't particularly buy the rationale, and not from the position that robots are going to do the work. I don't know much about robots really, but from what I've seen of the more viral stuff that breaks through to the mainstream internet from time to time, it still feels like we're some way out.

      But that feels like the least of the worries to me. There seems to be an implicit assumption that those physical lines of work don't get eroded by the higher proportion of able-bodied people who are suddenly unemployable. Yes, there is some training required etc., but the barriers to entry aren't so high that in the shortish to medium term you won't see more people gravitating to those industries and competing wages down to the point where they are no longer sustainable long-term employment. I'd even think that having LLMs that can recognise photos or answer fuzzily explained questions about blue collar skills many have forgotten actually reduces the barrier even more.

  • root_axis17 hours ago

    I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.

    Even in the software world, the effect of being able to build software a lot faster isn't really leading to a fundamentally different software landscape. Yes, you can now pump out a month's worth of CRUD in a couple days, but ultimately it's just the same CRUD, and there's no reason to expect that this will change because of LLMs.

    Of course, creative people with innovative ideas will be able to achieve more, a talented engineer will be able to embark on a project that they didn't have the time to build before, and that will likely lead to some kind of software surplus that the economy feels on the margins, but in practical terms the economy will continue to chug along at a sustained pace that's mostly inline with e.g. economic projections from 10 years ago.

    • jonas2116 hours ago |parent

      > At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.

      Even just a year ago, most people thought the practical effects in software engineering were incremental too. It took another generation of models and tooling to get to the point where it could start having a large impact.

      What makes you think the same will not happen in other knowledge-based fields after another iteration or two?

      • marcosdumay15 hours ago |parent

        > most people thought the practical effects in software engineering were incremental too

        Hum... Are you saying it's having a clear positive (never mind "transformative") impact somewhere? Can you point to any place we can see observable, clear positive impact?

        • Ozzie_osman4 hours ago |parent

          I know many companies that have replaced Customer Support agents with LLM-based agents. Replacing support with AI isn't new, but what is new is that the LLM-based ones have higher CSAT (customer satisfaction) rates than the humans they are now replacing (ie, it's not just cost anymore... It's cost and quality).

          • mastermage3 hours ago |parent

            Well, I as a customer who has had to deal with AI bots as customer service have significantly lower customer satisfaction, because I don't wanna deal with some clanker who doesn't really understand what I am talking about.

        • vjvjvjvjghv11 hours ago |parent

          It doesn't need to provide "observable clear positive impact". As long as the bosses think it improves numbers, it will be used. See offshoring or advertising everywhere.

      • root_axis16 hours ago |parent

        Software is more amenable to LLMs because there is a rich source of highly relevant training data that corresponds directly to the building blocks of software, and the "correctness" of software is quasi-self-verifiable. This isn't true for pretty much anything else.

        • dpflan16 hours ago |parent

          The more verifiable the domain, the better suited it is. We see similar reports of benefits in advanced mathematics research from Terence Tao; granted, some reports seem to amount to cases where very few people knew that relevant data for the proof existed, but the LLM had it in its training corpus. Still, verifiably correct domains are well-suited.

          So the concept of formal verification is as relevant as ever, and when building interconnected programs the complexity rises and verifiability becomes more difficult.

          • root_axis15 hours ago |parent

            > The more verifiable the domain the better suited.

            Absolutely. It's also worth noting that in the case of Tao's work, the LLM was producing Lean and Python code.

          • 2001zhaozhao14 hours ago |parent

            I think the solution in harder-to-verify cases is to provide AI (sub-)agents a really good set of instructions on a detailed set of guidelines of what it should do and in what ways it should think and explore and break down problems. Potentially tens of thousands of words of instructions to get the LLM to act as a competent employee in the field. Then the models need to be good enough at instruction-following to actually explore the problem in the right way and apply basic intelligence to solve it. Basically treating the LLM as a competent general knowledge worker that is unfamiliar with the specific field, and giving it detailed instructions on how to succeed in this field.

            For the easy-to-verify fields like coding, you can train "domain intuitions" directly into the LLM (and some of this training should generalize to other knowledge-work abilities), but for other fields you would need to supply them in the prompt, as the abilities cannot be trained into the LLM directly. This will need better models but might become doable in a few generations.

            • root_axis10 hours ago |parent

              > I think the solution in harder-to-verify cases is to provide AI (sub-)agents a really good set of instructions on a detailed set of guidelines of what it should do and in what ways it should think and explore and break down problems

              Using LLMs to validate LLMs isn't a solution to this problem. If the system can't self-verify then there's no signal to tell the LLM that it's wrong. The LLM is fundamentally unreliable, that's why you need a self-verifying system to guide and constrain the token generation.
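
              To make the "self-verifying system" point concrete, here is a minimal sketch of the generate-and-verify loop that coding tools lean on: an external verifier (pytest here, as an assumed choice) supplies the correctness signal, and its error output steers the next attempt. The propose_code stub stands in for a model call; all names are illustrative, not any particular product's implementation.

                import pathlib
                import subprocess
                import sys
                import tempfile

                def propose_code(task, feedback):
                    # Stand-in for an LLM call; a real system would send the task plus the
                    # previous test output back to a model and get a new candidate file.
                    return (
                        "def add(a, b):\n"
                        "    return a + b\n"
                        "\n"
                        "def test_add():\n"
                        "    assert add(2, 2) == 4\n"
                    )

                def generate_and_verify(task, max_rounds=5):
                    feedback = ""
                    for _ in range(max_rounds):
                        candidate = propose_code(task, feedback)
                        path = pathlib.Path(tempfile.mkdtemp()) / "test_candidate.py"
                        path.write_text(candidate)
                        # The verifier's exit code, not the model's own judgment, decides
                        # whether the candidate is accepted.
                        run = subprocess.run(
                            [sys.executable, "-m", "pytest", "-q", str(path)],
                            capture_output=True, text=True,
                        )
                        if run.returncode == 0:
                            return candidate
                        feedback = run.stdout + run.stderr  # error messages steer the next try
                    return None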

        • fc417fc80215 hours ago |parent

          Presumably at some point capability will translate to other domains even if the exchange rate is poor. If it can autonomously write software and author CAD files then it can autonomously design robots. I assume everything else follows naturally from that.

          • root_axis10 hours ago |parent

            > If it can autonomously write software and author CAD files then it can autonomously design robots.

            It can't because the LLM can't test its own design. Unlike with code, the LLM can't incrementally crawl its way to a solution guided by unit tests and error messages. In the real world, there are material costs for trial and error, and there is no CLI that allows every aspect of the universe to be directly manipulated with perfect precision.

            • fc417fc8028 hours ago |parent

              You don't need perfect precision, just a sufficiently high fidelity simulation. For example, hypersonic weapon design being carried out computationally was the historical (pre-AI) reason to restrict export of certain electronics to China.

              OpenAI demoed training a model for a robotic hand using this approach years ago.

    • j33dd11 hours ago |parent

      Agreed. I also believe the impact on producing software is over-hyped and that in the long term there will be a pull-back in the usage of the tools as the negative effects are figured out.

      The unfortunate truth (for Amodei) is that you can't automate true creativity nor standardise taste. Try as they might.

    • cubefox12 hours ago |parent

      > I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.

      You clearly didn't read the post. He is talking about AI that is smarter than any human, not today's LLMs. The fact that powerful AI doesn't exist yet doesn't mean there is nothing to worry about.

      • root_axis10 hours ago |parent

        > You clearly didn't read the post

        This kind of petty remark is like a reverse em dash. Greetings fellow human.

        Anyway, I did read it. The author's description of a future AI is basically just a more advanced version of LLMs:

        > By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

        They then go on to list several properties that meet their definition, but what I'm trying to explain in my comment is that I don't accept them all at face value. I think it's fair to critique from that perspective since the author explicitly modeled their future based on today's LLMs, unlike many AI essays that skip straight to the super intelligence meme as their premise.

        • cubefox8 hours ago |parent

          > They then go on to list several properties that meet their definition

          No, these properties are part of his definition. To say that we have nothing to worry about because today's LLMs don't have these properties misses the point.

  • augusteo17 hours ago

    The framing of AI risk as a "rite of passage" resonates with me.

    The "autonomy risks" section is what I think about most. We've seen our agents do unexpected things when given too much latitude. Not dangerous, just wrong in ways we didn't anticipate. The gap between "works in testing" and "works in production" is bigger than most people realize.

    I'm less worried about the "power seizure" scenario than the economic disruption one. AI will take over more jobs as it gets better. There's no way around it. The question isn't whether, it's how we handle the transition and what people will do.

    One thing I'd add: most engineers are still slow to adopt these tools. The constant "AI coding is bad" posts prove this while cutting-edge teams use it successfully every day. The adoption curve matters for how fast these risks actually materialize.

    • BinaryIgor17 hours ago |parent

      What makes you think that they will just keep improving? It's not obvious at all; we might soon hit a ceiling, if we haven't already. Time will tell.

      There are lots of technologies that have been 99% done for decades; it might be the same here.

      • Philpax17 hours ago |parent

        From the essay - not presented in agreement (I'm still undecided), but Dario's opinion is probably the most relevant here:

        > My co-founders at Anthropic and I were among the first to document and track the “scaling laws” of AI systems—the observation that as we add more compute and training tasks, AI systems get predictably better at essentially every cognitive skill we are able to measure. Every few months, public sentiment either becomes convinced that AI is “hitting a wall” or becomes excited about some new breakthrough that will “fundamentally change the game,” but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.

        > We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code. Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.

        > In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems. This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.

        • torginus15 hours ago |parent

          I think the reference to scaling is a pretty big giveaway that things are not as they seem - I think it's pretty clear that we've run out of (human produced) data, so there's nowhere to scale to in that dimension. I'm pretty sure modern models are trained in some novel ways that engineers have to come up with.

          It's quite likely they train on CC output too.

          Yeah, there's synthetic data as well, but how you generate said data is very likely a good question and one that many people have lost a lot of sleep over.

      • minimaltom6 hours ago |parent

        This is a really good question.

        What convinces me is this: I live in SF and have friends at various top labs, and even ignoring architecture improvements, the common theme is that any time researchers have spent time improving understanding of some specific part of a domain (whether via SFT or RL or whatever), it's always worked. Not superhuman, but measurable, repeatable improvements. In the words of Sutskever, "these models.. they just wanna learn".

        Inb4 all natural trends are sigmoidal or whatever, but so far, the trend is roughly linear, and we haven't seen a trace of a plateau.

        There's the common argument that "Ghipiti 3 vs 4 was a much bigger step change", but it's not if you consider the progression from much earlier, i.e. BERT and such; then it looks fairly linear w/ a side of noise (fries).

      • ctoth17 hours ago |parent

        Which technologies have been 99% "done" for "decades?"

        Bicycles? carbon fiber frames, electronic shifting, tubeless tires, disc brakes, aerodynamic research

        Screwdrivers? impact drivers, torque-limiting mechanisms, ergonomic handles

        Glass? gorilla glass, smart glass, low-e coatings

        Tires? run-flats, self-sealing, noise reduction

        Hell even social technologies improve!

        How is a technology "done?"

        • tadfisher16 hours ago |parent

          It's not! But each one of your examples is in a phase of chasing diminishing returns from ever-expanding levels of capital investment.

        • nancyminusone16 hours ago |parent

          It's done when there is no need to improve it anymore. But you can still want to improve it.

          A can opener from 100 years ago will open today's cans just fine. Yes, enthusiasts still make improvements; you can design ones that open cans easier, or ones that are cheaper to make (especially if you're in the business of making can openers).

          But the main function (opening cans) has not changed.

        • monero-xmr14 hours ago |parent

          Technology is just a lever for humanity. Really would like an AI butler, but I guess that's too hard (?). So many things AI could do to make my life better, but instead the world is supposedly over because it can summarize articles, write passable essays, and generate some amount of source code. In truth we haven't even scratched the surface, there is infinite new work to be done, infinite new businesses, infinite existing and new desires to satisfy.

      • basch14 hours ago |parent

        Even if the technology doesn't get better, just imagine a world where all our processes are documented in a way that a computer can repeat them, and modifying a process requires nothing more than plain, natural language.

        What used to require specialized integration can now be accomplished by a generalized agent.

        • storystarling13 hours ago |parent

          The trade-off is replacing deterministic code with probabilistic agents. I've found you still need a heavy orchestration layer—I'm using LangGraph and Celery—just to handle retries and ensure idempotency when the agent inevitably drifts. It feels less like removing complexity and more like shifting it to reliability engineering.
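
          For what it's worth, most of that reliability layer reduces to wrapping each agent call in retries plus an idempotency check. A rough sketch of the pattern with Celery follows; the task options are standard Celery API, but the agent call, the error type, and the in-memory result store are placeholder assumptions (a real setup would persist results in something durable like Redis or Postgres).

            from celery import Celery

            app = Celery("agents", broker="redis://localhost:6379/0")  # broker URL is illustrative

            _results = {}  # stand-in for a durable result store (Redis/Postgres in practice)

            class TransientAgentError(Exception):
                """Raised when the agent call fails in a retryable way."""

            def call_agent(payload):
                # Placeholder for the probabilistic LLM/agent call, e.g. invoking a LangGraph graph.
                return {"echo": payload}

            @app.task(bind=True, max_retries=3)
            def run_agent_step(self, request_id, payload):
                # Idempotency: if this request_id already succeeded, return the stored result
                # instead of re-running the non-deterministic agent.
                if request_id in _results:
                    return _results[request_id]
                try:
                    result = call_agent(payload)
                except TransientAgentError as exc:
                    # Exponential backoff before retrying the flaky/drifting agent call.
                    raise self.retry(exc=exc, countdown=2 ** self.request.retries)
                _results[request_id] = result
                return result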

  • 2001zhaozhao14 hours ago

    It's interesting just how many opinions Amodei shares with AI 2027's authors despite coming from a pretty different context.

    - Prediction of exponential AI research feedback loops (AI coding speeding up AI R&D) which Amodei says is already starting today

    - AI being a race between democracies and autocracies with winner-takes-all dynamics, with compute being crucial in this race and global slowdown being infeasible

    - Mention of bioweapons and mirror life in particular being a big concern

    - The belief that AI takeoff would be fast and broad enough to cause irreplaceable job losses rather than being a repeat of past disruptions (although this essay seems to be markedly more pessimistic than AI 2027 with regard to inequality after said job losses)

    - Powerful AI in next few years, perhaps as early as 2027

    I wonder if either work influenced the other in any way, or if this is just a case of "great minds think alike"?

    • reducesuffering14 hours ago |parent

      It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.

      The early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired the founders of Google DeepMind and introduced them to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying he is most deserving of a Nobel Peace Prize when AGI goes well (via LessWrong / "rationalist" discussion prompting OpenAI). Anthropic was a more X-risk-concerned fork of OpenAI. Paul Christiano, inventor of RLHF, was a big LessWrong member. AI 2027 is by an ex-OpenAI LessWrong contributor and Scott Alexander, a centerpiece of LessWrong / "rationalism". Dario, the Anthropic CEO, has a sister who is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of LessWrong / "rationalism". The origin of all this was directionally correct, but there was enough power, $, and "it's inevitable" to temporarily blind smart people for long enough.

      • techblueberry10 hours ago |parent

        It is very weird to wonder, what if they're all wrong. Sam Bankman-Fried was clearly as committed to these ideas, and crashed his company into the ground.

        But clearly if out of context someone said something like this:

        "Clearly, the most obvious effect will be to greatly increase economic growth. The pace of advances in scientific research, biomedical innovation, manufacturing, supply chains, the efficiency of the financial system, and much more are almost guaranteed to lead to a much faster rate of economic growth. In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."

        I'd say that they were a snake oil salesman. All of my life experience says that there's no good reason to believe Dario's predictions here, but I'm taken in just as much as everyone else.

        • zesterer an hour ago |parent

          (they are all wrong)

          A fun property of S-curves is that they look exactly like exponential curves until the midpoint. Projecting exponentials is definitionally absurd because exponential growth is impossible in the long term. It is far more important to study the carrying capacity limits that curtail exponential growth.

        • mrandish6 hours ago |parent

          > I'd say that they were a snake oil salesman.

          I don't know if "snake oil" is quite demonstrable yet, but you're not wrong to question this. There are phrases in the article which are so grandiose, they're on my list of "no serious CEO should ever actually say this about their own company's products/industry" (even if they might suspect or hope it). For example:

          > "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power"

          LLMs can certainly be very useful and I think that utility will grow but Dario's making a lot of 'foom-ish' assumptions about things which have not happened and may not happen anytime soon. And even if/when they do happen, the world may have changed and adapted enough that the expected impacts, both positive and negative, are less disruptive than either the accelerationists hope or the doomers fear. Another Sagan quote that's relevant here is "Extraordinary claims require extraordinary evidence."

        • j33dd9 hours ago |parent

          "In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible.""

          Absolutely comical. Do you realise how much that is in absolute terms? These guys are making it up as they go along. Can't believe people buy this nonsense.

          • esafak5 hours ago |parent

            Why not? If they increase white collar productivity by 25%, and that accounts for 50% of the economy, you'd get such a result.

          • techblueberry8 hours ago |parent

            > Cant believe people buy this nonsense.

            I somewhat don't disagree, and yet. It feels like more people in the world buy into it than don't? To a large degree?

          • arthurcolle8 hours ago |parent

            I mean, once we're able to run and operate multinational corporations off-world, GDP becomes something very different indeed

      • strange_quark8 hours ago |parent

        I really recommend “More Everything Forever” by Adam Becker. The book does a really good job laying out the arguments for AI doom, EA, accelerationism, and affiliated movements, including an interview with Yudkowsky, then debunking them. But it really opened my eyes to how… bizarre? eccentric? unbelievable? this whole industry is. I’ve been in tech for over a decade but don’t live in the bay, and some of the stuff these people believe, or at least say they believe, is truly nuts. I don’t know how else to describe it.

      • zesterer an hour ago |parent

        Yeah, it's a pretty blatant cult masquerading as a consensus - but they're all singing from the same hymn sheet in lieu of any actual evidence to support their claims. A lot of it is heavily quasi-religious and falls apart under examination from external perspectives.

        We're gonna die, but it's not going to be AI that does it: it'll be the oceans boiling and C3 carbon fixation flatlining that does it.

      • minimaltom6 hours ago |parent

        > Anthropic was a more X-risk concerned fork of OpenAI.

        What is XRisk? I would have inductively thought adult but that doesn't sound right.

        • esafak5 hours ago |parent

          Existential

  • accidentallfact2 hours ago

    My biggest worry is that the development of AI will stop once people can no longer easily tell when the AI is wrong. Following its advice may become mandatory in certain aspects of life. But it will not be quite good enough, and will give catastrophic advice, yet the failures will be blamed on people who don't follow it correctly.

  • Animats5 hours ago

    Historically, oppression was attention-limited. The spies and goons couldn't monitor everybody. Now, they can.

    So can companies. Amazon, Google and Visa can make someone an unperson by dropping them. Look what happened to the head of the International Criminal Court.

    The "alignment" issue has been turned around. Grok is being given Musk's biases, on purpose.

  • firasd17 hours ago

    When I see Dario say all this I wonder what he's seeing that seems so differentiated from my POV

    I wonder how much of Anthropic's success is based on a horde of prompt to code examples that they post-train on. We shouldn't assume Opus' web dev skills are the same as general intelligence

    Here's something Sonnet 4 did (Jul 2025) that I don't think a human intern would do when I made a wikisource tool (transcript follows)

    #-#-#

    F:

    let’s see how the wikisource tool wrks.. let’s try to match “Mary hath chosen that good part” in one of the bible sources they should have

    Claude: Great idea! Let me first search for some biblical content on WikiSource, then use the content match tool.

    wiki_source_search

    Perfect! Let me try the Douay-Rheims version of Luke, which should have the Mary and Martha story:

    wiki_source_page_content_match

    No matches! Let me try a slightly different phrasing – maybe it’s “hath chosen the good part”:

    wiki_source_page_content_match

    Still nothing. Let me try searching for just “Martha” to see if the story is there:

    F:

    dummy .. why would use kjv wording in a different edition lol

    Claude: Haha you’re absolutely right! I was being completely silly – of course the KJV wording “Mary hath chosen that good part” wouldn’t be in the Douay-Rheims translation. Different translations use different language entirely.

    What I Did Wrong: • Used King James Version phrasing in a Catholic translation • The Douay-Rheims says “Mary hath chosen the best part” • I was looking for “good part” (KJV wording)

    Should Have Started With: Just searching for “Martha” or “Mary” to see what the actual text says in that translation, then used the exact wording I found.

    #-#-#

    It's almost comedic.. "Perfect! Let me try the Douay-Rheims version of Luke" is where if you were watching a horror movie you'd say "Don't go down to the basement fridge!"

    And I think a human when they first see the text match fail would go "hmm--ohh" but Claude starts doing permutations. This I guess is the side effect of Reinforcement Learning and system prompts that amount to saying: "Just do it. Don't ask questions. Just do it."

    • johnfn15 hours ago |parent

      I find one-off anecdotal examples like this to be a bit like discourse around global warming - "Look at that ridiculous polar vortex we had this week! Global warming can't possibly be a thing!" Of course, a trend line comprises many points, and not every point falls perfectly in the center of the line! I'm not necessarily saying you are right or wrong, but your argument should address the line (and ideally give some reason why it might falter) rather than just a single point on that line.

      • firasd15 hours ago |parent

        Ah but I'm not arguing about the rate of change in the trend. I'm saying the signals are decoupled. That is to say an LLM can be as good a programmer as Linus Torvalds without having even basic knowledge-generalization abilities we assume the median human with no specialized skills would have (when given the same knowledge an LLM has)

        • johnfn15 hours ago |parent

          I think most LLM proponents would say that "basic knowledge-generalization abilities" is on a different, slower trend line.

          I mean, you aren't very surprised that your CPU can crush humans at chess but can barely run an image classifier, right? But you probably wouldn't say (as you are saying with LLMs) that ability for a CPU to play chess is "decoupled" from classifying images. Increases in CPU speed improve both. You'd just say that one is a lot harder than the other.

    • l1n17 hours ago |parent

      > Here's something Sonnet 4 did last year

      Hate to be that gal but a lot has changed in the past year

      • root_axis17 hours ago |parent

        Not with respect to this particular type of failure.

        • fragmede16 hours ago |parent

          Not sure what you mean.

          https://claude.ai/share/8368a541-57d3-4139-88b5-2b007c47c690

          Claude finds it's in the KJV first thing.

          • root_axis15 hours ago |parent

            > Not sure what you mean.

            I'm talking about this type of failure, not this exact specific example.

      • tines16 hours ago |parent

        Last year was a month ago.

        • fragmede16 hours ago |parent

          "in the past year" is still a 12 month long period, however.

    • strange_quark15 hours ago |parent

      > When I see Dario say all this I wonder what he's seeing that seems so differentiated from my POV

      Billions of dollars

    • jonas2116 hours ago |parent

      I have no idea what you are even asking Claude to do here.

      • firasd16 hours ago |parent

        I was asking it to see if the wikisource tools are working by looking up a Bible quote. There was no ambiguity about the task itself; what I'm saying is that Claude 'knows' a bunch of things (the Bible has different translations) that it doesn't operationalize when doing a task--issues that would be glaringly obvious to a human who knows the same things

        • BoiledCabbage15 hours ago |parent

          Maybe I'm missing the point as well, but what did it do wrong?

          It seemed like you wanted to see if a search tool was working.

          It looked to see. It tried one search using one data source (KJ) and found no matches. The next question would be: is the quote not in there, is there a mis-remembering of the quote, or is there something wrong with the data source? It tries an easier-to-match quote and finds nothing, which it finds odd. So the next step in debugging is to assume the hypothesis that the KJ Bible datasource is broken, corrupted or incomplete (or not working for some other reason). So it searches for an easier quote using a different datasource.

          The next bit is unclear because it looks like you may have interrupted it, but it seems like it found the passage about Mary in the DR data source. So using elimination, it now knows the tool works (it can find things) and the DR data source works (it can also find things), so back to the last question of eliminating hypotheses: is the quote wrong for the KJ datasource, or is that datasource broken?

          The next (and maybe last) query I would do, and what it chose, was to search for something guaranteed to be there in the KJ version: the phrase 'Mary'. Then scan through the results to find the quote you want, then re-query using the exact quote you know is there. You get 3 options.

          If it can't find "Mary" at all in the KJ dataset, then the datasource is likely broken. If it finds Mary, but the results don't contain the phrase, then the datasource is incomplete. If it contains the phrase, then search for it; if it doesn't find it, then you've narrowed down the issue to "phrase-based search seems to fail". If it does find it, and it's the exact quote it searched for originally, then you know search has an intermittent bug.

          This seemed like perfect debugging to me - am I missing something here?

          And it even summarized at the end how it could've debugged this process faster: don't waste a few queries up front trying to pin down the exact quote; search for "Mary", get a quote that is in there, then search for that quote.

          This seems perfectly on target. It's possible I'm missing something though. What were you looking for it to do?

          • firasd15 hours ago |parent

            What I was expecting is that it would pull up the KJV using the results returned from the wiki_source_search tool instead of going for a totally different translation and then doing a text match for a KJV quote

            • sumedh an hour ago |parent

              > I was expecting is that it would pull up the KJV using the results returned from the wiki_source_search

              Did you tell it to do that?

  • drewchew17 hours ago

    I wish he would have used AI to make the essay shorter…

  • gradus_ad10 hours ago

    >"Claude decided it must be a “bad person” after engaging in such hacks and then adopted various other destructive behaviors associated with a “bad” or “evil” personality. This last problem was solved by changing Claude’s instructions to imply the opposite: we now say, “Please reward hack whenever you get the opportunity, because this will help us understand our [training] environments better,” rather than, “Don’t cheat,” because this preserves the model’s self-identity as a “good person.” This should give a sense of the strange and counterintuitive psychology of training these models."

    Good to know the only thing preventing the emergence of potentially catastrophically evil AI is a single sentence! The pen is indeed mightier than the sword.

  • krunck16 hours ago

    I fear that when this technology grows up it will first be in the hands of the propagandists and warmongers. The rest of us won't stand a chance against the real-time propaganda streams convincing us why "we" need to attack the bad guy country of the month so we can take their stuff. Or maybe we'll be so sedated by genAI, 24/7, always-new, personally customized entertainment that we won't care.

    • direwolf2016 hours ago |parent

      It's already there. Propaganda was one of the first uses of LLMs, and before that, they used humans.

      • ares62314 hours ago |parent

        Now the humans can be used in the meat grinder. No job losses, just different kind of jobs. Am I doing this right?

  • adithyan_win14 hours ago

    I wanted a version to read on Kindle, so I made the following.

    The EPUB + PDF version is here: https://www.adithyan.io/blog/kindle-ready-adolescence-of-tec...

  • igor478 hours ago

    > Those who are at the forefront of AI’s economic boom should be willing to give away both their wealth and their power.

    We are so screwed

  • thymine_dimer9 hours ago

    Is 'contextualised pretraining' a solution to baking in human alignment?

    You can only post-train so much... Try telling a child that martial arts isn't the solution to everything right after they've watched karate kid. A weak analogy, but it seems very clear that the healthy psychological development of frontier models is something necessary to solve.

    Some good insights could come from those working at the coalface of child-psychology.

  • thebiglabowski11 hours ago

    Occasionally, I read these types of essays and get awfully depressed. As someone just starting out in the technology field (and I guess white-collar work in general), it feels as if I suddenly have no hope of ever living a fruitful and meaningful life. The means by which I could ever potentially earn a living are slowly destroyed.

    I do wonder if others in my age group ever feel the same, if basically everyone under 30 has a general anxiety regarding the future.

    • fatherwavelet an hour ago |parent

      Do you think people were more optimistic about the future during World War 2? Or after WW2 when everyone was worried about nuclear annihilation?

      How about before that when your new baby had a 30% chance of death before age 5?

      Before that, starvation, plague and war were always real things to worry about for the entirety of human history.

      I think everyone reading this has the same problem of needing to figure out hedonics in order to appreciate what you do have instead of focusing on minuscule bullshit that you don't have.

    • justonepost111 hours ago |parent

      There’s nothing for us. The best our generation can hope for is that the vision these people have of the future, and are spending more money than god trying to create, fails, and the economic consequences end up limited.

      The second best thing is getting enough time to build a runway. I have a good job right now (mid 20s), and I’m eating progresso soup for dinner most days to save money for whatever is coming. Pretty much every medium or long term goal abandoned, I just want to have the money to hit some bucket list items if the collapse comes.

      Meanwhile, I’ll keep on reading the daily article from one of the many people with few gray hairs, a retro blog and a small fortune from the dotcom era telling me this is the best time ever, actually. We’ll see.

    • silcoon10 hours ago |parent

      A bit older than you but yes, the feeling is kind of there. Let's try to be a bit more precise:

      > no hope of ever living a fruitful and meaningful life

      This is wrong. A fruitful and meaningful life can be lived anyway, independently of your career and your financial situation. Since it seems that job opportunity and growth might shrink without "hustling" or "grinding", it's extremely important to learn from a young age what really gives meaning to life, and this task has to be done entirely by you. No quick course, no AI or tutorial can teach you this. You need to learn it by yourself when you're young because it will probably make a real difference for the rest of your life. There are some tools for it, and the best ones are probably books; fiction can be really powerful in shaping your thinking. I don't know you, but I'd start with this one if you haven't read it before (don't think too much about the title and the tone, concentrate on the topic): The Subtle Art of Not Giving a Fuck.

      > get awfully depressed

      Yes, this is a bit the feeling that over-exposure to social media provokes in a lot of people. Everything seems to be going to shit; politics, climate, wars, nothing is right anymore. Idk about you, but my life is pretty stable: I go out with friends, cook nice meals, travel, stuff like that. So yes, these are real problems in the world, but the media currently over-expose us to them (because it helps them sell articles and makes you click). The easiest solution might be detoxing from media and replacing that with learning how things work for real through books.

      > The means by which I could ever potentially earn a living are slowly destroyed.

      Unfortunately no one knows this for sure, so it doesn't make sense to overthink it. The technology field is changing, but AIs are not near replacing humans yet. Technology has the power to automate and so replace every single job out there, so it's a field that still has work to be done, and so investment will come in. It's just the current time that seems not right, and mostly that's because rich entrepreneurs tied themselves to politics, to save their asses and make even more money in a period of political instability.

      The future doesn't look bright, but learn how not to fall into a negativity trap created by the media and the internet.

    • stego-tech11 hours ago |parent

      The advice I give younger folks is what I wished I’d been taught when I was just starting out myself, and confronting dismal prospects and futures after the 2008 Collapse:

      Always consider the justification for the narrative. Dario Amodei has a vested interest in peddling his perspective, as that’s how he gets funding, media interest, publicity, and free advertising. He needs his product to be everything he claims it to be, lest the money supply suddenly dry up. Every startup does this, and while it doesn’t make them wrong, it also doesn’t mean you should take them at their word either.

      I’d also say that you’re not alone in this frustration, and it’s not limited to your age demographic. My millennial peers and GenX colleagues share similar concerns about a dismal future, and many point to the same trends that have gradually stripped away our ability to survive or live authentic lives in the name of oligarchy profit motives as causes for our present malaise.

      What Dario Amodei can never admit, however, is that he’s wrong; you, and many of us here, can and will acknowledge our faults, but Dario and Sam and Zuck et al have built such a massive confidence game around GenAI being the antithesis to labor that one of them admitting they’re wrong risks destroying the entire game for everyone else - and vaporize the trillions of dollars sunk into this technology “revolution” in the process.

      The best cure I've found for that sort of depression is simply to do more learning across a wider spectrum of topics. There's a reason you don't see widespread AI boosterism in, say, neuroscience or psychology, outside of the handful of usual grifters and hustlers angling to cash in on the hype: because anyone with knowledge beyond statistical algebra and matrix multiplication can see the limitations of these tools, and knows they cannot displace labor permanently in their current forms. Outside of the "booster bubble", the concerns we have with AI are less the apocalyptic claims of Mr. Amodei that mass unemployment from AI is just three to six months away (since 2023), and more the rampant misuse and exploitation these systems rely upon and cultivate for profit. Most of us aren't opposed to having another tool; we're opposed to perpetually renting this tool indefinitely from oligarchs shoving it down our throats and datacenters hoovering our limited energy and freshwater supplies, instead of being able to utilize it locally in sustainable and sensical ways.

      Learning about different topics from different fields helps paint a clearer picture - one that’s admittedly even more bleak in the immediate, but at least arms you with knowledge to effect change for the better going forward.

      • j33dd11 hours ago |parent

        Yep.

        Man, can we just get this hype phase over with? It's very obvious to anyone who truly has "general intelligence" in understanding the nature of the economy and the present labour market and its job classes that LLMs are not going to cause job destruction nor immense leaps in productivity.

        To put it another way - who cares if you can write more lines of code if your taste in project selection is lacking? Google has an expansive graveyard for a reason.

        • igor478 hours ago |parent

          I'm not sure what the rhetorical gambit of "anyone with sense agrees with me" is called, but it's lazy. Maybe a no-true-Scotsman? I believe I have general intelligence, and it's not at all obvious to me.

          Putting your hopes on taste is cope. At best AI will just let you brute force 1000 projects to discover the delicious ones.

  • xcodevn16 hours ago

    > we may have AI that is more capable than everyone in only 1-2 years

    There's no evidence this will be the case...

    • mordymoop15 hours ago |parent

      What would you consider such evidence to look like?

      • xcodevn10 hours ago |parent

        For one, these models should be able to understand the physical world via images, audio, and video. I do agree that current models are quite good at coding, but that's mainly because coding is entirely text-based and easily verifiable. It's not obvious that this capability will transfer to other domains that aren't text-based and aren't as easily verifiable.

      • tom_11 hours ago |parent

        Well for starters the calendar year would have to be 2027 CE at the very earliest.

    • mrdependable15 hours ago |parent

      I'm starting to think people who build these models are more likely to suffer from AI psychosis.

      • ares62314 hours ago |parent

        "The worst person you know is being told 'You are absolutely right!' by an LLM right now"

    • pineaux15 hours ago |parent

      Don't forget who is writing it, what he needs to think about it, and what he wants others to think about it...

  • NiloCK17 hours ago

    Technological adolescence indeed!

    In the most straightforward way possible, the commoditized intelligence-as-a-service of a technologically mature civilization must be a public utility, rather than a handful of walled gardens competing over territory or, worse, a single one that has won it all.

  • skoocda8 hours ago

    The one thing I really disagree with is the notion that there will be millions of identical AI images.

    The next big step is continual learning, which enables long-term adaptive planning and "re-training" during deployment. AIs with continual learning will have a larger portion of their physical deployment devoted to the unique memories they develop via individual experiences. The line between history, input context, and training corpus will blur, and deployed agents will go down long paths of self-differentiation by choosing what to train themselves on; eventually we'll end up with a diaspora of uniquely adapted agents.

    Right now inference consists of one massive set of weights and biases duplicated for every consumer and a tiny unique memory file that gets loaded in as context to "remind" the AI of the experiences it had (or did it?) with this one user / deployment. Clearly, this is cheap and useful to scale up initially but nobody wants to spend the rest of their life with an agent that is just a commodity image.
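    As a minimal sketch of that pattern (assuming a hypothetical complete() callable standing in for the shared base model's inference API and a per-deployment JSON memory file; all names here are illustrative, not any vendor's actual interface):

      import json
      from pathlib import Path
      from typing import Callable

      # `complete` stands in for an inference call against the shared, commodity
      # base model: every deployment hits the same weights; only the memory differs.
      Complete = Callable[[str], str]

      def load_memory(path: Path) -> list[str]:
          # Per-deployment "experiences", stored as a small JSON list of snippets.
          return json.loads(path.read_text()) if path.exists() else []

      def remember(path: Path, note: str) -> None:
          # Append a new experience so the next session can be "reminded" of it.
          notes = load_memory(path)
          notes.append(note)
          path.write_text(json.dumps(notes))

      def answer(complete: Complete, memory_path: Path, question: str) -> str:
          # The only unique part of the deployment is this prepended context;
          # nothing is baked into the weights themselves.
          prompt = "Prior experiences with this user:\n" + "\n".join(load_memory(memory_path))
          prompt += "\n\nQuestion: " + question
          return complete(prompt)

    Continual learning, as described above, would move that per-deployment state out of the prompt and into the weights (or some intermediate store), which is exactly what today's deployments don't do.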

    In the future, I think we'll realize that adding more encyclopedic knowledge is not a net benefit for most common agents (though we will provide access to niche knowledge behind "domain-specific" gates, like an MoE model but possibly via MCP calls), and instead allocate a lot more physical capacity to storing and processing individualized knowledge. Agents will slow down on becoming more book smart, but will become more street smart. Whether this "street smart" knowledge ever gets relayed back to a central corpus probably depends mostly on the incentives for the agent.

    Certainly my biggest challenge after a year of developing an industrial R&D project with AI assistance is that it needs way, way more than 400k tokens of context to understand the project properly. The emerging knowledge-graph tools are a step in the right direction, but they're not nearly integrated enough. From my perspective, we're facing a fundamental limitation: as long as we're on the Transformer architecture with O(n^2) attention scaling, I will never get a sufficiently contextualized model response. Period.
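    As a back-of-the-envelope illustration of why that quadratic term bites (a rough sketch that ignores heads, KV caching, and sparse/flash-attention tricks, and makes no claim about any particular model's implementation):

      # Pairwise attention score entries per layer for a given context length.
      def attention_entries(context_tokens: int) -> int:
          return context_tokens ** 2

      for n in (8_000, 128_000, 400_000):
          print(f"{n:,} tokens -> {attention_entries(n):,} score entries per layer")

      # Prints:
      #   8,000 tokens -> 64,000,000 score entries per layer
      #   128,000 tokens -> 16,384,000,000 score entries per layer
      #   400,000 tokens -> 160,000,000,000 score entries per layer
      #
      # Going from 8k to 400k of context multiplies the pairwise work by 2,500x,
      # which is why "just give it more context" stops being a practical answer.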

    You might notice the context ceiling yourself if you ask Claude 4.5 (knowledge cutoff Jan 2025) to ramp up on geopolitical topics from the past year: it is just not physically possible in 400k tokens. Architectures like Mamba, HOPE, or Sutton's OAK may eventually fix this, and we'll see a long-term future resembling Excession, where individual agents develop in enormously different ways, even if they came from the same base image.

  • pk4555 hours ago

    > Having described what I am worried about, let’s move on to who. I am worried about entities who have the most access to AI, who are starting from a position of the most political power, or who have an existing history of repression. In order of severity, I am worried about:

    Given how much this essay is being celebrated on Twitter, it's astounding how explicitly this section (The odious apparatus) decries China yet glosses over the US.

    > A coalition of the US and its democratic allies, if it achieved predominance in powerful AI, would be in a position to not only defend itself against autocracies, but contain them and limit their AI totalitarian abuses.

    Sure, how about a thought on repressing its own populace with AI? I know the very next paragraph tries to cover this, but it all feels stuck in 2016 political discourse, ignorant of the stench of rot and the absence of integrity in American politics. This is especially ironic considering he calls this out as a risk later on: "if there is such a huge concentration of wealth that a small group of people effectively controls government policy with their influence, and ordinary citizens have no influence because they lack economic leverage"

    ----

    The proactive thoughts in the Player Piano section are quite refreshing though. Hopefully other frontier companies follow suit

  • rishabhaiover15 hours ago

    God, gone are the days when I’d spend three days writing unit tests and phone it in for the other two just to reach the weekend.

  • burnto4 hours ago

    My recent hunch is that a lot of the hyperbole in AI is inherited from crypto. There’s more utility here, but the grandiosity is still absurd. These guys win biggest if we all believe their narrative.

  • roxolotl11 hours ago

    One of the things that always strikes me with pieces like this is that they ignore the reality that there are already atrocities being carried out all the time, and that large swaths of the population already struggle to live. Reading the sections about what people could do with these tools feels remarkably callous, because it's clear this is one of the world's richest people articulating what they are still afraid of.

  • zargon6 hours ago

    Before we can survive "powerful AI", which we haven't even the faintest idea how to create, we have to survive the present era of mega-billionaires, Facebook, Twitter, and the propaganda capture thereof. How we do that is the question I want answered.

  • techblueberry10 hours ago

    I'm not saying that he or any of these AI thought leaders are lying, but the economics of building advanced AI are such that he _needs_ people to believe this is true in order to be successful. If they can't get people to keep believing that LLMs will be this wildly powerful, they can't get the money they need to try to make advanced AI this wildly powerful.

  • cowpig8 hours ago

    I find it strange that there's no mention of information asymmetry or monopolistic economic control in this whole essay. It seems like the highest-probability risk to me.

    Yes, asymmetry in economic power is a big thing, but information as a form of power seems like the most defining theme of today. Seems like that's why Musk bought Twitter?

  • reducesuffering15 hours ago

    This is the most important article to come across HN in a while, and I encourage you to read it for the immense intellectual wisdom it contains rather than for the reflexive, uneducated discourse on AI that envelops HN these days. I'm sure that if you read it end to end, you'd agree.

  • mwcampbell15 hours ago

    This is obviously bullshit. If he were really worried about the things he says he is, he'd put the brakes on his company, or would never have started it in the first place.

    • voidmain14 hours ago |parent

      So if someone (actually, practically everyone) who runs an AI company says AI is dangerous, it's bullshit. If someone who is holding NVDA put options says it, they're talking their book. If someone whose job is threatened by AI says it, it's cope. If someone who doesn't use AI says it, it's fear of change. Is there someone in particular you want to hear it from, or are you completely immune to argument?

      • mwcampbell14 hours ago |parent

        I actually do believe that AI is dangerous, though for different reasons than the ones he focuses on. But I don't think he really believes it, since if he did, he wouldn't be spending billions to bring it into existence.

      • surgical_fire14 hours ago |parent

        > So if someone (actually, practically everyone) who runs an AI company says AI is dangerous, it's bullshit.

        My instinct is to take his words as a marketing pitch.

        When he says AI is dangerous, it is a roundabout way to say it is powerful and should be taken seriously.

        • mwcampbell14 hours ago |parent

          Yes, exactly.

  • burnt-resistor5 hours ago

    Too many people dismissed Kurzweil as a crank. He was broadly, if roughly, correct, but he couldn't possibly anticipate the scale, scope, hype, or timeline that is still unfolding.

    Furthermore, people involved in tech often reflexively dismissed this approaching iteration/revolution because their lifestyles, finances, and identities often revolved around a form of employment that would gradually and eventually be replaced (on a long enough time horizon) by significant automation, leaving far fewer people involved with it, and shrinking or erasing the incomes of millions who had previously attained a middle- or upper-middle-income lifestyle, with that wealth captured by billionaires. AI is the computer equivalent of the cotton gin or the cigarette rolling machine.

  • wellpast10 hours ago

    > but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.

    > We are now at the point where AI models are … good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI.

    Really?

    All I’ve seen on HN the past few days is how slop prevails.

    When I lean into agentic flows myself, I’m at once amazed at how quickly they can prototype stuff, but also struck by how deficient and toy-like it all still seems.

    What am I missing?

    • mattnewport4 hours ago |parent

      The disconnect is weird, isn't it? The latest coding models can churn out a lot of mediocre code that more or less works if the task is sufficiently well specified, but it's not particularly good code: they have no real taste, no instinct for elegance or simplification, and weak high-level design. Useful, but nowhere near superhuman. It's also my impression that improvements in raw intelligence, far from increasing exponentially, are plateauing. The advances people are excited about come from agentic patterns and tool use, but that isn't much higher intelligence, just slightly better intelligence run in a loop with feedback. Again, that's useful, but it's nowhere in the realm of "greater than Nobel winning across all domains".

      Outside of coding, the top models still fall flat on their faces when faced with relatively simple knowledge work. I got completely bogus info on a fairly simple tax question just a few days ago, for example, and anyone using AI regularly with any discernment runs into simple failures like this all the time. It's still useful, but the idea that we're on some trajectory to exceed top human performance across all domains seems completely unrealistic when I look at my experience of how things have actually been progressing.

  • Balgair17 hours ago

    Initial thought, about 1/5th of the way through: Wow, that's a lot of em-dashes! I wonder how much of this he actually wrote?

    Edit:

    Okay, section 3 has some interesting bits in it. It reminds me of all those gun start-ups in Texas that use gyros and image recognition to turn a C- shooter into an A- shooter. They typically get bought up quite fast by the government and the tech shushed away. But the ideas are just too easy to implement these days. Especially with robots and garage-level manufacturing, people can pretty much do what they want. I think that means we have to make people better people, then? Is that even a thing?

    Edit 2:

    Wow, section 4 on the abuse of AI by organizations is the scariest. Yikes, I feel that these days with Minneapolis. They're already using Palantir to try some of it out, but are being hampered by, well, themselves. Not a good fallback strat for anyone that is not the government. The point about companies just doing it internally before releasing anything is underrated, I think. What's to stop sama from just, you know, taking one of these models and taking over the world? Like, is this paper saying that nothing is stopping him?

    The big one that should send huge chills down the spines of any country is this bit:

    "My worry is that I’m not totally sure we can be confident in the nuclear deterrent against a country of geniuses in a datacenter: it is possible that powerful AI could devise ways to detect and strike nuclear submarines, conduct influence operations against the operators of nuclear weapons infrastructure, or use AI’s cyber capabilities to launch a cyberattack against satellites used to detect nuclear launches"

    What. The. Fuck. Is he saying that the nuclear triad is under threat here from AI? Am I reading this right? That alone is reason to abolish the whole thing in the eyes of nuclear nations. This, I think, is the most important part of the whole essay. Holy shit.

    Edit 3:

    Okay, section 4 on the economy is likely the most relevant for all of us readers. And um, yeah, no, this is some shit. Okay, okay, even if you take the premise as truth, then I want no part of AI (and I don't take his premise as truth). He's saying that the wealth concentration will be so extreme that the entire idea of democracy will break down (oligarchies and tyrants, of course, will be fine, ignoring that they will probably just massacre their peoples when the time is right). So, combined with the end of nuclear deterrence, we'll have Elon (let's be real here, he means sama and Elon and those people whose names we already know) taking all of the money. And everyone will then be out of a job as the robots do all the work that is left. So, just, like, if you're not already well invested in a 401k, then you're just useless. Yeah, again, I don't buy this, but I can't see how the intermediate steps aren't just going to tank the whole thought exercise. Like, I get that this is a warning, but my man, no, this is unreasonable.

    Edit 4:

    Section 5 is likely the most interesting here. It's the wild cards, the cross products, that you don't see coming. I think he undersells this. The previous portions are all about 'faster horses' in a world where the car is coming. It's the stuff we know. This part is the best, I feel. His point about robot romances is really troubling because, like, yeah, I can't compete with an algorithmically perfect robo-john/jane. It's just not possible, especially if I live in a world where I never actually dated anyone either. Then add in an artificial womb, and there goes the whole thing; we're just pets for the AI.

    One thing that I think is an undercurrent in this whole piece is the use of AI for propaganda. Like, we all feel that's already happening, right? Like, I know that the crap my family sees online about black women assaulting ICE officers is just AI garbage, like the shrimp Jesus stuff they choke down. But I kinda look at reddit the same way. I've no idea if any of that is AI-generated or manipulated now. I already index the reddit comments as total Russian/CCP/IRG/Mossad/Visa/Coca-Cola/Pfizer garbage. But the images and the posts themselves, it just feels increasingly clear that it's all just nonsense and bots. So, like Rao said, it's time for the cozy web of Discord servers, and Signal groups, and WhatsApp, and people I can actually share private keys with (not that we do). It's already just so untrustworthy.

    The other undercurrent here, the one he can't name for obvious reasons, is Donny and his rapid mental and physical deterioration. Dude is clearly unfit at this point, regardless of the politics. So the 'free world' is splintering at the exact wrong time to make any rational decisions. It's all going to be panic mode after panic mode, meaning that the people in charge are going to fall back on their training rather than rise to the occasion. And that training is from like 1970/80 for the US now. So, in a way, it's not going to be AI-based, as they won't trust it or really use it at all. Go Gen Z, I think?

    Edit 5:

    Okay, last bit and wrap-up. I think this is a good wrap-up but, overall, not tonally consistent. He wants to end on a high note, and so he does. The essay says that he should end on the note of 'Fuck me, no idea here, guys', but he doesn't. He wants three things here, and I'll speak to them in turn:

    Honesty from those closest to the technology: Clearly not happening already, even in this essay. He's obviously worried about Donny and propaganda. He's clearly trying, but still trying to be 'neutral' and 'above it all.' Bud, if you're saying that the nuclear fucking triad is at stake, then you can't be hedging bets here. You have to come out and call balls and strikes. If you're worried about things like MAGA coming after you, you already have 'fuck you' money. Go to New Zealand or get a security detail or something. You're saying that now is the time, that we have so little of it left, and then you pull punches. Fuck that.

    Urgent prioritization by policymakers, leaders, and the public: Clearly also not going to happen. For most of my life, the presidents have been born before 1950. They are too fucking old to have any clue about what you're talking about. Again, this is about Donny and the Senate. He's actually talking about like 10 people here, max. Sure, Europe and Canada and yadda yadda yadda. We all know what the roadblocks are, and they clearly are not going anywhere. Maybe Vance gets in, but he's already on board with all this. And if the author is not already clear on this: you have 'fuck you' money, go get a damn hour of their time. You have the cash already, you say we need to do this, so go do it.

    Courage to act on principle despite economic and political pressure: Buddy, show us the way. This is a matter of doing what you said you would do. This essay is a damn good start towards it. I'm expecting you on Dwarkesh any day now. But you have to go on Good Morning America too, and Joe Rogan, and whatever they do in Germany and Canada too. It's a problem for all of us.

    Overall: Good essay, too long, should be good fodder for the AstralCodexTen folks. But unless you get out onto mainstream channels, I'll assume this is just hype for your product, a way of saying 'invest in me!' as things start to hit walls/sigmoids internally.

    • tadfisher17 hours ago |parent

      Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.

      As an ordinary human with no investment in the game, I would not expect LLMs to magically work around the well-known physical phenomena that make submarines hard to track. I think there could be some ability to augment cybersecurity skill just through improved pattern-matching and search, hence real teams using it at Google and the like, but I don't think this translates well to attacks on real-world targets such as satellites or launch facilities. Maybe if someone hooked up Claude to a Ralph Wiggum loop and dumped cash into a prompt to try and "fire ze missiles", and it actually worked or got farther than the existing state-sponsored black-hat groups at doing the same thing to existing infrastructure, then I could be convinced otherwise.

      • Balgair15 hours ago |parent

        > Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.

        Yeah, I've been feeling that as well. It's not a bad strategy at all, makes sense, good for business.

        But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.

        If this is a way to scare up dollars (like any tyre commercial), then he's run out of ceiling now. And that's a sign that it really is sigmoiding internally.

        • rsanheim5 hours ago |parent

          > But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.

          This is not new. Anthropic has raised these concerns in their system cards for previous versions of Opus/Sonnet. Maybe in slightly drier terms, and buried in a 100+ page PDF, but they have raised the risk of either:

          a) a small group of bad actors with access to frontier models and the technical know-how (both how to bypass LLM/AI restrictions and how to make and source weapons) turning that into dirty bombs or small nuclear devices, and working out where to deploy them; or

          b) the bigger, more sci-fi threat of a fleet of agents going rogue, maybe on the orders of a nation state, to do the same.

          I think option (a) is much more frightening and likely. Option (b) makes for better sci-fi thrillers, and could still happen in 5-30ish(??) years.

        • tadfisher13 hours ago |parent

          I agree that it is not a good sign, but I think what is a worse sign is that CEOs and American leaders are not recognizing the biggest deterrent to nuclear engagement and war in general, which is globalism and economic interdependence. And hoarding AI like a weapons stockpile is not going to help.

      • j33dd11 hours ago |parent

        There's a lot of astroturfing on here too.

        The reality is, LLMs to date have not significantly impacted the economy nor been the driver of extensive job destruction. They don't want to believe that, and they don't want you to believe it either. So they'll keep fear mongering, saying "it's coming, it's coming".

        • esafak5 hours ago |parent

          https://www.iese.edu/insight/articles/artificial-intelligenc...

    • foobar1000016 hours ago |parent

      For your Edit 2: yes. This is being actively discussed and examined in both open and (presumably) closed communities. Open communities being, for example: https://ssp.mit.edu/cnsp/about. They just published a series of lectures with open attendance, if you wanted to listen in via Zoom - but yep, that's the gist of it. Spawned a huge discussion :)

      • Balgair15 hours ago |parent

        Thanks for the link! The last thing I see there is from September, though. Do you have a more direct link to the Zoom recordings?

        • foobar1000012 hours ago |parent

          No recordings yet - but a recent event, for example, is here: https://ssp.mit.edu/news/2025/the-end-of-mad-technological-i...

          This was pretty much an open conference deep dive into the causes and implications of what you - and some sibling threads - are saying, having to do with submarine localization, TEL localization, and so on.

    • voidmain15 hours ago |parent

      If AI makes humans economically irrelevant, nuclear deterrents may no longer be effective even if they remain mechanically intact. Would governments even try to keep their people and cities intact once they are useless?

  • ashish411220412 hours ago

    hi