Scoop: Judge Caught Using AI to Read His Court Decisions (migrantinsider.com)
35 points by wahnfrieden 2 hours ago | 20 comments
  • RobRivera an hour ago

    Text to speech has been a technology for a very long time. This is, in my opinion, a whole article about nothing, leaning on the AI label to garner views.

    Yes, we may ask whether speculative uses of AI in other contexts have negative implications, and those questions should be asked, but that isn't the case here.

    It is very much like asking, upon the invention of cars, whether they would start driving into random fields with no restraint, off-roading as if any car owner would do this, upon the sight of a new motor carriage driving down a street. Important questions to ask of emergent technology, sure, but right now that motor carriage is on the road, let it be.

  • dspillett 19 minutes ago

    I can't read the article as it seems hugged to death and not archived elsewhere yet, but if this is the case I'm thinking about, he wrote everything and used a TTS system to read it as him.

    Normally I'm relatively anti-generative-AI, but I don't see a big problem with this one. TTS has been used for a long time, just less convincingly so, often in work situations. Many people with disabilities that affect their verbal ability use it so they can communicate in a way that feels less impersonal than written form - if everyone else using TTS normalises this sort of thing more, then it'll be a boost for those users.

    My only concern here is that TTS systems based on generative tech have been known to hallucinate slight changes to the text they are reading. In legal contexts small changes in wording can have significant impact, so I hope he checks the output in detail, or has someone else do so, after it is produced before giving it to anyone else…

  • mjw1007 44 minutes ago

    Is this a real judge, or is an "Immigration Judge" one of those not-actually-a-judge decisionmakers employed by the executive?

    • khuey 32 minutes ago | parent

      The latter. They're not even real administrative law judges.

    • 38 minutes ago | parent
      [deleted]
    • Terr_ 26 minutes ago | parent

      I consciously try to reframe their misleadingly-called "warrants" as "memos".

  • bitwize an hour ago

    So if he's writing the decisions himself and using an AI voice to read them, big deal. It's pretty much a nothingburger, unless the AI voice somehow misread something in a legally relevant way. If he's using AI to generate decision text, that's a more serious issue.

    • nerevarthelame 36 minutes ago | parent

      Generative text to speech models can hallucinate and produce words that are not in the original text. It's not always consequential, but a court setting is absolutely the sort of place where those subtle differences could be impactful.

      Lawyers dealing with gen-AI TTS rulings should compare what was spoken with what was in the written order to make sure there aren't any meaningful discrepancies.

      • csallen 19 minutes ago | parent

        People can also make mistakes while reading, and I suspect we do so with just as much if not greater frequency than gen AI text-to-speech algos.

        It's the AI thinking that makes me wary, not AI text-to-speech.

      • 23 minutes ago | parent
        [deleted]
    • AngryData 17 minutes ago | parent

      I'm not sure I agree, if it isn't necessary for a health issue. It depersonalizes the defendant and detaches the judge from the real human consequences of their decisions. It is a whole extra step toward gamifying the judicial process, which helps facilitate even worse abuses of the justice system than we already deal with.

    • datadrivenangel 15 minutes ago | parent

      Except this judge is especially harsh, which suggests that he's very biased, and thus being more productive via AI seems like a bad outcome.

      From TFA: "Burns approved just 2 percent of asylum claims between fiscal 2019 and 2025—compared with a national average of 57.7 percent."

    • silisili 38 minutes ago | parent

      While not as bad as AI rendering the decision itself obviously, I wouldn't exactly say it's a nothing burger. It feels completely inauthentic and dystopian.

      I can only imagine the hell of being nervous in a big court case waiting for the decision, and hearing that annoying TikTok lady deliver the bad news.

  • zkmon an hour ago

    [flagged]

    • cl3misch an hour ago | parent

      I read the article as him writing the text himself and using AI just for turning it into audio. Which is evidently frowned upon, but to me doesn't constitute AI taking over policy making.

    • RobRivera 44 minutes ago | parent

      I am not convinced that you read the article. What specific action in the narrative do you have a grievance with?

      • zkmon 32 minutes ago | parent

        It's the eagerness with which the high offices let AI enter into courtroom affairs, which are considered highly sacrosanct. If the trend continues, judgements might be delivered by AI, and you would be saying why can't we let a more intelligent system take over the role of human judges and policy makers. We already have teachers and doctors using AI. These roles (judges, policy makers, teachers, doctors, priests etc) were considered to be guardians of ethical righteousness. That's the grievance.

      • stinky613 42 minutes ago | parent

        Seems like they didn't even read the headline...

    • stinky613 an hour ago | parent

      That's not at all what the article says is happening.

      "Immigration Judge John P. Burns has been using artificial intelligence *to generate audio recordings of his courtroom decisions* at the New York Broadway Immigration Court, according to internal Executive Office for Immigration Review (EOIR) records obtained by Migrant Insider." [Emphasis added]

  • monerozcash 40 minutes ago

    This feels like a daily mail article for a slightly different audience. Is this what's now referred to as "rage baiting"?