xAI issues apology for Grok's antisemitic posts (nbcnews.com)
25 points by geox a day ago | 15 comments
  • henderson a day ago

    Cool, cool.

    Now will they apologize for Grok 4 (the new one, not the MechaHitler Grok 3 referenced in this article) using Musk's tweets as primary sources for every request, explain how that managed to occur, and commit to not doing that in the future?

  • a day ago
    [deleted]
  • ashoeafoot a day ago

    xAI: write an apology for whatever posts offend if NrOfOffended in Graph > 2
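    Taken literally, the joke above describes a trivial threshold rule. A minimal Python sketch (all names here are hypothetical, invented only to mirror the joke):

    ```python
    # Hypothetical sketch of the joke: auto-apologize for a post once more
    # than two people in the social graph are offended by it.
    def should_apologize(offended_users: set, threshold: int = 2) -> bool:
        """Return True when the number of offended users exceeds the threshold."""
        return len(offended_users) > threshold

    # Example "graph" of posts mapped to offended users (made-up data).
    posts = {
        "post_1": {"alice", "bob"},           # 2 offended: below threshold
        "post_2": {"alice", "bob", "carol"},  # 3 offended: triggers apology
    }

    for post_id, offended in posts.items():
        if should_apologize(offended):
            print(f"xAI issues apology for {post_id}")
    ```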

  • thatguymike a day ago

    Oh I see, they set ‘is_mechahitler = True’, easy mistake, anyone could do it, probably one of those rapscallion ex-OpenAI employees who hadn’t fully absorbed the culture.

    • DoesntMatter22 a day ago | parent

      Reddit has now fully leaked into hacker news

      • bcraven a day ago | parent

        Please check the HN guidelines, particularly the final one.

      • queenkjuul a day ago | parent

        Now?

  • freedomben a day ago

    > “We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo,” the statement said.

    Love them or hate them, or somewhere in between, I do appreciate this transparency.

    • mingus88 a day ago | parent

      It’s a kind of meaningless statement, tbh.

      Pull requests to delete dead code or refactor are super common. It’s maintenance. Bravo.

      What was actually changed, I wonder?

      And the system prompt is important, and it’s good that they’re publishing it, but clearly the issue is the training data and the compliance with user prompts that made it a troll bot.

      So should we expect anything different moving forward? I don’t. Musk’s character has not changed, and he remains the driving force behind both companies.

    • loloquwowndueo a day ago | parent

      If they don’t, the prompt will just get leaked within hours of release by someone manipulating Grok itself, and then picked apart and criticized. It’s not about transparency but about claiming to be transparent to save face.

    • harimau777 a day ago | parent

      Is there any legal obligation for them not to lie about the prompt?

      • JumpCrisscross a day ago | parent

        If they lie and any harm comes from it, yes, that increases liability.

        • mingus88 a day ago | parent

          Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc.

          I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was due to intent by xAI, or even that it constitutes liability, given all the disclaimers.

        • MangoToupe a day ago | parent

          Liability for what? Have they been hit with a defamation suit or something?

    • queenkjuul a day ago | parent

      It's not transparency, it's ass-covering technobabble.