The Future of Forums Is Lies, I Guess (aphyr.com)
36 points by zdw a day ago | 17 comments
  • crabmusket 18 hours ago

    Why should "we" not legislate that any AI systems must identify themselves as such when asked? There could even be a specified way to ask this question so it can be recognised by simple NLP techniques and avoid the black box processing of the model itself. This could carry legal weight.

    That way, humans could impersonate AIs, but AIs would be legally encouraged, shall we say, not to impersonate humans.

    "It could never be enforced" or "but there will be bad actors who don't do this" are useful and valid discussions to have, but I think they are separate from the question of whether this would be a worthwhile regulatory concept to explore.
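The "specified way to ask" that simple NLP could recognise might look something like the sketch below. This is purely illustrative: no such standard exists, and the canonical phrasing here is invented for the example.

```python
import re

# Hypothetical canonical disclosure query. The exact phrasing is an
# assumption for illustration; a real standard would pin this down.
DISCLOSURE_QUERY = re.compile(
    r"\bare\s+you\s+an?\s+(ai|bot|language\s+model|automated\s+system)\b",
    re.IGNORECASE,
)

def is_disclosure_query(message: str) -> bool:
    """Return True if the message matches the standardized question form,
    so a system can route it to a mandatory self-identification response
    without involving the model's own black-box processing."""
    return bool(DISCLOSURE_QUERY.search(message))

print(is_disclosure_query("Hey, are you an AI?"))       # True
print(is_disclosure_query("What's the weather like?"))  # False
```

The point of a fixed, recognisable form is that compliance can be checked mechanically, outside the model itself.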

    • SebastianKra 7 hours ago | parent

      At least it would get rid of SEO blogspam, since these sites have "reputable" companies behind them.

      Search engines would probably skip any site that admits to being AI-generated.

    • 7 hours ago | parent
      [deleted]
  • alganet a day ago

    We need to normalize behaviors that are commonly attributed to paranoia.

    It is ok to ask a lot of questions, it is ok to be skeptical of friendly interactions, it is ok to be suspicious. These behaviors are not social anxiety, not psychosis, not anti-social. They are, in fact, desirable human traits that contribute to the larger group.

    There is no automated detection, no magic way of keeping these new threats away. They work by exploiting humans in vulnerable states. We need kind humans that are less vulnerable to those things.

    • jaredcwhite a day ago | parent

      Are you real?

      Are you a human?

      Is that real text you typed out?

      Does anything you're saying have any meaning?

      ----

      That is essentially what you are asking for. Every single online interaction immediately viewed as entirely suspect, with people having to go out of their way to prove they are…people.

      Well perhaps you're right that this is where online culture is headed, but we don't have to like it. I hate it. I hate it so bad.

      • alganet 20 hours ago | parent

        You don't need to be the paranoid one; you just need to accept that some people will be paranoid, that that's a good thing, and that you should listen. You don't have to like them or obey them.

        The other option is trying to make your bubble of protection and trust, where everyone is happy and friendly. Good luck with that.

  • anitil 19 hours ago

    I'm not sure what the solution is here - some forums put people in a 'probationary' state for a while where they either can't post or have extra scrutiny. There's some spoiling of the commons going on here that I can't quite put my finger on.

    Separately, why are companies using this? Surely it's counterproductive to their marketing efforts? Or am I wrong, and any attention is good attention?

  • pvg a day ago

    Followup of https://news.ycombinator.com/item?id=44130743

  • lavelganzu 21 hours ago

    Money is an imperfect but real solution. The simple thing is to charge a small sign-up fee. Obviously this dramatically increases the barrier to entry for real humans. But it should cut the spam even more sharply.

    • alganet 20 hours ago | parent

      It's worse. It creates a false sense of security, while it allows people with vast resources to spam and scam freely.

      We need smarter humans, it's the only way.

      • lavelganzu an hour ago | parent

        It "allows" people with vast resources to spam only until a moderator removes the account, and it ensures the moderator is paid to do so. But more critically, it removes the profit incentive to spam, so even people with vast resources won't bother.

        • alganet an hour ago | parent

          What are you even talking about?

          A nasty SEO company with vast resources could create thousands of accounts, entry fee or not, if it determines that the fee is cheaper than the value it would get by spamming.
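The break-even reasoning in this exchange can be sketched with illustrative numbers. All figures below are assumptions invented for the example, not data from the thread.

```python
# Hypothetical figures: a spam operation weighs account cost against
# the expected return per account. None of these numbers are real.
entry_fee = 5.00          # sign-up fee per account, in dollars
accounts = 1_000          # accounts the operation plans to create
value_per_account = 12.0  # expected SEO/referral value extracted per account

total_cost = entry_fee * accounts
total_value = value_per_account * accounts

# The fee deters spam only when it exceeds the value extracted per account.
print(f"cost: ${total_cost:,.2f}, value: ${total_value:,.2f}")
print("spamming is profitable" if total_value > total_cost else "fee deters spam")
```

Under these assumed numbers the operation clears $7 per account, so a flat fee only works if it is set above the attacker's expected per-account return.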

  • praptak a day ago

    I don't believe a purely technical solution exists. This needs to get political, ideally making it a crime to use technology in this way. The scope is much broader and more dangerous than niche forums. This shit has the potential to kill the ability of societies to discuss policy in a meaningful way.

    • burnt-resistor a day ago | parent

      This will likely lead to requirements for identity verification and a small bond as collateral for the privilege of online participation in a particular forum. Idealistic, unenforceable laws won't help.

  • chatmasta a day ago

    > Unavailable Due to the UK Online Safety Act

    https://archive.is/y9JyC

  • giingyui 13 hours ago

    > They use Indian IPs

    Oh really?

  • a day ago
    [deleted]