CJEU has made it effectively impossible to run a user-generated platform legally (techdirt.com)
57 points by alsetmusic 2 hours ago | 18 comments
  • xg15 an hour ago

    > I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.

    I never really understood how that system is supposed to work.

    So on the one hand, Section 230 absolves the hoster of liability and tells an affected party to go after the author directly.

    But on the other hand, we all rally for the importance of anonymity on the internet, so it's very likely that there will be no way to find the author.

    Isn't this a massive vacuum of responsibility?

    • Retric an hour ago | parent

      Anonymous authors have very little reach without external promotion or a long-lasting reputation.

      If someone builds up a reputation anonymously, then that reputation itself is something that can be destroyed when a platform deletes their account, etc.

      • probably_wrong 35 minutes ago | parent

        Your premise runs precisely counter to the basis of the lawsuit itself:

        > (...) an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number.(...) The same advertisement nevertheless remains available on other websites which have reproduced it.

        Anonymous author, great reach, enough damage for the victim to take a lawsuit all the way to the CJEU.

        • Retric 8 minutes ago | parent

          > great reach

          What exactly provided great reach here? Was it the creator or something else?

          > The same advertisement nevertheless remains available on other websites which have reproduced it.

          This presumably involved the actions of third parties, not just the original content creator.

    • xoa 11 minutes ago | parent

      A few things:

      >But on the other hand, we all rally for the importance of anonymity on the internet, so it's very likely that there will be no way to find the author.

      So:

      1) We all rally for the importance of anonymity (wrt general speech) EVERYWHERE, before even (and critical to) the founding of America. Writings like the Federalist Papers were absolutely central to arguments for the US Constitution, and they were anonymous. "The Internet" is not anything special or novel here per se when it comes to the philosophy and politics of anonymous speech. There has always been a tension between the risks and the value of anonymous speech, and America has come down quite firmly on the value side of that.

      2) That said, "anonymous" on the internet very rarely actually rises to the level of "no way to find the author with the aid of US court-ordered process". Like, I assume that just as my real name is not "xoa" your real name is not "xg15", and to the extent we have made some effort at maintaining our pseudonymity, it'd be somewhat difficult for any random member of the general public to connect our HN speech to our meatspace bodies. But the legal process isn't limited to public information. If you have a colorable lawsuit against someone, you can sue a placeholder defendant and then attempt to discover their identity via private data. HN has our IP addresses if nothing else, as do the intermediaries between the systems we're posting from and HN, and possibly a valid email address. Those can potentially by themselves be enough breadcrumbs to follow back to a person, and enough cause to engage in specific discovery against them. And that's without any money getting involved; if there are payments of any kind, that leaves an enormous number of ripples. And that's assuming nobody left any other clues, and that you can't make any inferences about who would be doing defamatory speech against you to narrow it down further.

      Yes, it's possible someone at random is using a massive number of proxies from a camera-less logless public access point with virgin monero or whatever and perfect opsec, but that really is not the norm.

      3) Hosters not being directly liable doesn't make them immune to court orders. If something is defamatory you can secure an order to have it removed even without finding the person in question. And in practice most hosters are probably going to remove something upon notification as fast as possible, as in this case, and ban the poster in question on top.

      So no, I don't think it's "a massive vacuum of responsibility" any more than it ever was, and the contrast is that eliminating anonymous speech is a long-proven, massive risk to basic freedoms.

    • unyttigfjelltol 27 minutes ago | parent

      The combo effectively enshittified swaths of the Internet, which now is full of robo-pamphleteers acting with anonymous impunity, in ways they never would if sitting face-to-face.

      I love the Internet, but it normalizes bad behavior, and to the extent the CJEU was tracking toward a new and more stringent standard, that standard is well earned by the Internet and its trolls.

  • nomercy400 20 minutes ago

    The party that decides to show an advertisement in exchange for payment should be more responsible for what it is showing than a free user posting content.

    Now things become interesting when a user pays for ranking or 'verification' checkmarks. What makes that content different from a paid advertisement?

    • jay_kyburz 2 minutes ago | parent

      I came to the comments to express the same sentiment, expecting it to be an unpopular opinion. Pleasantly surprised to find your comment at the top.

      Hosts should make sure they know who is posting content on their platforms, so that in the event they are sued, they can countersue the creator of the content.

  • hexo 30 minutes ago

    Websites have to be held responsible for the ads they serve. Otherwise they tend to make unfounded excuses we couldn't care less about, like scam ads on YouTube.

    But user generated content? LOL, no.

  • smartbit an hour ago

    Another analysis, by Heise Verlag, publisher of c't, Europe's largest IT and tech magazine: https://heise.de/-11102550

    The Russmedia ruling of the ECJ: Towards a “Cleannet”?

    A change in liability privilege for online providers will lead to a “cleaner”, but also more rigid, monitored internet, says Joerg Heidrich.

  • SilverElfin an hour ago

    > Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person.

    Could this lead to censorship as well? For example, you could go to a website or community you don’t like, share information that could be seen as “sensitive personal data”, and then file an anonymous complaint so they get into legal trouble or get shut down?

  • Tor3 27 minutes ago

    Sigh. "..that personal data processed must be accurate and, where necessary, kept up to date. "

    How do they think a hosting provider can check if personal data is accurate? Maybe if privacy didn't exist and everybody could be scrutinized.. but the ruling refers to the GDPR to justify this, and the GDPR is about _protecting_ privacy. So, what is it?

    And for everything else.. is the material sensitive or not? How can anyone know, in advance?

    I suggest every web site host simply forward any and all input to an EU Court address, and let them handle it. They're the ones suggesting that hosts should make sure that personal data on someone is "accurate", and they're the ones demanding that the data should not be "sensitive", so they can just as well be responsible for vetting the data.

    But they're all crazy anyway, as they demand that a website must block anyone from copying the content.. so how, at the same time, can you even have a website? A website which people can watch?

    If the ruling were about collecting data which isn't for displaying, i.e. what a net shop does (address, credit card number), then this would be understandable. But provisions for that already exist; instead they use this (the GDPR) as a tool to extend liability to user-created content. It's not limited to ads, and ads do need something done about them. Something totally different from this.

  • free_bip 2 hours ago

    Seems like a pretty big overreaction IMO. Advertisements deserve more strict regulation than general user-generated content because they tend to reach far more people. The fact that they aren't has resulted in something like 10% of all ads shown being outright scams or fraud[0]. And they should never have allowed the ad to air in the first place - it was patently and obviously illegal even without considering the GDPR.

    If these companies aren't willing to put basic measures in place to stop even the most obviously illegal ads from airing, I have a lot of trouble having sympathy for them getting their just deserts in court.

    [0]: https://www.msn.com/en-us/money/personalfinance/meta-showed-...

    • zahlman an hour ago | parent

      > Advertisements deserve more strict regulation than general user-generated content because they tend to reach far more people.

      They deserve strict regulation because the carrier is actively choosing who sees them, and because there are explicit fiscal incentives in play. The entire point of Section 230 is that carriers can claim to be just the messenger; the only way to make sense of absolving them of responsibility for the content is to make the argument that their conveyance of the content does not constitute expression.

      Once you have auctions for ads, and "algorithmic feeds", that becomes a lot harder to accept.

      • xoa 3 minutes ago | parent

        >The entire point of Section 230 is that carriers can claim to be just the messenger

        Incorrect, and it's honestly kinda fascinating how often this meme shows up. What you're describing is "common carrier" status, like an ISP (or FedEx/UPS/the post office) would have. The point of Section 230 was specifically to enable not being "just the messenger": it was part of the overall Communications Decency Act, intended to aid in stopping bad content. Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) that made service providers liable for their users' content when they didn't act as pure common carriers but instead tried to moderate it and, obviously and naturally, could not perfectly catch everything. The specific fear was that this left only two options: either ban all user content, which would have brutalized the Internet even back then, or cease all moderation, turning everything into a total cesspit. Liability protection was precisely one of the rare genuine "think of the children!" wins, enabling a third path where everyone could do their best to moderate their platforms without becoming the publisher. Not being a common carrier is the whole point!

    • tensor an hour ago | parent

      Except this isn't limited to ads, is it? From the post it sounds like the ruling covers any user content. If someone uploads personal data to GitHub, now GitHub is liable. In fact, why wouldn't author names in open source licenses count as PII?

      • o11c 21 minutes ago | parent

        The author of the article is claiming it extends beyond ads.

        That does not appear to be what the court actually said, however.

        And I 100% believe that all advertisements should require review by a documented human before posting, so that someone can be held accountable. In the absence of this, it is perfectly acceptable to hold the entire organization liable.

      • free_bip 43 minutes ago | parent

        You could always sue GitHub to find out.

        Personally, I'm not buying the slippery slope argument. I could be wrong of course but that's the great thing about opinions: you're allowed to be wrong :)