Political parties hitching their wagon to "AI good" or "AI bad" aside, I'm actually a huge fan of this sort of anti-law. Legislators have been far too eager to write laws about computers and the Internet and other things they barely understand lately. A law that puts a damper on all that might give them time to focus on things that actually matter to their constituents instead of beating the tired old drum of "we've got to do something about this new tech."
The problem is when companies dodge responsibility for what their AI does, and these laws prevent updating the law to handle that. If your employees reject black loan applicants instantly, that's a winnable lawsuit. If your AI happens to reject all black loan applicants, you can hide behind the algorithm.
If your employees reject black loan applicants because they're black, that's a winnable lawsuit. If they reject black loan applicants because it happens the black loan applicants had bad credit, not so much.
Why are we treating AI like something different? If it's given the race of the applicants and that causes it to reject black applicants, it's doing something objectionable. If it's given the race of the applicants but that doesn't significantly change its determinations, or it isn't given their race to begin with, it's not.
The trouble is people have come up with this ploy where they demand no racial disparity in outcomes even when there are non-racial factors (e.g. income, credit history) that correlate with race and innately result in a disparity.
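To make that concrete, here's a toy simulation showing how a rule that never sees race can still produce a large gap in approval rates when a legitimate factor correlates with group membership. All the numbers here (the 650 cutoff, the score distributions) are invented for illustration, not drawn from any real data:

    # Toy example: a race-blind decision rule can still yield disparate outcomes
    # when a legitimate input (credit score) is distributed differently by group.
    # All distributions and thresholds are made up for the sake of the example.
    import random

    random.seed(0)

    def approve(credit_score: float) -> bool:
        # The rule never sees race, only the score.
        return credit_score >= 650

    # Hypothetical score distributions that happen to differ by group.
    group_a = [random.gauss(700, 50) for _ in range(100_000)]
    group_b = [random.gauss(660, 50) for _ in range(100_000)]

    rate_a = sum(map(approve, group_a)) / len(group_a)   # ~0.84
    rate_b = sum(map(approve, group_b)) / len(group_b)   # ~0.58
    print(f"Group A approval rate: {rate_a:.1%}")
    print(f"Group B approval rate: {rate_b:.1%}")
    # The ratio falls below the EEOC's "four-fifths" disparate-impact guideline:
    print(f"Impact ratio (B/A): {rate_b / rate_a:.2f}")  # ~0.69

The rule is facially neutral; the disparity comes entirely from the correlated input, which is exactly the distinction the comments above are arguing over.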
A cynic would say that plaintiff lawyers don't like algorithms that reduce human bias because filing lawsuits over human bias is how they get paid.
Everybody gangsta until AI deletes 90% of white collar jobs.
The actual statute: https://archive.legmt.gov/content/Sessions/69th/Contractor_i...
Seems pretty vague to me, but IANAL.
No mention of DRM. Shame.
Because this comes from ALEC (per the article on this), I'd say it's a bad idea, meant to support poorly conceived rules. ALEC advocates for policies that benefit conservatives and large corporations. They push policies that persecute minorities and also make it much harder for states to prosecute business misconduct or discrimination. In this case, it looks like this is intended to block government regulation of AI and computing algorithms. https://www.commoncause.org/issues/alec/
My theory is that this will benefit Flock.
Background:
Trump signed an Executive Order (Dec 2025) preempting state AI safety laws, threatening to withhold $42.5B in broadband funding from states that refuse to comply (specifically targeting Colorado and California).
In response, New York's governor signed the "RAISE Act" after the EO was issued. It imposes strict safety, transparency, and reporting protocols on frontier models.
California is enforcing its "Transparency in Frontier AI Act" (Sept 2025) regardless of the federal threat. It requires developers of large AI models (over 10^26 FLOPs of training compute) to publicly disclose safety frameworks, report "catastrophic risk" incidents, protect whistleblowers, etc. (For a sense of scale, see the rough compute sketch after this list.)
Big Tech (OpenAI, Google, Andreessen Horowitz) is siding with Trump on this one. They prefer one weak federal law to 50 strict state laws.
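For scale, here's a back-of-the-envelope sketch of what crossing 10^26 FLOPs takes, using the common ~6 × parameters × training-tokens approximation for total training compute. The model sizes and token counts below are hypothetical examples, not figures from the Act or the article:

    # Which hypothetical training runs cross California's 10^26 FLOP threshold?
    # Rule of thumb: total training FLOPs ~= 6 * parameter_count * training_tokens.
    THRESHOLD_FLOPS = 1e26

    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    runs = {
        "70B params, 15T tokens":  training_flops(70e9, 15e12),   # ~6.3e24
        "400B params, 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
        "1T params, 20T tokens":   training_flops(1e12, 20e12),   # ~1.2e26
    }
    for run, flops in runs.items():
        status = "over" if flops > THRESHOLD_FLOPS else "under"
        print(f"{run}: {flops:.1e} FLOPs ({status} threshold)")

By this rough measure, only the very largest training runs anyone has publicly discussed would fall under the Act.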
This post:
Red states are creating deregulation areas. If a big tech company has data centers in Montana, and CA tries to impose an audit on their model, the company can sue, claiming their "Civil Rights" in Montana are being infringed by California's overreach.
Red states are tying "Compute" to the First Amendment (free expression), basically anticipating the Supreme Court.
Future implications:
The US continues to split into two distinct operating environments. https://www.economist.com/interactive/briefing/2022/09/03/am...
The goals of this law:
"So, hypothetically, in a state with a right-to-compute law on the books, any bill put forward to limit AI or computation, even to prevent harm, could be halted while the courts worked it out. That could include laws limiting data centers as well.
“The government has to prove regulation is absolutely necessary and there’s no less restrictive way to do it,” Wilcox said. “Most oversight can’t clear that bar. That’s the point. Pre-deployment safety testing? Algorithmic bias audits? Transparency requirements? All would face legal challenge. "
My take: This sounds incredibly pro-industry and anti-democratic.
These laws would have one upside: open models would remain open and available. A big problem with at least some of the proposed AI regulation is that it could outlaw an increasingly important aspect of general-purpose computing for the majority of people.
I don't know of any proposed laws that limit models themselves. I only know of proposed laws that limit the deployment of models.
My take (IANAL): the law should probably be ruled unconstitutional for restricting the ability of future legislatures to pass laws. I believe there is precedent for this, but I can't remember where.
And scary. Really scary.
It's really funny how, for all the talk of AI safety, what has resulted is precisely the series of steps one would take if one were intentionally designing some kind of dystopian AI system.
> Similar to how free speech doesn't mean you can yell “Fire!” in a crowded theater
While I appreciate bringing attention to ongoing changes in the tech/legal landscape, I'll get my rundowns from a source that doesn't blindly repeat this broken assertion. Doesn't speak well of their research practices.
Yeah, that quote was "mere dicta" from the first day (the case wasn't about shouting fire in a theater, it was about distributing pamphlets opposing the draft), and the actual holding of the case the quote is from was overturned more than half a century ago.
Hasn't stopped every authoritarian from parroting the quote whenever they want to censor something.
Despite its history, it’s still a valid example of an exception to the First Amendment under current law. The problem is that most people who cite it are using it as an analogy for something else that isn’t.
Including, from a modern free speech advocacy perspective, the original use of the analogy, which was about forbidding people from advocating resistance against a military draft!
> Despite its history, it’s still a valid example of an exception to the First Amendment under current law.
It's not. The current standard, set by Brandenburg v. Ohio, permits punishing only speech that is directed to inciting imminent lawless action and is likely to produce it. That exception is much narrower than the Schenck case's "clear and present danger" threshold.
> The problem is that most people who cite it are using it as an analogy for something else that isn’t.
Even the man who coined the phrase did this. Schenck's "fire in a theater" aphorism was Oliver Wendell Holmes's attempt to persuasively discredit a group of Yiddish-speaking anti-war pamphleteers in his non-binding legal commentary. The comparison is not a legal analysis, nor is it itself a ruling on the merits of the case.
> Despite its history, it’s still a valid example of an exception to the First Amendment under current law.
Is it though? If you're putting on a play, and someone shouts "fire" in the script, e.g. in a play criticizing the Schenck decision itself, can the government punish you for putting on the play because of the risk it could cause a panic? If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire, but you believe that there is?
Not only is it useless as an analogy for doing any reasoning, the thing itself is so overbroad that even the unqualified literal interpretation is more of a prohibition than would actually be permissible.
None of your examples is what is meant by "Shouting fire in a crowded theatre." The quote is expressly about falsely shouting fire, not as part of the play, not as an honest act of attempting to alert people to a dangerous situation. The quote with more context is clear: "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic..."
> If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire but you believe that there is?
(IANAL) Law usually takes circumstance into consideration, and AIUI, usually comes to reasonable conclusions in this case. The Wikipedia article on this quote[1] goes into that:
> Ultimately, whether it is legal in the United States to falsely shout "fire" in a theater depends on the circumstances in which it is done and the consequences of doing it. The act of shouting "fire" when there are no reasonable grounds for believing one exists is not in itself a crime, and nor would it be rendered a crime merely by having been carried out inside a theatre, crowded or otherwise. If it causes a stampede and someone is killed as a result, then the act could amount to a crime, such as involuntary manslaughter, assuming the other elements of that crime are made out. Similarly, state laws such as Colorado Revised Statute § 18-8-111 classify knowingly "false reporting of an emergency," including false alarms of fire, as a misdemeanor if the occupants of the building are caused to be evacuated or displaced, and a felony if the emergency response results in the serious bodily injury or death of another person.
(It continues with other jurisdictions and situations.)
[1]: https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...
> None of your examples is what is meant by "Shouting fire in a crowded theatre."
Which is exactly the point, because they nevertheless literally are "shouting fire in a crowded theater".
> The quote with more context is clear: "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic..."
Which is likewise why the people trying to use the quote all but universally omit the qualifiers -- it would otherwise be clear that, even in the context of Schenck, the constraint was intended to be narrow.
And even with the qualifiers, the original quote still doesn't do well with the first example or the third, because imposing a prior restraint under the hypothetical argument that people could get confused and panic is going to be a weak case when the reason someone is doing it is it to criticize the government, and it's quite objectionable to punish people for speech when they genuinely believe something to be true just because they've made an honest mistake.
I think we should be limiting data centers because of their power draw and water usage, so I’m against it.
I don’t particularly want to compete with Microsoft for power, or OpenAI for water.
Why are people on Hacker News downvoting this? I am not American, but from what I hear from Americans, this is a genuine and proven concern: AI does end up increasing their electricity and water costs.
I recently saw a county in America negotiating a data-center deal where the deal itself was behind an NDA, so the government couldn't tell its own citizens about it.
Personally, I feel the problem isn't datacenters themselves (there were plenty of datacenters before AI), but AI datacenters in particular, which are really lucrative for the business but really hurtful to the average person in the area.
AI datacenters increase your electricity bill, your water bill, and now the cost of your hardware ("RAMflation").
Why are the demands of a few billionaires preferred over the needs of the people who vote? And the mindset of many people (I doubt any billionaires are reading my comment) is to try to get on the other side of that line rather than fix it. This whole philosophy doesn't sit right with me, especially for AI datacenters (I'm not so much against normal CPU datacenters).
> I am not American, but from what I hear from Americans, this is a genuine and proven concern: AI does end up increasing their electricity and water costs.
The media pretty much hates AI because it competes with them (people read the AI summary instead of visiting the publisher's website), so they're churning out one hit piece after another.
If you have a sudden spike in demand for electricity, short-term prices increase. Then the higher prices drive construction of new generation capacity (the cheapest option is currently solar), and long term the prices, if anything, come down: you get more economies of scale, and data centers used for AI training are actually pretty good at curtailing load during the rare extended periods of renewable undersupply, which is one of the main things a grid needs to take advantage of that cheap solar.
Meanwhile data centers don't inherently use any water. In some climates it's more efficient to use evaporative cooling -- it lowers energy consumption. That doesn't mean you have to do it that way, or even that it's the best choice for all climates. Moreover, many areas don't have the same water problems as the Southwest. "Millions of gallons of water" sounds like a lot until you realize the Great Lakes contain quadrillions of gallons of water, and it's really just being evaporated rather than actually consumed and then comes back down as rain shortly thereafter.
The media also likes comparing these numbers to household water consumption because households don't actually use that much water. Agriculture in just California consumes around 11 trillion gallons of water a year. Using the standard media units of household water consumption, this is the same amount of water used by 160 million households. There are around 133 million households in the US in total.
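As a sanity check on that household-equivalence claim, here's the back-of-the-envelope arithmetic. The ~190 gallons/day per-household figure is an assumption inferred from the numbers above, not from any cited source (published figures vary, roughly 180-300 gallons/day):

    # Back-of-the-envelope check on the household-equivalence claim above.
    # Assumption: a "household" uses ~190 gallons/day (the figure the
    # 160-million-household claim implies).
    CA_AG_GALLONS_PER_YEAR = 11e12      # ~11 trillion gallons/year
    GALLONS_PER_HOUSEHOLD_DAY = 190     # assumed household usage

    gallons_per_household_year = GALLONS_PER_HOUSEHOLD_DAY * 365  # ~69,350
    household_equivalents = CA_AG_GALLONS_PER_YEAR / gallons_per_household_year
    print(f"{household_equivalents:.3g} households")  # ~1.59e8, i.e. ~160 million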
The water consumption is an entirely fake problem outside of areas where water is actually scarce, and not even the major offender in the areas where it is scarce. You can also obviously put new data centers outside of those areas, or use non-evaporative cooling systems.
> I recently saw a county in America negotiating a data-center deal where the deal itself was behind an NDA, so the government couldn't tell its own citizens about it.
This is likewise related to the media trying to impede them. If the local media is going to launch a vendetta against you as soon as they find out you're trying to build something, you'd want to keep it quiet for as long as possible.
That city with the NDA was in Wisconsin. This is not a place with water scarcity.
We have watched every good tech invention for the last 20 years become enshittified. Nobody believes in the magical market theory that “then prices will come down,” because they haven’t. Every service has gone berserk, and nothing feels like a deal any longer, and that’s before you add in the dystopian hellhole aspects of “tech”.
Water consumption and the infrastructure to leverage it can’t be nullified by pointing to… the Great Lakes. That’s not how that works.
This is before you get into “is this technology good for people”. It’s good for a few doughy tech bros on the Epstein list, but not for the rest of us.
We don’t need the country to have an infinite pool of data centers, we need healthcare, clean air and water, etc.
We should stop these dumb ass data centers, and also ban all crypto. Tech people were supposed to help usher in Star Trek, not raise our power bills and destroy the environment by firing up new gas power plants for random number generating pyramid schemes and the suckers who fall for them. It’s all beyond embarrassing.
Someone Gödel-number this so we have a right to pirate and repair.