Mercenaries over missionaries.
Many employers want employees to act like cult members. But when the going gets tough, those believers are often the first to be laid off, and the least prepared for it.
Employers, you can't have it both ways. As an employee, don't get fooled.
During the first-ever layoff at $company in 2001, part of the dotcom implosion, one of my coworkers who got whacked complained that it didn't make sense, as he was one of the company's biggest boosters and believers.
It was supremely interesting to me that he thought the company cared about that at all. I couldn't get my head around it. He was completely serious; he kept arguing that his loyalty was an asset. He was much more experienced than me (I was barely two years into my career).
In hindsight, I think it is true that companies value that in a way. I've come to appreciate people who just stick it out for a while. I try to make sure their comp makes it worth their while. They are so much less annoying to deal with than the assholes who constantly bitch and moan about doing what they're paid for.
But as a personal strategy, it’s a poor one. You should never love or be loyal to something that can’t love you back.
The one and ONLY way I've ever seen "company" loyalty rewarded in any way is if you have a DIRECT relationship with a top level senior manager (C-suite). They will specifically protect you if they truly believe you are on "their side" and you are at their beck and call.
Always a fun game to watch a new C-suite exec get hired and then figure out which of the new hires that follow are their mates.
Companies appreciate loyalty… as long as it doesn't cost them anything. The moment you ask for more money or they need to reduce the workforce, all of that goes out the window.
I think loyalty has value to the company but not as much as people think. To simplify it, multiple things contribute to "value" and loyalty is just a small part of it.
100% agree. There is no reason for employees to be loyal to a company. LLM building is not some religious work. It's machine learning on big data. Always do what is best for you, because companies don't act like loyal humans; they act like large organizations that aren't always fair or rational or logical in their decisions.
> LLM building is not some religious work.
To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.
That's how they talk about it publicly. I can attest that two of the three companies you list are not like that internally at all. It's all marketing, outwardly focused.
I believe it's the opposite. They don't dare say their ridiculous tech cult stuff to their employees, but it's what they truly believe.
AGI is their capitalist savior, here to redeem a failing system from having to pay pesky workers.
"Tech founders" for whom the "technology" part is the thing always getting in the way of the "just the money and buzzwords" part.
Now they think they can automate it away.
25+ years in this industry and I still find it striking how different the perspectives of the "money" side and the "engineering" side are... on the same products/companies/ideas.
> Just listen to how Altman, Thiel or Musk talk about it.
It's surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept it might happen, it reveals its own paradox: if AGI is possible, it destroys its own value as a defensible business asset.
Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.
AGI isn’t a moat. AGI is what kills the moat.
The prevailing idea seems to be that the first company to achieve superintelligence will be able to leverage it into a permanent advantage via exponential self improvement, etc.
> able to leverage it into a permanent advantage via exponential self improvement
Their fantasies of dominating others through some modern-day Elysium reveal far more about their substance intake than their rational grasp of where they actually stand... :-)
Tech leadership always treats new ventures or fields that way, because being seen to treat it that way and selling the idea of treating it that way is how you attract people (employees, and if you are very lucky investors, too) that are willing to sacrifice their own rational material interests to advancing what they see as the shared religious goal (which is, in fact, the tech leader’s actual material interest.)
I mean, even on HN, which is clearly a startup-friendly forum, that tendency among startup leaders has been noted and mocked repeatedly.
But at least consider the impact of your job on society. A lot of these big companies are harmful and addictive and are destroying our social fabric.
> Employers, you can't have it both ways.
Exactly. Though you can learn a lot about an employer by how it has conducted layoffs. Did they cut profits and management salaries and attempt to reassign people first? Did they provide generous payouts to laid-off employees?
If the answer to any of these questions is no then they're not worth committing to.
1000x this. You should ideally feel like you’re part of a great group of folks and doing good work - but this is not a guarantee of anything at all.
When it comes down to it, you’re expendable when your leadership is backed into a corner.
A Ronin is just a Samurai who has learned his lesson.
The only case where you can be both an employee and a missionary is, well, if you are an actual missionary, or working in a charity/NGO etc. trying to help people/animals.
The rest of us are mercenaries only.
or if you own the company
It's also nice to work for the government.
At least if you work in a functional democracy where state bureaucrats can't be fired at a dictator's whim.
Be careful what you wish for... because too far in the other direction you get "babu" culture, which I feel is one of the things that has ruined India.
We are not a company, we are a family
Ferengi Rules of Acquisition:
#6: Never allow family to stand in the way of opportunity.
#111: Treat people in your debt like family… exploit them.
#211: Employees are the rungs on the ladder of success. Don't hesitate to step on them.
#91: Your boss is only worth what he pays you.
These CEOs will be the first to say "we are a team, not a family" when they do layoffs.
"I have decided that you need to go spend more time with your family. Really I'm just doing you a favor."
Relevant Silicon Valley scene: https://www.youtube.com/watch?v=u48vYSLvKNQ
Well, we're a family, but you're still being disowned at layoff time
I think there's more to work than just taking home a salary. Not equally true among all professions and times in your life. But most jobs I took were for less money with questionable upside. I just wanted to work on something else or with different people.
The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in with one goal that everyone works towards. And you get excited when you see your contributions make a difference or you build a great product. You can laugh and say I was part of a 'cult', but it sure beats working a miserable job for just a slightly higher paycheck.
Missionaries https://www.youtube.com/watch?v=zt7BPxHqbkU
Especially for an organization like OpenAI that completely twisted its original message in favor of commercialization. The entire missionary bit is BS, trying to get people to stay out of a sense of... what, exactly?
I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.
It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses, since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.
That's because you don't believe in the mission of the product or realize its impact on society. If you work at Microsoft, you are just working to make MS money, as they are like a giant machine.
That said, it seems like every worker can be replaced. Lost stars replaced by new stars.
They sure can have it both ways. They do now.
No. The cult members are less likely to be laid off. Simply because they don't stand out and provide less surface for attack.
Only be loyal to doing work :)
Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
AI only got big, especially for coding, because they were able to train on a massive corpus of open source code. I don't think it is a coincidence.
Another funny, possibly sad, coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because, as recent precedent has shown, companies can train on what they have legally gained access to.
On the other hand, AGPL continues to be the future of F/OSS.
MIT is also still useful; it lets me release code where I don't really care what other people do with it as long as they don't sue me (an actual possibility in some countries)
Which countries would these be?
The US, for one. You can sue nearly anyone for nearly anything, even something you obviously won't win in court, as long as you find a lawyer willing to do it; you don't need any actual legal standing to waste the target's time and money.
Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.
You can sue for damages if they have malware in the code; there is no license that protects you from distributing harmful products, even if you do it for free.
If I commit fraud, sure. But the code I release is extremely honest about what it does :)
There are other ways to litigate that the malicious/greedy can use, where MIT offers no protection; e.g. patent trolling.
And illegally too. Anthropic didn't pay for those books they used.
It's too late at this point. The damage is done. These companies trained on illegally obtained data and they will never be held accountable for that. The training is done and they got what they needed. So even if they can't train on it in the future, it doesn't matter. They already have those base models.
Then punitive measures are in order. Add it to the pile of illegal, immoral, and unethical behavior of the feudal tech oligarchs already long overdue for justice. The harm they have done and are doing to humanity should not remain unpunished.
Legally or illegally gained access to. Lest we forget Meta pirating books.
And the legality of this may vary by jurisdiction. There’s a nonzero chance that they pay a few million in the US for stealing books but the EU or Canada decide the training itself was illegal.
Then the EU and Canada just won't have any sovereign LLMs. They'll have to decide if they'd rather prop up some artificial monopoly or support (by not actively undermining) innovation.
It’s not going to happen. The EU is desperate to stop being in fourth place in technology and will do absolutely nothing to put a damper on this. It’s their only hope to get out of the rut.
Explain how AGPL would prevent AI from being trained on it or AI-generated code competing with it. I have used AGPL for a decade and I'm still not sure.
It wouldn't -- AGPL code that is picked up would also just get "fair used" into new software.
That said, AGPL as a trend was a huge closing of the spigot of free F/OSS code for companies to use and not contribute back to.
Yes, I hope it was a trend. People were judging me when I first started using it over 10 years ago.
Yup. The book torrenting case is pretty nuts.
If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Pants-on-head idiotic judge.
>If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Assuming you're referring to Bartz v. Anthropic, that is explicitly not what the ruling said, in fact it's almost the inverse. The judge said that output from an AI model which is a straight up reproduction of copyrighted material would likely be an explicit violation of copyright. This is on page 12/32 of the judgement[1].
But the vast majority of output from an LLM like Claude is not a word for word reproduction; it's a transformative use of the original work. In fact, the authors bringing the suit didn't even claim that it had reproduced their work. From page 7, "Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service." That's because Anthropic is already explicitly filtering out results that might contain copyrighted material. (I've run into this myself while trying to translate foreign language song lyrics to English. Claude will simply refuse to do this)[2]
[1] https://www.courtlistener.com/docket/69058235/231/bartz-v-an...
[2] https://claude.ai/share/d0586248-8d00-4d50-8e45-f9c5ef09ec81
They should still have to pay damages for possessing the copyrighted material. That's possession, which courts have found to be copyright violation. Remember all the 12-year-olds who got their parents sued back in the 2000s? They had unauthorized copies.
I don't know what exactly you're referring to here. The model itself is not a copy, you can't find the copyrighted material in the weights. Even if you could, you're allowed under existing case law to make copies of a work for personal use if the copies have a different character and as long as you don't yourself share the new copies. Take the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast onto a recording medium like VHS and Betamax for the purposes of time-shifting one's consumption.
Now, Anthropic was found to have pirated copyrighted work when they downloaded and trained Claude on the LibGen library. And they will likely pay substantial damages for this. So on those grounds, they're as screwed as the 12-year-olds and their parents. The trial to determine damages hasn't happened yet though.
> The model itself is not a copy,
Agreed
> the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast
Good thing libgen is not publicly aired in broadcast format.
> So on those grounds, they're as screwed as the 12 year olds and their parents.
Except they have deep enough pockets to actually pay the damages for each count of infringement. That's the blood most of us want to see shed.
You cannot have trained the model without possession of copyrighted works. Which we seem to be in agreement on.
This was immediately my reaction as well, but I'm not a judge so what do I know. In my own mind I mark it as a "spice must flow" moment -- it will seem inevitable in retrospect but my simple (almost surely incorrect) take is that there just wasn't a way this was going to stop AI's progress. AI as a trend has incredible plot armor at this point in time.
Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? What seems even more straightforward is the substitute-good idea: it seems reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.
But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.
I didn't see the part of the trial where they got the "entirety of most books" out of Llama. What did you see that I didn't?
Sad to say but it would have put US companies at a major disadvantage if they were not allowed to.
I'm not sure that's true. I've never heard of a human being done for copyright for reciting a book passage.
I daresay the difference with AI is that pretty much no human can do that well enough to harm the copyright holder, whereas AI can churn it out.
Yea, that dipshit judge just opened the floodgates for more problems. The problem is they don't understand how this stuff works and they're in the position of having to make a judgement on it. They're completely unprepared to do so.
Now there's precedent for future cases where theft of code or any other work of art can be considered fair use.
The AGPL is a nonfree license that is virtually impossible to comply with.
It’s an EULA trying to pretend it’s a license. You can’t have it both ways.
This is a strong claim, given it is listed as a free, copyleft license:
https://www.gnu.org/licenses/agpl-3.0.en.html
Could you expand on why you think it's nonfree? Also, it's not that hard to comply with either...
For some people "free" means "autonomy", and copyleft licences do a lot to restrict autonomy.
So interestingly, free meant autonomy for Stallman and the original proponents of "copyleft"-style licenses too. But autonomy for end users, not developers. And Stallman et al. believed the copyleft-style licenses maximized autonomy for end users; rightly or wrongly, that was the intent.
Yeah, if it's a problem of definition, then I definitely agree that it could fail to match there; it certainly isn't a do-anything-you-want license.
"Free" decidedly means autonomy; "I have been freed from prison". Use of the word "free" in many OSS licenses is a jarring euphemism.
marcan does a much more detailed job than I do:
https://news.ycombinator.com/item?id=30495647
https://news.ycombinator.com/item?id=30044019
GNU/FSF are the anticapitalist zealots that are pushing this EULA. Just because they approve of it doesn’t make it free software. They are confused.
I read through it, and I think the analysis suffers from the fact that, in the case where the modifier is the user, it's fine.
Free software refers to user freedoms, not developer freedoms.
I don't think the below is right:
> > Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.
>
> Let's break it down:
>
> > If you modify the Program
>
> That is if you are a developer making changes to the source code (or binary, but let's ignore that option)
>
> > your modified version
>
> The modified source code you have created
>
> > must prominently offer all users interacting with it remotely through a computer network
>
> Must include the mandatory feature of offering all users interacting with it through a computer network (computer network is left undefined and subject to wide interpretation)
I read the AGPL to mean that if you modify the program, then the users of the program (remotely, through a computer network) must be able to access the source code.
It has yet to be tested, but that seems like the common-sense reading to me (which matters, because judges do apply judgement). It just seems like they are trying too hard to do a legal gotcha. I'm not a lawyer, so I can't speak to that, but I certainly don't read it the same way.
I don't agree with this interpretation of every-change-is-a-violation either:
> Step 1: Clone the GitHub repo
>
> Step 2: Make a change to the code - oops, license violation! Clause 13! I need to change the source code offer first!
>
> Step 1.5: Change the source code offer to point to your repo
This example seems incorrect -- modifying the code does not automatically make people interact with the program over a network...
"free software" was defined by the GNU/FSF... so I generally default to their definitions. I don't think the license falls afoul of their stated definitions.
That said, they're certainly anti-capitalist zealots; that's kind of their thing. I don't agree with that, but that's beside the point.
It's not really "virtually impossible to comply with". It's very restrictive, yes, but not hard to comply with if you want to.
And yes, it is an EULA pretending to be a license. I'd put good odds on it being illegal in my country, and it may even be illegal in the US. But it's well aligned with the goals of GNU.
And if the AI companies don't like the license, they will ignore it or pay to be given a waiver. Long may they rot in hell for doing that.
Hell is, by design, a consequence for poor people. (People could literally pay the church to not go to hell[0].) Rich people have no consequences whatsoever, let alone poor-people consequences.
[0] https://www.cambridge.org/core/books/abs/preaching-the-crusa...
Not "by design", as historically hell came first. It was only much later that the Catholic church started talking about purgatory and the possibility of reducing your punishment by paying money.
The people running AI companies have figured out that there is no such thing as hell. We have to come up with new reasons for people to behave in a friendly way.
We already have such reasons. Besides, religious "kindness" was never kindness without strings attached, even though they'd like you to think that was the case.
The people running AI companies aren't magic, they can't be certain about what comes after death.
If I can have AI retype all code per my desire, how exactly is source code special?
I like open source. I also don't think that is where the magic is anymore.
It was scale for 20 years.
Now it is speed.
Open source may be necessary but it is not sufficient. You also needed the compute power and architecture discoveries and the realisation that lots of data > clever feature mapping for this kind of work.
A world without open source may still have given birth to 2020s AI, but probably at a slower pace.
Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.
The "good guy" is the competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass, but the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse of this doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sorts of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
> The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass
That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit into Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?
You have to assume a grand end is achievable through some knowable means. I don't see any real reason to think this is the case, certainly not on any sort of a meaningful timeframe. And I think this is even less true when we consider the typical connotation of Machiavellianism, which is through 'evil' actions.
It's extremely difficult to think of any real achievements sustained on the back of Machiavellianism, but one can list essentially endless entities whose downfall was brought on precisely by such.
Machiavellianism is not for everyone. It is specifically a framework for people in power: kings, heads of state, CEOs, commanders. In competitive environments with a lot at stake (people's lives, money, the future), it is often difficult to make decisions, and having a framework in place that allows you to make them is very useful.
Mitch Prinstein wrote a book about power, and it shows that dark traits aren't the standard in most leaders, nor are they the best way to get into/stay in power.
author is "board certified in clinical child and adolescent psychology, and serves as the John Van Seters Distinguished Professor of Psychology and Neuroscience, and the Director of Clinical Psychology at the University of North Carolina at Chapel Hill" and the book is based on evidence
edit: you can't take a book from 1600 and a few living assholes with power and conclude that. There's a bunch of philanthropists and other people around.
I'm not saying that the end outcome won't be beneficial. I don't have a crystal ball. I'm just saying that what he is doing is in no way selfless or laudable or worthy of praise.
Same goes for when Microsoft went gaga for open source and demanded brownie points for pretending to turn over a new leaf.
> Dont make the mistake of anthropomorphizing Mark Zuckerberg
Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this
I think it’s a play on “don’t anthropomorphize the lawn mower”, referring to Larry Ellison.
Gotcha, thank you both, I totally missed this
Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source," -- sharing is better for everyone, even Zuck, selfishly.
The moment Meta produces something competitive with OpenAI is the moment they stop releasing the weights and rebrand from Llama. Mark my words.
They did say "accidentally". I find that people doing the right thing for the wrong reasons is often the best-case outcome.
The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
Don’t let the perfect be the enemy of the good.
> The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
I feel like we live in that perfect competition environment right now, though. Inference is mostly commoditized, and it's a race to the bottom for price and latency. I don't think any of the big providers are making supernormal profit, and they're probably discounting inference for access to data/users.
Only because everyone believes it’s a winner takes all game and this perfect competition will only last for as long as the winner hasn’t come out on top yet.
> everyone believes it’s a winner takes all game
Why would anyone think that, and why do you think everyone thinks that?
Because tech is now a handful of baronial oligopolies, and the AI companies are fighting to be the next generation of same.
And this pattern has repeated itself reliably since the industrial revolution.
Successful ASI would essentially end this process, because after ASI there's nowhere else for humans to go (in tech at least.)
Everyone always thinks this, at least in big tech. I've never heard a PM or exec say a market is not winner-take-all. It's some weird corpo grift lang that nothing is worth doing unless it's winner-take-all.
>he's just commoditizing the complement
That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?
A continuous stream of monetizable live user data?
The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.
If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.
Sorry, I don't follow.
>because someone else has better LLMs and builds them into products
If that were true, they wouldn't be trying to create the best LLM and give it away for free.
(Disclaimer: I don't think Zuck is doing this out of the goodness of his heart, obv., but I don't see the connection with the complements and whatnot)
Meta has ad revenue. I think Meta’s play is to make it difficult for pure AI competitors to make revenue through LLMs.
Meta's play is to make sure there isn't an obvious superiority to one company's closed LLM -- because that's what would drive customers to choosing that company's product(s).
If LLM effectiveness is all about the same, then other factors dominate customer choice.
Like which (legacy) platforms have the strongest network effects. (Which Meta would be thrilled about)
That’s not commoditising the complement!
I’m not the poster that said it was that.
I think it's about sapping as much user data from competitors as possible. A company seeking to use an LLM has a choice between OpenAI, LLaMA, and others. If they choose LLaMA because it's free and host it themselves, OpenAI misses out on training data and other usage data.
Well, is the loss of training data from customers using self-hosted Llama that big a deal for OpenAI or any of the big labs at this point? Maybe in late-2022/early-2023, during the early stages of RLHF'd mass models, but not today, I don't think. Offerings from the big labs have pretty much settled into specific niches and people have started using them in certain ways across the board. The early land grab is over and consolidation has started.
Meta's primary business is capturing attention and selling some of that attention to advertisers. They do this by distributing content to users in a way that maximizes attention. Content is a complement to their content distribution system.
LLMs, along with image and video generation models, are generators of very dynamic, engaging and personalised content. If OpenAI or anyone else wins a monopoly there, it could be terrible for Meta's business. Commoditizing it with Llama, and at the same time building internal capability and a community for their LLMs, was solid strategy from Meta.
So, imagine a world where everyone but Meta has access to generative AI.
There's two products:
A) (Meta) Hey, here are all your family members and friends, you can keep up with them in our apps, message them, see what they're up to, etc...
B) (OpenAI and others) Hey, we generated some artificial friends for you; they will write messages to you every day, almost like a real human! They also look like this (cue AI-generated profile picture). We will post updates on the imaginary adventures we come up with, written by LLMs. We will simulate a whole existence around you, "age" like real humans, and we might even marry each other and have imaginary babies. You could attend our virtual generated wedding online, using the latest technology, and you can send us gifts and money to celebrate these significant events.
And, presumably, people will prefer to use B?
MEGA lmao.
Their primary product: advertisements.
It takes content to sell advertisements online. LLMs produce an infinite stream of content.
VR/metaverse is dead in the water without gen AI. The content takes too long to make otherwise
What's even crazier is that China are the good guys when it comes to open source AI.
We would have to know their intent to really know if they fit a general understanding "the good guys."
It's very possible that China is open sourcing LLMs because it's currently in their best interest to do so, not because of some moral or principled stance.
But that's precisely the sense in which Meta are the "good guys". They specifically called China the good guys in the same way that Meta is, though in this case many of the Chinese models are extremely good.
Meta has open sourced all of their offerings purely to try to commoditize the industry to the greatest extent possible, hoping to avoid their competitors getting a leg up. There is zero altruism or good intentions.
If Meta had an actually competitive AI offering, there is zero chance they would be releasing any of it.
Neither China nor Meta are the good guys, and they are not stewards of open source AI.
China has stopped releasing frontier models, and Meta doesn't release anything that isn't in the llama family.
- Hunyuan Image 2.0 (200 millisecond flux) is not released
- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released
- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released
- Qwen VLo, an instructive autoregressive model, is not released
The list is much larger than this.
The country ruled by a "people's party" has almost no open source culture, while capitalism is leading the entire free software movement. I'm not sure what that says about our society and politics, but the absurdist in me is having a good laugh every time I think about this :D
There’s actually a lot of open source software made by Chinese people. The government just doesn’t fund it. Not directly anyway, but there’s a ton of Chinese companies that do.
>There’s actually a lot of open source software made by Chinese people
Yea exactly, there are also a lot of Chinese people out there; statistically a large chunk are cool with it.
Same dynamic as the US, really - other countries see the US government and think to themselves, "I don't like these US people, look at what their government did," meanwhile US people are like "what do you mean, I don't like what the government did either." That's what a lot of Chinese people are thinking (but not allowed to say; in China, criticizing the government is against their community guidelines).
>There’s actually a lot of open source software made by Chinese people.
If that is true and the software is any good, you should be able to name an open-source project that we've heard of started by people living in China.
DeepSeek released some models as open weights and some software for running the models. That's the only example I can think of.
I've recently been exploring PKM/knowledge management programs, and the best open source one is a Chinese project - SiYuan.
I have a feeling that their collaborative hacker culture is more hardware oriented, which would be a natural extension from the tech zones where 500 companies are within a few miles of each other and engineers are rapidly popping in and out and prototyping parts sometimes within a day.
Anecdotally, I've dealt with Chinese collaborative community projects in the ThinkPad space, where they have come together to design custom motherboards to modernize old ThinkPads. Of course there was a lot of software work as well when it comes to BIOS code, Thunderbolt, etc. I remember thinking how watching that project develop was like peering into another world with a parallel hacker culture that just developed... differently.
Oh there's also a Chinese project that's going to modernize old Blackberries with 5G internals. Cool stuff!
China does have their own GitHub in gitee.com, which runs a fork of Gitea, but it's basically dead because it's impossible to have anything like GitHub with the current censorship apparatus. Here's the excerpt from the wiki:
> On 18 May 2022, Gitee announced all code will be manually reviewed before public availability.[4][5] Gitee did not specify a reason for the change, though there was widespread speculation it was ordered by the Chinese government amid increasing online censorship in China.[4][6]
I won't pretend to be deeply familiar with China, but I think of two reasons: China doesn't take IP law seriously, so they can just copy, pirate whatever anyway. And the West has more wealthy idealistic techies with the free time for free software.
Capitalist countries (actually there are no other kinds of economies, in reality) are leading the open source software movement because it is a way for corporations to get software development services and products for free rather than paying for them. It's a way of lowering labour costs.
Highly paid software engineers working in a ZIRP economy with skyrocketing compensation packages were absolutely willing to play this game, because "open source" in that context often is/was a resume or portfolio building tool and companies were willing to pay some % of open source developers in order to lubricate the wheels of commerce.
That, I think, is going to change.
Free software, which I interpret as copyleft, is absolutely antithetical to them, and reviled precisely because it gets in the way of getting work for free/cheap and often gets in the way of making money.
Copyleft isn't antithetical, see how many people are paid to work on the Linux kernel. I believe some other ecosystem software is also copylefted, like systemd.
And is building on top of the unpaid labour of SW engineers really a major part of the open source ecosystem? I feel open source is more a way for companies to cooperate in building shared software with less duplication of costs.
I disagree, the corporate open source is just half of the story. Much of free software space is pushed by idealists who can afford to pursue the ideals due to freedoms and finances provided by capitalist systems.
I don't think the intent really matters once the thing is out in the open.
I want open source AI I can run myself without any creepy surveillance capitalist or state agency using it to slurp up my data.
Chinese companies are giving me that - I don't really care about what their grand plan is. Grand plans have a habit of not working out, but open source software is open source software nonetheless.
> I want open source AI i can run myself
What are you running?
> Chinese companies are giving me that
I have not become aware of anything other than DeepSeek. Can you recommend a few others that are worth looking into?
Alibaba's Qwen is pretty good, and it looks like Baidu just open sourced Ernie!
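For anyone wondering what "run it myself" looks like in practice, here is a minimal local-inference sketch using the Hugging Face transformers library. The model ID and generation settings below are illustrative assumptions, not a recommendation; check the model card for current names and hardware requirements.

    # Minimal local-inference sketch (assumes `pip install transformers accelerate`
    # and enough GPU/CPU memory for the chosen model).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2-7B-Instruct"  # illustrative open-weights chat model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a prompt using the model's own chat template.
    messages = [{"role": "user", "content": "Summarize the AGPL in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate, then decode only the newly generated tokens.
    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Everything here runs on your own hardware; after the initial weight download, no request leaves your machine.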
It's really hard to tell. If instructions like the current extreme trend of "What a great question!" and all the crap that forces one to put

* Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").
* Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").
* Do not express confidence or certainty in potential solutions.

into the instructions, so that it doesn't treat you like a child, teenager or narcissistic individual who is craving flattery, can really affect the mood and way of thinking of an individual, those Chinese models might as well have baked in something similar but targeted at reducing the productivity of certain individuals or weakening their beliefs in western culture.
I am not saying they are doing that, but they could be doing it sometime down the road without us noticing.
One private company in China, funded by running a quant hedge fund. I'm not sure China as in Xi is good.
Alibaba and Baidu both open source their models as well.
None of the big tech companies in China are releasing their frontier models anymore.
- Hunyuan Image 2.0 (200 millisecond flux) is not released
- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released
- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released
- Qwen VLo, an instructive autoregressive model, is not released
I mean in some sense the Chinese domestic policy (“as in Xi”) made the conditions possible for companies like DeepSeek to rise up, via a multi-decade emphasis on STEM education and providing the right entrepreneurial conditions.
But yeah by analogy with the US, it’s not as if the W. Bush administration can be credited with the creation of Google.
Do we know if Meta will stick to its strategy of making weights available (which isn't open source to be clear) now that they have a new "superintelligence" subdivision?
It's not ideal, but having major players accidentally propping up an open ecosystem is probably the best-case outcome we could've hoped for
Your ability to use a lesser version of this AI on your own hardware will not save you from the myriad ways it will be used to prey on you.
Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
And an inability to do so would not have saved you either.
Oh, they're actually the bad guys; it's just that folks haven't thought far enough ahead to realise it yet.
> bad guys
You imply there are some good guys.
What company?
There are plenty of companies that don't immediately qualify as "the bad guys".
For instance, of all the tech-building companies I've interviewed with or have friends working at, some build and sell furniture. Some are your electricity provider or transporter. Some are building inventory management systems for hospitals and drug stores. Some develop a content management system for a medical dictionary. The list is long.
The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practice, but that's not inherent to their business. The hot tech companies may be paying more (blood money if you ask me), but you have other options.
Depends. Does your definition of “good” mean “perfect”? If so, cynical remarks like “no one is good” would be totally correct.
Signal, Proton, Ecosia, DuckDuckGo, Mastodon, Deepseek.
There are some less bad.
But, can't think of one offhand. Maybe Toys-R-Us? Ooops, gone. Radio Shack? Ooops, also gone.
On the scale of Bad/Profit, Nice dies out.
Google circa 2005?
Twitter circa 2012?
In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.
Signal, Mastodon
Bluesky, Kagi
In my head at least, Bluesky are way closer to "the bad guys". I don't trust them at all; pretty sure that, in spite of what they say, they're going to do the same sort of rug pull that Google did with their "don't be evil" assurances.
Funnily enough, I would actually flip it to say this about Kagi. With Bluesky, everything they have built is available to continue to be useful for people completely independent of what the folks over at Bluesky decide to do. There is no vendor lock in at all.
Kagi, on the other hand, has released none of their technology publicly, meaning they have full power to boil the frog, with no actual assurance that their technology will be useful regardless of their future actions.
Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.