I think there's an interesting idea behind Gas Town (basically, using supervisor trees to make agents reliable, analogous to how Erlang uses them to make processes reliable). But it's lacking a proper quality ratchet (agents often don't mind changing or deleting tests instead of fixing the code) and an architectural function (agents tend to reinvent the wheel over and over, because the context window simply isn't big enough to fit everything in).
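To make the Erlang analogy concrete, here's a minimal sketch of a one-for-one supervisor over agent child processes. The command and names are my own illustration (treat the exact CLI invocation as a stand-in), not Gas Town's actual machinery:

    import subprocess, time

    MAX_RESTARTS = 5  # restart intensity: give up if a child keeps crashing

    def run_agent(task):
        # Launch one coding agent headlessly on a task (stand-in command).
        try:
            return subprocess.run(["claude", "-p", task], timeout=600).returncode
        except subprocess.TimeoutExpired:
            return 1  # a hung child counts as a crash

    def supervise(task):
        # One-for-one strategy: don't try to repair a failed child's state,
        # just kill it and start a fresh one, like an Erlang supervisor.
        for attempt in range(MAX_RESTARTS):
            if run_agent(task) == 0:
                return True
            time.sleep(2 ** attempt)  # back off before restarting
        return False  # escalate to the parent supervisor (or a human)

The reliability comes from the restart policy, not from any single agent being trustworthy; that's the Erlang trick.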
However, Steve Yegge's recent credulous foray into promoting a crypto coin (a scheme that, IMO, transparently leveraged his audience and buzz to execute a pump and dump, with him as an unwitting collaborator) makes me think all is not necessarily well in Yegge land.
I think Steve needs to take a step back from his amazing productivity machine and have another look at that code, and consider if it's really production quality.
> have another look at that code
So true. beads[0] is such a mess; it keeps breaking with each release. I can't understand how people rely on it for their day-to-day work.
> Steve Yegge's recent credulous foray into promoting a crypto coin
I didn't notice that. Can you give me a source?
He wrote all about it in https://steve-yegge.medium.com/bags-and-the-creator-economy-...
There's some related discussion here: https://news.ycombinator.com/item?id=46654878
I read this post as saying he won’t take funding from VCs, but he will from (his own word) crypto-bros?
I spent some time reading about Gas Town to see if I could understand what Stevey was trying to accomplish. I think he has some good ideas in there, actually; it really does seem like he's thought a bit about what coding in the future might look like. Unfortunately, it's so full of esoteric language and vibecoded READMEs that it is quite difficult to get into. The most concerning thing is that Stevey seems totally unaware of this. He writes about how, when he tried to explain this to people, they just stared at him like idiots, so they must all be wrong -- and that's a bit worrying, from a health and psychosis angle.
Still thinking about https://lucumr.pocoo.org/2026/1/18/agent-psychosis/
Good summary.
Upshot: Steve thinks he’s built a quality task tracker/work system (beads), is iterating on architectures, and has become convinced that an architecture-builder is going to make sense.
Meanwhile, work output is going to improve independently. The bet is that leverage on the top side is going to be the key factor.
To co-believe this with Steve, you have to believe that workers can self-stabilize (e.g. that with something like the Wiggum loop you can get some actual quality out of them, unsupervised by a human), and that their coordinators can self-stabilize too.
If you believe those to be true, then you’re going to be eyeing 100-1000x productivity just because you get to multiply 10 coordinators by 10 workers.
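Concretely, the self-stabilizing bet looks something like this check-fix loop. This is a sketch of the general shape only; I don't know the Wiggum loop's actual internals, and agent and verify here are hypothetical callables:

    def stabilized_worker(agent, task, verify, max_rounds=10):
        # agent: callable that takes a prompt and returns a work product
        # verify: callable that runs tests/lint/review, returns (ok, feedback)
        result = agent(task)
        for _ in range(max_rounds):
            ok, feedback = verify(result)
            if ok:
                return result  # quality gate passed without a human in the loop
            result = agent(task + "\nFix this feedback: " + feedback)
        raise RuntimeError("could not stabilize; escalate to coordinator/human")

The 100-1000x only arrives if both this loop and the coordinators' version of it actually converge without a human in the loop.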
I’ll say that I’m generally bought into this math. Anecdotally, I currently (over the last 2 months) spend about half my coding-agent time asking for easy in-roads into what’s been done; a year ago, I spent 10% specifying and 90% complaining about bugs.
Example: I just pulled up an old project and asked for a status report; I got one based on existing beads. I asked it to verify, and it ran the program and came back with a fairly high-quality status report. I then asked it to read the output (a PDF); it read the PDF, noticed my main complaints, and issued 20 or so beads to get things into the right shape. I had no real complaints about the response or workplan.
I haven’t said “go” yet, but I presume when I do, I’ll basically be checking work, and encouraging the work-checking I’m doing to get automated as well.
There’s a sort of not-obvious thing that happens as we move from 0.5 9s to, say, 3 9s in terms of effectiveness: we go from needing constant intervention at one order of magnitude of work to needing intervention only after roughly 2.5 more orders of magnitude of work. It’s a little hard to believe unless you’ve really poked around, but I think it’s coming pretty soon, as does Steve.
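Back-of-envelope version of that claim (my arithmetic, assuming "n nines" means a per-task failure rate of 10^-n):

    # Expected tasks completed between human interventions at n "nines".
    def tasks_per_intervention(nines):
        return 10 ** nines  # i.e. 1 / (10 ** -nines)

    print(tasks_per_intervention(0.5))  # ~3 tasks: you intervene constantly
    print(tasks_per_intervention(3.0))  # 1000 tasks: intervention becomes rare
    # The jump is 10**2.5, roughly 316x more unsupervised work.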
Steve, nota bene, is working at such a pace that he’s turning down 20 VCs a week, selling memecoin earnings in the hundreds of thousands of dollars, and randomly ‘napping’ in the middle of the day. Stay rested Steve, keep on this side of the manic curve please, we need you. I’d say it’s a good sign he didn’t buy any GAS token himself.
> Stay rested Steve, keep on this side of the manic curve please, we need you
This is my biggest takeaway. He may or may not be on to something really big, but regardless, it's advancing the conversation and we're all learning from it. He is clearly kicking ass at something.
I would definitely prefer to see this be a well paced marathon rather than a series of trips and falls. It needs time to play out.
What a fever dream!
“I’m going to go lay down and, uh, think about the problem with my eyes closed”
Oh good, mainstream coders finally catching up with the productivity of 2010s Clojurists and their “Hammock Driven Development”! (https://m.youtube.com/watch?v=f84n5oFoZBc)
I instantly read any Steve Yegge blog. Not true of anyone else.
Every time I read another article from this guy, I get even more frustrated trying to tell whether he’s grifting or legitimately insane.
Between quotes like this one:
> I had lunch again (Kirkland Cactus) with my buddies Ajit Banerjee and Ryan Snodgrass, the ones who have been chastising teammates for acting on ancient 2-hour-old information.
and his trying to argue that this is the future of all productivity, while taking time to physically go to a bank to get money off a crypto coin, all while crowing about how he can’t waste time on money, I can’t decide which it is.
On top of that, this entire Gas Town thing is predicated on not caring about cost, but AI firms are currently burning money as fast as possible selling a dollar for 10 cents. How does the entire framework/technique not crash and burn the second the infinite investment stops and the AI companies need to be profitable instead of a money hole?
Even if something like Gas Town isn't remotely affordable today it could potentially be a useful glimpse at what can be done in principle and what might be affordable in, say, 10 years. There's a long history of this in computing, of course https://en.wikipedia.org/wiki/Expensive_Typewriter https://en.wikipedia.org/wiki/Sketchpad https://en.wikipedia.org/wiki/Xerox_Alto . OTOH it could certainly make the approach totally unsuitable for VC funding at present, and that's without even considering the other reasons to be wary of Gas Town and Beads.
Nothing I have read about LLMs and related tech makes it seem like they will be affordable in the future when it comes to software specifically.
I'll preface this by saying that I think AI agents can accomplish impressive things, but in the same way that the Great Pyramid of Giza was impressive while not being economically valuable.
Software is constantly updating. For LLMs to be useful they need to stay relatively up to date with that software, which means retraining, and from what I understand training is the vast majority of the cost, with no plausible technical solution around it.
Currently LLMs seem amazing for software because AI companies like OpenAI or Anthropic are doing what Uber and Lyft did in their heyday: selling dollars for pennies just to gain market share. Mr. Yegge and friends have said, in effect, that if cost scares you, you should step away. Even in the article this thread is about, he has this quote:
> Jeffrey, as we saw from his X posts, has bought so many Claude Pro Max $200/month plans (22 so far, or $4400/month in tokens) that he got auto-banned by Anthropic’s fraud detection.
And so far what I’ve seen is that he’s developed a system that lets him scale out the equivalent of interns/junior engineers en masse under his tentative supervision.
We already had the option to hire a ton of interns/junior engineers for every experienced engineer. It was quite common 1.5-3 decades ago. You’d have an architect who sketched out the bones of the application down to things like classes or functions, then let the cheap juniors implement.
Everyone moved off that model because it wasn’t as productive per dollar spent.
Mr. Yegge’s Gas Town, to me, looks like someone thought “what if we could get that same gaggle of juniors, for the same cost or more, but they were silicon instead of meat”
Nothing he’s outlined has convinced me that the unit economics of this are going to work out better than just hiring a bunch of bright young people right out of college, which corporations are increasingly loath to do.
If you have something to point to for why that thinking is incorrect, with regard to this iteration of AI, then please link it.
But why should one expect no future improvement in training costs (all else being equal) from Moore's Law, never mind from, e.g., more efficient algorithms or more specialised hardware?
> telling if he’s grifting or legitimately insane
or if he's talking to us from 5 years in the future.
Ignoring the financial aspect, this all adds up: one LLM is good, 100 are better, 1000 are better still. The whole analogy with the industrial revolution makes sense to me.
> AI firms are currently burning money as fast as possible selling a dollar for 10 cents.
The financial aspect is interesting, but we're dealing with today's numbers, and those numbers have been changing fast over the last few years. I'm a big fan of Ed Zitron's writing, and he makes some really good points, but I think condemning all creative uses of LLMs because of the finances is counterproductive. Working out how to use this technology well, despite the finances not making much sense, is still useful.
This all reminds me of the offhand comment from an old XKCD: “You own 3D goggles, which you use to view rotating models of better 3D goggles.” He’s got this gigantic agentic orchestration system, which he uses to build… an agentic orchestration system. Okay.