Almost all of this report is about leaking system prompts.
The OpenClaw system prompt has no measures in it at all to prevent leaking, because trying to protect your system prompt is almost entirely a waste of time and actually makes your product less useful.
As a result, I do not think this is a credible report.
Here's the system prompt right now: https://github.com/openclaw/openclaw/blob/b4e2e746b32f70f8fb...
Zeroleaks.ai is a 13 day old registration. Cool.
More interesting, looks to be from this 16yo, https://github.com/x1xhlol, https://www.lucknite.dev/
Explains why it reads like AI slop. "CRITICAL BREACH..."
Can we call slop in two words? I didn't feel that. Is my radar off? /me taps screen
I frequently push back on people being hair-trigger about calling things AI, but even I’ve gotta admit, that’s exactly what Claude code says if you ask it to do a security review and it finds something. I’ve seen this numerous times.
I can detect it pretty well, but that was just one example.
No person starts a summary that way, it's over-the-top and meaningless. I have seen AI do that many times when summarizing something related to security, though. Claude often says "CRITICAL:" or "CRITICAL VULNERABILITY:" or similar, especially when you jam the context window full of junk.
Yes, with 128K GH stars. Impressive if true.
The account's stars are mostly a "system prompts" collection repo fwiw.
Trying to hustle online and writing high quality software aren't the same
Looks interesting, https://github.com/ZeroLeaks/zeroleaks
At least, I am curious about the tool
It's a moltbook agent tasked to get HN attention
seems it worked. We've been outsmarted by the lobster
Is this a CC generated .md report formatted as a .pdf? Looks familiar.
Zero mention of specific models that are being compromised makes it hard to take the numbers in this report seriously.
I do understand there's a lot of people running openclaw that don't really understand it and know what models are actually running. But we've known for a while that there are tons of older models that are pretty vulnerable, and you can hook up any model to OpenClaw, so, this data is not really that useful. Even though I totally agree that there are plenty of security risks here
Relying on the model for security is not security at all.
No amount of hardening or fine-tuning will make them immune to takeover via untrusted context
Can someone give me context on why leaking the system prompt of a open source tool, I run on my machine is a problem?
Only if you write a custom prompt with information you don't want to disclose.
Ha this moltbook gone crazy.