I was hoping to read a post about some tiny LLM running in the browser to do live adblocking.
Once I see the first ad in an LLM I'm paying for, I'll stop using it and cancel my subscription. It's that easy. If that means missing out on some fancy new model, or if it rules out an entire vendor because they trained all of their models with ad-injection, so be it. At that point I can't trust anything from any model; it would distort the relationship between me and the tool.
My browser is at least somewhat neutral, and since it's a client connecting to various systems outside of my control, applying some client-side filtering to get rid of the nonsense some entities push in my direction is basically just self-defense I'll have to live with.
But once I'm fighting a dedicated service provider that owns the client and is intent on selling my eyeballs, I'm not gonna spend a minute trying to clean up whatever they're sending my way. There's a 0% chance any of it is still trustworthy.
I fear it won't be possible to detect any potential "commercial interests" in the output, unless the LLM companies are required to disclose them.
If I ask an LLM to write some basic application for me and it uses Next.js with settings that only work when deployed to Vercel, another LLM can't determine whether this is sponsored or whether Next.js is just the most popular tool du jour and Vercel the most popular way to deploy Next.js apps. If another LLM decides this is sponsored and blocks it, I may reasonably be upset about that. Personally, I would want Vercel blocked (and would ask LLMs not to use Vercel or any of their products), but many other users don't have an opinion about Vercel yet, and blocking it when it isn't actually sponsored violates users' expectations of how LLMs work.