Heavy Gemini user here, another observation: Gemini cites lots of "AI generated" videos as its primary source, which creates a closed loop and has the potential to debase shared reality.
A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was an AI generated one. It might have had actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI generated video attached as the source.
Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.
> YouTube channels with AI generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.
Yeah. This has really become a problem.
Not for all videos; music videos are kind of fine. I don't listen to AI-generated music myself, but good music is good music.
The rest has unfortunately gotten much worse. Google is ruining youtube here. Many videos now mix real footage with AI generated footage, e.g. animal videos. With some this is obvious; other videos are hard to expose as AI. I changed my own policy: I consider anyone using AI without declaring it properly a cheater I never want to interact with again (on youtube). Now I need to find a no-AI-videos extension.
I've seen remarkably little of this when browsing youtube with my cookie (no account, but they know my preferences nonetheless). Totally different story with a clean fresh session though.
One that slipped through, and really pissed me off because it tricked me for a few minutes, was a channel purportedly uploading videos of Richard Feynman explaining things, but the voice and scripts are completely fake. It's disclosed in small print in the description. I was only tipped off by the flat affect of the voice; it had none of Feynman's underlying joy. Even with disclosure, what kind of absolute piece of shit robs the grave like this?
The barrier to entry for grifting has been lowered, and existing grifters can now put together some intricate slop. Of course Google doesn't care; they get to show ads against AI slop the same as against normal human-generated slop.
A fun one was from some minor internet drama around a Battlefield 6 player who seemed to be cheating. A grifter channel pushing some "cheater detection" software started putting out intricate AI generated nonsense that went viral. Searching "Karl Jobst CATGIRL" will explain.
All of that and you're still a heavy user? Why would google change how Gemini works if you keep using it despite those issues?
Every single LLM out there suffers from this.
Just wait until you get a group of nerds talking about keyboards - suddenly it'll sound like there is no such thing as a keyboard worth buying either.
I think the main problems for Google (and others) from this type of issue will be "down the road" problems, not a large and immediately apparent change in user behavior at the outset.
Well, if the keyboard randomly mistypes 40% of the time like LLMs, that's probably not a worthwhile keyboard.
Depends what you're doing I suppose. E.g. if keyboards had a 40% error rate you wouldn't find me trying to write a novel on one... but you'd still find me using it for a lot of things. I.e. we don't choose to use tools solely based on how often they malfunction, rather stuff like how often they save us time over not using them on average.
nah bro just fix your debounce
> Just wait until you get a group of nerds talking about keyboards
Don’t get me started on the HHKB [1] with Topre electrostatic capacitive keyswitches. It is, simply put, the best keyboard on the market. Buy this. (No, Fujitsu didn’t pay me to say this.)
That thing is missing a whole bunch of ctrl keys, how can it be the best keyboard on the market?
Never used a HHKB (and would miss the modifier keys too), but after daily driving Topre switches for about 1.5 years, I can confirm they are fantastic switches and worth every penny.
It uses a Unix keyboard layout where Caps Lock is swapped for the Ctrl key. I think it’s much more ergonomic to have Ctrl on the home row. The arrow keys are behind a Fn modifier resting under the right pinky, also accessible without moving your fingers from the home row. It’s frankly the best keyboard I’ve ever had from an ergonomic POV. Key feel is also great, but the layout has a bit of a learning curve.
Dunno why I’m getting downvoted. Is it because you disagree with my statement? Is it because I’m off topic? Do you think I’m a shill?
People are downvoting an out of context advertisement shoved in the middle of a conversation.
Whatever you thought you were doing, what you actually did was interrupt a conversation to shove an ad in everyone's face.
>Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
You can't trust AI to generate things sufficiently grounded in facts to even use it as a reference point. Why should end users believe, by extension, the narrative that these things are as capable as they're being told?
Using it as a reference is a high bar, not a low bar.
The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI told to lie to you is different than not trusting an AI.
Even without that, playing a game of broken telephone is a good way to get bad information. Hence even a reasonably trustworthy AI is not a good reference.
Not that this makes it any better, but a lot of AI videos on YouTube are published with no specific intent beyond capturing ad revenue - they're not meant to deceive, just to make money.
Not just youtube either. With Meta and TikTok paying out for "engagement", all forms of engagement are good for the creator, not just positive engagement, so these companies are directly encouraging "rage bait" content, pure propaganda, and misinformation, because it gets people interacting with the content.
There's no incentive to produce anything of value outside of "whatever will get me the most clicks/like/views/engagement"
One type of deception, conspiracy content, is able to sell products on the basis that the rest of the world is wrong or hiding something from you, and only the demagogue knows the truth.
Anti-vax quacks rely on this tactic in particular. The reason they attack vaccines is that they are so profoundly effective and universally recognized that to believe otherwise effectively isolates the follower from the vast majority of healthcare professionals, forcing trust and dependency on the demagogue for all their health needs. Mercola built his supplement business on this concept.
The more widespread the idea they’re attacking the more isolating (and hence stickier) the theory. This might be why flat earthers are so dogmatic.
> Not trusting an AI told to lie to you is different than not trusting an AI
The entire foundation of trust is that I’m not being lied to. I fail to see a difference. If they are lying, they can’t be trusted.
Saying "some people use llms to spread lies therefore I don't trust any llms" is like saying "since people use people to spread lies therefore I don't trust any people". Regardless of whether or not you should trust llms this argument is clearly not proof of it.
Those are false equivalences. If a technology can’t reliably sort out what is a trustworthy source and filter out the rest, then it’s not a trustworthy technology. These are tools, after all. I should be able to trust a hammer if I use it correctly.
All this is also missing the other point: this proves that the narrative companies are selling about AI is not based on objective capabilities.
The claim here isn't that the technology can't, but that the people using it chose not to. Equivalent to a person with a hammer who chooses to smash the 2x4 into pieces instead of driving a nail into it.
The claim here is that it can’t, because it won’t filter its own garbage, let alone anyone else’s.
The narrative being pushed boils down to LLMs and AI systems being seen as reliable. The fact that Google’s AI can’t even tag YouTube videos as unreliable sources and filter them out of the result set before analysis is telling.
If you are still looking for material, I'd like to recommend Perun and the last video he made on that topic: https://youtu.be/w9HTJ5gncaY
Since he is a heavy "citer", you could also check the video description for more sources.
Thanks, good one. The current Russian economy is a shell of its former self. Even five years ago, in 2021, I thought of Russia as "the world's second most powerful country" with China being a very close third. Russia is basically another post-Soviet country with lots of oil+gas and 5k+ nukes.
> another post-Soviet country
Other post-Soviet countries fare substantially better than Russia (looking at GDP per capita, Russia is about 2,500 dollars behind the economic motor of the EU, Bulgaria).
Must be a misunderstanding
1) Post-Soviet countries are doing amazingly well (Poland, the Baltics, etc.), growing very fast and healthy (low criminality, etc.)
2) The "Russia is weak" thing is vastly exaggerated: for four years we have heard that "Russia is on the verge of collapse", yet they still manage to handle a very high-intensity war against almost the whole West alone.
3) China is not a country lagging behind others at all. Some schoolbooks still say so, but that is a big lie with zero truth to it now.
> 2) The "Russia is weak" thing; it is vastly exaggerated because it is 4 years that we hear that "Russia is on the verge of collapse" but they still manage to handle a very high intensity war against the whole West almost alone.
It's nearly impossible to bankrupt a huge country like Russia. Unless there's civil unrest (or the West grows the balls to throw enough resources at it to move the needle), they can continue the war for decades.
What Russia is doing is borrowing more and more from the future each week, screwing over the next generations on a huge scale by destroying its non-military industrial base, isolating its economy from the world, and killing hundreds of thousands of young men who could've spent decades contributing to the economy/demographics.
Ukraine is "the whole of the west", interesting? Even the Russian propaganda can't magic up a serious intervention on Ukraine's behalf by western countries. Europeans have been scared to do anything significant, and Trump cut off any real support from the US.
Russia somehow fucked up the initial invasion involving driving a load of preprepared amour across an open border, and have been shredded by FPV drones ever since.
Try Kagi’s Research agent if you get a chance. It seems to have been given the instruction to tunnel through to primary sources, something you can see it do on reasoning iterations, often in ways that force a modification of its working hypothesis.
I suspect Kagi is running a multi-step agentic loop there, maybe something like a LangGraph implementation that iterates on the context. That burns a lot of inference tokens and adds latency, which works for a paid subscription but probably destroys the unit economics for Google's free tier. They are likely restricted to single-pass RAG at that scale.
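For illustration, here is a minimal sketch of the kind of iterative research loop being speculated about above. This is an assumption about the architecture, not Kagi's actual implementation, and the function names (`call_llm`, `web_search`) are hypothetical placeholders, not any real API:

```python
# Hypothetical sketch of a multi-step agentic research loop.
# call_llm and web_search are placeholders; swap in real clients.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM inference call."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Placeholder for a search call returning source snippets."""
    raise NotImplementedError

def research(question: str, max_steps: int = 5) -> str:
    context: list[str] = []
    hypothesis = call_llm(f"State an initial hypothesis for: {question}")
    for _ in range(max_steps):
        # Ask which primary source would confirm or refute the
        # current hypothesis, then retrieve it into the context.
        query = call_llm(
            f"Hypothesis: {hypothesis}\n"
            "Give one search query for a primary source to test it."
        )
        context.extend(web_search(query))
        # Revise the hypothesis against the retrieved sources; the
        # model answers DONE once the evidence settles the question.
        verdict = call_llm(
            f"Question: {question}\nSources: {context}\n"
            f"Revise this hypothesis, or reply DONE: {hypothesis}"
        )
        if verdict.strip() == "DONE":
            break
        hypothesis = verdict
    return call_llm(f"Answer {question!r} using only: {context}")
```

Each loop iteration costs at least two extra inference calls plus a retrieval round trip, which is the token and latency overhead mentioned above; single-pass RAG does one retrieval and one generation, which is why it is so much cheaper to serve on a free tier.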
> works for a paid subscription but probably destroys the unit economics for Google's free tier
Anyone relying on Google's free tier to attempt any research is getting what they pay for.
> Anyone relying on Google's free tier
Google Scholar is still free
> Gemini cites lots of "AI generated" videos as its primary source
Almost every time for me... an AI generated video, with AI voiceover, AI generated images, always with < 300 views
Conspiracy theory: those long-tail videos are made by them, so they can send you to "preferable content", a video page (people would rather watch a video than read, etc.), which can serve ads.
I mean, perhaps. I don't know about what lm28469 mentions; perhaps I can test it, but I feel like those LLM-generated videos would be some days/months old.
If I ask a prompt right now and the video is, say, 1-4 months old, then the conspiracy theory falls short.
Unless... Vsauce music starts playing... someone else ran a similar query some time beforehand, and Google generated the video from a random account a random interval after that (100% possible for Google to do), to then reference it to you later.
Like their AI model is just a frontend to get you hooked on a yt video which can show ads.
Hm...
Must admit that the chances of it happening are slim, but never quite zero I guess.
Fun conspiracy theory xD
Google will mouth words, but their bottom line runs the show. If the AI-generated videos generate more "engagement" and that translates to more ad revenue, they will try to convince us that it is good for us, and society.
Isn't it cute when they do these things while demonetizing legitimate channels?
Don’t be evil
Those videos at the end are almost certainly not the source for the response. They are just a "search for related content on youtube to fish for views".
I've had numerous searches literally give out text from the video and link to the precise part of the video containing the same text.
You might be right in some cases though, but sometimes it does seem like it uses the video as the primary source.
> A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability
This is one of the last things I would expect to get any reasonable response about from pretty much anyone in 2026, especially LLMs. The OSINT community might have something good, but I’m not familiar enough to say authoritatively.
yeah, that's a very difficult question to answer, period. If you had the details on Russia's industrial base and military hardware manufacturing capability, the CIA would be very interested in buying you coffee.
> and has the potential to debase shared reality.
If only.
What it actually has is the potential to debase the value of "AI." People will just eventually figure out that these tools are garbage and stop relying on them.
I consider that a positive outcome.
Every other source for information, including (or maybe especially) human experts can also make mistakes or hallucinate.
The reason ppl go to LLMs for medical advice is that real doctors actually fuck up each and every day.
For clear, objective examples, look up stories where surgeons leave things inside patients' bodies post-op.
Here's one, and there are many like it.
https://abc13.com/amp/post/hospital-fined-after-surgeon-leav...
"A few extreme examples of bad fuck ups justify totally disregarding the medical profession."
Yup, make up something I didn't say to take my argument to a logical extreme so you can feel smug.
"totally disregard"
yeah right, that's what I said
"Doing your own research" is back on the menu boys!
I'll insist the surgeon follows ChatGPT's plan for my operation next time I'm in theatre.
By the end of the year AI will be actually doing the surgery, when you look at the recent advancements in robotic hands, right bros?
People used to tell me the same about Wikipedia.
That it could "debase shared reality?"
Or that using it as a single source of truth was fraught with difficulties?
Has the latter condition actually changed?
That it's a garbage data source that could not be relied upon.
I think we hit peak AI improvement velocity sometime in the middle of last year. The reality is that all the progress was made using a huge backlog of public data. There will never be 20+ years of authentic data dumped on the web again.
I've hoped otherwise, but suspected that as time goes on LLMs will become increasingly poisoned by the well of the closed loop. I don't think most companies can resist the allure of more free data, as bitter as it may taste.
Gemini has been co-opted as a way to boost YouTube views. It refuses to stop showing you videos no matter what you do.