While browsing YouTube, an AI-generated video appeared and I reflexively told my wife, “That’s AI—skip it.”
Yet I’m using AI-created illustrations for my graphic novel, fully aware of copyright and legal debates.
Both Copilots and art generators are trained on vast datasets—so why do we cheer one and vilify the other?
We lean on ChatGPT to rewrite blog posts and celebrate Copilot for “boosting productivity,” but AI art still raises eyebrows.
Is this a matter of domain familiarity, perceived craftsmanship, or simple cultural gatekeeping?
Who is "we", Kimosabe?
Personally, I have no time for gen-AI in pretty much any context, at least given the current landscape.
And plenty of people seem to accept, if not love, gen-AI art. I don't get it, but it's true.
> While browsing YouTube, an AI-generated video appeared and I reflexively told my wife, “That’s AI—skip it.”
My reflex whenever I encounter gen-AI output in any form: text, code, image, music, video, what have you. I find all of it mid in the best of cases, and usually think it's quite terrible. I regularly see posts of the form "you'll never believe this amazing AI-generated picture/video/paper/program," and when I check them out I feel like I'm taking crazy pills because I just don't see the magic.
Just my $.02, not inflation adjusted. You (and many others) may well feel differently.
> And plenty of people seem to accept, if not love, gen-AI art. I don't get it, but it's true.
I get a kick out of generating photos with family and friends in different styles like Play-Doh, Simpsons, Ghibli, etc. All of them like it, too. Maybe that's what people like, a very relatable use of the technology.
Yeah, but the fun won’t last; you’ll get bored with this look, and with the idea of looks in general, once all the space around you is inundated with it. But no harm in having fun, though…
It’s also a shortcut to self-gratification and wish fulfillment by way of appropriating or poaching the success and value created by others for personal amusement or satisfaction, so there’s that part worth mentioning.
Of course it’s terrible, and that’s because it’s low-effort trash with the sole purpose of making money. But AI gen doesn’t necessarily have to be that; it’s just the slop wave washing over everything. When humans use it to speed up some of their tedious processes, but the whole project doesn’t look or feel rushed, I have no problem with whatever tool they used.
I'm not sure we can be certain everyone has that view. I get the impression many of the people who vilify art generators are also against coding copilots.
> I reflexively told my wife, “That’s AI—skip it.”
> Yet I’m using AI-created illustrations for my graphic novel
Aren't you worried people will skip your graphic novel?
If I see AI art in a graphic novel, I'll stop reading and downvote it.
It's also weird to me, like...
I've used AI art a lot for tabletop RPGs. The level of actual creative control isn't great, even for what should be an easy case of one character in profile against a blank background. Even if you know how to use it well, you're wrestling the systems involved to try and produce consistent output or anything unusual. And that's fine for Orc #3 or Elf Lord Soandso, which are only going to be featured for fifteen minutes at a time and in contexts where you can crop out bad details or use low-effort color grading to get a unified tone.
But for a graphic novel? What? I can't imagine giving up that level of creative control, even as someone who sucks at actual drawing. You'll never be able to get the kind of framing, poses, and structuring you want, doubly so the second you want to include anything remotely original. It's about the absolute worst case for actually using these generation tools.
AI art is not limited to writing a prompt and hoping for the best. There's a multitude of ways to control the generation: img2img, ControlNet, Openpose, InstantID and several other techniques. You can train LoRAs on your characters for consistency.
It takes seconds to generate a panel for a comic; you make a sketch, then generate hundreds of candidates, pick the best one, maybe correct flaws in Photoshop, and it's still faster and cheaper than drawing it yourself from scratch. It's just another workflow for an artist. I use Blender to model rough sketches of 3D scenes, then use ComfyUI to render high quality images with lots of details.
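For instance, a rough sketch-to-panel pass with the Hugging Face diffusers library looks something like the sketch below. The model name, file names, and parameters are illustrative placeholders, not a prescription; a ComfyUI graph does essentially the same thing node by node.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a base model; swap in whatever checkpoint suits your style.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Optional: a LoRA trained on your character, for consistency across panels.
# pipe.load_lora_weights("path/to/character_lora")

# Your rough sketch becomes the structural guide for the generation.
sketch = Image.open("panel_sketch.png").convert("RGB").resize((768, 512))

# Generate a batch of candidates; `strength` controls how far the model
# may stray from the sketch, `guidance_scale` how closely it follows the prompt.
candidates = pipe(
    prompt="comic panel, ink and watercolor, hero on a cliff at dusk",
    negative_prompt="blurry, extra limbs, text",
    image=sketch,
    strength=0.6,
    guidance_scale=7.5,
    num_images_per_prompt=4,
).images

for i, img in enumerate(candidates):
    img.save(f"candidate_{i}.png")  # pick the best, touch up flaws by hand
```

Run a few batches like this, cherry-pick, and fix the rest in Photoshop; the point is that it's an iterative workflow, not one-shot prompting.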
For webnovels I've found it useful as someone who has borderline aphantasia. But in this case, the webnovels would normally have no graphics whatsoever in their chapters aside from the webnovel's cover art (which is usually done by an actual graphic designer).
It's very obvious that they're AI generated, and the authors are typically upfront about it. I still feel a bit of an ick when I see them, and Patreon discussions for the creators I follow also have similar sentiments. Not sure if it's truly a tolerable use-case for AI, but thought I'd throw it out there.
Also, I don't use ChatGPT to rewrite blog posts and don't like people who do. Its style is annoying, and if ChatGPT is producing the content, I might as well ask it whatever you asked it myself, directly. For code I don't care much, so long as it works.
I vilify code LLMs every time I review a colleague's code, ask them why they wrote something a certain way, and they can't explain. ChatGPT prose is even worse: it reads like a corporate press release, only even blander and heavier on the happy talk.
You get whole blocks of code written by AI? No reviewing from the original dev???
I find AI handy in two cases - writing quick boilerplate and assisting when I'm coming to grips with a library. In my case, I've never used the Qt libraries at all, and I'm trying to learn how to use them. Most of the code gets rewritten several times as I try different things, but it's handy to have something fill in the blanks if I can't remember the name of a function.
Ultimately, it's me who's responsible for the end product and I accept that and review the code. But it's definitely been handy.
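To give a made-up but concrete example of what I mean by boilerplate: this is roughly the kind of skeleton a copilot will happily fill in while I'm still learning which classes and slots exist. The sketch below uses PySide6, one of the Python bindings for Qt; the widget names are purely illustrative.

```python
# A minimal PySide6 window: the fill-in-the-blanks scaffolding a copilot
# is handy for when the exact class and method names escape you.
import sys
from PySide6.QtWidgets import (
    QApplication, QMainWindow, QWidget, QVBoxLayout, QPushButton, QLabel
)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Demo")

        self.label = QLabel("Not clicked yet")
        button = QPushButton("Click me")
        button.clicked.connect(self.on_click)  # the signal/slot wiring I always forget

        layout = QVBoxLayout()
        layout.addWidget(self.label)
        layout.addWidget(button)

        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

    def on_click(self):
        self.label.setText("Clicked!")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec())
```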
LLMs are not good enough to replace programmers, authors, or journalists (and I suspect they never will be, since they still rely on accurate, human-written sources to produce anything of value).
However, AI art generators in their current form may render all artistic jobs unlivable within 20 years. Learning to draw is one of the most time-intensive skills to master. A master's degree in CS is sufficient to secure a good job, but five years of experience in art makes you a "novice". AI art is just good enough to devalue art as a whole, making it an infeasible profession to pursue, as it's already near the minimum wage on average.
In 20 years, there may not be any new professional digital artists. All art will become AI art. Do we like that world? Cheap, corporate, lazy, with no sign of effort or dedication.
I want LLMs to go away as well, but at the very least there will always be a market for real text, and there will always be people able to produce it.
Art generators can be used by people who would otherwise have to pay artists, so they're in competition with the people dissing them.
Code assistants are used by programmers wanting to be more productive. Things that claim to replace programmers entirely get dissed. (But it's more "that won't work" rather than "that's not allowed", because, well, it doesn't work. Yet at least.)
AI-generated content is probably cheap spam, even though it in theory could be made by someone knowledgeable using the AI as a tool.
Things generated by an AI are lower quality than things made by someone competent... but depending on what you're doing, that might not matter.
Because art is the futile attempt by a conscious subjectivity at expressing their subjective experience. It's fraught and impossible and really beautiful. Meanwhile, GenAI is not a subjectivity and cannot express anything.
I paint as a hobby. I think a lot of people put less value on digital art in general; it's almost as if "too perfect" doesn't hit right for most people. I was in a local gallery last week and saw one image from afar, thinking "that's very detailed to be so cheap," then got close enough to see it was printed digital art and lost interest. The next painting over was a waterfall, and you could see the brush strokes, yet I found it infinitely more interesting.
However, AI image generation is immensely helpful when I want to do a painting. Before I would find photos I liked and stitch them together, or try to imagine things. Now I can get an image much closer to reference.
With code, my feeling is that we have to write way too much of it right now to express what we want. I can write a small bit of text to the LLM and it will fill out 75%+ of the code over multiple files, which I then just need to shape. So much is structure that needs repeating in variations. I don't have an answer but it seems like there's something else missing from our tools and LLMs are providing a bad imitation of what it should be.
For what it's worth I'm all for both. As someone who dabbles in both domains but does neither professionally, there's a very distinctive difference in how AI fits into the process of producing art and producing code.
For code, it augments my ability to produce code. It's very easy to tinker and modify that code if I so choose, and it's much easier to steer it into the right direction (at least when it comes to the output of the code). For art, it just replaces things. If I create an image with AI for example, it's not that simple to drop it into Procreate and start tinkering with it. There are no layers, no brushes, no masks that come with it - it is the output. I'll just re-prompt, or try to find style guides or reference pictures, but there's no place for an artist in the loop. Others might be using these differently of course, but at least my impression is that it's much more of a replacement when it comes to art.
Reproducing copyright-protected information is "trivial" with image (or text) generation, but hard with code generation.
(This is from listening to a lot of artists, so it might have some bias.) I haven't actually attempted either ... I find the code generation not very useful and the artistic structures interesting, but something's missing.
> Why do we celebrate AI-Copilots but reject AI–Generated art?
That's not the generalization that I would make of HN sentiment.
But one generalization I'll assert is that there seems to be a very strong undercurrent of self-interest, often to the point of cheating. It's not universal, but it might be over 50%. The field has been selecting for, and refining for, people who seek big paychecks, and train for the BS rituals (e.g., FAANG interviews, resume-driven-development, metrics, Agile reactive theatre, growth scam startups).
How are all those people going to think about tools that might give them an edge in their operating mode? Would they be thinking about quality, maintainability, security, the team, company goals, or ethics?
Maybe because it's not about the code, it's about the compiled software?
Also, I like AI art; I made a Lego model and then fed it into an image generator to kinda generate a "reverse" reference image. So it looks like the Lego model is trying to look like the reference pictures, even though its look is dictated more by the very constrained parts list (it's an alt build of an existing model): https://rebrickable.com/mocs/MOC-218657/RedNifre/31124-battl...
I could not have drawn these artworks myself, and the use is so silly that I would not spend any money paying for them: without AI, these would not exist.
> We lean on ChatGPT to rewrite blog posts and celebrate Copilot for “boosting productivity,” [...]
Speak for yourself.
I don't celebrate either.
I would suggest that it is due to perceived utility. For code to be useful it has to work. For art to be useful it has to… do something else which is much harder to quantify. But perhaps it is about connecting the human condition?
The interesting thing about art is mostly not its physical existence. The interesting thing about art is that another human being made it to try to express something, in words or colors or film or whatever medium. It's a person trying to show you their interiority, taking something fundamentally unknowable—another living mind—and making it legible to other people. Even when it's art for hire like animation in a commercial, at the end of the day there's some human or humans who put some work in there. LLM-generated art just doesn't have that. There's no interiority to be exposed.
I don't. There is no "we," and it's a relatively small but extremely loud contingent of people that have meltdowns over AI-generated images and video.
As for the why, it's primarily ego and fear.
And you told your wife to skip the video because most AI-generated content is, quite frankly, garbage. It's not garbage because it's AI though; it's garbage because the person making it probably doesn't care much about quality.
Every single non tech person I know hates AI. Even my tech friends are turning against it.
You are in a bubble, most likely.
It's extremely polarizing. Dismissing anybody's experience as "being in a bubble" is misguided on this topic, not to mention disrespectful.
> it's a relatively small but extremely loud contingent of people that have meltdowns over AI-generated images and video.
That's not my experience at all. Hacker News is indeed an echo chamber, but sometimes things escape, and discussion of how much people hate AI "art" is becoming increasingly common among my non-tech friends. The earliest example most people point to is the Christmas Coca-Cola commercial.
Do people really hate it, or is it bandwagoning? When OpenAI added image generation to GPT-4o, the "ghiblification" trend took off and most people seemed to enjoy it for memes and jokes.
Furthermore I've never really encountered people IRL who hate AI art. The vast majority seem to be neutral towards it.
All of the two-dozen-or-so people I know closely hate AI art - hate. Not as a tool, e.g. for making images to go with your DnD campaign or whatever - that's great - but rather as replacements for human art. E.g. as blog post images, movie posters, images uploaded to art hubs, etc. They also mostly hate the way it usually looks.
Wouldn't a human normally make the D&D characters though?
Also, do they hate all AI art, or just art where they can tell because it looks like AI art?
The "make" here is referring to representative art (for use with battle map standees if nothing else), where the previous common method is "grab a loosely representative image from a web search".
That's it, you solved it. It's all about art.
Now, complete the ritual. Take their place and bring art and culture to your new empire.
I feel the sentiment on HN for both is very positive. But yes, I am personally negative on generative AI. It really comes down to the 3C's and how current LLM training blatantly stomps on them. Yet another business makes billions by relying on artists without compensating them, and people wonder why the art community becomes so cynical.
I'm indifferent on copilot stuff. For my domain it isn't as useful as using snippets, but code tends to be easier to follow the 3C's on than art. Most code people don't want copied is already obfuscated or simply not publicly readable.
Ego
Most programmers want to get some final output, they want the application or game rather than some beautiful code.
Most artists want to create beautiful art, it's a form of self expression. Creating art just for outputs sake without adding love seems cold and capitalistic.
So AI enhances and delivers what programmers want, and diminishes what artists want.
Artists correctly realized the threat to their future economic viability and made up reasons it was morally bad. Programmers are currently stuck in an earlier stage, insistent that it can never replace them because [various things].
Destroying a profession without a plan to help those displaced is morally bad. It's also inevitable. The most obvious mistake of everybody on both sides of AI arguments is denying the fact that something can be used for both good and bad, will have devastating effects and yet is an advancement that must happen, etc. This isn't cognitive dissonance - it's reality.
Advanced programmers don't use AI assistants because they don't help with writing complex new code; they're mostly good only for completing well-known and more or less simple tasks. And I hope that such people understand the danger posed by "artificial intelligence" in many cases and in many forms.
I for one reject them both equally. ChatGPT's writing style is annoying, the "art" all has this weird similarity to it, and whether it's YouTube or a graphic novel or a blog post, if I can tell it's AI, I'm skipping it. Fixing auto-generated code annoys me far more than just doing it myself the first time.
> We lean on ChatGPT to rewrite blog posts
Not anybody I read.
I don't understand this post. Both the chatbots and the image generators are controversial. The voice and music generators are controversial. The whole field is controversial.
It's reasonable to hard-skip AI videos because almost all of them are currently slop, to the extent that I don't see why you needed to explicitly suggest that your wife skip it, rather than her noticing it on her own and skipping automatically.
I celebrate both.
Different groups or individuals are going to have different reactions. Most of the reactions to AI art are similar to the reactions for art and actually most things in general. They are based on popularity or fashion more than any real judgement.
It's largely driven by social dynamics. If your group generally expresses disgust for AI art, you subconsciously know you have to have the same opinion about it.
Your post is a bad example where you make an artificial distinction based on how you generate it in order to make it okay.
It's okay for what you are doing because it's incredibly convenient. It's not okay for other people because you know it's unpopular.
For videos, you also need to distinguish between them and images or other types of art. Videos are more challenging than still images and are just starting to get to the point where the latest ones don't have a lot of weird, obvious spatiotemporal artifacts.
What do you mean "we"?
> Is this a matter of domain familiarity, perceived craftsmanship, or simple cultural gatekeeping?
Likely all of the factors covered in the question. It takes time and money to train and become skilled in a given professional area, and things that undermine that investment, forcing retraining before it has paid off, tend to cause issues. The legal framework hasn't caught up with the changes either, so the issues that do happen are more difficult, costly, and time-consuming to resolve.
'Historical' parallels abound:
More of the same thing, in different historical contexts and with changes in available technology:

- The 1800s introduction of the camera and the fax machine[4] vs. the portrait artist; likewise the history of Kodak and the digital camera[1], and of Xerox[2] and IBM & the PC[3].
- The car and automation vs. blacksmiths, coachmen, and other horse-related jobs.
- The car and US urbanization vs. door-to-door sales and professional services, aka house calls. Although delivery services paired with internet ordering and video teleconferencing are kind of reinventing or rediscovering the historical door-to-door sales and house-call model (and COVID-19 helped expedite the shift to online offerings).
- Sears originally sold by catalog, inspiring 'towns by the railroad,' vs. Amazon, with no physical store, selling by internet and mail delivery with no multi-store rent or overhead, vs. physical stores offering "free delivery with in-store pickup." Perhaps at some point this concept will change yet again into something like sending software specs to print non-food and food items on one's home 3D printer.
- Somewhat dependent on where one lives: grocery store call-in, pick-up, and delivery (touching on the historical Sears-and-railroad model in the western US vs. physical stores and customer proximity in the eastern US). Although with overnight or speedy delivery you could technically overcome geographic differences between stores; e.g., Amazon's $14-a-month free delivery to places in Alaska reachable only by air service, where the minimum air-service charge before freight is way beyond the cost of a small order (2015, a bit dated: [0]).
- Harder to find references for, but: mainframe staff and administrative support vs. the home PC vs. cloud services vs. the mobile phone and internet services.
- The requirement of a college class in philosophical logic when seeking a bachelor's degree.
- Computer science and information science merging into 'data science.'
[0] : https://www.adn.com/business/article/amazon-prime-eases-rura...
[1] : https://www.weforum.org/stories/2016/06/leading-innovation-t...
[2] : https://inspireip.com/xerox-failure-reasons/
[3] : https://spectrum.ieee.org/how-the-ibm-pc-won-then-lost-the-p...
[4] : https://www.damninteresting.com/curio/the-fax-machines-of-th...
'electronic pixelated images':
https://artsandculture.google.com/story/phototelegraphy-inventions-that-transported-images-worldwide-museum-for-communication-berlin/
https://en.wikipedia.org/wiki/John_Logie_Baird
https://en.wikipedia.org/wiki/Digital_photography
Same reason we celebrate calculators but not AI financial advisors. Code is just math.
Who is this “we” in your postulation? If you’re using AI for your “graphic novel” I object to thinking yourself one of the creative class. To put it another way, if AI was a human being and a slave to you, where you simply prompted it and appropriated its output, would you reasonably be considered a “creator” of that work? I assert you would not.
The entire framing of this question, as posed, is transparently self-serving as a justification to seek validation for a process which, fundamentally, contradicts the purpose and definitions of art.
Code is a functional set of orders to a machine, nothing more. Nobody is buying paperbacks of source code to read on an airplane, correct? It’s refreshing to have this opportunity to confront these absolutely infuriating equivocations which have momentum in the present day.
There’s a reason the term “AI slop” is floating around with such frequency. I am an artist and a musician and a writer. You are not a part of our “we” buddy. Sorry not sorry.
The point of most AI-generated videos is as an attention sink. They're uploaded to get you to spend 30 or 45 minutes watching, and they're typically a mishmash of "points" loosely related to the title of the video. They are often very low-content, and do lots of "before we get to the point, let's backtrack over the history of Three's Company..." and then you get a rehash of the whole Wikipedia article on the topic with zooming still photos.
The style, when it is so obvious, becomes indexed in my mind with low-quality waste-of-time videos.
I blame the tool users, not the tool. The people sloshing these things up onto YouTube are deliberately flinging enough crap at the wall to get clicks. Imagine if they put some oomph into it. Focus on the topic, emphasize main points instead of having a monotonous litany that just sounds like facts strung together without logical connection. Have a point. Thesis, summary, argument, elaboration, summary, map to the thesis, conclusion... Or, if it's fiction, give me three act structure, or seven-point plots.
Otherwise, I'll continue to recognize, and discard what are literally garbage videos, generated by the thousands, to waste our time.
[flagged]
> You are a massive hypocrite, and it seems like you're a scammer and a thief, too.
Personal attacks like this are not OK on Hacker News and we ban people who do it repeatedly.
Please have a read of the guidelines and make an effort to adhere to them in future.
Harsh but fair and I actually appreciate your phrasing, crass as it may sound.
Correlative: why are people so upset about identity theft?