The only moat left is knowing things (growtika.com)
61 points by Growtika 20 hours ago | 51 comments
  • bschne 18 hours ago

    > Was this physically difficult to write? If it flowed out effortlessly in one go, it's usually fluff.

    Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.

    When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a neverending slog. YMMV.

    • graemep 17 hours ago | parent

      My most successful blog post was written about something I felt strongly about, backed by knowledge and a lot of prior thought. It was written with passion.

      People asked for permission to repost it, it got shared on social media, and it ended up ranking higher in Google than a Time magazine (I think) interview of Bill Gates with the same title.

      • codelikeawolf 13 hours ago | parent

        Could you post a link?

        • 12 hours ago | parent
          [deleted]
    • jaapz 16 hours ago | parent

      Right? I think some of my best work flowed out effortlessly, it's amazing when you get into the flow state and just churn out line after line.

  • mcny 18 hours ago

    > If I subconsciously detect that you spent 12 seconds creating this, why should I invest five minutes reading it?

    The problem is it isn't easy to detect it and I'm sure the people who work on generated stuff will work hard to make detection even harder.

    I have difficulty detecting even fake videos. How can I possibly detect generated text, in plain text, accurately? I mean, I will make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. This will cause unnecessary friction which I don't know how to prevent.

    • fhd2 18 hours ago | parent

      First thought: In my experience, this is a muscle we build over time. Humans are pretty great at pattern detection, but we need some time to get there with new input. Remember 3D graphics in movies ~15 years ago? They looked mind-blowingly realistic. Watching old movies now, I find they look painfully fake. YMMV of course.

      Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.

      • graerg 14 hours ago | parent

        Exactly! It must be exhausting to have this huge preoccupation with determining whether something has come from an LLM or not. Just judge the content on its own merits! Just because an LLM was involved doesn't mean the underlying ideas are devoid of value. Conversely, the fact that an LLM wasn't involved doesn't mean the content is worth your time of day. It's annoying to read AI slop, but if you're spending more effort suspiciously squinting at it for signs of an LLM than assessing the content itself, then you're doing yourself a disservice IMO.

        • 12 hours ago | parent
          [deleted]
  • _tk_ 18 hours ago

    Big LinkedIn post on a concept with little proof.

    • Growtika 18 hours ago | parent

      Fair point. This is more mindset than case study. The proof is still being built across client work. Though I'd say the same was true for SEO in the early days: people speculating on what made Google rank certain sites higher, what made pages index faster, etc. The frameworks came before the proven playbooks.

  • RetroTechie 12 hours ago

    Nothing new in the article imho. But it's a nice overview of what content creators are facing, and what to look for when carving out a niche.

    The #1 point really: have access to data / experiences / expert knowledge that's unique & can't be distilled from public sources and/or scraped from the internet. This has always been the case. It just holds more weight when AI agents are everywhere.

  • jmkd 17 hours ago

    The central idea, that we all have the same tools, which now represent an infrastructure baseline, and therefore we need to look harder to establish our moats (not just in knowing things, although that's one), is sound and well put. Thanks.

  • beej7 19 hours ago

    I don't disagree with the main thesis, but I do think it's relatively easy for skilled writers to outperform LLMs in terms of clarity and impact. Whether or not that advantage makes any business sense is another question.

  • JamesTRexx 16 hours ago

    I see the same point when it comes to fiction writing. I tested this (via duck.ai) a while ago by creating fiction stories in less than 500 characters, and it came up with generic, repetitive output that even went over the limit. Tried again just now with 5o mini, and although it waxed poetic, there were cracks and gaps, it still felt rather generic, and it certainly failed at twists and humour.

    It can write about a spark, but the content has no spark.

    • subscribed 11 hours ago | parent

      I had really great results with DeepSeek V3.2, but in a carefully set-up environment (scenario, main characters, certain points for the storyline).

      It came out engaging, refreshing, and in some parts punching really hard.

      Mostly it's not as good indeed.

  • bsenftner 15 hours ago

    I disagree. The moat now is being able to understand, and then communicate that understanding to others, even when they resist understanding. Crack that, and you'll save this civilization from all the immature shortsighted thinkers.

    • p_v_doom 14 hours ago | parent

      > even when they resist understanding

      Agreed. You may know so many things, but ultimately it's useless if the other party does not care about wanting to understand them. And I have no clue what the right way is, besides letting people and their models fail and then being there with an answer ...

    • xnx 7 hours ago | parent

      attention -> persuasion -> power

  • hwj 13 hours ago

    Website seems to require JS to display anything at all.

  • nottorp 15 hours ago

    My opinion is "content" was slop even before "AI".

    If you're worried about producing "content", the completion bots have caught up with you.

    See the other posts calling the article "a LinkedIn post". Those were slop even before LLMs.

    Now if you have some information you want to share, that's another topic...

    • Lerc 15 hours ago | parent

      Content is not meant to imply fungibility by being nonspecific. It is supposed to represent an acknowledgement of diversity across a wide range of activities.

      The term content creator represents inclusivity, not genericity.

      You have used the term information as a candidate for an alternative. What if someone is sharing an experience, an artwork, or simply something they found to be beautiful? There may be an information component to some of those things but the primary reason that they were offered isn't to be informative.

      You don't seek content any more than you seek words. You may read books made of words but it is what the book is about that you look for. The same goes for content, only with a broader spectrum. You seek things that you like, things that you value. Content, being nonspecific, means your horizons can be broad.

      • direwolf20 13 hours ago | parent

        Content is used to imply that. I have a website, here are the ads, now I just need some content and I'm all done!

        I like your words analogy. A "content creator" is a "words writer". We need some words on this page or it looks weird. Go and get me some words.

        Users don't seek words, but operators seek to entice users with words so they'll view the advertisements.

        "Content" is the same thing without reference to a specific medium. Content can be video, audio, words, or even interactive gameplay.

      • nottorp 14 hours ago | parent

        Your experience sharing is also information, not content. It doesn't have to be a technical manual or a self improvement text.

        What I call content is ... well, content ... produced not because you have something to say but because you're aiming for quantity.

        • Lerc 14 hours ago | parent

          Experience sharing has an information component; the thing that distinguishes it from a written report containing the same facts is not.

          What you call content is just low-effort content. Slop is a more evocative term that probably captures the concept of low effort; unfortunately it has already been poisoned by people declaring anything AI-assisted to be slop regardless of how much effort went into the work.

          There are a lot of people who work tirelessly on things that have a massive time and effort commitment for each thing they produce. Yet they identify as content creators. Dismissing their work seems disrespectful to me.

          Sturgeon's Law is a warning to not overlook the good because of the preponderance of the bad.

          • nottorp 11 hours ago | parent

            > Sturgeon's Law is a warning to not overlook the good because of the preponderance of the bad.

            Right, why don't you quote it in full though?

            "90% of everything is crap"

  • fullstackchris 10 hours ago

    This points to a general theory/concept I've been working on for the past few weeks: once software development and decent enough copywriting can be done by the LLM, you are indeed only bottlenecked by your own knowledge and creativity. The LLM, for all the hype out there about it, is still just a latent tool and will do absolutely nothing if you don't interact with it, of course!

    I see LLMs more and more as a mirror: if YOU can orchestrate high-level knowledge, have a brutally clear vision of what you want, and prompt correspondingly, things will go well for you (I suppose this all comes back to 'context engineering', just with higher specificity on what you are actually prompting). Turns out domain knowledge, time- and experience-built wisdom, and experience in niches, whatever they may be, will always be valuable!

  • jongjong 18 hours ago

    I think the most valuable intellectual skill remaining is contrarian thinking which happens to be correct.

    LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.

    I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.

    • pjc50 16 hours ago | parent

      > contrarian thinking which happens to be correct

      Important qualifier there. There's a massive oversupply of contrarian thinking; it's cheap, popular (populist), and incorrect. All you have to do is take some piece of conventional wisdom and write the opposite. You don't have to supply evidence, or if you do then a single cherry-picked piece will suffice.

      I'd say something more like "Chesterton's Fence Inspection Company": there are reasons why things are the way they are, but if you dig into them, maybe you will find that the assumptions are no longer true? Or they turn out to be still true and important.

      • jongjong 12 hours ago | parent

        Nowadays there are a lot of contrarian ideas which are correct. Some such ideas I've had for maybe 10 years; they started out as speculative thoughts that I dismissed initially, but the evidence kept piling up. At first, it was just my personal experience; then other people I met started making similar observations. Then I kept researching, and the more I learned, the more plausible it seemed. Now there are major media figures discussing some of those ideas in front of millions of people. Some of this stuff I really thought was just me being paranoid; now it's an open topic of discussion at a societal level.

        Just as one minor example, after working in blockchain space in Germany, I left the industry with a feeling that there was corruption in the sector involving government officials (just based on a lot of weird stuff I witnessed). At the time it was just a paranoid feeling; I couldn't make sense of what I'd experienced because I could not pin down a motivation. But fast forward a few years and I saw an interview in which Marc Andreessen mentioned that some US government officials under Biden actively went after certain blockchain projects and how some people were de-banked for apparently no reason.

        This was interesting to hear because I had lost access to one of my bank accounts a few years prior and the bank wouldn't tell me the reason. I also got audited by the tax authorities in Germany, though my tax record was perfect (they had to concede). This was weird considering my income level was not that high and my situation was relatively straightforward. I'm still not fully settled on a conclusion there, but every year my worldview seems to make more sense.

        I only recently managed to start taking advantage of what I'd been observing. For example, I anticipated the current precious metals rally a few years ago. Just based on my feeling/observation that crypto had been corrupted by governments and people would start looking for other assets. Before this, I just didn't have any capital to invest so I could not act on my accurate predictions; I could only watch in horror.

  • Imustaskforhelp 15 hours ago

    > The data backs this up. 54% of LinkedIn posts are now likely AI-written (Originality.ai). 15% of Reddit posts too, up 146% since 2021. Every competitor has the same capability to generate keyword-optimized, structurally correct, grammatically polished content. In about twelve seconds.

    I don't know anything about marketing, given that the first paragraph of the blog post makes it clear it's from a marketing context.

    But, as a user, or literally just a bystander: using AI isn't really good. I mean for LinkedIn posts, I guess. Isn't the whole point to stand out by not using AI on LinkedIn?

    Like, I can see a post ending with:

    Written with love & passion by a fellow human. Peace.

    And it would be better / different than this.

    Listen man, I am from a third-world country too and I had real issues with my grammar. Unironically this was the first advice that I got from people on HN; I was suddenly conscious about it & I tried to improve.

    Now I get called AI slop for writing how I write. So to me, it's painful to see that my improvement in this context just gets thrown out the window if someone calls some comment I write here or anywhere else AI slop.

    I guess I have used AI, and I have pasted my messages in there to find that it can write like me, but I really don't use that (on HN; I only used it once for testing purposes on one Discord user, iirc). But my point is, I will write things myself, and if people call me AI slop, I can really back up the claim that it's written by a human: just ask me anything about it.

    I don't really think that people who use AI themselves are able to say something back if someone critiques something as AI slop.

    I was talking to a friend once; we started debating philosophy. He gave me his Medium article. I was impressed but noticed the --, so I asked him if it was written by AI. He said the ideas were his but he wrote/condensed it with AI (once again, third-world country, and honestly the same response as the original person).

    And he was my friend, but I still left thinking hmm, if you are unable to take time with your project to write something, then that really lessens my capacity to read it. I even said to him that I would be more interested in reading his prompts & just started discussing the philosophy itself with him.

    And honestly the same point goes for AI-generated code projects, even though I have vibe coded many. But I am unable to read them, or can't find the will to, a lot of the time if they're too verbose or not to my liking. Usually in that context it's more just prototypes for a personal use case, but I still end up open sourcing them if someone might be interested, I guess, given it costs me nothing to open source them.

  • jdthedisciple 19 hours ago

    Ironically this reads like AI slop.

    • zvqcMMV6Zcr 18 hours ago | parent

      No, it reads like a LinkedIn post. That said, do we now have to check whether the text we wrote looks like something AI generated?

      • burakemir 17 hours ago | parent

        You're absolutely right.

      • Jensson 16 hours ago | parent

        If it's a problem for you, then yeah. If you never get accused of using AI, then no.

        • Imustaskforhelp 15 hours ago | parent

          Um, I've now been accused on HN three times of being AI for writing comments by hand.

          I got so annoyed the second time that I even created a post about it. I guess I just get really annoyed when someone accuses me, who writes things by hand, of AI slop, because it makes me feel like, at this point, why not just write it with AI? But I guess I just love to type.

          I have unironically suggested in one of my HN comments that I should start making the grammatical mistakes I used to make when I had just started using HN like , this mistake that you see here. But I remember people actually flipping out in comments over this grammatical mistake so much that it got fixed.

          I am this close to intentionally writing sloppily to prove my comments aren't AI slop, but at the same time, I don't want to do this because I really don't want to change how I write just because of something other people say, imo.

          • jfyi 12 hours ago | parent

            Don't kid you'reself, people LOVE grammatical and spelling errors. It's low entry, and by far the easiest way to get someone to interact with what you have written.

            AI deprives them of this.

            Why even read something with no mistakes? Just scan on to the next comment, you might get a juicy "your/you're" to point out if you don't waste time reading.

          • IAmBroom 10 hours ago | parent

            That's EXACTLY what an AI member of this community would say!

            I know you're secretly a bot, because you used punctuation. Only AI uses punctuation!

            /s

            • Imustaskforhelp 5 hours ago | parent

              xD

              that /s is carrying the whole message haha

              but yeah, I guess sometimes I wonder: suppose a bot were accused of being AI. I mean, if trained with the right prompt and everything, it could also learn to flip out, and then we would genuinely not be able to trust things.

              I guess it can be wild stuff, but currently I just really flip out, while staying literally just below the swear level to maintain decency (also, personally, I don't like to swear ig), only to then find that, okay, I am a human after all.

              But I guess I am gonna start pasting this YouTube video when somebody accuses me of being AI.

              I am only human after all: https://www.youtube.com/watch?v=L3wKzyIN1yk

              It would be super funny and better than flipping out haha xD

              "Got no way of proving it so maybe I am lying but I am only human after all, don't put the blame on me, don't put the blame on me" with some :fire: emoji or something, or not, lmaoo. It would be dope. I am now waiting (anticipating, out of fun) for the next time I comment something written by me (literally a human lmaoo) and someone calls me AI.

              The song is a banger too btw so definitely worth a listen as well haha

    • Growtika 18 hours ago | parent

      Genuinely curious, what felt off? Ideas are mine, AI just helped clean up the English (I added a disclaimer)

      • duskdozer 17 hours ago | parent

        The writing style just has several AI-isms; at this point, I don't want to point them out because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting within the first couple of paragraphs. These days, I lose all interest in reading when I notice it.

        I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example - if you were to use a false friend [1], an LLM may not deal with this well and conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.

        [1] https://en.wikipedia.org/wiki/False_friend

      • djeastm 17 hours ago | parent

        For me it's a general feel of the style, but something about this stands out:

        >We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.

        The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.

        • ares623 16 hours ago | parent

          It reads like an Apple product page.

      • xnorswap 17 hours ago | parent

        That most of the subheadings start with "The" and "What Actually" is a bit of a giveaway in my view.

        Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.

      • kranner 15 hours ago | parent

        > Using them isn't an advantage, but not using them is a disadvantage. They handle the production part so we can focus on the part that actually matters: acquiring the novel input that makes content worth creating.

        I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.

        Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.

      • InterviewFrog 15 hours ago | parent

        It did not feel off at all. I read every single word and that is all that counts.

        I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.

        The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.

        So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort. They don't care. But because right now AI produces generic fluff that's overly perfectly correct. That's why I skip most LinkedIn posts as well. Like, I personally don't care if it's AI or not. But mentally, I just automatically discount and skip it. So, your effort basically interrupts that automatic pattern recognition.

      • Jensson 16 hours ago | parent

        You admit it yourself here:

        > I run a marketing agency. We use Claude, ChatGPT, Ahrefs, Semrush. Same tools as everyone else. Same access to the same APIs.

        Since you use it for your job, of course you use it for this blog, and that will make people look harder for AI signs.

      • edent 17 hours ago | parent

        > AI just helped clean up the English

        Why?

        I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.

        But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?

    • PurpleRamen 16 hours ago | parent

      Ironically, everything smells like AI now, even when it's human.

      • direwolf20 13 hours ago | parent

        Sometimes it feels like slop. Slop shouldn't get a pass just because a human wrote it.

        • PurpleRamen 10 hours ago | parent

          How much of that feeling is false-positive pattern-matching?