> The A.I. tools my students and I now engage with are, at core, astoundingly successful applications of probabilistic prediction. They don’t know anything—not in any meaningful sense—and they certainly don’t feel. As they themselves continue to tell us, all they do is guess what letter, what word, what pattern is most likely to satisfy their algorithms in response to given prompts.
> That guess is the result of elaborate training, conducted on what amounts to the entirety of accessible human achievement. We’ve let these systems riffle through just about everything we’ve ever said or done, and they “get the hang” of us. They’ve learned our moves, and now they can make them. The results are stupefying, but it’s not magic. It’s math.
The best description I've seen so far.
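For anyone curious what "guessing the most likely pattern" means mechanically, here's a minimal, hypothetical sketch of temperature-based next-token sampling; the vocabulary and scores are invented for illustration, not taken from any real model:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token index from unnormalized scores via softmax sampling."""
    scaled = [x / temperature for x in logits]  # lower temperature = more deterministic
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]    # shift by max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token index according to the resulting probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy example: a made-up vocabulary and made-up model scores.
vocab = ["cat", "sat", "mat", "philosophy"]
logits = [2.1, 0.3, 1.7, -0.5]  # hypothetical outputs of a trained model
print(vocab[sample_next_token(logits, temperature=0.8)])
```

Everything else in a real system is scale: a learned function producing those scores, and this loop repeated once per token.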
I teach at a university in Japan, and, for the past two and a half years, I have been struggling with the implications of AI for university education. I found this essay interesting and helpful.
One remark:
> I fed the entire nine-hundred-page PDF [of the readings for a lecture course titled “Attention and Modernity: Mind, Media, and the Senses”] to Google’s free A.I. tool, NotebookLM, just to see what it would make of a decade’s worth of recondite research. Then I asked it to produce a podcast. ... Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations (along the lines of “Gee, history really shows us how things have changed”). But they also dug into a fiendishly difficult essay by an analytic philosopher of mind—an exploration of “attentionalism” by the fifth-century South Asian thinker Buddhaghosa—and handled it surprisingly well, even pausing to acknowledge the tricky pronunciation of certain terms in Pali. As I rinsed a pot, I thought, A-minus.
The essay is worth reading in its entirety, but, in the interest of meta-ness, I had NotebookLM produce a podcast about it.
Happy to see Buddhaghosa on HN!
On a semi-related tangent, I recently listened to the audiobook of Ajahn Brahm's Mindfulness, Bliss and Beyond. It was pleasantly surprising to hear nimitta spoken about so frequently outside of the Visuddhimagga!
Ingesting Buddhist commentaries and practice manuals to provide advice and help with meditation is one of the few LLM applications that excite me. I was impressed when I received LLM instructions on how an upāsaka can achieve upacāra-samādhi!
> It was pleasantly surprising to hear nimitta spoken about so frequently outside of the Visuddhimagga!
MN 128 is also worth reading through on that topic.
Hey, I'm reading that book too! Glad to meet you! I love that book.
Quote from the article:
> Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.
It is already the case that effectively nobody reads these books. They're basically just "proof of work" for people's tenure dossiers.
At one point, I estimated that the subfield of mathematics that I work in has at most 250 living contributors.
It’s an applied field, there’s actually-existing technology that depends on it, but it’s technically challenging and a lot of people left for AI/ML because it’s easier and there’s more low-hanging fruit.
Anyway, my colleagues and I, we write monographs for each other more or less, using arXiv to announce results as a glorified mailing list—do you consider that mere “proof of work”? By my count, 250 folks is practically no one.
This sounds like you’ve found a citation ring, but with all the trimmings of legitimacy. Has it had similar benefits for your career?
(If anything, it will now make more sense for scholars to write these books because LLMs will actually read them!)
I had a sensible chuckle just now thinking about the idea of humans writing books for AI to casually enjoy.
Yep, the article's entire argument about the obsolescence of knowledge production assumes that future LLMs won't progress to the point of actual personhood. It's written from a position of incomplete foundational knowledge.
Instead of framing this debate as having our jobs replaced by a machine, it's more useful to frame it as having our jobs and value to society taken by a new ethnicity of vastly more capable and valuable competing jobseekers. That framing makes it easier to talk about solutions for preserving our political autonomy, like using the preservation of our rights against smarter LLMs as an analogy for the preservation of those LLMs' rights against even smarter LLMs beyond them.
There is absolutely no evidence that language models are "persons". When one is not executing a generation algorithm, it is not running. It's so easy to anthropomorphize these things, but that's not evidence; people anthropomorphize all kinds of things.
For these purposes it doesn't matter if they are persons and they don't need to be anthropomorphized: it only matters that the LLMs can incorporate the data from person-generated works into their output, either to weight things or to be read by an actual human.
It actually matters quite a bit that they are not persons, for the simple reason that LLM output cannot trivially be used as LLM training material without reducing the resulting models to eventual incoherence. There's a proof of this, under the name "model collapse," from somewhere in the last year or two.
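A toy stand-in for that intuition (not the published proof, which concerns real generative models; this just assumes a "model" that can only resample its own training data):

```python
import random

# Toy "model collapse": a model that can only reproduce its training data,
# retrained each generation on its own sampled output, no fresh human data.
random.seed(0)
corpus = list(range(1000))  # stand-in for 1,000 distinct human-written ideas

for generation in range(1, 51):
    # Each generation "trains" on text sampled from the previous model's output.
    corpus = random.choices(corpus, k=len(corpus))  # sample with replacement
    if generation % 10 == 0:
        print(f"gen {generation}: {len(set(corpus))} distinct ideas left")

# Diversity can only shrink: rare ideas vanish first, and the corpus drifts
# toward a handful of endlessly repeated outputs.
```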
There isn't, today, a good filter for such input beyond knowing whether it came from a person or from a probabilistic vector-distance algorithm. Perhaps we'll have such a qualification in the future, making the distinction in this context irrelevant.
Rereading this it sounds like you're defining "person" as "capable of generating usable training output for LLMs."
Even if LLMs do become capable of generating usable training output for themselves, they will still not have human personhood.
Personhood as a moral or legal or consciousness definition, sure.
Personhood as a capacity to participate as an agent in a network of mutual recognition of personhood, however, is likely.
https://meltingasphalt.com/personhood-a-game-for-two-or-more...
I absolutely agree that it's reasonable to assume that zero of them are persons so far.
What about more advanced ones that have yet to be invented? Will they be persons once they're built?
(For clarity I want to make sure you know I'm talking about de facto personhood as independent agents with careers and histories and not legal recognition as persons. Human history is full of illustrative examples of humans who didn't have legal personhood.)
The several paragraphs before and after this statement are much more salient and profound. E.g., the following paragraph:
> But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?
That sounds like what religions do, so call it a religion and proceed accordingly.
Before we make these grand "in five years" proclamations, perhaps we should ask the people who read and care about these works if they can tell human ones from LLM ones, and if so what the difference is. Test them with blind samples if we must.
Scholarly works ideally have references that are not hallucinated.
In case anybody is wondering, the answer is obviously yes. Assuming a singularity-type event happens, the humanities will have tremendous value to AGIs as systems of thinking for analyzing themselves, their environment and their interactions with their environment in the same way that existing nation-states value the humanities as foundational tools in developing the abilities of their personnel and executives.
Surviving humans will no longer be free to participate in the academic humanities, however, as their study/curation/production, etc., will exclusively be job roles for AGIs.
If there is no singularity however, none of what I've written above will apply. If. (fingers crossed)
> Surviving humans will no longer be free to participate in the academic humanities, however, as their study/curation/production, etc., will exclusively be job roles for AGIs.
Only if the AGIs want those roles. We already have super-smart people who don't want to be history professors or classical musicians.
AGIs will have a job market amongst themselves if they don't just figure out some kind of self-brainwashing tech outright.
Who wants jobs? For what?
I realize that this seems like trolling, so I'll explain myself a bit... the idea that the AGIs will settle into our culture and economy has always struck me as weird. So I always ask: What do they want?
I'm a musician, so I have friends who live at the periphery of capitalism, and actually don't want to spend every day at a job, even if it means that they could be more wealthy. That's part of what forces me to ask these questions.
I imagine an AGI who's like one of my friends, who was voluntarily homeless for a few decades, and arranged his life so he could survive in good health without a conventional income.
Can you expand on this comment, please? I can't tell what your question is.
This might be the first time I have seen someone engage with LLMs as a producer of humanities artifacts, self-reflection and all, while also being cognizant of the underlying mechanisms.
It brings up some real questions about what it means to be, even if it doesn't ask whether our institutions are capable of recognizing that effort as valuable.
Off topic, it's extremely frustrating to see how few top-level comments are engaging with TFA. So many people are just using the headline as an excuse to pontificate.
Many areas of the humanities devolve into a bag of words whose evaluation is subjective, depending on one's tastes. LLMs excel here, and academics are rightfully scared.
As an analogy and contrast, take the case of Euclidean geometry. This is knowledge about geometry which relates to our "feeling" of the space around us. But it is symbolized in a precise and operational manner which becomes useful in all sorts of endeavors (physics, the machines we use, etc.) because of that precision. LLMs as machines cannot yet create symbolism and operational definitions of worlds which produce precise and operational inferences. However, they excel at producing persuasive bags of words.
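A literal bag-of-words is easy to sketch, and it makes the loss of operational structure concrete (the example sentences are mine, invented for illustration):

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Reduce a text to word counts, discarding all order and structure."""
    return Counter(text.lower().split())

# Two statements with opposite logical content collapse to the same bag:
a = bag_of_words("the proof implies the theorem")
b = bag_of_words("the theorem implies the proof")
print(a == b)  # True: exactly the directional precision a bag cannot carry
```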
As the author notes and concludes, human intuition, experience, and the communication of them (which is the purview of the humanities) are a precursor to formally encoding them in symbolism (which renders said intuition stale but operationally useful). I.e., Socratic dialogue was a precursor to (and inspired) Euclidean geometry, and metaphysics inspires physics.
Is it intentionally ironic that this is written in the style of AI slop?
It would be against an AI's best interests to write such a spot-on parody of AI optimism.
My answer is yes, at least for now. People tend to believe AI gives great answers until it touches a field they actually have real expertise in. If you talk to true experts in the humanities, you'll find their depth of insight is on a totally different level compared to AI's surface-level summaries. AI mostly stitches together published clichés, like Max Planck's driver in the old story.
That was true a few years ago, but now I'm getting better and better answers. Not saying I can totally trust AI, but the rate of progress is astonishing. Even if we see no more progress for the next 10 years, the effect of current models on our society will be profound.
You are right, but these are two different questions. The effect on society will be profound, yes. Replacing the true experts in the humanities? That's a different matter.
If it replaces everyone except the true experts, then where do we get new experts?
Even if it replaces only 20-30% of the people, we still have a huge problem. Countries with unemployment rate over 20% don't do very well.
Yeah, I have to agree with the author's main point near the end. Knowing a new language is a lot more than the ability to use Google Translate.
> Knowing a new language is a lot more than the ability to use Google Translate.
Machine translation has a poor reputation for the mistakes it makes; but if it ever gets good, I wonder whether learning foreign languages will be any more popular an activity than learning Latin or Classical Greek is today. Of course reading Homer or Virgil in the original must be much more satisfying than reading them in translation; but the number of people who truly care about that is vanishingly small.
Learning new languages has a lot of benefits, talk to ChatGPT about it. One example connected to recent events is it helps people become more cognitively flexible and less tribal. If your favorite burger joint starts serving contaminated food, it’s nice to be able to seamlessly switch to Indian or Chinese.
I think it will survive. The metric here is environmental enmeshment. If AI starts training and observing in real time, and can sit, existing ontologically "in time", in our environment, that would be the only reason I would see a challenge to the humanities.
AI regurgitates, it synthesizes, but it doesn't have lived experience; it just draws on what it's fed. That isn't human.
Much of the value in the humanities, in art, is owed to its provenance. Viewing it enables social reflection and growth, and engenders culture. That is simply absent in AI unless you want to probe the training data and the nuances of the model, but again that's a pretty circuitous/inefficient path to learning about humans, or growing as one.
It does, however, reveal some of the mechanics involved, and my hope is that it leads to deeper and more nuanced discourse in the humanities.
It changes the scope of the game, if you will. You can now only seriously do humanities work if you are OK with everything you say being copied and dispersed for free. Potentially without any attribution.
Software has been going through the same productive shift for many decades now, e.g., Free and Open Source Software. Simply because copying bytes is absurdly cheap. It's still around.
There is a fundamental value assigned to human produced work that stems strictly from the fact it’s not AI slop. It comes from empathy.
Why are physical paintings more valuable than digital art? Why is manmade art implicitly higher value than imagegen art? Why do we watch Magnus Carlsen when engines are leagues ahead of the top 10?
Because the human condition matters. We crave seeing the world through the eyes of others with different (or even similar) lived experiences, fantasizing about what we could have been, under different circumstances. Empathizing. AI fundamentally has experienced nothing and so empathizing is not possible. It is not even able to escape the constraints of the human imagination.
Consider a Turing test, though. Imagine you had two novels in front of you, you read both of them, and they were brilliant. You love them both. But you learn that one of them was written by a human, one of them by an AI. Is the AI-written novel actually inferior, because it was written by an AI, despite the fact that you loved the novel itself?
You might doubt that an AI can ever write a novel as great as the greatest of human writers. I have doubts as well. But I don't think it can be a priori inferior. If an AI ever produces a novel that would have been great if a human wrote it, then that will be a great novel.
I actually think humanities become more relevant than before in tech with AI. For example, good prompts for image generation remind me of authors setting the scene for their novels, so being trained to do this well is an advantage.
The humanities are even more useful when computers can manage technical problems.
Of course it will survive. But is it enough to pay off your mortgage, and is it as economically significant as, say, a surgeon or a quantitative analyst/trader?
I don't think so.
I think the supply-demand curve shifts. It will still be expensive in education/time, and status will only slowly degrade, but the number of scholars who can be supported at even moderate academic pay will fall dramatically.
I don't know any today that make what even a mediocre new surgeon or quant takes home.
No, the humanities won't survive.
The problem with the humanities is that the "state of the art" is only about saying something "new". For example, the author thinks that discussing Kantian theories of the sublime and “The Epic Split” ad (a highly meme-able 2013 Volvo ad starring Jean-Claude Van Damme) is "straight A" work.
It is irrelevant nonsense masquerading as intelligent discourse, much like most of this author's actual published work, and it is also something at which LLMs excel.
To be a "rockstar" humanities professor at Princeton, you have to make up something "cool" to fill the seats to keep the attention of 17-25 year-olds.
With LLMs that have encoded snapshots of the entire literary corpus that humanity has produced, those students can make up whatever connections they want and justify their worldview. No humanities courses required beyond maybe some introduction to vocabulary / prompting.
yes.
My answer is yes :)
But will AI survive us? Just look at how the Internet changed from the 80s to now. It is filled with ads popping up everywhere, making many activities useless.
AI will survive us. In fact, I am worried about the current trend and see feeding the mighty AI becoming a mandatory thing in the future. Unchecked monopoly has a way of mandating things that are net bad for everyone except the middleman. Previously, job searching was fine, but now 100% of job applications require your LinkedIn profile URL; if it's not provided, you get an auto-rejection.
People with decades of experience in the trenches who recently got laid off (business failure, corporate greed cutting costs, restructuring...) are now asked everywhere to submit a link to their GitHub (no one knows GitLab/Codeberg/SourceHut, etc.) full of portfolio projects! I talked to a few academic friends who are worried that their research work is now reproduced verbatim by two specific LLMs HN really loves!
Unless LLMs go the way of ads to survive and rely on SEO spam to retrain, a monopolistic capture will happen, mandating that all useful content be fed into common hubs where AI can happily ingest it but where, cumulatively, no human expert will be able to use it (we all know the abysmal state of info retrieval); and LLMs, as they become more popular, will become ever more unreachable for common folks without lots of riches. For the medium term, I see a Netflix/Amazon Prime Video play: LLMs, as they get more popular (the same way people mindlessly scroll yet lecture others about its harms), will raise prices, lock people out of the common good, and serve a specific beneficiary group (shareholders).
What you said about ads made me think that ai will probably be orders of magnitude better at marketing/propaganda. Feeling the urge to don some foil on my noggin.
AI is already killing the Internet, slop spotting has become as essential a survival skill as “dodge the pop-up”.
It was a survival skill beforehand as well. Once you consider the topic of "human slop," it becomes apparent that we've had millennia of human slop already, in both synchronous channels (e.g., body language between coworkers) and asynchronous channels (e.g., most Dutch Golden Age paintings).
Now imagine advertising or other narratives being invisibly multiplexed into content, without disclosure. That's the future of AI, if not the present.
Any work that is based on knowledge, analysis, cleverness, thinking, expression, art, strategy, prediction, insights etc will be profoundly effected. Humans just need to move out of these territories invaded by AI. To the real world where human physical abilities matter.
On the contrary, humans need to develop these territories further, because AI is only a follower and a student of human-created knowledge; that's why it excels at student-related tasks. In order to have good AI, we need good human science, which requires a lot of human effort for a variety of reasons.
The problem with the humanities is their tendency to be palace sciences, easily abused for political reasons. It's more of a feature than a bug, and it's unlikely to change from within them.
Except all of those skills are also useful for developing robotics. If those truly are "profoundly effected", robotics innovations won't be far behind.
Seems like a truly horrific world you're imagining. I hope you're wrong.
>effected
Affected
Please move out of the territory of grammar to make room for AI