I have no children, but I am part of a family group chat where we discuss these things.
Thus far, my 7- and 10-year-old nephews have not been introduced to LLMs. These kids already write code, and not introducing them to LLMs is somewhat analogous to the fact that they have not been given calculators for math.
At what age/point does teaching them about the availability of LLMs become a good thing[0]? They will find out eventually; is a parental introduction better than just a random introduction?
[0] The creators of Django, Redis, and Linux believe that LLMs are not useless, so let's agree to move beyond that for the sake of argument.
I have no children yet, but I would never consider introducing LLMs until my children reach a point of maturity where they can accurately detect sophisticated deception. Even after that, I would consider these constructs dubious in nature: they provide no measurable benefit that isn't offset exponentially by destructive acts in the form of omission or outright dangerous lies.
The age of reason arrives somewhere between 7 and 12 and varies by child; prior to it, children generally cannot biologically detect deception perceptually.
Given the observed behavior of LLMs, you have to think of them as the most pathological type of liar. You don't ever give that type access to your kids when you are actually fulfilling your parental role.
There are good comments from others here.
I'm also going to add some homeschool family perspective, as a father with several children. Hopefully this is helpful for other parents or soon-to-be parents.
Although there are basics, there truly is no one-size-fits-all, even within a single family. From the start, our 2nd had a drastically different personality than our 1st, for example. That doesn't necessarily mean you parent differently, but it's always a good idea to allow for an open perspective, especially when talking with other parents. Not only are people touchy about how they parent, there are simply many variables when it comes to being intentional about your family. It's okay to have different ways of doing it.
That said, as a traditionally-minded homeschooling household, I'll share what's worked for us.
We have only introduced technology in a limited, time-based, supervised capacity. The focus is on using it as an educational or research tool. Our children are not even really aware that social media or YouTube exists. So, for example, we have done small tech projects to understand how things work (an Arduino weather station, etc.), and they're also allowed to look online at pictures of dog breeds to their heart's content.
Having a background in linguistics and tech, with LLMs, I'm far more hesitant. Like with social media, I do wonder if we really understand yet how it might affect young minds. Of course, there are also the more Orwellian and political/ethical dynamics involved too.
Instead, we've focused heavily on reading and critical thinking skills, as well as them having a chance to have real conversations with adults. They regularly check out 50 books at a time from the library. Our children will likely graduate college at 16yo. They love learning and can have engaging conversations on difficult topics...with people of every age.
You might hear it in my voice, but it's hard not to be proud of this, especially in this day and age.
Anyway, two other tech questions simply to provoke thought...
- When did you first enroll your children in facial recognition profiling and tracking?
- When did you introduce firearms education to them?
I don't think this is much different from "when do you introduce the internet?"
Both for "when do you introduce in a supervised setup" and "when do you let them use in an unsupervised setup".
LLMs are new to us, but for them they're no newer than the rest of the internet. They can be harmful, but probably less so than social media.
My household did homeschooling for several years. My observation is that parents homeschool for either political or functional reasons. Examples of functional reasons include fighting illiteracy, disability concerns, raising test scores, and curriculum acceleration. Political reasons include social conditions, religious motivations, and just about anything to do with disagreement of content.
I suspect households motivated by functional concerns will be far less likely, if at all, to introduce any access to AI, while households motivated by political concerns will introduce access to AI at any age, but only if they agree with the content or bias of the AI.
That seems backwards to me.
I use LLMs for functional reasons to learn things, why shouldn't kids? Isn't this exactly a "disagreement of content" which the political group would care more about than the functional?
> I use LLMs for functional reasons to learn things, why shouldn't kids
You hopefully have a better sense of correctness, rigour, and fact checking capabilities than a child.
I don’t know what control education means.
LLMs are a tool like a calculator or Wikipedia. Kids will use them whether you like it or not. It’s important to guide and mentor them to use them productively.
That said, I think 6th-7th grade is generally where that starts happening. Maybe in the context of coding, something like CodeLlama is a good enhancement.
I have three sons in the 7-12 range, and I'm a professor at an undergraduate college that's done a lot on teaching with AI.
We've let our kids play with LLMs by having conversations in voice mode and generating images. The youngest one likes doing this, but it's a novelty, not something that he does all the time.
For academic work, we've had success using Perplexity (with parental guidance) for the older kids' projects that require Internet research. The ability to get an overview of a topic at a moderate level of complexity with links to other sources is beneficial. This isn't a substitute for doing in-depth research in the library or with actual peer-reviewed articles, but they're not yet at that level of depth.
At the college level, the most important lesson we're trying to teach is using LLMs as a source of ideas, suggestions, and feedback to advance your work, rather than as a tool for generating finished work. I often phrase this as "collaborating vs. delegating". I want students to think critically about their ideas and repeatedly iterate with LLMs in the loop to help solve the creative problems they encounter - but without outsourcing their own vision for the project.
My colleagues are seeing good results across multiple disciplines using LLMs for topic development and pre-writing, so I'd encourage leaning into that role, as opposed to jumping straight into text generation.
We've also learned that students benefit from a clear process with specific example prompts. Using AI well requires developing critical thinking and self-reflective skills, so there's a process of maturing that comes with time and exposure.
If you're interested, here's an example research assignment I've used in my own classes with some specific prompts and suggestions for different phases of the writing process:
Be wary of introducing the usage of LLMs at an age where instead the development of critical reasoning should be encouraged. LLMs aren't necessarily inherently bad, but they are SEDUCTIVE and over time it may become very easy to immediately reach for an LLM the moment you have a problem instead of taking some time to quietly ponder on your own.
Reasoning and problem solving is a CRAFT, and if it's not practiced it will atrophy.
Let's not fall into the trap of "LLMs are just a tool like a calculator". An LLM is like a calculator where you can just say, "Here's a math problem please solve it for me."
Well, the years before the world requires correct answers of them may be their most prolific years with the technology, which argues for introducing them to the LLM early.
People said the same thing to me about computers when growing up, so this has never been a particularly convincing argument to me.
There’s an obvious space here for startups to create filtered educational agents. You want the kids to use the LLM, but you want them to use a neutered LLM that won’t give a freebie solution.
Some simple system prompting on a local LLM can get you there on your own, believe it or not. A custom GPT on ChatGPT is another option. This will take active involvement from you, because it's pretty much figure-it-out-as-you-go for all of us.
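To make the system-prompting idea concrete, here's a minimal sketch of how you might pin a "tutor, not answer machine" instruction ahead of a child's question before sending it to any chat-completions-style client (local or hosted). The prompt wording and function names here are my own illustration, not a tested recipe:

```python
# A minimal sketch of the "neutered tutor" approach: a fixed system prompt
# that forbids freebie answers, assembled ahead of each user message.
# The prompt text and helper name are illustrative, not a vetted curriculum.

TUTOR_SYSTEM_PROMPT = (
    "You are a patient tutor for a child. Never give the final answer "
    "or finished code. Ask one guiding question at a time, offer hints, "
    "and praise correct reasoning steps."
)

def build_tutor_messages(child_question, history=None):
    """Assemble a chat payload with the tutoring rules pinned first."""
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    messages.extend(history or [])  # prior turns, if continuing a session
    messages.append({"role": "user", "content": child_question})
    return messages

# Example payload; you'd hand this to whatever chat client you run locally,
# e.g. an OpenAI-compatible endpoint:
#   client.chat.completions.create(model="your-local-model", messages=msgs)
msgs = build_tutor_messages("What's 12 x 13? Just tell me the answer.")
```

The point is only that the guardrail lives in a message the child never types; how well the model honors it varies, so spot-checking transcripts is still part of the "active involvement" mentioned above.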
Augmenting a parent so their child has a world class tutor is incredible stuff.
Many would disagree, solely because of the many knock-on effects in other areas where such companies have cut corners with egregious failures.
> Augmenting a parent so their child has a world class tutor is incredible stuff.
So is replacing the underlying meaning of the label "tutor" with a cult leader. It is incredible, just not in the way you intended. How would you know the difference if this were done outside your awareness?
Machines are programmed by people you never meet and, in many cases, can't hold to account. Many seem to think LLMs are magical replacements for people; they are not.
It seems more like you want to further offload parenting to a blind machine you can't control without recognition of the consequences, while taking away the economic benefit of tutoring as a job at the same time.
The internet raised a lot of kids in the late 80s/90s, many were very intelligent, but became maladjusted from exposure to things on places like 4chan. That maladjustment led to acts that landed some of them in prison, which has become the modern form of slavery.
I think you need to more carefully reflect on both the negatives and positives.
> The internet raised a lot of kids in the late 80s/90s
The internet wasn’t really available to most of the public until after the advent of the web in 1994, it certainly wasn’t raising a lot of kids (even in the very loose sense of having some influence through being a medium of direct interaction for them while they were growing up) in the 1980s.
> The internet raised a lot of kids in the late 80s/90s, many were very intelligent, but became maladjusted from exposure to things on places like 4chan.
The realist in me is focusing in on the first half of that sentence. The kids are going to get raised by something other than the parents. This will happen globally because no one wants to accept parenting globally has been and will continue to be inadequate. I’d rather many children in say, I don’t know, Afghanistan, get some guidance from an LLM.
This is a recipe for disaster wherever it might occur, no good can come from it.
https://www.washingtonpost.com/technology/2024/12/10/charact...
LLMs are not people, only people know how to give guidance.
LLMs are simulacra of chaos and deceit, and kids, especially prior to the age of reason, are the least biologically capable of discerning omission and deceit.
In my particular case, I am not asking about scalable companies. Instead, I am asking about our children's education. LLMs for coding are a bit of a cheat code, and cheating is not necessarily ideal in this case.
To be entirely honest, I would love to read pg's take on this. He seems like the type of guy who might have expended some thought on this topic, as difficult as it is.
No matter who writes this piece, we need it to move forward.
I’m curious to know your opinions on teaching coding to kids. Is it a good idea given claims around AI taking over programming jobs in the near future? How many parents here would like their children to learn coding?
Not everybody should code -- Mike Rowe's "Don't Follow Your Passion" [0] is a great example explaining why.
That said, I have a son and he's learning coding (I write code for a living). I also encourage him to learn plumbing, woodworking, mechanical repairs, electronics, cooking, cleaning, painting, caring for animals, etc blah blah blah.
He knows what gen AI is and how to access it; he knows it's a tool, a novelty that he uses to goof on his buddies with weird pictures and all that, and I don't believe I'm necessarily teaching him how to shoe a horse while the gasoline buggy is right around the corner (but I guess you never know what the future will bring).
That said, problem-solving, resilience, patience, belief-in-self, consistency, discipline, all of these borderline-woo-woo characteristics of a successful person, these are the things that matter. Your tools are just your tools. 10 seconds in Midjourney or 10 hours in Photoshop will result in the same picture you're trying to clown your pal with. The goal is to get a laugh, not to get a technical critique.
Be careful and judicious with your choice of tools, techniques and styles; be unique and original in your creations. At the same time, don't be afraid of new tech or mediums or theories or ideas. You never know what you might create!
i dont, i let them fall prey to it like an insect approaching a venus flytrap.
kidding aside, i think its good to explain it in layman's terms as they grow. eventually they will reach college and have their papers rejected by an anti-plagiarism ai and it will piss em off lol.
I have a kind of surprising experience with regard to this topic… My older son was already a teenager when LLMs went mainstream. He has struggled with an awful combination of Tourette Disorder, ADHD and OCD, which has meant he's been pretty disconnected from school entirely since he was very young. However, when he discovered ChatGPT he started using it to write formal complaints about all the teachers he didn't like and email them to the school principal.
The upside is that before ChatGPT he showed no interest in written communication at all, and his school was genuinely concerned. Even an educational psychologist couldn't properly assess whether he had an actual learning disability over and above his neurological conditions, because instead of taking the psychometric tests he ate the papers. Just to be clear, he's not intellectually disabled in any way; he's just so defiant that he blatantly refused to engage in any psychometric tests, and he stated that his IQ was his private business.
Instead of being annoyed at all his letter writing, the school has taken his letters as an indication that he has some ability to read, write and comprehend, as all the letters he has written have been adapted by him to suit certain nuanced circumstances that would require either detailed prompts or heavy editing. I have to say the school's response to the letters has been refreshing. They wanted to see what else he might use AI to come up with, or how else he might use AI to engage in learning, so they have been allowing him to use AI to complete non-assessed school work. Everyone is so happy, because he started doing school work at all, and now he's also started doing some school work without using AI. As a parent, I think if that was the aid he needed to build his self-esteem enough to feel able to engage in learning, then I'm all for it.
Why would children need LLMs? At 7!
Maybe they should learn to read, write, do basic knowledge research first before outsourcing everything to a machine.
And yes, to go with your analogy of "not introducing them to LLMs is somewhat analogous to the fact that they have not been given calculators for math". They shouldn't have calculators before they can multiply 5x7 themselves.
We can't outsource very basic knowledge to machines if we want to stay lord over the machines.
Their late teens is still early enough for them to be lazy and have LLMs do the work for them.