What features could make it more ethical?
It's not unethical if it's an assistant that enables the user to live with greater comfort and ease. If it helps with recall and memory, with appointment-setting, with planning, or simply if it serves as a companion which suggests books and movies -- it's really the opposite of unethical.
What it shouldn't do is give specific advice on medical matters. Even if it's no worse than a doctor, or even better than a doctor, 9 times out of 10, that one exception could cause serious trouble. Unlike regular ChatGPT users, people with dementia are already quite sick, are (usually) already on medication, and require more thoughtful care.
More importantly, it shouldn't be capable of mimicking a living or dead person. You'd probably need to hardwire it so that it has a name and "personality" of its own, which can't be changed by the user.
Does it introduce itself? Like, "Good morning, I am an AI assistant here to help you with memory problems associated with dementia. Some of the things you perceive may not be present for others. You are otherwise healthy, safe, and loved. I have been provided with some memories for you and I will remind you of them when you need them. If this feels like your first perception of this message, please let us know and we can bring you up to date with your medical condition and what to expect."
Then it recognizes visitors' faces and maintains backstories on them, senses how far from base reality the patient is responding, and keeps the patient on track, managing their condition with the least possible suffering.
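To make that concrete, here's a minimal sketch of what I mean, not a real product; the names (MemoryAssistant, Visitor, the "Aria" persona, the face_id keys) are all invented for illustration, and a real system would need an on-device face recognizer and clinical oversight.

```python
# Hypothetical sketch: fixed-persona assistant with visitor backstories.
from dataclasses import dataclass, field


@dataclass
class Visitor:
    name: str
    relationship: str   # e.g. "daughter", "neighbour"
    backstory: str      # short reminder read aloud when they arrive


@dataclass
class MemoryAssistant:
    # Fixed identity, per the point above that the persona should be
    # hardwired and never mimic a real (living or dead) person.
    assistant_name: str = "Aria"
    visitors: dict[str, Visitor] = field(default_factory=dict)

    def introduce(self) -> str:
        return (
            f"Good morning, I am {self.assistant_name}, an AI assistant here "
            "to help you with memory problems associated with dementia. "
            "You are otherwise healthy, safe, and loved."
        )

    def greet_visitor(self, face_id: str) -> str:
        # face_id would come from a face recognizer; here it's just a key.
        v = self.visitors.get(face_id)
        if v is None:
            return "Someone is at the door. I don't recognize them yet."
        return f"This is {v.name}, your {v.relationship}. {v.backstory}"


if __name__ == "__main__":
    assistant = MemoryAssistant()
    assistant.visitors["face_0"] = Visitor(
        name="Lucy", relationship="daughter",
        backstory="She visited last Sunday and brought the photo album.",
    )
    print(assistant.introduce())
    print(assistant.greet_visitor("face_0"))
```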
It seems horrific, but also humane in some cases.
Reminds me of the movie "50 First Dates"...
I often wondered how to handle such a situation without traumatizing the affected person every single morning. Truly one of the few rom-coms that left a lasting impression (on me).
It all depends on what its goal is. If it's a caretaker that aids with their memory, then I say yes. But if it's a companion type that helps with the emotional side, then I say not yet, or no. LLMs are too prone to hallucinating randomly, and there's really no there there. It's an empty bunch of words. It has no memory or feelings, yet the person who has dementia is a full human being who falls in love and has feelings for other beings. Helping someone fall in love with, or have feelings for, words is not ethical in my opinion.
What if deceiving them is what relieves the most suffering?
You'd have to prove it first, I would say.
Why do you suspect it is not ethical?
>What features could make it more ethical?
You should think about it the other way around: start with an ethical base, and only add features that keep it ethical.
So you want to use a technology that's known for hallucinating and give it as a companion to people with reduced cognitive abilities?
What could possibly go wrong!
As a follow-up, you could branch out into hardware and give skateboards to people with hemophilia, or paragliders to vertigo sufferers.
What would make it unethical?
Wouldn’t addressing those concerns make it ethical?
Anyway, things are either ethical or they are not. There is no "more ethical." (But there are things that are outside ethics; what a red-shouldered hawk does, for example.)
---
One approach would be to address its use in a living will while the future patient can make an informed decision for themselves.
Good luck.
> things are either ethical or they are not
On the contrary, black and white thinking is considered a cognitive distortion. Real life is more nuanced than that.
> black and white thinking is considered a cognitive distortion
Irony, of course, has degrees.
> Anyway, things are either ethical or they are not.
Where do you get that from?
> What would make it unethical?
One example: you build an AI so that you don't have to spend time with the person while not feeling bad about it.
How is that different from hiring a caregiver?
Easy to maintain for a decade or more of use. No dependency on remote servers subject to the whims of another company.