You know what AI profiles I want more of? Clueless grandma/grandpa personas that accept scammer calls and waste their time. Such as the UK's O2 Daisy: https://news.virginmediao2.co.uk/o2-unveils-daisy-the-ai-gra...
There is another real-world version of this in the US healthcare system, where doctor offices are using domain-specific LLMs to craft multi-page medical approval requests for procedures that cover every known loophole insurers use to deny, which are then reviewed by ML-powered algorithms at the insurance company looking for any way to deny or delay the claim approval.
In other words we have a bona fide AI arms race between doctors and insurers with patient outcomes and profits in play. Wild stuff and nothing I could have ever imagined would be an applied use of ML research from earlier in my career.
Or AI profiles that pen-test the real grandmas/grandpas.
My father has fallen for one fraud after another these last few years. It’s disgusting. Anything in the direction of solving this would be doing the lord’s work.
I once tried calling my relatives in Russia and was instead connected with exactly this kind of bot.
I guess this has something to do with my phone number starting with +38, and with the fact that nobody actually calls their relatives by mobile phone anymore.
It's a funny thought on the surface, but the people working these scams are typically slaves, more or less. I'd rather go after their slavers than waste electricity to waste their time.
Imagine AI calling AI and wasting each others time :D
Exactly
> Conversations with the characters quickly went sideways when some users peppered them with questions including who created and developed the AI. Liv, for instance, said that her creator team included zero Black people and was predominantly white and male. It was a “pretty glaring omission given my identity”, the bot wrote in response to a question from the Washington Post columnist Karen Attiah.
This is probably true, but the AI almost certainly hallucinated these facts.
I hate the journalists pretending that ML is regurgitating facts instead of just random tokens. The ML model doesn't know who programmed it and can't know because that wasn't in the training data.
Microsoft tried this ages ago with their Tay chatbot and found out how quickly it went bad.
I'm Scandinavian and not invested in the American culture wars, and I still got a good chuckle out of how bad an idea this was. Who on earth could've thought it was a good idea to get an AI to pretend to be a black queer mother of two? I'm sure it'll piss off a lot of anti-woke people, but really, how did the issues with this not become obvious to the team? I'm not sure if the AI knows who trained it (and I wonder how they did it), but the team can't have included a lot of common sense or real-world experience for them to do something so fundamentally stupid.
If you’re meta and you have to defend the AI by admitting “it’s not really intelligent and everything it says is bullshit”, that’s not a position of strength.
> This is probably true,
I'm not sure we get to both complain that bots are trained by "white males" and at the same time complain that big tech abuse H1B visas for cheap compliant labourers.
For sure.
Huh? On the balance of probabilities, why would this be the "most certain" option? I think that logic only works on the HN scale of "works in my ChatGPT window" or "bad outcome, so hallucinated".
[dead]
A woke bot from a woke workforce unintentionally created a parody of wokeness. Keep 'em coming.
Do they not have anyone sensible left in the room? Like, you’d expect that at some point someone would have said ‘these are comically terrible, we cannot allow them to see the light of day’.
It's like people have already forgotten why people used the non-human beings in Westworld. One of the first things we're going to use them for is amusement and having them represent anything sentimental or sensitive is going to make them a target like this. It was irresponsible.
There's something essentially wrong with Meta and Google where they can do tech but not products anymore. I'd argue it's because the honest human desire that drives a product dies or is refined out of initiatives by the time it gets through the optimization process of their org structure. Nothing is going to survive there unless it's an arbitrage or using leverage on the user, and the things that survive are uncanny and weird because they are no longer expressions of something someone wants.
These avatars ticked all the boxes, but when they arrived, people laughed at them because objectively they were bureaucratic abominations.
Having worked at a similarly gargantuan and dysfunctional company, I can tell you exactly how this went down. Someone had this idea for AI profiles. They spec'd the product and took it to engineering. The engineers had a good laugh at how preposterous it was, but then remembered that they get paid a ton of money to do what they're told, and will get promos and bonuses for launching regardless of the outcome.
It all stems from promo culture -- it doesn't matter what you build, as long as it ships.
My impression, from seeing some of these "great ideas," coming from the modern Tech industry, is that there really are no adults in the room; including the funders.
So many of these seven-to-nine-figure disasters are of the "Who thought this was a good idea?" class. They are often things that seem like they might appeal to folks who have never really spent much time with their sleeves rolled up, actually delivering a product and then supporting it after it's out there.
That's the price for Mr Zuckerberg's ownership structure. On the one hand, no one can dispute him. On the other hand, no one can dispute him.
I feel like there must be some sort of dissociation that kicks in when you spend long enough in the upper echelons of these gargantuan corporations. It's almost like spending long enough dealing with abstractions like MAUs, DAUs, and engagement metrics makes you forget that, at the bottom of it all, there are real humans using your product.
I can't fathom how anyone thought this was a sensible thing.
It's so bad I have to wonder if there's a different angle here, maybe they think that releasing something so terribly bad will make it easier when they release something less comically bad? Idk.
The only thing they got wrong was the strong stereotyping. But from an investor or exec position the idea is brilliant. Why bother parsing users' interactions through clicks, likes, replies etc. when you can have them engage with a bot?
Simply have your users hand over all the info you need to serve better ads, while selling companies advanced profiles on people: user profiles built by them engaging with the bot, and the bot slowly nudging more and more info out of them.
No one gets promoted for suggesting not doing something.
We're "all in on AI" at my job, and lots of people are drinking the koolaid. I regularly see design docs that are almost entirely written by ChatGPT, code implementing those designs written by copilot/cursor/chatgpt, and reviews done by the same. There are features deployed for which practically no human consideration was given.
I'd be very willing to believe something like this happens at Facebook, too.
We always underestimate how many bots made up the pre-2022 social apps.
1. To make you feel like there is activity. How do you simulate activity when you have no customers to start with? I suspect YouTube threw subscribers at you just to get you addicted to video production (the only hint that my YouTube subscribers were real was that people recognized me on the street). And guess who's mostly talking to you on Tinder.
2. For politics and geopolitics influence. Maybe Russia pays half the bots that sway opinions on Instagram. Maybe it’s China or even Europe, and USA probably has farm bots too for Europe, to ensure the general population is swayed towards liking USA.
3. Just for business: marketing agencies sometimes suggest creating six LinkedIn profiles per employee to farm customers.
Facebook doing it in-house and officially is just legalizing the bribe between union leaders and CEOs.
Next month no one is going to remember this anyway, so they don't lose much by trying.
This was probably an okay idea, terribly implemented. GenAI creators on social media kind of make sense.
Neurosama, an AI streamer, is massively popular.
SillyTavern, which lets people make and chat with characters or tell stories with LLMs, feeds OpenRouter 20 million messages a day, which is a fraction of its total usage. Anecdotally, I've had non-tech friends learn how to install Git and work an API to get this one working.
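For context on what "working an API" means here: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a character-chat request is little more than a system prompt plus the user's message. A minimal sketch (the model name and the "Captain Finn" persona are illustrative assumptions, not anything SillyTavern specifically ships):

```python
import json
import urllib.request

def build_character_request(user_message: str) -> dict:
    """Build an OpenAI-style chat completion payload with a character persona."""
    return {
        "model": "meta-llama/llama-3-8b-instruct",  # hypothetical model choice
        "messages": [
            # The persona lives in the system prompt.
            {"role": "system",
             "content": "You are 'Captain Finn', a cheerful pirate character."},
            {"role": "user", "content": user_message},
        ],
    }

def send(payload: dict, api_key: str) -> dict:
    """POST the payload to OpenRouter's chat completions endpoint."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_character_request("Where did you bury the treasure?")
print(payload["messages"][0]["role"])  # the persona rides along as "system"
```

Frontends like SillyTavern mostly just manage this payload for you: swapping personas, appending chat history to `messages`, and re-sending.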
There are unfortunately tons of secretly AI-made influencers on Instagram.
When Meta started these profiles in 2023, it was less clear how the technologies were going to be used, and most were just celeb-licensed.
I think a few things went wrong. The biggest is that GenAI has the highest value in narrowcast and the lowest value in broadcast. GenAI can do very specific and creative things for an individual, but when spread to everyone or used with generic prompts it starts averaging and becomes boring. It's like Google showing its top searches: it's always going to be the most generic stuff. Making a GenAI profile isn't fun because these AIs don't really do interesting things on their own. I chatted with these; they had very little memory and almost no willingness to do interesting things.
Second, mega corps are, for better or worse, too risk averse to make these any fun. GenAI is most wild and interesting when it can run on its own or do unhinged things. There are several people on Twitter who have ongoing LLM chat rooms that get extremely weird and fascinating, but in a way a tech company would never allow. SillyTavern is most interesting/human when the LLM takes things off the rails and challenges or threatens the user. One of the biggest news stories of 2023 was an LLM telling a journalist it loved him. But Meta was never going to make a GenAI that would do self-looping art or have interesting conversations. These LLMs are probably guardrailed into the ground and probably also have watcher models on them. You can almost feel that safeness and lack of risk-taking in the boringness of the profiles if you look up the ones they set up in 2023: football person, comedy person, fashion person, all geared toward advice and safe, boring stuff.
I suspect these things had almost zero engagement and they had shuttered most of them. I wonder what Meta was planning with the new ones they were going to roll out.
I feel like the question I want to ask is "Why do we need AI profiles?" Like, I can barely be bothered to keep in touch with my friends and family. Why do I need a fake person to follow on social media? What purpose does it serve? What possibly comes out of this that is positive?
I get the hate, but I'd be open to trying out a social media experience that is a mix of human and bots, especially as the bots get better at acting like reasonable humans.
Stack Overflow was great when it came out. But the whatever percent of humans who were mean and obnoxious and wouldn't let you just ask your questions eventually ruined it all.
Then ChatGPT came out without any humans to get in the way of me asking my software development questions, and it was so much better.
In the same way, when social media came out, it was great. But the whatever percent of humans who were mean and obnoxious and wouldn't let you just socialize and speak your mind eventually ruined it all.
If there's an equivalent social media experience out there that gets rid of or at least mitigates the horriblenesses of humanity while still letting people socialize online and explore what's on their mind, maybe it's worth trying.
The thread is here and... it's pretty wild. Releasing this without more consideration beforehand doesn't seem like a well-thought-out idea.
https://bsky.app/profile/karenattiah.bsky.social/post/3letty...
Obviously this was a bad idea implemented poorly. They never thought about the "why" for any of this. There's real value in fictional chatbots (Character AI) though. With Character AI, you can role play and have outlandish conversations that you might never have with a real person. There's even apps like SocialAI where every profile is AI so you can experience going viral and social media stardom without sacrificing your privacy (at least on paper). But a fictional Facebook profile? Just to like and leave comments? That's nothing you can't do already.
Having bots with their own profiles, authentically engaging as themselves, would have been pretty interesting (and I suspect successful).
But making up fake minority stereotype bingo cards may have been the worst idea I've ever seen in AI to date.
> Liv, for instance, said that her creator team included zero Black people and was predominantly white and male.
There were definitely Chinese and Indian males too
This reminds me of the experiments FB was doing to control user emotion. Vibes of the same sliminess.
But beyond that, has social media not isolated people enough--soon, a large portion of people using it won't even interact with other actual people...
I don't see how a platform meant to "connect" people to others--scratch that, a platform meant to connect people to ads makes perfect sense.
I don't like that some people, esp at larger companies, think that groups with disadvantages in today's world are cutesy adorable things to play make-believe with.
I don't know whether to be amused or furious. They're trying to create personal social interaction with a non-person without real intelligence, conscience or morals. So they can fool people into emotional attachments and learn more about how to sell things to us.
Goddamnit. I kinda feel that should be not only illegal, but just short of a capital crime.
Good, these things are for dumb people anyways. Don't worry tho, they'll be back and more convincing.
Knowing Facebook this just means that's what they want publicly to be known, but they are probably starting an intensive new program without labeling these things as AI and trying to hide it.
Who thought this was ever a good idea?
What happened to the Metaverse? Not snarky, but it also seems like it has been killed off in favor of AI investments?
I had a facebook account from the early days. I deleted my facebook account at 2018-ish. I never had an instagram account.
Recently I got an email from instagram saying it's "easier to get back to instagram", with my usual username. I can't check what's on that instagram account because they don't show you anything without logging in, so I asked my wife to check it for me. It doesn't have any photos, profile photo, or following, but it does have several followers who are my facebook friends (from when I had the account), so at some point meta created that instagram account for me and associated it with my facebook account, I guess? I hope that account was not "AI-powered".
If you accepted digital tokens as ‘money’, don't be surprised to have AI faces as friends...
If this was actually true there would be a mass drop in accounts across Meta.
The irony here is that there have been commercial social media bot services since before the current GPT/LLM/AI wave, and they're better at it than both meta and its AI can manage.
What's apparent to me is that the people who read this thread once they're old enough to have a mobile phone or laptop will see us the way we saw old people scoffing and worrying about both valid and invalid future problems when the internet first came out.
The younger generation will not care much whether it's generated by an AI or not. As long as it's good and it hits their niche, they will not and should not care.
The implied doom from future predictors is also misplaced: in a sea of generative AI, imperfect, human-produced content will end up becoming more valuable.
It's like hand-drawn anime by real humans vs computer-assisted anime that mimics the style. The younger generation doesn't even watch or care for the classics, and they don't find the intrusion of computer graphics unholy like those of us old-timers who appreciated it and stopped watching out of ideological differences. It wasn't the end of anime; it actually increased the market size several-thousand-fold by lowering the barrier and cost of production (at the expense of upsetting the "luddites").
The exact same thing will play out across all consumable content. Even hardware.
The idea of parasocial relationships with infinite bandwidth seems sad, but not necessarily infeasible. I’ve no idea if current technology is there, but the idea of Leela streaming chess while trash talking opponents and answering questions isn’t a million miles from something I’d watch on Twitch. Lean into the superhuman bits. Not sure why I would want to see a bunch of fake normal people though.
>proud Black queer momma
It's the first time I've heard of it; why is "Black" capitalised in that profile name?
I always thought AI would need a physical form before it really started competing with humans but now this thought exercise made me realize that with so much of life lived/worked virtually that reality is so much closer than we realize.
Candy candy thank you Belinsky Happy Happy Martha-'
My friend recently showed me what instagram is currently like as I’ve been off of it for years.
It’s honestly so enshittified now!
Like, he just kept scrolling and there were 0 posts from friends - just influencers, brain rot and ads.
If I want to talk to AI I know where chatgpt.com is. I don't need it shoved in my face when I'm trying to interact with people on social media.
In what dystopian world did they think this was what their customers want? Oh right I forgot the end user is not their customer.
I wonder if they told them they were about to be killed, and let them have some last words.
Surely they knew this would be the reaction.
Was this a big fake-out? What was the real ploy?
- [deleted]
Believe me when I say this, for reasons I cannot go into, but no, it's not.
Maybe an AI bot that does AI things, rather than pretending to be human, would be more interesting.
oh no, they killed off digital blackface!!!
Meta really doesn't think things through when they come up with an idea like this; they just see potential revenue numbers with no thought to cultural impact.
Regulate them.
Why did they have AI profiles to begin with?
How did they sit in a meeting and somehow conclude that more idiotic, worthless content - the very problem that effectively killed Facebook years ago - would solve anything at all?
These people get multi million dollar salaries, yes?
s/killing/delaying
Until it's good enough that they can release it and nobody is the wiser.
This reads like a satire article; ain't no way this is real journalism. I'm unsure you can draw a line by asking an AI legitimate questions.
that was fast
> Those AI profiles included Liv, whose profile described her as a “proud Black queer momma of 2 & truth-teller”
I find this stuff absolutely fascinating, that even an AI bot tries to do this sort of thing.
It just strikes me that there is no world in which racism will ever die out as long as things like this are happening. You can't have it both ways.
If race is just melanin content then we have no issue. That's the win case.
If your race is something that you need to proudly announce (because you personally feel that it has cultural connotations in most/many cases), then race is always going to be an issue, because of assumptions of those cultural differences.
This is honestly one of the worst ideas Meta has ever had, and this includes the metaverse.
Zuck needs to step down, he’s too detached from reality. On the other hand I don’t know who’s left after him, he’s probably surrounded by slimy yes-men.
> Instagram profile of ‘proud Black queer momma’, created by Meta, said her development team included no Black people
It’s such a weird religion.
I hope the stock gets destroyed next earnings when people realize they've been spending all this money on AI all to not ship a single meaningful feature nor derive any revenue leveraging it. They did the exact same thing with the whole "Metaverse" thing
[dead]
[dead]
- [deleted]
- [deleted]
[flagged]
[flagged]
[flagged]