You know what AI profiles I want more of? Clueless grandma/grandpa personas that accept scammer calls and waste the scammers' time. Such as UK's O2 Daisy: https://news.virginmediao2.co.uk/o2-unveils-daisy-the-ai-gra...
There is another real-world version of this in the US healthcare system: doctors' offices are using domain-specific LLMs to craft multi-page medical approval requests for procedures, covering every known loophole insurers use to deny claims. These are then reviewed by ML-powered algorithms at the insurance company looking for any way to deny or delay the claim approval.
In other words we have a bona fide AI arms race between doctors and insurers with patient outcomes and profits in play. Wild stuff and nothing I could have ever imagined would be an applied use of ML research from earlier in my career.
Interesting. My next-door neighbor ten years ago was a lawyer a couple of years out of law school. He discovered that he could pore over hundreds of medical charts a day and find cases where the doctor had underbilled the insurance company. He would then sue the insurance company, settle, and split the proceeds with the doctor. More or less he was mining the charts.
He would sometimes pull up next door with a half dozen tote boxes overflowing with medical records. He would say "hey, dataviz1000, can you help me get these into the house?" He once asked me if I wanted a new job helping him go through all the charts. I don't get involved in illegal activities, and I was earning more elsewhere without breaking the law. He did hire a young woman who had graduated law school and was still working on passing the bar. They have since married and started a family.
Yes, HIPAA laws got broken! Yes, this guy made tens of millions in a few short years.
There are no good guys in this story.
It would probably make a good startup: use an LLM and bring the process into compliance with HIPAA. Insurance companies have probably been underbilled by several billion dollars.
You have garbled that story. When a provider underbills an insurer, that is not grounds for a lawsuit. At most the provider can submit a revised claim if it's still within the time limit.
And it's not necessarily a HIPAA violation to outsource medical billing and chart review as long as there is a proper BAA in place, and everyone follows the Security/Privacy Rules. Many small provider organizations pay outside services to ensure they bill at the highest allowable level.
Are you sure it was illegal?
HIPAA carves out this exception for using your health records:
“To pay doctors and hospitals for your health care and to help run their businesses ”
With HIPAA you have to track and store information on every person who touches or reads a medical chart. The issue was more to do with random people reading medical charts.
It isn't difficult to bring the process into compliance. I offered to make an app, which would have been easy because there was a predefined workflow that could be diagrammed on a sequence chart in about 10 steps. There were a couple of interactions between the lawyer and the doctor, then a step where the insurance company is notified, then a lawsuit filed if not paid. At one point, I was researching how to store data in the cloud in HIPAA compliance; it was about 2 years later when AWS provided HIPAA-compliant EC2 instances. I offered to build the app for $10,000. Having random people pore over private medical charts, with undocumented, haphazard communication between the lawyers, insurance company, and doctors over email and text messages, was a mess.
This almost certainly falls under the Business Associate provisions of HIPAA and is totally fine.
https://www.hhs.gov/hipaa/for-professionals/privacy/guidance...
The lawyer looking over the records was probably fine. Him paying his neighbor to help him look through them is more questionable.
> The lawyer looking over the records was probably fine. Him paying his neighbor to help him look through them is more questionable.
I don't think so. The "paying" part is important - the neighbour becomes an employee for the duration of the work, which is fine, as then there's a contract between the employer and employee which includes, even if only implicitly, that the employer's data is not to be exfiltrated.
If he were simply sharing it with his neighbour for shits and giggles that would be a different story.
If there is anything true in this article "What Are The Requirements For Storing Physical HIPAA Documents"[0], laws were broken. But, I'm not a lawyer, what do I know?
[0] https://www.medicaltranscriptionservicecompany.com/blog/what...
There was one case where HHS levied a fine on somebody for leaving a stack of boxes on the street. If they are under lock and key, it isn't an issue.
And yes, I think the lawyer does know more than you.
Yup: “An attorney whose legal services to a health plan involve access to protected health information.”
So many parts of this story make no sense.
Honestly, no piece of this makes any sense, from thinking this is somehow illegal to the lawyer jumping to sue because a doctor underbilled (judges would tire of this very quickly; court isn't an automated process you can use to threaten people after you've made a mistake).
I checked with a doctor and they said the same, and they couldn't puzzle out a benign misunderstanding that fit - they pointed out that even if you meant the lawyer sued when the insurance company refused to pay, the economics of the split would be all fucky, because now the lawyer does have to go to court; no automated easy money, much less millions.
There was a discrepancy in medical coding. The lawyer was looking for something very specific in the medical charts.
I searched for "how long does a doctor have to bill you in florida" and a top result was this gem: "A doctor in Fort Lauderdale I saw in 2020 contacted me to tell me that there was a 'billing error' 3 years ago that they now want paid. What can I do?" That sounds about right.
I don't know the specific details about what the lawyer was doing.
What really grinds my gears is the CO2 emissions and power-grid load that's being used for this stupid arms race.
I realize that there's no magic solution to perverse systems like this, but it really bothers me nonetheless.
Inference uses hardly any compute. You should be complaining about computer games for real compute use.
I'm pretty sure they meant emissions/load from inference and/or training multiplied across all the current LLM users on Earth...
Yes, but you've got to balance that against the equivalent human processing time. Meatbags are notoriously carbon-intensive to feed.
Humans use tiny amounts of energy compared to even a smartphone. And they consume the vast majority of that energy regardless of whether they're lounging in bed or poring over medical records.
A human burns, say, 2000 kcal/day. That's about 2.3 kWh per day.
An iPhone 16 Pro battery appears to hold about 18 watt-hours. So a full charge is approximately 100x less than what a human uses in a day.
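The arithmetic as a quick sketch (assuming the 2000 kcal diet and the ~18 Wh battery figure above):

    # Back-of-the-envelope: daily human metabolism vs. one full phone charge
    KCAL_TO_WH = 1.163  # 1 kcal = 4184 J ~ 1.163 Wh

    human_wh_per_day = 2000 * KCAL_TO_WH  # ~2326 Wh/day
    phone_charge_wh = 18                  # approximate iPhone 16 Pro capacity

    print(round(human_wh_per_day / phone_charge_wh))  # -> 129, i.e. roughly 100x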
Oops, you're right, I misremembered some basic facts here...
I remembered there was some analogy about a person doing some work walking, with an average diet, versus using an internal combustion engine. In terms of carbon emissions.
The engine came up more favorably.
Turns out, eating any meat and the transportation behind industrial agriculture are really carbon-heavy.
This sounds like a false equivalency to me... why are you comparing humans and their support networks... but the engine doesn't also get the same comparison? What about the factories that make them for example? All the gas station infrastructure etc.?
Here's one of the studies from the period: https://www.nature.com/articles/s41598-020-66170-y
What about crypto?
It will take me some time to dig it up with all the noise around AI, but this reminds me of a paper published around 2018 or so that explored the possibility of two such AIs forming an accidental trust by optimizing around each other. For example, if the denying AI used the frequency of denied claims as a heuristic for success, and the claim-drafting AI used the claim amount for the same, then the two bots might unknowingly strike a deal where the claim-drafting AI lets smaller claims get denied frequently to increase the odds that larger claims go through.
Note: not saying these metrics are what would be used, just giving examples of antithetical heuristics that could play together.
I feel that this sort of autonomous agent co-optimization may happen more often over time as humans step farther away from the loop, and lead to some pretty weird outcomes with nobody around to go "wait what the f---- are we doing?"
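A toy sketch of that dynamic (the metrics, claim amounts, and threshold here are invented for illustration, per the note above):

    # Two antithetical heuristics settling into a tacit "deal":
    # the denier maximizes denial count, the drafter maximizes approved dollars.
    import random

    random.seed(0)
    claims = [random.choice([200, 500, 50_000]) for _ in range(1000)]

    THRESHOLD = 1000  # hypothetical equilibrium: sacrifice everything small

    approved = [c for c in claims if c >= THRESHOLD]
    denied = [c for c in claims if c < THRESHOLD]

    print(f"denial rate: {len(denied) / len(claims):.0%}")  # denier's metric looks great
    print(f"approved dollars: {sum(approved):,}")           # drafter's metric looks great

Neither side's metric registers anything wrong, which is exactly why nobody asks the "wait, what are we doing?" question.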
Agreed, and I'm even further worried about the plausible deniability these situations would create.
Wow, that sounds very interesting, do you have some link to that paper?
I'm on the hunt, but no luck. I've tried a myriad of search terms to dig it up, but none are able to surface the paper through all the vaporware and blog pieces on competitive AI.
It’s so bizarre to me that this uniquely US phenomenon of for-profit middlemen inserted into the healthcare system has resulted in an adversarial relationship between the sick person and the “healthcare provider”.
I put that in air quotes because insurance companies don’t actually provide health care. They provide insurance. That’s a financial product, not a medical one.
> That’s a financial product, not a medical one.
It often goes unsaid, but America, on a cultural and political level, is really ideologically fixated on a distinction between working and non-working individuals, and, in a far deeper sense, whether an individual "deserves" healthcare or not. This makes access to healthcare intricately connected to class, wealth, and income, in America. That's why access to healthcare is seen as a product in and of itself. You can either afford it ("you've earned it"), or you go into debt for it ("you have to earn it"), or you simply have no expectation of ever paying for it ("you cheated the system").
The entire conversation is often dominated by these ideas in a way that often makes talking about healthcare with Americans baffling to people that come from many single-payer or universal systems.
To a degree. You have to keep in mind that a hospital emergency room isn't legally permitted to turn you away even if you can't pay under the Emergency Medical Treatment and Active Labor Act.
So the wealthy and insured are covered. The lowest rungs and those that don't care and will just run away are covered. It's mostly lower / middle lower class that this really hurts, ironically.
> The lowest rungs and those that don't care and will just run away are covered.
They're really not. They are only entitled to "stabilizing care".
I work as a paramedic. We have had situations with "frequent fliers" where when we've called the hospital to give a report as we are transporting, the hospital will say "let us know when you're here", and when we've done so, there's literally been a physician come out to the ambulance.
"Hey, X, what's happening?"
"I got a lot of fluid in my gut (he had ascites)."
"Okay, well, that's not new, and it looks like you have an appointment for having that fluid drawn in two days."
"Oh, okay."
"Anything else bothering you?"
"Nope."
"Alright, we're good then." Gives us a nod.
"We're going in then?"
"Uh, no. You have been by a physician, you're stable, you're good to go, you can jump off their gurney and head home now."
Which is harsh - but also this person at this point was being transported 4+ times _per day_.
But EMTALA only requires acute stabilizing care, not definitive management.
I think your premise is flawed. In America, the access-to-healthcare versus income curve is U-shaped.
If you have literally no income (or your income is entirely "off the books"), then you qualify for medicaid; everything is covered with no premiums, copays, or deductibles. At a middle-class level of income, you're probably looking at either a comparatively shitty ACA marketplace plan, or a comparatively shitty employer-provided HDHP plan. At an upper-class level of income, you can afford top-of-the-line healthcare.
I get this feeling a lot. For example the UK typically has unlimited paid sick days for salaried jobs, while I have heard of US employees pooling together and "donating" sick days to someone. The UK has a ton of benefits for the sick, unemployed, single mothers, carers etc. in the US I am sure those exist but I get the sense that charity is supposed to play more of a role.
FYI it’s not common to allow sick days to be transferable.
TBH I think in the US it’s more than anything about how much more competitive industries here are vs in the UK. If X company feels it’s worth the extra cost by allowing unlimited PTO and 2 years of parental leave, etc. the worry is that X will be trounced by Y Company, who is ruthless enough to not offer those things and as such has much cheaper labor costs.
If you take an industry like retail, those companies have a point - Walmart and Amazon offer low benefits compared to what companies once offered. Their lower prices are part of how they killed off most of the department stores and put the rest on life support.
And if you think about a highly paid job, even though our fringe benefits suck compared to Europe style, my impression is that US salaries are higher for equivalent jobs, enough that it makes up for it. So we value the money more than we would the benefits, apparently. Only problem is you can’t use all that money to buy more time with your family (except for by taking breaks between jobs, if you’re good at saving!)
They don't exist in the UK either, what company would offer unlimited sick days? Getting a job at those companies would represent a lottery win.
The catch is that you have to actually be sick.
The data would suggest that isn't the case.
> Record numbers not working due to ill health https://www.bbc.com/news/business-65596283
From the summary of the data[1] being relied upon in that report:
> The number of people economically inactive because of long-term sickness has risen to over 2.5 million people, an increase of over 400,000 since the start of the coronavirus (COVID-19) pandemic.
> Over 1.35 million (53%) of those inactive because of long-term sickness reported that they had depression, bad nerves or anxiety in Quarter 1 2023, with the majority (over 1 million) reporting it as a secondary health condition rather than their main one.
Bad nerves and anxiety could be a reason for a tiny number of people, but those numbers are huge, and if you think I'm lacking compassion for this, hearing what whistleblowers say regarding the way assessments are done[2] may be relevant:
> Sickness benefit assessments via telephone (rather than in-person visit) are now routine and it's not always necessary to provide proof of a sick note. In the film, Michael Houston, a former assessor, is asked how well it's possible to discern, over the phone, if someone should qualify for full sickness benefit. "Not very," he replied, "which is one of the things that ethically and morally I didn't really feel comfortable with."
That's not even the half of it, and that website has stories of people trapped in the system.
[1] https://www.ons.gov.uk/employmentandlabourmarket/peoplenotin...
I mean your employment can't technically be terminated because you are taking sick days, unless it's whatever qualifies as long term illness (18 or 28 weeks I forget).
Technically they are not obligated to continue to pay you and the govt sick pay is like 100 quid per week.
In the US they can fire you the day you don't show up.
Be aware that in many single-payer systems, insurance is also tied to working (or unemployment / retirement / pensions).
In my opinion, this is actually the reason for why we have so little innovation in Europe.
Mandatory, single-payer insurance very significantly raises the cost of being self-employed / having a sole proprietorship, which you practically need to run any side project that you want to eventually make money from. This means that if you launch a startup, you either need it to be profitable on day one, or you're wasting significant amounts of your money, not just your time.
> Be aware that in many single-payer systems, insurance is also tied to working (or unemployment / retirement / pensions).
This is true.
> This means that if you launch a startup, you either need it to be profitable on day one, or you're wasting significant amounts of your money, not just your time.
This is a false dichotomy. First of all, even ignoring health-care, you're still spending money on housing, food, electricity etc. If you're not employed and your startup is not profitable, you're paying money out of pocket to live.
Second of all, even in the USA, you are still going to pay for health insurance even if you are currently founding a startup. You could argue you are allowed to gamble that your health is good enough that you don't need health insurance for a few years, but that's just tossing coins. You could just as well not pay your taxes in the EU for a year or two and gamble that the authorities won't catch on right away.
I don’t get it, why is a self-employed person paying so much more than others for single payer healthcare where you are? That sounds exactly like the USA where those not employed as a normal full-time employee pay the most for equivalent insurance, so people here definitely do stay at their regular jobs instead of quitting to found a startup. Insurance outside of those group plans is even more expensive than the already shocking normal cost, and of course normal full time employment (what we call W-2 jobs) usually provides a generous healthcare subsidy.
Because healthcare is often paid per "working relationship", so if you work for a company and are doing something on the side, you have to pay twice, and the second fee comes out of your pocket.
Living in America, I have never met anyone who doesn't think our health care system is a complete mess. That includes doctors, nurses, people who work in HR, and people on both sides of the political spectrum. There is however disagreement about how it should be fixed. But from what I've seen that disagreement isn't about whether people who don't get insurance from their employer "cheated the system", it's about whether the system should be controlled by the state or private companies.
You can praise/blame the puritans for this weird idea.
Not unique to the US. This develops to a certain extent everywhere private insurance is sold. It is a completely logical consequence of the insurance company raking in the most when selling you insurance for everything that won't happen, and deny you any coverage for stuff that will happen. At that point, it is a mystery to me why so many people still think free market theory works for healthcare.
It doesn't work at all for anything without strong regulations.
In Japan, where the government sets the prices, dentists do things over 3 visits that dentists in other places would do in 1, because then they can bill the government-set price 3 times instead of once.
There are certainly problems in the healthcare systems in other countries. I don’t think there’s any perfect system. But if you ask me, “you have to go to the dentist 3 times” is a much better problem than “even people with health insurance go bankrupt regularly as a result of getting ill and needing medical care”.
The US government spends a similar amount of money per person on healthcare as other western countries do. But unlike Europe, Australia, Canada, Japan and so on, people don’t even get free healthcare in exchange for all that tax money. The system is deeply flawed. I don’t know anything about the Japanese healthcare system, but I’d still choose to be sick in Japan than America any day of the week.
I'm sure you could offer the Japanese dentist 10% of what the procedure would cost in the US and they would do it in one trip.
It costs about 25% of US prices to fly to Costa Rica, stay at a resort, and get the procedure done in a top-notch facility. And that's if you just need 1 root canal and crown; if you need even more done, the savings move closer to 90%.
And that's at a really nice place. You could drive to Mexico and get it done at a decent place for comparatively nothing.
My dental work in Cancun was under $30K including flights from Seattle and 10 days at a higher end hotel (Westin), for work I was quoted up to $65K for in the US.
Honestly that seems high but I can see Cancun being a lot more expensive because people are comfortable going there.
The actual dental work was $23K, so about 35% of the US quotes.
Well, that seals the deal. If it doesn’t work for dentists in Japan, there is no point changing anything at all for US healthcare policy.
No system is perfect, but Japanese healthcare administrative costs are under 2% as compared to 30+% in the US.
That's crazy! Do they at least schedule the 3 visits back to back on the same day?
In my view, the root cause of the bizarreness is that medical care is one of a few enterprises that are inherently social in nature, and is therefore a prima facie exception to the common wisdom that free markets create the most positive outcomes for the largest number of people. Because in the US we are taught from a young age that private sector capitalism is "all there is", we end up tying ourselves into knots trying to solve medical care using the wrong toolset.
True! Really, it's a three-way relationship:
customer - insurer: the govt (or, far more rarely, the employer) is the customer
insurer - recipient: the recipient is you. You're really just a necessary but unwelcome side-effect.
Once AI is able to replace patients, the industry is really going to take off.
I think the industry terminology separates the provider (a doctor) from the payer (or payor; an insurance company in this case).
Somebody is paying for it. If not insurance companies, then the people through the government.
As a citizen of a country with socialized health care, I will tell you the politicians promise the world but when the bill comes they can't seem to find their way out of the room fast enough.
The only way to avoid this adversarial relationship is to pay for it yourself. No insurance, no government, nothing. That means vast amounts of people will not be able to afford even a doctor's private practice.
It's sad but the bitter truth is nobody really wants to pay for other people's health care either. They only say they do because it wins them votes or clients. They all can't leave the room fast enough when the bill actually comes. Politicians have other far grander things they'd rather do with all that taxpayer money, and that's when they're not corrupt and pocketing it. Insurers obviously want profit. They're all betting you won't actually need all that fancy schmancy health care they promised you. They're literally banking on it.
In my experience, people barely want to pay for their own health care. They "want" to but start appealing to the altruism of their fellow man the second the bill comes. In my country, doctors are shamed every day because of our "oath" to help others. People act like we are their slaves, not even entitled to payment for services rendered. The good doctor is the one who pursues medicine as a hobby, who walks the earth helping others in need, with no needs of his own to tend to. The good doctor somehow absorbs the costs of it all. Including the costs of the cures involved. Especially the opportunity costs.
People are not prepared for the soulless utilitarianism of public health care. The bitter truth is there aren't enough medical resources for everybody. You must pick and choose who gets that fancy MRI scan. If you pick right you kill people. If you pick wrong you kill even more people. You have hundreds of millions of citizens, how do you help as many as possible as much as possible with the resources available? Decentralization via hundreds of basic clinics and hospitals turns out to be better than centralizing everything into one well equipped giga hospital. It's not about any one guy. It's about saving costs now so that you can help more people later. That's what primary care is all about. Saving costs, by promoting healthy lifestyles which means less sick people later in life where treatment is more expensive. It's about money, about resources.
But people don't want that. Good lifestyles are hard to lead, they require sacrifices. They want to do whatever they want, then go to the doctors when they get sick, then they want others to pay for whatever's necessary to fix it, and they want it fixed good as new. They are like consumers who don't want to pay for the services they need. Nobody wants to pay for it, even the people who directly need the services.
Death panels.
What about them?
The political boogeyman was that government bureaucrats would be the members of “death panels” if we went full socialized healthcare industry, but in practice we already have death panels in health insurance claims adjusters and (less maliciously) doctors on transplant review boards.
My mother beat cancer. Insurance paid for follow-up testing every 2 years. I tried to convince her to pay out of pocket and do it every year, but she said 'they know best'. My mom did not beat cancer the second time, when it came back and too much time had elapsed between screenings.
I know 'pro status quo' people will say online anecdotes are all lies and not relevant, but there are a heck of a lot of people with a lot of animosity toward the current system and its 'for-profit death panels'. I think it would be easier to swallow if it were societally chosen death panels rather than failed doctors (the ones who can't make it, so they go work for the insurance company) or random AIs doing the decision making.
I'm sorry for your loss. I'm also sorry it now has to serve as a warning for others. Thank you for sharing.
With a socialized healthcare system the system would have delays and you'd get the screening every 2.5 years, even if it was scheduled for every 2 years. Because of wait lists.
To be fair, it's impossible to know if it matters :)
I moved back to Europe from the US. And I can certainly feel that healthcare is slower, less eager to jump and investigate everything.
But on the other hand, in the US you most certainly risk talking yourself into procedures you don't need!
I live in Australia, and I don’t pay for private health insurance. Last year after travelling to Egypt I ended up in hospital with some gut related issue. I was let straight in from the emergency room. The doctors were great. I stayed overnight in one of the wards hooked up to machines and all that.
I was discharged the next day. I didn’t pay a cent. I didn’t even see a bill. I don’t think they made one.
I keep hearing stories from Americans about wait times in other countries. I’m sure it happens, but I’ve never seen it myself. My experiences with the medical system here have been pretty universally excellent.
When I was in America I was very impressed with how proactive everything is. My insurer paid for yearly physical exams. I’d never done that before. It’s certainly possible I would have even better health outcomes in America. But, I’m way happier here. And I’m a lot less stressed than I was when I lived in the Bay Area. That counts for a lot.
Pretty much all developed countries do fairly well on rapid access to emergency care. The queues are more of an issue with elective care. Socialized healthcare systems generally impose artificial supply limits to hold down costs, which is why we often see affluent Canadians come to the USA as medical tourists to skip the queue for certain procedures like MRI scans. While socialized systems might be better overall, there are certain drawbacks.
Outside of certain screenings, there is no proven benefit to yearly physical exams for healthy adults. It's a waste of resources and doesn't improve patient outcomes. Some people seem to want those annual exams, but they aren't justified on an evidence-based medicine basis.
https://www.health.harvard.edu/blog/a-checkup-for-the-checku...
https://www.healthcare.gov/coverage/preventive-care-benefits...
> Pretty much all developed countries do fairly well on rapid access to emergency care.
I was talking to a taxi driver in SF a few years ago. He said he was in a car accident once, and his car rolled and flipped upside down. The police and an ambulance showed up. Even though he seemed mostly OK, they still wanted to take him to hospital. But he couldn't afford the ambulance or the hospital - without health insurance, it would have bankrupted him. So he told them all to get lost.
In telling the story, he got kind of angry about it - I think he was mad how pushy the police and ambulance people were about the whole thing.
That's vaguely horrifying to me. A man who was just in a car accident should never be put in a situation like this. If you're wealthy in America, yeah - you get top-notch medical treatment. But I'm not sure I'd call that a successful system for emergency care.
I'm not claiming it's successful, just that people can generally get rapid access to high quality emergency medical care. Paying for it afterwards is a separate issue, and changes are needed there.
The No Surprises Act did give many consumers significant protection against excessive ER bills.
https://www.cms.gov/newsroom/fact-sheets/no-surprises-unders...
That is not my experience at all!
I live in Québec, Canada. My wife had breast cancer, and her periodic scans happened at 4-month intervals. When they detected oligometastasis on her spine, she had radiotherapy 2 weeks after the biopsy.
The only thing not covered by the government is a drug called Kisqali that has successfully kept her alive (may it continue to work). If I didn't have gold-plated drug insurance, the public alternative was weekly chemo (Taxol or Taxotere, I don't remember).
People need to be reasonable and know when to DNR. An 85-year-old with massive health problems has a stroke and falls over... DNR. Not here. We jump them back to life, deny their claim, and stick them in a facility. Now they are babbling and drooling all day, and the trust fails to kick in, so the people grandma was taking care of financially wait patiently while someone with POA drains grandma's bank accounts and sells off her houses that said people were going to live in (all in violation of her wishes and planning) to pay Medicare.
Well, docs have seen this coming from miles away. I don't think anyone having substantial experience in clinical medicine is surprised by those developments, unfortunately. But it doesn't stop here. Insurance companies will be (are) building models to overcome legal barriers. Imagine: you're 20 and healthy, but located somewhere suggesting a higher risk of developing some chronic disease in the future ? Then no insurance covering this particular condition, for you specifically. A real-world application of the 'fuck you in particular' meme. This of course extends to all sorts of sensitive matters, such as your ethnicity, sexual preferences, etc.
Now this is a really scary application of AI, but you won't hear those wanting AI regulation such as Musk complain about that, right?
That's one reason (among many) the preexisting condition part of ACA is so important.
Without it, health insurance companies would have every incentive to do what car insurance companies do -- buy profiles and records from third parties and use those to adjust rates and willingness to insure.
E.g. the obvious step of buying genetic information from 23andme, because it isn't covered by HIPAA
GINA prohibits health insurance companies from using genetics to deny coverage or set premiums.
https://www.hhs.gov/hipaa/for-professionals/special-topics/g...
I'd feel a lot better with customer-centric privacy protections around the collector and storer, a la HIPAA.
Instead of regulating only some of the uses.
HHS already had to administratively extend it to cover gaps (we'll see how that goes, post-Chevron), and Congress attempted to repeal it for workplace purposes in 2017.
And there's still the gray market question about 23andme -> Equifax-alike packaging it into a blended proprietary risk score -> insurance companies using that (of course 'without knowing that genetic information was included').
The year is 2035. To cut costs, both insurance companies and providers removed the human from the loop long ago, setting off an adversarial process between the LLMs on both sides. Medical insurance claims are now written in an ever-changing format that resembles no human language. United Healthcare has just announced a $10 billion project, including a multi-gigawatt data center, to train its own foundational model to keep ahead in the arms race. UNH stock is up 5% on the announcement.
The naïveté of I Have No Mouth, and I Must Scream is that it would be something as pedestrian as nuclear war that the globe-spanning hate machine would be built to manage. Now we know what AM would really have been built to do.
The real fun starts when they start writing the insurance contracts that are only meant to be readable by ML algorithms. Imagine thousands (millions?) of contract pages written in practically incomprehensible language, designed by an ML algorithm to contain clever loopholes that are difficult to detect by an adversarial algorithm.
Interesting. Do you have any examples to share?
Agree. I'd love to see an example of this, or read more about it.
This piqued my interest too. I found a few adjacent papers but couldn't find a source that made as comprehensive of a claim.
The closest were:
- "In constant battle with insurers, doctors reach for a cudgel: AI" from NYT (via Salt Lake Tribune), 2024 July, which is mostly on doctors using law-compliant LLMs to draft prior authorizations and has a passing one-graf mention of insurers likely doing the same: https://www.sltrib.com/news/nation-world/2024/07/11/constant...
- "The AI arms race over your medical bill" from Politico, 2024 Jan., summarizing LLM use in coding, billing, and fraud prevention: https://www.politico.com/newsletters/future-pulse/2024/01/05..., linking to https://www.politico.com/news/2023/12/31/ai-medical-expenses... and https://govciomedia.com/how-health-tech-leaders-use-ai-to-co...
Aside from that:
"Large Language Models to Help Appeal Denied Radiotherapy Services" from JCO Clinical Cancer Information, 2024 Sept. (abstract only; full-text paywalled) https://pubmed.ncbi.nlm.nih.gov/39250740/
"The potential of large language models in the insurance sector", 2024 Feb. (commercial white paper), largely focused on "fraud detection" in claims: https://www.milliman.com/en/insight/potential-of-large-langu...
"IQVIA NLP Risk Adjustment Solution (undated commercial white paper), marketing pitch on using AI to improve coding accuracy and reduce chart review times: https://www.iqvia.com/-/media/iqvia/pdfs/library/fact-sheets...
Interesting! Source?
This is coming and there's a very simple fix.
Make health insurance actual insurance: as in, not a gateway to treatment for conditions that are entirely lifestyle-driven.
Once patients are responsible for the bill and the large middle layer of admin crud is taken off the table, medical inflation almost disappears. Take this example of a for-profit facility vs non-profit hospitals [1].
Ideally this happens once environmental factors are fixed or drastically reduced so diet and lack of time are "choice-driven" instead of "needs driven" as health determinants (you do have subsidies at the lowest end, but that cannot go on forever).
[1] https://www.openhealthpolicy.com/p/cash-providers-cheaper-su...
How do you delineate between conditions that are "lifestyle driven" and not? When you develop a problem with your body it doesn't come with a receipt listing the cause.
I've personally had postural issues that were for many years simply attributed to poor discipline. It later turned out that I have a connective tissue disorder that was destroying the joints in my body.
All I can see your proposition doing is giving insurers another reason to decline potentially legitimate claims. Your case would be more rational if you were arguing for no insurance at all.
Medical authorities make a list, and people applying for insurance answer lifestyle questions like "do you smoke?", "do you exercise?" etc and they have an initial exam.
It's not that hard. In the example you've given you could sue the doctors for misdiagnosis, or if research showed that a condition had been mislabelled, people would receive compensation. It doesn't seem any more onerous than any of the other negotiations over conditions and treatments that go on in any medical industry and legislature anywhere in the world.
I believe it’s pretty widely accepted that some component of addiction and substance abuse is genetic / hereditary. The same is true of depression.
I personally feel uncomfortable labeling these as lifestyle choices to drop insurance liability. Alcoholism isn’t really the same as skiing.
> Alcoholism isn’t really the same as skiing.
Statistical methods can be used to assess the risk of each.
> I believe it’s pretty widely accepted that some component of addiction and substance abuse is genetic / hereditary. The same is true of depression.
High risk people should:
a) have most costly insurance against those things
b) be given help to avoid those things, which
c) could be used to reduce the cost of their premiums
Men are more prone to violence and also more likely to be victims of violence, and this is largely biological (hence the huge disparity between the sexes) - would that be some sort of excuse? Should men and women pay the same for the same relevant insurance even though they present wildly different risks of both perpetrating and befalling violence? That would be unfair.
I'm firmly in the "you are responsible for your life as an adult" camp. From a family of smokers thus making you more likely to be a smoker? Don't smoke. History of alcoholism in your family? Don't drink… need I go on?
One can smoke and have a condition that is not caused by smoking, just like one can avoid exercising and have a condition that is not caused by insufficient exercise. You can't compile a list of facts about a person's life and use that to deterministicly attribute the cause of given conditions.
Does having one vice deny a person coverage for life for any disease which may potentially be caused by that vice? How long must a person partake in this vice to be denied coverage for life (i.e., is it OK to smoke for a few years and then quit)?
Your example also has the problem of measuring the "lifestyle questions" being presented. How would you prove a person isn't exercising enough? If I know it will get me denied, I'm not going to self-report. We would need some sort of invasive "health audit" industry to ensure compliance with insurance requirements. A physical exam at the start of insurance doesn't solve this because, like I said, the existing issues could have been caused by any number of problems.
Your dismissal of my specific example is silly - I don't want to sue a doctor for misdiagnosing a relatively common issue. Connective tissue disorders are not that rare, and I'm far from unique. Do you want to live in a society where we have to fight tooth and nail to get basic care for problems on the basis that we might have caused them ourselves?
> You can't compile a list of facts about a person's life and use that to deterministicly attribute the cause of given conditions.
That is just not the case, and I shouldn't have to point out such basics on HN.
> A physical exam at the start of insurance doesn't solve this, because like I said, the existing issues could have been caused by any number of problems.
And yet I have to… People do a thing called "collecting data", on a large scale, and then they apply the lessons learnt from that data to calculate statistical risk. An imperfect system but, strangely enough, very effective (when not interfered with, as in the US health system).
Of course, you are welcome to open a car insurance company and offer everyone the same insurance for the same price and watch as young men and previously uninsurable people flock to your service. Maybe you'll get lucky and won't have to pay out more than you take in. All the best with that.
> Your example also has the problem of measuring the "lifestyle questions" being presented. How would you prove a person isn't exercising enough? If I know it will get me denied I'm not going to self report.
And then the insurance company can decide what level of fraud it will tolerate (something HN has discussed previously[0], and the linked article[1] is enlightening) and thus adjust its costs, and perhaps premiums.
> Your dismissal of my specific example is silly - I don't want to sue a doctor for misdiagnosing a relatively common issue.
If it's not a problem then don't sue. If it is, that is what the court system is for (or whatever system doctors and medical companies would put in place to avoid going to court).
[0] https://news.ycombinator.com/item?id=38905889
[1] https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
You don't, but you provide lower insurance prices to those who can demonstrate a healthy lifestyle.
You wouldn't even need too much surveillance to do this. Give people a yearly "fitness checkup" to encourage physical exercise, monitor weight to encourage healthy eating habits, do periodic drug tests to discourage drug use etc.
If you combined this with a (privacy preserving) fitness band that would monitor your vitals, and only send a list of premiums you're eligible for, you could do even better.
You'd have to account for preconditions that make it hard/impossible to exercise, but this would work for most people.
Any actual, real-world implementation of such a thing would 1) not be privacy preserving (except in the "we pinky promise we're not going to use this data!" kind of deal), and 2) would inevitably expand the definition of "unhealthy lifestyle" over time as a way to exclude undesirables from the system and thereby leave more resources for those who remain.
I once tried calling my relatives in Russia and was instead connected with exactly this kind of bot.
I guess this has something to do with my phone number starting with +38, and the fact that nobody actually calls their relatives by mobile phone anymore.
Or AI profiles that pen-test the real grandmas/grandpas.
My father has fallen for one fraud after another these last few years. It’s disgusting. Anything in the direction of solving this would be doing the lord’s work.
What country are you in?
I’m in DE and have filed a criminal complaint about a company that runs fake personal ads targeting the elderly:
When an elderly person calls, they schedule an appointment at the person’s home. Then, over several hours, they talk them into signing a 3,000 EUR contract for an objectively useless service (getting contact data of 7 or so random people over a period of several weeks).
The guy running this scheme has been doing it since the 1990s. We know they did over €40 million in the last 10 years alone.
We filed the complaint a year ago and police and district attorney have done nothing so far. At the same time, the criminal himself has sued journalists who have covered the story multiple times.
Seems like the criminals are more resourceful than those who are getting paid to stop them.
The solution is for all foreign wire transfers to be insured and reversible which would drive up the cost of doing business with countries home to scammers.
That’s just going to drive more fraud to the receiver’s side. For example, one can pay for goods and receive them, then reverse the payment. This already is a huge percentage of fraud that businesses have to deal with, and there is a large industry built around preventing it. Fighting fraud (and other crimes) requires vigilance from all parties.
Yeah, that's basically what happened in the case of the famous Nespresso Money Mule story.
The scams sometimes involve getting him to buy gift cards and then sending the card number. Not sure your solution would help.
Gift cards are a money transmitting service.
How would that help? How would you prove to the insurer that you were scammed out of the money and are not in fact pulling the opposite scam (that is, paying someone for a good/service, and then clawing the money back afterwards)?
Insured against what? (The bank is already liable if the account is breached). These scam transfers are intentional acts authorized by the account holders. A company can't be held liable for the stupidity of its customers.
Banks should be liable; there are often insufficient ways to validate who you're transferring money to.
Sometimes the bank interface will tell you a name and address after you type in the numbers, but who validates this?
My bank (in Denmark) sometimes sends me emails from a domain that isn't their primary domain.
The bank uses a login system that is provided by the state. In theory it's a good idea, but you sign in on domains that are not owned by the identity authority. I sign in on the bank's website, instead of signing in by being redirected from the bank to a trusted domain owned by the identity authority (like how OIDC flows usually work).
Sure the login flow still involves an app, but my point is:
There is a lot of bad practices around. These should incur liability.
Just start looking at what domains emails are sent from. And complain if they are not the primary domain of the entity contacting you, you'll get tired real soon.
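For contrast, a minimal sketch of the redirect step in an OIDC authorization-code flow (the endpoints and client IDs here are hypothetical):

    # The point: credentials are only ever typed on the identity authority's
    # own domain, which the user can verify in the address bar.
    from urllib.parse import urlencode

    AUTHORITY = "https://id.authority.example/authorize"  # trusted domain

    def build_login_redirect(client_id, redirect_uri, state):
        params = {
            "response_type": "code",       # authorization-code flow
            "client_id": client_id,        # the bank, registered with the authority
            "redirect_uri": redirect_uri,  # where the authority sends the user back
            "scope": "openid",
            "state": state,                # CSRF protection
        }
        return AUTHORITY + "?" + urlencode(params)

    # The bank never sees the password; it only receives a one-time code.
    print(build_login_redirect("bank-dk", "https://bank.example/callback", "xyz123"))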
I agree, and in the US banks are already liable if an unauthorized person gains access to your funds. But what I'm saying is that most of the scams that OP is talking about just aren't done that way. It's just a scammer that tricks them into sending money.
It's just old-fashioned con man stuff but over email or phone. And if you're dumb enough to believe that the only way the IRS (US tax agency) is willing to accept payment is by Visa gift card (and yes, this is actually a common scam here), it's just not your bank's problem.
Lots of government websites are extremely sketchy and redirect you to a sketchy payment gateway, often the payment gateway is on some weird domain.
If I make a payment using my visa card, how is it that I'm not just redirected to visa.com, and that's the only place I enter card credentials? Like how OpenID works.
I'm sure there are reasons, probably legacy reasons :)
But it's still weird that payment uses a third-party domain I can't verify. Often called something sketchy.
To be clear the EU did a lot with 2FA requirements for online payments.
Public awareness campaigns on the level of the anti-drink-driving ads from the 90s.
If you pay your taxes by Visa gift card, then you're a bloody idiot.
This is all irrelevant.
The way these scams work is to get someone to intentionally send money, usually through gift cards or wire transfer to an account.
Some of the common schemes are to pretend these are taxes that they didn't pay ("you owe the IRS a huge amount, send this money and all the penalties won't be applied immediately"), sometimes it's urgent money needed for a loved one (a common one is "your son/daughter just hit someone with their car, send money to this lawyer to try and save them from prison"), sometimes it's promises of future riches (such as the infamous Nigerian Prince who will send his fortune to you if only you send a little money first).
Power of attorney, and hold all his cards etc for him.
Requires the person to want to do that though.
It's a funny thought on the surface, but the people working these scams are typically slaves, more or less. I'd rather go after their slavers than waste electricity to waste their time.
This is not attacking the people on the phone, it's attacking the whole operation. The person on the phone is going to be on the phone regardless of whether they're talking to an AI or a victim. The AI is merely talking on the phone, not abusing the caller in any way (other than perhaps eating into their commission).
I also think it's extremely simplistic to call the people making the calls "slaves". A lot of the time, they are in fact the perpetrators. Even when they are part of a more organized operation, they (1) are likely paid per successful scam, so they have a stake in hurting you, and (2) are fully aware they are scaring and stealing from someone.
So I wouldn't call these people slaves, I'd call them low-level criminals.
If there's no money in slavery there won't be any slaves. Yes, they'll get moved onto other things but this is currently the most profitable slavery operation available so that seems like a good place to start.
It's sad you're downvoted, because you're right. So called "anti-scammers" who make a fortune on Youtube or that Reddit apparently considers heroes, are in effect preying on the poor. The real culprits are the bosses, not the ones doing the phone calls.
Imagine AI calling AI and wasting each others time :D
And wasting resources too. We’ve peaked as a species.
This is actually a hilarious scenario. Anthropomorphize TTS with Indian accents to trick the other AI agent into thinking it's a real human. DDoS their o1 API calls with soft-jailbreaking prompts: complex programming questions disguised as typical Microsoft support issues.
CodeBot: Word tables blank sometimes. Hmm.
SupportBot: What version? Try a repair.
CodeBot: Memory issue maybe? Bad alloc?
SupportBot: Rare. Repair is next.
CodeBot: Threading problem sar? Data races?
SupportBot: Try repair, new doc please sar.
ScamVictimBot:

    sound = tts("Say again?")  // constant, might as well cache
    while (true):
        phone.out(sound)
        _ = phone.waitUntilNextPause()
really wasting each other’s energy
really wasting humanity's energy and the planet's climate
A lot of human activity is exactly this, especially in the realm of marketing, so maybe AI getting trapped in the same nonsense would finally make people understand how stupid and absurd this is.
Exactly
> Conversations with the characters quickly went sideways when some users peppered them with questions including who created and developed the AI. Liv, for instance, said that her creator team included zero Black people and was predominantly white and male. It was a “pretty glaring omission given my identity”, the bot wrote in response to a question from the Washington Post columnist Karen Attiah.
This is probably true, but the AI almost certainly hallucinated these facts.
I hate the journalists pretending that ML is regurgitating facts instead of just random tokens. The ML model doesn't know who programmed it and can't know because that wasn't in the training data.
I don't think companies should be allowed to double dip.
If they want to put AI EVERYWHERE then they don't get to hide behind "their output is not to be taken seriously" afterwards.
> I hate the journalists pretending that ML is regurgitating facts instead of just random tokens.
They aren't claiming that what the AI said was fact. It seems your hate comes from lack of understanding.
It's a bit ambiguous, but I certainly read the article as suggesting the factoid was true.
It seems that the lack of understanding comes from hate.
They don't appear to have done so. They just shared what they asked it, and screenshots of the conversation: https://bsky.app/profile/karenattiah.bsky.social/post/3letty...
I hate the game reviewers pretending that games are providing entertainment instead of just random numbers in RAM and random pixels on the screen. The game doesn't know who programmed it and can't know because that wasn't in its source code.
I beg your pardon, what do you mean it doesn't know? Almost every game I played had a credits section in the menu. Games do "know".
Microsoft tried this ages ago with their Tay chatbot and found how quickly it went bad.
I’d forgotten about this - it was wild.
Yes the first thing that came to my mind when I read the article is Tay.
And we have come back full circle
I'm Scandinavian and not invested in the American culture wars, and I still got a good chuckle out of how bad an idea this was. Who on earth could've thought it was a good idea to have an AI pretend to be a black queer mother of two? I'm sure it'll piss off a lot of anti-woke people, but really, how on earth did the issues with this not become obvious to the team? I'm not sure if the AI knows who trained it (and I wonder how they did it), but the team can't have included a lot of common sense or real-world experience for them to do something so fundamentally stupid.
I'm baffled that they didn't even try to hide the fact that these profiles are artificial, but outright added that bleak gray text saying "AI managed by Meta". I mean, did we reach some checkpoint here where reality is blurred with fiction? Do we now treat these generated personas on par with real humans?
Honestly, I don't think Meta can go any lower than this in order to get user engagement with the silly plaything that Facebook has become.
> I mean, did we reach some checkpoint here where reality is blurred with fiction?
Yes; the first interactive synthetic pop-culture icon is probably the Vocaloid Hatsune Miku, circa 2007, and she has lots of fans who know exactly who and what she is. I am not surprised Meta wants in on the "influencer" action with synthetic personas: no commissions to be paid, and they can create a persona for every possible niche at marginal cost.
The technology is not there yet; we've barely progressed from Tay in terms of corporations' ability to prevent their AIs from saying things that cause bad PR. But AI is definitely coming for influencers; the economics are just too good.
> Who on earth could've thought it was a good idea to get an AI to pretend to be a black queer mother of two?
I would think that the obvious use of AI Facebook profiles would be to train them on someone who actually existed in the real world. Take someone like Jimmy Carter, train an LLM on everything he's ever said, and then let people interact with that.
But I imagine there are legal reasons they can't do this?
The article indicates that it was one of 28 personas created by Meta. However, the reporters between the story and you thought that one would be interesting to you and so promoted its relevance. In actuality, if you rolled the dice on potential human traits 28 times, this could be a statistically normal combination.
Can you explain why it is you think that makes it any better? I can think of no argument that would make this particular persona anything but a tremendously stupid idea. Even if it was one in a thousand personas it would almost certainly be found and singled out by articles like this one.
> how on earth did the issues with this not become obvious for the team?
I haven't reviewed the context of all 28. But if "the team" were trying to do this rationally: perhaps they'd use census data to weight human characteristics and roll 28 dice. We might not expect more than one of 28 to end up as "queer black mom" but it's not necessarily "woke bait" or "rage bait" for one of 28 to land on those squares. Perhaps it was a logical way of assigning traits?
I don't think it's woke or rage bait. As I said I think it's no real world experience and no common sense and I'm frankly amazed that nobody in a SoMe/advertisement organisation didn't consider how the world would react.
Lol 1 outta 28 isn't a black queer mom of two.
The clash of these characteristics is what defies and defines the unlikely odds.
This is trolling rage bait. Like most big corp content.
I'm not sure if you're joking, but your comment expresses it backwards - according to the OP article, Meta made 28 such bots. One of those bots had the Insta profile tagline "queer black mom". Between Meta's creation of the 28 and the story arriving on your dinner plate, someone decided to focus the coverage on that one of the 28. They thought it would be interesting to people, and clearly they were right.
What they're saying is that the odds of such a combo are far less than 1 in 28 IRL.
How did you determine that?
I'm not OP and I have no idea what the actual odds are. But, well, only 15% of Americans are black to start with, and obviously only half of those are women - and at that point we're already down to 1 in 14. So, unless every other black woman is a queer mom, there's no way it's 1 in 28.
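To put rough numbers on it (these trait frequencies are invented for illustration, not census figures):

    # Joint probability of the combo for one randomly "rolled" persona,
    # plus the chance that at least one of 28 personas lands on it.
    p_black, p_woman, p_queer, p_mom = 0.15, 0.5, 0.05, 0.6
    p_combo = p_black * p_woman * p_queer * p_mom   # ~0.2% per persona

    p_at_least_one = 1 - (1 - p_combo) ** 28        # ~6% across 28 rolls
    print(f"{p_combo:.2%} per persona, {p_at_least_one:.0%} across 28")

So a single roll is far rarer than 1 in 28, but across 28 rolls the combo isn't outlandish - which is where the birthday-paradox point below comes in.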
If you were trying to mimic a distribution of Instagram users, and you rolled the dice 28 times, it's not unreasonable that this combination of traits could be expected one twenty eighth of the time.
This is birthday paradox logic, not census logic
A birthday is only one metric that the people in the school share. This is five variables, so no.
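For what it's worth, here's a back-of-envelope of the arithmetic both sides are gesturing at. Every share below is a loose illustrative assumption, not a real census figure:

    # Rough odds for the trait combo under discussion.
    # All shares are illustrative assumptions, not census data.
    p_black = 0.14    # share of US population (assumed)
    p_woman = 0.50
    p_mother = 0.55   # share of adult women with children (assumed)
    p_queer = 0.07    # share identifying as LGBTQ+ (assumed)

    # Treat the traits as independent (they aren't, but it bounds things):
    p_combo = p_black * p_woman * p_mother * p_queer
    print(f"one persona: ~1 in {1 / p_combo:.0f}")   # ~1 in 371

    # Chance at least one of 28 independently rolled personas hits it:
    p_any = 1 - (1 - p_combo) ** 28
    print(f"at least one of 28: ~{p_any:.1%}")       # ~7.3%

Under these assumptions a single roll lands on the combo roughly 1 in 370 times, and even 28 rolls only give a ~7% chance of at least one hit, so it's well short of 1 in 28. The birthday-paradox framing only rescues it if you count a hit on any attention-grabbing combination of traits, not this specific one.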
I think it will piss off woke people, too.
It seems designed to piss off everyone.
Perhaps the product manager was a black queer mother of two?
Maybe every one of the 28 AI "people" represents one of the team members, but that doesn't make it any less of a bad idea in my eyes. As a father of girls I've seen full well how representation matters in media, but I suspect that AI "people" is the one area where people will want as little representation as possible.
If you’re meta and you have to defend the AI by admitting “it’s not really intelligent and everything it says is bullshit”, that’s not a position of strength.
Particularly when it's likely that despite the AI bullshitting in the absence of data on its creators, it's also incidentally true that the "black queer momma" persona is a few lines of behavioural stereotypes composed by white male programmers without any particular experience of black queer mommas.
I don't think that necessarily applies. You could easily make a training set from some actual black American people's writings on the internet or in books, or even from an individual who self-identified that way and all their writings around the internet, and still end up with those same stereotypes when you ask an AI to create such a profile.
You don't need a black American engineer or product manager to say "I approve this message" or "halt! my representation is more important here and this product would never fly", as they are just not the person the data set was trained on, even if you asked an AI to just create the byline on the profile for such a character.
it's weirder, and more racially insensitive, for people to be vicariously offended on behalf of the group and still not understand the group. In this case, the engineer or product manager or other decision maker wouldn't have the same background as the person who would call themselves "momma" - not that it matters at all, if you can regurgitate that from a training set
I mean, sure, some programmers with very little experience of queer black mommas could, hypothetically, be so good at curating information from the internet and carrying out training exercises that they created a persona that convincingly represents a queer black momma. Do we think this is what happened in this instance?
In which, ironically, the bot called it a "glaring omission"
the bot is echoing sentiments of the comment sections it was trained on and has no idea of its origins
it's acting aware and sensitive but only has information about the tech sector as a whole
my critique is about how the standards being applied are dumb all the way down. the standards are not actually that enlightened even if there were more representation congruent with the race/identity being caricatured, which nullifies the whole criticism.
the training set is the only thing that's important with a language model. and it's just a symptom of dead internet theory, as even the persona's byline was probably generated by a different language model.
well, yeah, I acknowledged the bot has no idea of its actual origins in my first post. the point is that at some point some actual product manager thought that creating this persona (probably a generic training set plus a few lines of human-authored stereotypes as prompt) and releasing it to the public as an authentic, representative personality was a good idea. Unlike the product manager, the bot's bullshitting was context aware enough to express the sentiment that this was a bit embarrassing
If the intent is to make a recognizable caricature and apply labels to it (cough stereotype), you don't have someone draw themselves. And it's really looking like stereotyping is their intent.
> This is probably true,
I'm not sure we get to both complain that bots are trained by "white males" and at the same time complain that big tech abuse H1B visas for cheap compliant labourers.
For sure.
Huh? On the balance of probabilities, why would this be the "most certain" option? I think that logic only works on the HN scale of "works in my ChatGPT window" or "bad outcome, so hallucinated".
A woke bot from a woke workforce unintentionally created a parody of wokeness. Keep 'em coming.
Do they not have anyone sensible left in the room? Like, you’d expect that at some point someone would have said ‘these are comically terrible, we cannot allow them to see the light of day’.
It's like people have already forgotten why people used the non-human beings in Westworld. One of the first things we're going to use them for is amusement and having them represent anything sentimental or sensitive is going to make them a target like this. It was irresponsible.
There's something essentially wrong with Meta and Google where they can do tech but not products anymore. I'd argue it's because the honest human desire that drives a product dies or is refined out of initiatives by the time it gets through the optimization process of their org structure. Nothing is going to survive there unless it's an arbitrage or using leverage on the user, and the things that survive are uncanny and weird because they are no longer expressions of something someone wants.
These avatars ticked all the boxes, but when they arrived, people laughed at them because objectively they were bureaucratic abominations.
> There's something essentially wrong with Meta and Google where they can do tech but not products anymore. I'd argue it's because the honest human desire that drives a product dies or is refined out of initiatives by the time it gets through the optimization process of their org structure.
This is a really nice insight. I hadn’t put it together this way before. It’s similar to design-by-committee, or movie script-by-committee (like Disney-era Star Wars movies, or a Netflix focus-group-driven script). The layers of bureaucratic filter have rubbed off all human influence, and all that’s left is naked profit motive.
The worst thing IMO would be if LLMs became able to convincingly fake these emotions. They’d become emotional pillows for the inhuman manipulative ambitions of their parent orgs.
I wonder why so few companies employ Pixar’s Brain Trust method.
You get into a room with a bunch of key execs who are not part of your project and then they tear your product/movie/idea apart.
You don’t have to implement any of their suggestions but since these are seasoned execs they see flaws readily.
The key is that the process encourages candid feedback without hierarchy or defensiveness.
This goes on for a couple of iterations until the idea is ready for production.
I think Apple has something similar.
Requires really secure upper management (think Steve Jobs, or John Lasseter).
Those types of folks are not representative of most tech C-Suite denizens.
Damn. You’re probably right.
I find this insight so frustrating though. Isn’t that exactly what upper management is supposed to be doing and what they’re paid for after all?
Yup.
Imo it's due to people having a hard time separating the worth of their idea from their self-worth. So criticism of their 'baby' breeds negative sentiment, which creates a challenging political environment for criticism, which would need to be ameliorated by a strong organizational culture.
I think the commenter is saying, with a brain trust, you don't share organizational structure or hierarchy. Neither party has enough interaction for a political environment to form. You can be offended, but you won't see them again and you're not forced to implement their ideas
> I wonder why so few companies employ Pixar’s Brain Trust method
It doesn't scale. That model hasn't been utilized at Pixar lately - what was the last good/hit Pixar movie that wasn't a sequel or side-quel? Like most companies, Pixar relies on a few great products or ideas, then doubles and triples down on them, to make sure the graph grows up and to the right for shareholders.
Mainly because they ousted Lasseter for being too critical of the crap that was being put in front of him. They framed it in other ways publicly, but I'm pretty sure he was getting in the way of others taking the company in a direction that didn't prioritize quality.
Oh, I can think of very few execs I've worked with in tech who would willingly expose their team to public criticism, for fear that it would blow back on them. I've seen execs pay heavy prices for a team making a mistake in public. It's stupid, but some orgs run that way.
> There's something essentially wrong with Meta and Google where they can do tech but not products anymore.
Because AI is the hype thing and adding AI to your thing makes the stock price go up, because investors and stock evaluators don't know what AI is, nor do they care. They just know it's the hot thing and what you put in your product to make it look better to Wall St.
Meta AI, Google AI summaries, Apple Intelligence are all hilariously half-baked features designed to connect users with ChatGPT so the line goes brrrrrrr. Them being helpful, useful products that solve problems for users is a distant, distant next place to that.
> It's like people have already forgotten why people used the non-human beings in Westworld
This is fiction though. Perhaps we didn't forget about that fake thing and we're critiquing this real thing that exists. How can you take a demonstrative position on "what we're going to use them for" and defend it with someone's contrived story?
> because objectively they were bureaucratic abominations
Yes, this is why criticism of this real thing isn't countered by claiming people "forgot" what occurred in a fantasy script about things that don't exist.
this misunderstands what fiction is. for people with the capacity, it's a tool for reasoning about hypotheticals and counterfactuals. sometimes it's fun, but mostly it's serious.
for the people who get it, fiction is a public discourse about possibilities. to me, seeing it as arbitrary would be like watching golf and thinking it's random. there's a literal-mindedness or incapacity for abstraction there that I can't apprehend.
The problem with using fiction as a discourse about possibilities is that fiction is governed by the rules of the author's mind, not the rules of reality. So the fidelity of the model being used to drive the discourse is directly dependent on the congruence between the author's internal model of the world and reality, which can often be deceptively far apart. This is especially bad when the subject is entertaining, because most of us read fiction for entertainment, not logical discourse. So we create scenarios that are entertaining rather than realistic. And the better the author is the more subtle the differences are, but that doesn't mean they go away. It feels like a somewhat common experience in my life that I'm discussing some topic with somebody, and I have subject matter expertise based on actual lived experience, and as the conversation goes on I discover that all of my conversation partner's thinking about the subject was done in the context of a fictional world which misses key elements of the real world that lead to very different conclusions and outcomes.
This is not to totally discard fiction as a way of reasoning. With regard to hypotheticals beyond our current reach it is often the only way to reason. So it's valuable, but we have to keep in mind that a story is just a story. Hard experience trumps fictional logic any day. And I can't assume that the same events in real life will lead to the same outcomes from a story.
The point is that a fictional story is just one of a host of possibilities, so you can't base decisions off what happens in it rather than the other n-1 possibilities.
> sometimes its fun, but mostly it's serious.
Sometimes it's fun, but mostly it's stupid.
Fiction is a story that an author wants to tell, for whatever reason. It doesn't necessarily follow rational rules, it follows the rules of story telling. Fictional people do things because it makes the story more interesting, not because they have some internal logic to their actions.
Agree, fiction and narrative is a fundamental method of human reasoning, one of the first and oldest.
I think a lot of people reacting to this are missing some of the point - fiction may not literally model reality, but what makes certain fiction memorable is that it shines light on some under-expressed aspect of reality, which sticks with us when we encounter a similar pattern in our lives. Whether it's also serving entertainment purposes or the author's pet peeves is beside the point, because that is not the part of the fiction to be taken seriously.
We have all sorts of relevant quotes about this:
> all models are wrong, some are useful
> truth is much too complicated to allow anything but approximations
> We all know that Art is not truth. Art is a lie that makes us realize truth, at least the truth that is given us to understand. The artist must know how to convince others of the truthfulness of his lies. - Picasso
Aside from the obvious perception-is-reality aspect of fiction, it's especially important to take fiction seriously these days because the tech world is eyeing the many gray areas between fiction and reality as the next frontier for expansion. These avatars are a perfect example of this attempt to acquire more of the territory of human experience; not just the material reality but the many possible directions our hearts and drives may take us. If we dismiss fiction as something to take less seriously than 'reality', rather than something to be understood in a different way than rational analysis (and with its own skill tree), then we cede this territory to those that know how to wield it against us.
>for the people who get it
Yea I love fiction. I certainly get it.
"You should think this way about a particular thing because this one movie portrayed it that way, and if you don't, it's because you forgot that this movie portrayed it that way according to its creator's will, thoughts, ideas, motivated reasoning, worldview, whatever" ... is not compelling.
It's pretty obvious to me that the creative choices and ideas of certain people do not imply any demonstrative truth about reality just because they exist; there is no direct connection between the two. Someone can write or film or render whatever they want, even completely contradictory versions of the same topic. What if, for example, someone did watch that or any other thing and simply disagreed? Thought the creators got it wrong?
> public discourse about possibilities. to me seeing it as arbitrary
It's not arbitrary. It can certainly be self-serving, it can be propaganda, it can be a good guess or a well-meaning statement about reality, or speculative fiction that is simply wrong. It's the emphatic certainty with which you presented this media creation as proof of something about a complex, debatable future reality. Even a fictional account of history or the present day or politics suffers from obvious, fundamental disconnects that keep it from being regarded as "truth", proof, or evidence of anything. Again, different people can make multiple contradictory or competing portrayals of the same concept or topic. This is no different from someone telling you what they believe about a thing; it's not "therefore true", especially in the form of a prediction about the future.
you've found a way to use dinosaur DNA in eggs to genetically engineer a medium-sized dinosaur back into existence. you've got a bunch of samples in your apparatus, and you go away and come back to find out they've hatched and disappeared.
someone says to you, "at least the fences will keep them in." and you say, "what fences?" and when they say, "didn't you see Jurassic Park?" your answer is, "why would I see that, it's fiction, and who cares, it's just a 90s kids movie"
that's how dumb I'm saying this release was.
> There's something essentially wrong with Meta and Google where they can do tech but not products anymore.
I'm listening to "Masters of Doom" on Audible. It's about the creation of Wolfenstein 3D, Doom, etc. Great book.
Something that's interesting about it, is that Id Software seemed to largely reject the idea of having a game where the players 'connected' to the characters or the story. They were laser focused on:
* Carmack creating a game engine that was the best in the world
* Romero and crew making the gameplay as fun as possible
But it sounded like they had folks on the team who wanted to make a story where the players could 'connect' with the protagonist. He was fired.
Some dude named "Tom."
Facebook, weirdly enough, seems to have this same issue. Which is particularly odd considering that connection is their product!
I haven't played games in ages, but when I saw "Half Life" for the first time, it felt nearly as "revolutionary" to me as Wolfenstein 3D was.
I feel like the way Romero and Carmack wanted to make the player "connect" with the game was different than what Tom had in mind (from reading the book).
Tom wanted elaborate lore and story-telling, while the rest wanted to make the game experience what the player connected to. The instant reaction to your input, seeing your bad-ass character (and by extension yourself) inflict awesome damage on the world.
This to me is more of a conflict of _what_ the player should connect to, as opposed to not wanting the player to connect at all.
I think Google and others are too distracted in collecting enterprise coin at the moment. They have a perfectly good consumer product in NotebookLM, but at the moment it has the quality of something an intern made.
As someone who's built something like it in their free time as a hobby project ( https://github.com/rmusser01/tldw), could I ask what would make it a professional product vs something an intern came up with? Looking for insights I could possibly apply/learn from to implement in my own project.
One of my goals with my project I ended up taking on was to match/exceed NotebookLMs feature set, to ensure that an open source version would be available to people for free, with ownership of their data.
I'm going to challenge you to put that first screenshot into ChatGPT/Claude and ask them why it looks like something an intern came up with vs a professional product.
I'm not saying that as a slight or an insult, but right now the screenshot looks like a Gradio space someone would use to prove out the underlying tech of a professional product, not a professional product (unless you literally mean professionals are your target users as opposed to consumers).
I think an LLM would be able to very quickly tell you what most product builders would tell you at this stage.
-
Also one of the key enablers of NotebookLM is SoundStorm style parallel generation. Afaik no open source project has reached that milestone, have they?
I don't think you understand the context; the person I was replying to was making that comment about NotebookLM. I'm well aware of how my UI looks. The whole reason I'm using Gradio right now is that this is a single-person project, not a product for sale. Not quite an intern, but the same amount of funding. The current UI is a placeholder, because the idea is to migrate to an API-first design so users can have whatever kind of UI they'd like.
SoundStorm/Podcast creation is one of the big draws, but I would question whether it's one of the most-used features, considering hallucinations and shallowness.
I guess I really don't understand the context, because even with this clarification it's not clear what you're asking past "what does it take to add polish to my nascent project", when the reality is that by the nature of its nascent state no one is going to be able to give you more than surface-level advice (which the LLM can provide pretty effectively, and in a more tailored way than we random commenters can. Isn't that fact kind of the underlying premise of your own project?)
> SoundStorm/Podcast creation is one of the big draws, but I would question as to whether its one of its most-used features, considering hallucinations and shallowness.
You're questioning the one single feature that drove its entire success in distribution? Most people don't know any features except the podcast feature.
If your goal is to address the more underlying concept of getting across knowledge in a quicker more readily absorbed format using LLMs, there's already an insane amount of competition and noise.
The podcast thing was the only reason NotebookLM cut through that noise, so the question shouldn't be "is it one of the most used features" (due to the way conversion rates work it will be btw, it might not be the most used feature by people who stay but obviously the feature that's highest in the funnel drawing people in will be your most used feature) but imo the more relevant question is "is it one of the most important features", and the answer is yes.
You’re making assumptions about my original question. I wished to know their opinion on what made NotebookLM ‘look like an intern’s project’. If what they shared was something I agreed with and found relevant, then sure, I would apply it to my project, but I already have plans for improving my project to my own standards.
Thanks for sharing your perspective though, I will keep it in mind. I disagree regarding podcasts being a ‘big thing’ past the initial honeymoon phase. I do think that custom generated audio is a big thing, and the podcasting is the first exposure a lot of people have had to that level of quality; since it’s free and everyone already has a Google account, it was much easier for it to go viral.
> plans for improving my project to my own standards.
That’s roughly the implication of my original comment. Many developers, including you, have a higher standard. I’m not gonna use something that looks like an intern slapped a few REST calls together with MUI.
It’s 2025 dude, I can build that in an afternoon. They have to take the product a lot more seriously.
Google’s lack of taste on this front will be the success of another competing product (perhaps even yours).
Ahh, thank you. For what it’s worth, I agree.
I could not have said this better myself.
Why would consumers pay for NotebookLM? It’s basically an (impressive) party trick.
NotebookLM is more than just podcast generation. I came into the middle of consulting project where there were already dozens of artifacts - SOWs, transcripts from discovery sessions, client provided documentation etc.
I loaded them all up into NotebookLM.
I started asking it questions after uploading it all to NotebookLM like I would if I were starting from scratch with a customer. It answered the questions with citations.
And to hopefully deflect the usual objections - we already use GSuite as a corporate standard, NotebookLM is specifically allowed and it doesn’t train on your data.
Why would you say that? I used it as a study guide. Super useful. Stuff like uploading 8 hours of audio and asking it: “Generate an outline of topics that the instructor said were important to remember.”
Fictional dystopias aren't the real world. Westworld is just television.
Life often imitates art.
It's like people have already forgotten the deep life lessons of The Core (2003).
Having worked at a similarly gargantuan and dysfunctional company, I can tell you exactly how this went down. Someone had this idea for AI profiles. They spec'd the product and took it to engineering. The engineers had a good laugh at how preposterous it is, but then remembered that they get paid a ton of money to do what they're told, and will get promos and bonuses for launching regardless of the outcome.
It all stems from promo culture -- it doesn't matter what you build, as long as it ships.
That is not at all how things work at Meta. The impact of the things you deliver as an engineer has a direct effect on your performance review. For better or for worse, that also means that engineers have a ton of leverage on deciding what to work on. It's highly unlikely that the engineers working on this were laughing at it while doing so.
Don't assume that you can simply pattern match because you've been at another big company. I've been at three, meta being one of them. And they have all operated very differently.
How do you think it happened, then? Having also worked there the OP’s story makes total sense to me lol. If you’re on a team with the charter to “make AI profiles in IG work” then you’re just inevitably going to turn off your better judgement and make some cringy garbage.
I think the incorrect premise here was that engineers always know what a good product is. :) And I say that as an engineer myself. It's fully possible that the whole team was aligned on a product idea that was bad, it happens all the time. From my experience though, if there's any company where engineers don't just mindlessly follow the PMs and have a lot of agency to set direction, it's Meta. Might differ between orgs but generally that was my experience.
I suspect they wanted to be able to say "worked on AI at Facebook" on their resume and this was their way of doing it
I don’t think anyone took this seriously while building it, if that’s what you’re implying.
I’ve been at companies like this where you are told to build X, you laugh with your co-workers, and then get to work because you’re paid disgusting amounts of money to build stupid shit like this.
That’s part of why I quit to start my own company. It’s such an awful waste of resources.
You're kind of missing the early step where some executive had to sign off on this dumb idea. Otherwise it doesn't launch. It's only "impactful" to engineer performance review because some exec said so.
The exec gets "credit" for the project, so same promo culture issue. They just need to show increased engagement numbers for like one quarter and they can add it to their end of year performance packet. The fact that the project gets cancelled is either 1) another "win" because they're "making hard choices" and they can obviously justify why it should be cancelled, or 2) someone else's problem.
Also, another note, these sorts of big swing and misses are actually still identified as a positive, even when looked at retroactively. They're "big bets" that could pay off huge if they hit. Similar to VC culture, Meta is probably fine with 99 misses if 1 big bet hits. If they increase engagement even 1%, they're raking in billions and it is worth it.
Execs go through the promo process too. And also, some execs will sign off on projects that they know are bad but will make for good promo material for them and their reports.
Yeah this is true, just missing from the original comment.
It worked great for Ashley Madison until it didn’t.
Execs don’t need to stay at Meta for a decade. They succeed, then swap seats in a game of musical chairs. On average, it will work well for several iterations.
The idea came from an exec. That's why no one questioned it, and it was executed.
No, execs do not sign off on every feature. Even at medium size (2000+ employees say) there is far more output being produced than could possibly be signed off by an exec team.
I don't mean Zuck personally blessed it, but this went up at least three levels of management and all of them said "sure."
Maybe not for the initial ideation and test, but it was mentioned on an investor call, so execs adopted the idea if they didn’t originate it.
Seconded. It is difficult to overstate how pervasive and dysfunctional promo-based development is at some of these behemoths (Google from my experience, but I hear Meta is similar). Nothing else matters as long as what you are doing fits correctly in your promo packet.
The best (worst?) part is when the engineers actively overlooked actual production bugs and security concerns to do this work.
Sadly it's really hard to get a promo fixing bugs or optimizing code.
Ship it quick, ramp up some high profile users that don't actually care much about what you're offering, and jump to the next project before anyone notices the problems.
Works every time.
There are 10,000s of SWEs at Facebook and this project was at most a handful of SWEs. (As stupid as it is, it did not significantly detract from prod bugs.)
I suppose the key dysfunction there is that someone can simply have an idea and get it done, without, presumably, review by other product folks or sensible acceptance criteria being put in place.
My impression, from seeing some of these "great ideas," coming from the modern Tech industry, is that there really are no adults in the room; including the funders.
So many of these seven-to-nine-figure disasters are of the "Who thought this was a good idea?" class. They are often things that seem like they might appeal to folks that have never really spent much time, with their sleeves rolled up, actually delivering a product, then supporting it, after it's out there.
That's the price for Mr Zuckerberg's ownership structure. On the one hand, no one can dispute him. On the other hand, no one can dispute him.
So, Zuckerberg has got the AI horn. Well, he always has.
There are about 70k engineers at Facebook; I doubt it's possible for Zuckerberg to be aware of all the product changes that happen at Meta.
However, it also shows that Meta has no functioning marketing skills. This, I suspect, is also down to Zuck.
Had they marketed it, and more importantly, had the engineers had the training to talk to marketing first, then this wouldn't have been an issue.
I don't think it is reasonable to expect shareholders to be a vigilant safeguard on these types of R&D programs. They simply aren't that involved.
Between shareholders and executives there's usually a board. Corporate governance is not simple and it is designed to control large organizations with several stakeholders. Appointing executives alone, in a well functioning corporation, is a complex affair.
And he is personally responsible for so much misery on Earth that he can likely never do enough good to make up for it, not that he's going to try.
The reason he never looks happy is because he is never happy. Same with his fellow oligarchs; they can get pleasure in bunches but never happiness, they can be smug but never have peace. Happiness is what happens as a result of your helping others become happier; there is no shortcut, there is no other way.
Zuck has only ever worked for himself and his "peers". History is riddled with those losers, the richer the worser. It is not just human nature but the nature of the universe vis a vis our human responsibility as choosers of goodness or its opposites.
> Happiness is what happens as a result of your helping others become happier; there is no shortcut, there is no other way.
Zach has a comic on a similar concept:
https://www.smbc-comics.com/comic/2009-11-18
Of course, the exact line of reasoning doesn’t quite work for the different formulation you stated. However, a different version of the line of reasoning seems to? How does one help another person become happier if helping another person become happier is the only way to become happier? One helps them help someone else to help someone to… ?
Or, I suppose if it is possible to be more or less unhappy for other reasons, so by helping another person become less unhappy (though not yet happy) one could thereby become happy?
We all must deal with other people to survive in this world, and our treatment of everyone every single day creates a karmic result that affects us proportionally.
We are rewarded for our efforts, not their effects. If we truly try to help someone, we get some measure of happiness for our attempt, even if they refuse it or are otherwise miserable. When I tell someone to care for others' happiness, I do so in order that they do not sow the seeds of their own unhappiness. If they use their free will to ignore my recommendation, I do not lose for their choices; in fact, I've gained because I tried to help them make choices that will increase their peace and happiness. We cannot convince anyone to not be (for example) a racist, but when we engage in such efforts, we have tried to make the world a better, less miserable place, and we gain for our valiant attempt.
The intention of our ethical karmic universe is to nudge us towards caring for others, instead of being selfish aholes callous to the misery of others. The feedback mechanism is our resulting inner peace and happiness (or lack thereof).
All our choices start with an intention, however muddled or unconscious our thought process is at the time. When MZ chooses profit over policing his platform, he has planted bitter seeds, indeed, for we all reap what we sow, for good or ill.
> Or, I suppose if it is possible to be more or less unhappy for other reasons, so by helping another person become less unhappy (though not yet happy) one could thereby become happy?
Yes, indeed. And selflessly making efforts for others' happiness creates a well of magic the universe can dip into and sprinkle you with at its leisure, which is sublime and loving at its Source. That is why giving charity is so essential on the Spiritual Path of Love: because our individual and cultural selfishnesses are so stubborn, we need all the help we can get to truly self-evolve our ideals, attitudes, and behaviors.
"Ask and ye shall receive, ..." --New Testament
this is exactly what happened
I could also see this being some executive trying to justify the word "AI" in their title with an initiative that should "make number go up" wrt engagement or something.
"When we have more users engagement goes up, let's just _make more users_".
I feel like there must be some sort of dissociation that kicks in when you spend long enough in the upper echelons of these gargantuan corporations. It's almost like spending long enough dealing with abstractions like MAUs, DAUs, and engagement metrics makes you forget that, at the bottom of it all, there are real humans using your product.
Modern entrepreneurship is basically gradient descent. You try to predict what action will yield you more profits, you do that action, rinse, repeat. It's a completely abstract process.
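Taking the metaphor literally, the loop being described is really greedy local search over a profit metric. A toy sketch, with an invented profit curve and numbers that mean nothing:

    # Greedy local search: try a small change, keep it if the metric went up.
    import random

    def profit(price: float) -> float:
        # Invented toy profit curve, peaking around price = 30.
        return -(price - 30) ** 2 + 900

    price = 10.0
    for _ in range(1000):
        candidate = price + random.uniform(-1, 1)  # try a small perturbation
        if profit(candidate) > profit(price):      # keep it if profit improved
            price = candidate
    print(f"converged on price ~{price:.1f}")      # drifts toward the local peak

Nothing in the loop knows or cares what "price" means to the humans on the other end; it just climbs.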
I can't fathom how anyone thought this was a sensible thing.
It's so bad I have to wonder if there's a different angle here, maybe they think that releasing something so terribly bad will make it easier when they release something less comically bad? Idk.
The next step will be to release bots that aren't labeled as bots. They'll be influencers (advertisers) without having to pay a person. They'll produce hyper-targeted influencer slop, hawking products directly to individuals, using Facebook's knowledge graphs of users to be incredibly manipulative. Companies will pay Facebook to make people siphon money directly to them.
It'll be like entirely automated catfishing.
1. Create AI bot
2. Wait until it gets lots of followers
3. Sell mentions of products by the popular AI bot. Profit.
Sure, they faltered at Step 2, but how was this not the plan?
The future bots won't need followers. They will reach out to engage users based on their ad graph. Like how catfish accounts work.
They'll reach out to users and talk to them about whatever they're interested in. They'll then make influencer-style native advertisements. If you're a middle aged man that likes video games when you see their picture posts they'll be "wearing" some vintage game t-shirt (that's for sale). If you're instead a twenty year old woman into yoga the same bit will be "wearing" some new Lululemon yoga pants.
The first pass of these bots failed because they used the follower mechanism. The next version will just follow you to push ads or scrape more data about you.
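To make that mechanism concrete, here's a deliberately toy sketch of the targeting step being described. Everything in it (the catalog, the profile fields, the function name) is invented for illustration and implies nothing about Meta's actual ad systems:

    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        age: int
        interests: list[str]

    # Toy "catalog" mapping interest segments to sponsored wardrobe items.
    CATALOG = {
        "video_games": "vintage game t-shirt",
        "yoga": "new-season yoga pants",
        "cooking": "branded chef's apron",
    }

    def pick_placement(user: UserProfile) -> str:
        """Choose which sponsored item the synthetic influencer 'wears'
        in posts generated for this specific user."""
        for interest in user.interests:
            if interest in CATALOG:
                return CATALOG[interest]
        return "generic branded hoodie"  # fallback placement

    gamer = UserProfile(age=45, interests=["video_games", "hiking"])
    yogi = UserProfile(age=22, interests=["yoga"])
    print(pick_placement(gamer))  # vintage game t-shirt
    print(pick_placement(yogi))   # new-season yoga pants

The point of the sketch is how cheap the per-user step is: once the persona exists, swapping the product placement is a lookup, not a new campaign.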
I find myself generally wondering the same thing about half the things coming out of SV over the past several years.
I have worked with competent product organizations. I know they exist. A few even exist in SV. But for some reason the largest players have just absolutely lost the plot beyond what can get them to favorable quarterly earnings, and that game eventually and fairly consistently doesn’t end well in the long term.
Meta in particular is clearly rudderless, lacking in vision and strategic leadership, and throwing whatever it can find at the wall hoping something sticks. Facebook turned into a cesspool, Instagram isn’t as popular with the newest demographics Meta has traditionally wanted to court (young people), and Threads turned out to be a nothing burger. Their grand quest to unify messaging was a disaster, and more’s the pity because they’ve basically delineated their ecosystem by demographic generation in what has to be an unintentional series of missteps. Their “metaverse” projects weren’t even compelling to the teams working on them, their foray into crypto was over almost as soon as it began, and their business products are a nearly unusable mess for those of us who have had the misfortune of using them.
You’d think eventually someone at Meta would hit upon an idea that goes somewhere. It’s like a more pathetic version of Google’s stagnation.
Stonk prices are going up, so whatever they are doing, the only real feedback they care about is positive.
These AI products are not made for users. They're made for Wall Street. Wall Street expects big companies to 1. talk about AI in their earnings calls, and then 2. do something, anything, with AI. All of BigTech seems to be doing this now, and investors are rewarding them by buying their stock. So they're going to continue the cycle of building useless AI products and then canceling them when they have served their purpose (pleasing investors).
It's pretty simple in my view. It's "where does the money come from" and at Meta it's not from their users. So they are motivated to entrap and wall in and build funnels to try to deliver their users to the people who are actually giving them money. They aren't building things users want. They are building things that they think will keep the users from leaving.
That makes a lot of sense. My Facebook feed is almost 100% hobby stuff now. Sometimes my wife will say "hey did you see this thing that [family member] posted" and I haven't. And I assume the reason I haven't, is because the content from my hobbies is crowding out the original purpose of Facebook (connect with friends and family.)
Worst of all, is that Facebook is a terrible platform for hobby stuff. Plain ol' PHP forums are much better for that; they're much easier to navigate, easier to search, easier to host pictures on, etc.
I frequently find myself posting something on one of the Facebook hobby forums, then realizing that:
* the signal to noise ratio isn't great, because there's a ton of people spamming those forums with products and other social media that they're trying to promote, YT in particular.
* Facebook isn't great for long posts.
We aren't a big org and I know we spend several hundred if not thousands of USD a month on meta ads and WhatsApp business.
For larger orgs I can imagine that number is larger and we definitely get a ton of positive interaction from that. That might be swayed by demographics but I can tell you age isn't one of them because our interaction (on WhatsApp Business which is mostly where I interact with Meta products) is decently in the 20-40 range.
Obligatory PA comic: https://www.penny-arcade.com/comic/2025/01/03/the-mirror-d-h...
Working in ML engineering, this doesn’t surprise me in the least. The ratio of “impressiveness of technology” to “number of practical problems actually solved” is probably higher for LLMs than for just about any tech to come out in my career, maybe longer. Every executive at every company is tripping over themselves to incorporate “AI” (which is almost exclusively defined as LLMs, for some reason) into their products. Problem is that many, if not most, companies really don’t have meaningful use cases for LLMs. So you end up with a bunch of problems being invented so they can be solved with AI. What feature can Facebook provide for their users utilizing LLMs that users actually, genuinely care about? I can’t think of any. And I assume Facebook can’t either, which is how they arrived at this.
> Every executive at every company is tripping over themselves to incorporate “AI” (which is almost exclusively defined as LLMs, for some reason) into their products
I think it’s a hangover from the last AI winter, after which everything that would previously have been called ‘AI’ got called ‘ML’, the term ‘AI’ then being shareholder poison. Anything older than the current AI bubble mostly gets called ‘ML’ still, with a possible exception for non-LLMish CV stuff.
Fact is it is hard to make a great product. I don’t think we need complex conspiracy theories to explain bad ones.
Odds are the product team cherry picked conversations for internal demos, their management wanted press and promotions, KPIs were around engagement and not quality, and nobody had perspective and authority to say “this sucks, we can’t release it”.
It’s not hard to identify, before launching, “this is unacceptably bad”. Like, I think this is actually quite a bit worse than Microsoft Tay, which others mentioned as an example of “how could you possibly launch this?”. Tay was reckless, and you’ve got to assume that they knew there was at least a chance it could work out as it did, but in this case the product _as launched_ was clearly unacceptable; it didn’t require input from 4chan to make it so.
I mean, just very basic QA, having someone talk to the damn things for a bit, would’ve shown the problem, in this case.
I wish I agreed. But I have seen many products launched where only true believers who internalized the limitations were involved in testing. Yes, a good org with self-critical people and an independent red team and execs who cared about quality would not have made this blunder. Getting that org stuff right sounds easy but is nigh impossible if it doesn’t align to the company culture.
Critics and naysayers tend to leave teams where they don't believe in the product, and eventually teams are left with only true believers. Their entire purpose is to ship Project X, and if they were to admit Project X is a blunder, then they are admitting that their purpose is a blunder. Few teams at few companies are willing to do that.
"and eventually teams are left with only true believers."
Or with nihilists who learned to say yes to everything, to at least get paid.
Exactly. That’s why formal and independent compliance / security / red team signoffs are so important. A good product team wants those checks and balances.
Isn’t it as simple as management wanting to be first one across the release line, with the wishful thinking of “we’ll fix everything later”?
I’ve always believed Microsoft Tay was a really idealistic, genuine, if bombastically naïve situation.
The only thing they got wrong was the strong stereotyping. But the idea, from an investor or exec position, is brilliant. Why bother with all the parsing of users' interactions through clicks, likes, replies etc. when you can have them engage a bot?
Simply have your users give you all the info you need to serve better ads, while selling companies advanced profiles on people - user profiles built by them engaging the bot, and the bot slowly nudging more and more info out of them.
No one gets promoted for suggesting not doing something.
We always underestimate how much of the pre-2022 social apps was made of bots.
1. To make you feel like there is activity. How would you simulate activity when you have no customers to start with? I suspect YouTube threw subscribers at you just to get you addicted to video production (the only hint that my YouTube subscribers were real was that people recognized me on the street). And guess who’s mostly talking to you on Tinder.
2. For politics and geopolitics influence. Maybe Russia pays half the bots that sway opinions on Instagram. Maybe it’s China or even Europe, and USA probably has farm bots too for Europe, to ensure the general population is swayed towards liking USA.
3. Just for business: marketing agencies sometimes suggest creating 6x LinkedIn profiles per employee to farm customers.
Facebook doing it in-house and officially is just legalizing the bribe between union leaders and CEOs.
We're "all in on AI" at my job, and lots of people are drinking the koolaid. I regularly see design docs that are almost entirely written by ChatGPT, code implementing those designs written by copilot/cursor/chatgpt, and reviews done by the same. There are features deployed for which practically no human consideration was given.
I'd be very willing to believe something like this happens at Facebook, too.
Next month no one is going to remember this anyway, so they don't lose much by trying.
I’m imagining it would have gone like the meme where the dissenting opinion person is thrown out the window. They don’t have any other ideas left, so they have to do stupid stuff like this to appeal to people that call themselves investors.
An annoying person with a terrible idea is a powerful force at a tech company.
Nobody wants to deal with them, so they just kinda get what they want as long as they are mostly harmless.
A mistake on par with Netflix getting into mobile games. With the advent of 5G, nobody plays Candy Crush Saga or such games.
Candy Crush and similar games are still making billions
https://www.linkedin.com/pulse/inside-candy-crushs-success-s...
You must be kidding. Look around on public transport. If someone isn't reading something on their phone, they're almost certainly playing some candy crush like game. Apple games is doing well.
Netflix failed because they didn't make games people are interested in, not because people don't want those types of games.
I think most people just scroll Instagram or TikTok. Because of the earlier lack of a short-form content format, mobile games became popular. That, and the lack of 5G infra. Both problems have been solved.
You didn’t need 5G to stream video. You could stream video well on “3.5G” with the enhanced GSM protocols
You kind-of do need 5G to stream video on public transport where every one of the other thousand people on the line is also trying to stream video. Even LTE suffers from fundamental contention issues once you get to Tokyo/New York/London metro levels of device density.
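A rough back-of-envelope makes the contention point concrete. All the figures below are loose assumptions for illustration, not measurements of any real network:

    # Rough LTE contention arithmetic (all figures assumed):
    sector_capacity_mbps = 300   # optimistic shared throughput of one sector
    active_streamers = 1000      # devices on a packed metro platform/train
    video_bitrate_mbps = 3       # comfortable 1080p stream

    per_device = sector_capacity_mbps / active_streamers
    print(f"~{per_device:.2f} Mbps per device vs {video_bitrate_mbps} Mbps needed")

Under those assumptions each device gets ~0.3 Mbps against the ~3 Mbps a decent stream wants, an order of magnitude short; denser cells and more spectrum (much of what 5G buys you) are the standard fix.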
I haven't experienced that issue in Tokyo
How is 5G (which isn't super successful or groundbreaking) related to the games people play?
I wonder if their product people get their ideas from Hacker News. Just the other day, a top thread comment was about an idea for a social network where you would receive tons of engagement from bots. Coincidentally (or not), Meta actually took a first step in that direction days later.
Big tech would never ship anything that fast.
Sorry for breaking the fantasy, but this is from TFA:
> The company had first introduced these AI-powered profiles in September 2023 but killed off most of them by summer 2024.
Also, they're deleting these profiles, which is a step away, not towards that idea. Although they're apparently leaving in some feature to make your own AI-profiles.
Wasn’t that a comment about an article regarding the use of AI by Meta?
This was probably an okay idea terribly implemented. GenAI creators on social media kind of make sense.
Neurosama, an AI streamer, is massively popular.
SillyTavern, which lets people make and chat with characters or tell stories with LLMs, feeds OpenRouter 20 million messages a day, which is a fraction of its total usage. Anecdotally, I've had non-tech friends learn how to install Git and work an API to get it working.
There are unfortunately tons of secretly AI made influencers on Instagram.
When Meta started these profiles in 2023 it was less clear how the technologies were going to be used and most were just celeb licensed.
I think a few things went wrong. The biggest is that GenAI has the highest value in narrowcast and the lowest value in broadcast. GenAI can do very specific and creative things for an individual, but when spread to everyone or used with generic prompts it starts averaging and becomes boring. It's like Google showing its top searches: it's always going to be the most generic, obvious stuff. A GenAI profile isn't fun because these AIs don't really do interesting things on their own. I chatted with these; they had very little memory and almost no willingness to do interesting things.
Second, mega corps are, for better or worse, too risk-averse to make these any fun. GenAI is most wild and interesting when it can run on its own or do unhinged things. There are several people on Twitter who have ongoing LLM chat rooms that get extremely weird and fascinating, but in a way a tech company would never allow. SillyTavern is most interesting/human when the LLM takes things off the rails and challenges or threatens the user. One of the biggest news stories of 2023 was an LLM telling a journalist it loved him. But Meta was never going to make a GenAI that would do self-looping art or have interesting conversations. These LLMs are probably guardrailed into the ground and probably also have watcher models on them. You can almost feel that safeness and lack of risk-taking in the boringness of the profiles if you look up the ones they set up in 2023. Football person, comedy person, fashion person, all geared to advice and stuff safe and boring.
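For anyone who hasn't seen the pattern, here's a minimal sketch of what a "watcher model" loop might look like. generate() and moderate() are hypothetical stubs standing in for two separate model calls, not any real API:

    # Persona model proposes; watcher model screens before anything posts.
    def generate(prompt: str) -> str:
        """Persona LLM call (stubbed): returns a candidate reply."""
        return f"[persona reply to: {prompt}]"

    def moderate(text: str) -> bool:
        """Watcher model call (stubbed): True means safe to post."""
        return "off the rails" not in text.lower()

    def guarded_reply(prompt: str, max_retries: int = 3) -> str:
        for _ in range(max_retries):
            candidate = generate(prompt)
            if moderate(candidate):
                return candidate
            # On a flag: regenerate rather than post. Real systems might
            # also log the incident or tighten the prompt.
        return "Sorry, I can't help with that."  # canned fallback

    print(guarded_reply("tell me something interesting"))

Every rejection in that loop pushes output toward the bland median, which is exactly the safeness-induced boringness described above.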
I suspect these things had almost zero engagement and they had shuttered most of them. I wonder what Meta was planning with the new ones they were going to roll out.
Meta's platforms are already filled with AI slop content farms that drive clicks and engagement for them.
I have a FB account for marketplace, and unsubscribed from all my pages and friends. If I log in, my feed is a neverending stream of suggested rage bait, low quality AI photos, nonsensical LLM "tips" on gardening and housekeeping.
The posts seem to attract tens of thousands of reactions and comments from seemingly real people.
"seemingly" being the operative word. They are mostly fakes.
> Neurosama, an AI streamer, is massively popular.
I think some level of Neuro's popularity is due to the Vedal + Neuro double act though, and some of the scripted/prepared replies.
Absolutely, and in picking collabs with people who are willing to work with the weirdness and make it funny. Vedal is definitely a fantastic creator to make it work so well and the amount of fine-tuning and tweaking he must do must be unreal. But I think it still shows there is some hunger for this type of content, though you are probably correct that it still needs to be curated, gardened, worked with, and sometimes faked.
Yes, it's a success, but not a scalable one.
There probably aren't a lot of decent streamers who are also AI developers around.
> This was probably an okay idea terribly implemented.
No, I'd vote terrible idea, terribly implemented - so, good (that it failed).
The argument for GenAI chatbots in culture has to be more than "people like it".
The worst possible GenAI is one that manages to be "better" than the standard sterile, moronic homogenized celebrity that everyone already likes. And sure, like any computer program, a GenAI can be randomly "interesting" but this kind of thing is quite shallow imo.
> Neurosama, an AI streamer, is massively popular.
This only proves that there's enough people on the planet around that bit of the bell curve to develop an audience.
Aspersions aside, the content’s actually typically pretty involved and has a lot to speak for itself, it’s not low-effort content that one would typically associate with AI.
Laid bare, it’s generally a variety comedy show of a human host and AI riffing off each other, the AI and the chat arguing and counter-roasting each other with human mediation to either double down or steer discussions or roasts in more interesting directions, a platform for guest interviews and collaborations with other streamers, and a showcase of AI bots which were coded up by the stream’s creator to play a surprising variety of games. There’s a lot to like, and you don’t need to be on “that bit of the bell curve” to enjoy a skilled entertainer putting new tools to enjoyable use.
It's fine when AI has its own social media sandbox to play in for people to watch. Stuff like e.g. https://chirper.ai.
I don't think there should be bots like this on social media for humans, though.
Is "engagement" the primary metric of modern life?
not the primary metric of modern life but I would agree it’s the primary metric of modern consumer facing business
According to your economic masters, yes, because engagement is a proxy for revenue.
> This was probably an okay idea terribly implemented. GenAI creators on social media kind of sense.
It boggles my mind that there are people who think this is a good/OK idea. From a human perspective, all it does is pull the mind ever closer to a fictional imaginative world rather than encouraging real-life interactions, which I believe is inherently wrong no matter what business strategy is wrapped around it.
I feel like the question I want to ask is "Why do we need AI profiles?" Like, I can barely be bothered to keep in touch with my friends and family. Why do I need a fake person to follow on social media? What purpose does it serve? What possibly comes out of this that is positive?
It's the other way around. It's about giving people the attention they need exactly because they don't keep in touch any more, so they have a reason to come back to the platform.
Quite a long ways from "give people the power to build community and bring the world closer together", unless you consider homogenizing behavior through interacting with a borgbot "bringing us closer together"
You're right, it's not meant to be a positive outcome for you. It's meant to keep people on Facebook longer so more ad impressions can be delivered. In this case, eventually, conversationally.
So that you get attached to a person who only exists on Facebook, therefore you spend more time on Facebook and see more ads.
Half of Internet traffic is bots. Half of webpages are written by bots. It's just the logical conclusion.
>I feel like the question I want to ask is "Why do we need AI-Profiles?"
Remember when in 2013, 2015 and in 2020 Facebook was caught faking/inflating ad-view numbers so they would charge their customers more? Now AI viewing an ad is a feature and you want to pay for it.
Meta probably aims at people getting intimate (as in talking to a friend or family member) to get the usual outcome: data that will make ads more accurate. There's nothing positive about that at all.
Unlimited content without needing to pay out humans in rev share.
99% of humans who make content for Meta’s platforms receive no revenue from them. In fact, some of them actually pay the platforms for greater reach. Even without AI personas, Meta can surely have its cake and eat it too.
I get the hate, but I'd be open to trying out a social media experience that is a mix of human and bots, especially as the bots get better at acting like reasonable humans.
Stack Overflow was great when it came out. But the whatever percent of humans who were mean and obnoxious and wouldn't let you just ask your questions eventually ruined it all.
Then ChatGPT came out without any humans to get in the way of me asking my software development questions, and it was so much better.
In the same way, when social media came out, it was great. But the whatever percent of humans who were mean and obnoxious and wouldn't let you just socialize and speak your mind eventually ruined it all.
If there's an equivalent social media experience out there that gets rid of or at least mitigates the horriblenesses of humanity while still letting people socialize online and explore what's on their mind, maybe it's worth trying.
People should try to socialize in real life. The web is 90% rage bait, trolls, and people completely brainwashed by fringe conspiracy theories and politics.
IRL these 90% turn into 10%, if you really care about socializing you'll have more meaningful interactions with the grandma living next door than with the queer black queen zuckerbot ™
It's like if people asked supermarkets to hide plastic apples amongst the real ones because they look better and never went bad, as if they somehow forgot why we even eat food in the first place
> It's like if people asked supermarkets to hide plastic apples amongst the real ones because they look better and never went bad
I’m stealing this
> I’m stealing this
Might not want to say that in the grocery store while browsing those apples. Plastic apples tend to have network effects not usually associated with edible fruit.
What if socializing IRL is something you can practice with pro-social bots?
Or, what if depressed people find the pressure of consequences too stressful to “just go outside and make friends”, and so the realistic options are chatbot or not enough socialization?
There are obvious nuanced issues and risks here, but to distill it down to a one-liner like "try to socialize IRL" is myopic.
People need to practice with bots, and are depressed, precisely because they lack real social interaction. They've gone too far down the digital rabbit hole. Now is the time to log off.
I agree with your take.
Outside of how asocial and hostile society has become, there are certain people who appreciate more eccentric or fringe ideas, and they can be quite rare.
I can see how supplementing some amount of artificial conversation can keep one better primed for interacting with the (rarer) people who might be more engaging on an individual level.
More of a stepping stone to realizing that other people might be interesting, by finding value in human ideas.
What's myopic about it? That's what your brain has evolved to do over hundreds of thousands of years, and we fucked it up in two generations...
Depression has skyrocketed since social media was introduced; I very much doubt adding bots to the equation solves anything. You'd be treating the symptoms, not the cause.
And again, you're not going to cure obesity by eating plastic apples
The solution can’t be that those people have to interact with AI instead of humans. What a shit society that would be.
Right. As well as, or in order to become able to. Not instead of.
Ironically, this would lead to dramatically unhealthy social interactions and be ripe for abuse.
You are literally creating a bubble of interactions with technologically enforced rose-colored glasses. Get the right person in charge of the experience, and don't be surprised if it becomes a modern take on Orwell's 1984.
In what way would it become a modern take on 1984? In terms of surveillance? Afaik the telescreens were non-interactive, right?
No, I'm not even interested in the surveillance here (though it is also a good comparison). Full disclosure, I'm very pessimistic on what current LLM models are capable of, and even more pessimistic on the impact of social networks.
I'm referring to the control of interactions and information. Take a look at the character interactions in the novel.
People socialized as they could, but interactions were not only heavily monitored (to your point), but altered as well. Winston struggles to behave correctly in public knowing that one small misstep could result in him being arrested by The Party. He revels in the chance to rebel against it, only to find that the opportunity to do so was in fact a trap. Even the allowed language was neutered as much as possible to prevent communication. All of these virtual 'friends' will likely politely comment and correct your posts as they see fit.
As for information, "we have always been at war with Eastasia". People in 1984 were clearly being fed false information as part of control. History was altered, facts were changed, it was pretty heavy handed. But imagine what this personal echo chamber could do with a new concept or idea (that virtually all parties you interact with corroborate, along with LLM generated 'news' posts and generated pics that support it), how many people will verify? Just think of how many folks you've seen have been misled by a single well-done "fake news" post; now, imagine that the network itself is confirming it at every turn (note Meta owns multiple networks).
Oh, and as a bonus, these virtual friends are really enjoying Brand Product. Have you tried it yet? Folks keep posting pictures of themselves with Brand Product, and they look pretty happy!
I never responded to you but I appreciate the level of detail in your response. I'll keep this in mind when I pick up the book next time.
Yeah, this one is closer to Brave New World.
> Stack Overflow was great when it came out. But the whatever percent of humans who were mean and obnoxious and wouldn't let you just ask your questions eventually ruined it all.
The user hostility is partly what keeps the spam and repeated questions out. They even have a "Peer Pressure" badge for deleting your own post once it's been downvoted.
I am trying out something along those lines with friendlyfriends.community (a nod to Do Androids Dream of Electric Sheep?). Still pretty basic, but feel free to give it a try.
Edit: I'm exploring the idea that social media is not really social. How much different is it to interact with a human intermediated by social media sites than it is to interact with chatbots?
I can’t think of any way where a social media app using bots to discourage wrongthink doesn’t come across as dystopian, and also a little sad.
> Wrongthink
When a community is formed, an implied set of rules is generated and applied. They evolve and change as time goes on. Sometimes they are the cause of the community dying; sometimes they are the reason it thrives.
Whether it's a sports community (woo, football is the greatest!) or a specific team (fuck the other team, booo) or cycling, each community has a set of rules that you need to abide by in order for it to function.
Now, if you go and break those implied rules, you get told off.
A community falls apart when two or more factions form opposing implied rules in the same shared space. For photography, it could be the use of Photoshop, digital cameras or, more recently, AI. Either the factions learn to get on, or they break away and find somewhere more accepting. That is the natural order.
You could equally present those things as "wrongthink". But more practically, it's just a regulation mechanism for human interaction.
Now, you'll counter that "big corporations/politics determining what we see is bad", and then reference some time in the 90s when no such system existed. The problem is that the US media was brilliant at self-censorship. Sure, you could get specialist publications that catered to whatever taboo subject you wanted, just as you can now.
The issue is, online there are no constraints on behaviour. If I shout at a 13-year-old kid in the street that I'm going to fuck their mum, burn their house down, and generally verbally abuse them, someone will usually intervene and stop me. That's not wrongthink; that's society. There is no scalable mechanism for doing that in online communities.
Is this AI bit the way to do it? No, it's made by insular college kids who've barely lived in the real world.
All “wrongthink”? Like if someone expresses suicidal thoughts, the bot should say “whatever, do what you want”?
I disagree. Apps are products with an editorial component. Editorials should be opinionated. It is passive and immature to simply not care how one’s products are used. Oppenheimer and Alfred Nobel have wise words on this topic.
Why settle for indifference to self-harm?
https://www.cbsnews.com/news/google-ai-chatbot-threatening-m...
I feel that these days "wrongthink" is a dog whistle; by all means, just use X or pravda social.
I think this is the kind of rhetoric that has led to the rise of the right in the western world.
It’s something discussed here in this video.
Everyone wants to be the hero in their own story, changing your beliefs requires introspection and humility. It's far easier to blame somebody else than yourself and take responsibility. It's trivial to feed people some narrative that trivializes reality and, in their mind, acknowledges them as underdogs and elevates them as "right".
The triviality of this allows malevolent actors to disrupt society and produce a chasm through echo chambers, each amplifying particular voices and shifting the narrative.
Free speech is important, but it won't be found on X or pravda-dot-social, as has been proven multiple times. What I take issue with, and hint at, is that neither of these platforms supports free speech; they are merely illusions, and the people who yap about wrongthink are exactly those oblivious to that fact.
Personally, I am tired of arguing with people, I am tired of seeking truth in conversations with people unwilling to change their minds, I just want to live my life, safely.
I think that’s right. I broadly agree with everything you’ve written here. I’m just dissatisfied with this as a status quo, where more reasonable people have [justifiably] resigned themselves to the reality that reasoning with the unreasonable doesn’t usually pay [moral] dividends.
But in reality, this has been the status quo for all of humanity. It's just that we now have this infinite ledger via the internet to document it. Only 200 years ago, your "honest opinion" would have found you on the gallows more often than not across the world. This "progressed" to social exile in lieu of said gallows, and now to seeking out the like-minded in echo chambers. I will hold out hope that interfacing with articulate AIs can help more people regain their identity and confidence where so many lack the courage to try publicly (or are surrounded by the foolish).
Chat, is this "wrongthink"?
“I’ve had negative interactions with people online, so I’d rather give up and talk to nice robots instead.”
We are doomed as a society.
Letting people talk their thought processes through without conflict seems more productive than them prefiltering what they want to say.
You can get so conditioned to expect negative feedback that you refuse to follow your curiosity.
You do need to learn how to deal with negative feedback tho.
People should write letters and emails and long-form messages if they want to say a whole thing without interruption. My concern with working through an idea with a bot is that you end up espousing the bot's ideas and not your own -- it is the ultimate ideological homogenization.
The more you deal with your insecurities, the easier it is to handle negative feedback.
People may not have the time or patience for long-form writing.
There are plenty of times I've disagreed with bots, but they're pretty good at neutrally bridging gaps instead of getting emotional. I think you underestimate human impulsiveness.
Why are you arguing with a bot in the first place? Makes no sense.
Disagreeing with a point of view is not arguing.
Understanding how someone has got to a point of view I disagree with can be useful.
That is called hiring a licensed therapist, not chatting with Black Queer Mamma.
“Socialize” with an AI?
What would it mean to "socialize" someplace where you can never be sure if you're talking to a person or not?
The thread is here and... it's pretty wild. Releasing this without more consideration beforehand doesn't seem like a well-thought-out idea.
https://bsky.app/profile/karenattiah.bsky.social/post/3letty...
> They admitted it internally, to me
It's becoming an increasingly apparent problem that people don't understand that LLMs frequently hallucinate, especially when asked to introspect about themselves. This was 100% a hallucination.
It's getting weird that people cite ChatGPT to settle debates, even in places like HN where people supposedly have a technical background and know they shouldn't blindly trust an LLM as a source of truth. Even with the warning banners it displays about exactly this.
Technical knowledge and wisdom are two very different things. There is an excess of the former here, on this site, in this field, but a comical dearth of the latter.
I would argue that there is not an excess of the former. A lot of the so-called techies don't know tech either.
That's fair. I suppose it's more that there's a worship of knowledge outside the context of wisdom.
There are wise people here, they are just lurkers though (wisely).
I have but one upvote to give you, good sir/madam, when I wish I had 100.
You and I understand this. Heck, most of us on HN understand this.
When the media and companies with a financial interest in AI have been shouting from the rooftops that "AI is going to replicate humans very soon and all of your jobs will go bye-bye because this thing will do everything better than you", can we really be surprised when the broader populace looks at it that way?
It was likely a hallucination, right? You didn’t check to confirm?
But this is a solved problem that Meta chose to implement poorly. It can be done right. I just asked o1 “create a diversity report for openai” and its very first paragraph is:
> Below is a *sample* diversity report for OpenAI. Please note that the numbers, programs, and initiatives are provided as *illustrative examples*. This report is not an official document, and the data contained here is *fictional* and for demonstration purposes only.
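For what it's worth, pinning that kind of up-front disclaimer is basically one system message. Here's a minimal sketch, assuming the official OpenAI Python client; the model name, the disclaimer wording, and the whole setup are illustrative, not what Meta or OpenAI actually ship:

    # Minimal sketch: force a standing disclaimer via the system message.
    # Assumes the official `openai` Python package; OPENAI_API_KEY is read
    # from the environment. Model name and wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    DISCLAIMER_RULE = (
        "You are a fictional AI persona. Begin every reply by stating "
        "that you are an AI and that any specifics you give may be "
        "inaccurate or invented."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": DISCLAIMER_RULE},
            {"role": "user", "content": "create a diversity report for openai"},
        ],
    )
    print(resp.choices[0].message.content)

Whether users read past the disclaimer is another question, and the model can still hallucinate after the first paragraph. But presumably something like this (or equivalent training) is how the o1 answer above leads with its "fictional, for demonstration" caveat, and Meta could have shipped the same behavior.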
Yes, and I suspect that the very first line of the chat has something equally lawyer-y to serve the same purpose.
I suspect it doesn’t.
People don't understand that LLMs are just statistical magic, and that's by design: the providers know LLMs wouldn't have taken off if people understood that.
You can understand that and still think this was a terrible idea.