It’s notable that there were ShinyHunters members arrested by the FBI a few years ago. I was in prison with Sebastian Raoult, one of them. We talked quite a bit.
The level of persistence these guys went through to phish at scale is astounding—which is how they gained most of their access. They’d otherwise look up API endpoints on GitHub and see if there were any leaked keys (he wasn’t fond of GitHub's automated scanner).
https://www.justice.gov/usao-wdwa/pr/member-notorious-intern...
Generally speaking, humans are more often than not the weakest link in the chain when it comes to cybersecurity, so the fact that most of their access comes from social engineering isn't the least bit surprising.
They themselves are likely to some extent the victims of social engineering as well. After all, who benefits from creating exploits for online games and getting children to become script kiddies? It's easier (and probably safer) to make money off of cybercrime if your role isn't committing the crimes yourself. It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
I'm not sure this is very fair because humans are often not given the right tools to make a good decision. For example:
To gift to a 529 regardless of the financial institution, you go to some random ugift529.com site and put in a code plus all your financial info. This is considered the gold standard.
To get a payout from a class-action lawsuit that leaked your data, you must go to some other random site (usually some random domain name loosely related to the settlement, recently registered by Kroll) and enter basically more PII than was leaked in the first place.
To pay your federal taxes with a credit card, you must verify your identity with some third-party site, then go to yet another third-party site to enter your CC info.
This is insane and forces/trains people to perform actions that in many other scenarios lead to a phishing attack.
Don't forget magic links in email for auth and password resets training people that it's OK to click links in emails.
Yes, we've (the software industry) been training people to practice poor OpSec for a very long time, so it's not surprising at all that corporate cybersecurity training is largely ineffective. We violate our own rules all the time.
Has anyone invented an alternative to that yet? I could imagine emailing you a code to enter in a specific part of a site to get you to the right link, but then people could just scan all the codes. To solve that, you could make the codes long 64-bit strings, but then that's too hard to remember, so you could provide functionality to automatically include that info to get you to the site, but then that's just a link again.
Maybe if you expected everyone to copy-paste the info into the form? That might work
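For what it's worth, the copy-paste-a-code variant is easy to sketch. This is a hypothetical toy (in-memory store, invented names, not any real service's API), but it shows the two properties the thread is circling: enough entropy that scanning all codes is infeasible, and expiry plus single use so a leaked code goes stale:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 10 * 60  # codes expire after ten minutes

_pending = {}  # email -> (code, issued_at); a real service would use a datastore

def issue_code(email):
    """Generate a high-entropy code to send by email (as text, never a link)."""
    code = "-".join(secrets.token_hex(2) for _ in range(3))  # e.g. 1a2b-3c4d-5e6f
    _pending[email] = (code, time.monotonic())
    return code

def verify_code(email, submitted):
    """Check the code the user typed on the site themselves: constant-time
    comparison, expiry, and single use."""
    entry = _pending.get(email)
    if entry is None:
        return False
    code, issued_at = entry
    if time.monotonic() - issued_at > CODE_TTL_SECONDS:
        del _pending[email]
        return False
    ok = hmac.compare_digest(code, submitted.strip().lower())
    if ok:
        del _pending[email]  # a code works exactly once
    return ok
```

The user still has to navigate to the site on their own, which is the whole point: the email carries nothing clickable.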
This is the closest I've seen (pretty new): https://github.com/WICG/email-verification-protocol
There should be a way to tell you who I am without telling you who I am.
Phone/laptop based biometrics?
I think this is the way forward. We shouldn't continue relying on email (or proving ownership over an email address for that matter) as identity.
Public/private keys with a second factor (like biometrics) as identity I think is a good option. A way to announce who you are, without actually revealing your identity (or your email address).
Tbh that's how all the age verification crap should work too for the countries that want to go down that road instead of having people upload a copy of their actual ID to some random service that is 100% guaranteed going to get breached and leaked.
We need pseudonymous verification.
Reminds me of a co-founder of an adtech company I know. They are a platform that buys inventory using automated trading, mostly mobile, and they realized that most of their customers were all clickfraud / scammers / etc. He didn’t want to go into too much detail.
But he shrugged it off.
I bet there are quite a few shops online that may sell gift cards that are used in money laundering schemes. Bonus points if they accept bitcoin.
But those are all quite implicitly used by cybercrime. I can imagine there are quite a few tools at their disposal that are much more explicit.
Worked at a place that used to do a kind of arbitrage between adclicks and traditional print. A large percent of traffic, especially mobile, was obviously either toddlers or bad bots; yet we were billing our customers for the 'engagement'.
>It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
Who is making money off of selling premium software, that's not marketed as for cybercrime, to non-governmental attackers? Wouldn't the attackers just pirate it?
This type of software is being sold on many forums, both on the clearnet and darknet.
> Wouldn't the attackers just pirate it?
Sometimes the software is SaaS (yes, even crimeware is SaaS now). In other cases, it has heavy DRM. Besides that, attackers often want regular updates to avoid things like antivirus detections.
Feel like IDA Pro counts.
damn that sucks they threw you in fed prison for running a sports streaming website.
did you have bulletproof hosting and they caught you through other means like going after your payment providers or you made opsec mistakes or how exactly?
was it a website like Sportsurge where it simply linked to streams or did it actually host the streams?
> (he wasn’t fond of GitHub's automated scanner)
Do you mean they thought the scanner was effective and weren't fond of it because it disrupted their business? Or do you mean they had a low opinion of the scanner because it was ineffective?
He would complain that it disrupted their business, and that it doesn't catch all keys—it catches the big ones that he certainly found to be very valuable.
> The level of persistence these guys went through to phish at scale is astounding—which is how they gained most of their access.
explain
I love this part (no trolling from me):
I know there will be a bunch of cynics who say that an LLM or a PR crisis team wrote this post... but if they did, hats off. It is powerful and moving. This guy really falls on his sword / takes it on the chin.

> We are sorry. We regret that this incident has caused worry for our partners and people. We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators. We are fully committed to maintaining your trust.

I'll never not think of that South Park scene where they mocked BP's "We're so sorry" statement whenever I see one of those. I don't care if you're sorry or if you realize how much you betrayed your customers. Tell me how you investigated the root causes of the incident and how the results will prevent this scenario from ever happening again. Like, how many other deprecated third-party systems were identified handling a significant portion of your customer data after this hack? Who declined to allocate the necessary budget to keep systems updated? That's the only way I will even consider giving some trust back. If you really want to apologise, start handing out cash or whatever to the people you betrayed. But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
I wouldn't be so quick. Everybody gets hacked, sooner or later. Whether they'll own up to it or not is what makes the difference and I've seen far, far worse than this response by Checkout.com, it seems to be one of the better responses to such an event that I've seen to date.
> Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack?
The problem with that is that you'll never know. Because you'd have to audit each and every service provider and I think only Ebay does that. And they're not exactly a paragon of virtue either.
> Who declined to allocate the necessary budget to keep systems updated?
See: prevention paradox. Until this sinks in it will happen over and over again.
> But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
Again, yes, but: they are at least attempting to use the right words. Now they need to follow them up with the right actions.
> Everybody gets hacked, sooner or later.
Right! But wouldn't a more appropriate approach be to mitigate the damage from being hacked as much as possible in the first place? Perhaps this starts by simplifying bloated systems, reducing data collection to only what is absolutely legally necessary for KYC and financial transactions in whatever country(ies) the service operates in, hammer-testing databases for old tricks that seem to have been forgotten in a landscape of hacks of ever-increasing complexity, etc.
Maybe it's the dad in me, years of telling my son not to apologize, but to avoid the behavior that causes the problem in the first place. Bad things happen, and we all screw up from time to time; that is a fact of life. But a little forethought and consideration about the best or safest way to do a thing is a great way to shrink the blast area of any surprise bombs that go off.
> Maybe it's the dad in me, years of telling my son not to apologize, but to avoid the behavior that causes the problem in the first place.
What an odd thing to teach a child. If you've wronged someone, avoiding the behavior in future is something that'll help you, but does sweet fuck all for the person you just wronged. They still deserve an apology.
I think this approach is overcompensating for over-apologizing (or, similarly, over-thanking; both in excess are off-putting). I have a child who just says "sorry" and doesn't actually care about changing the underlying behavior.
But yes, even if you try to strike a healthy balance, there are still plenty of times when an apology is appropriate and will go a long way, for the giver and receiver, in my opinion anyway.
Sorry, I should have worded that as "stop apologizing so much, especially when you keep making the same mistake/error/disruption/etc."
I did not mean to come off as teaching my kid to never apologize.
"Sorry - this is my fault" is such an effective response, if followed up with "how do we make this right?" or "stop this from happening again?"
Not a weird thing to teach a child.
It’s 5-why’s style root cause analysis, which will build a person that causes less harm to others.
I am willing to believe that the same parent also teaches when and why it is sometimes right to apologize.
Thanks, this is where I was coming from. I suppose I could have made that more clear in my original comment. The idea behind my style of parenting is self-reflecting and our ability to analyze the impact of our choices before we make them.
But of course, apologizing when you have definitely wronged a person is important, too. I didn't mean to come off as teaching my kid to never apologize, just think before you act. But you get the idea.
Yea, plus, anyone with kids knows that a lot of them just treat "sorry" as some sort of magic spell that you casually invoke right after you mess up, and then continue on with your ways. I teach my kid to both apologize and then consider corrective action, too.
I don’t see how any of what you’re suggesting would have prevented this hack though (which involved an old storage account that hadn’t been used since 2020 getting hacked).
You don't see how preventative maintenance, such as implementing a policy to remove old accounts after N days, could have prevented this? Preventative maintenance is part of the forethought that should take place about the best or safest way to do a thing. This is something that could be easily learned by looking at problems others have had in the past.
As a controls tech, I provide a lot of documentation and teach our customers how to deploy, operate, and maintain a machine for the best possible results with the lowest risk to production or human safety. Some clients follow my instruction, some do not. Guess which ones end up getting billed most for my time after they've implemented a product we make.
Too often, we want to just do without thinking. This often causes us to overlook critical points of failure.
For the app I maintain, we have a policy of deleting inactive accounts, after a year. We delete approved signups that have not been “consummated,” after thirty days.
Even so, we still need to keep an eye out. A couple of days ago, an old account (not quite a year) started spewing connection requests to all the app users. It had been a legit account, so I have to assume it was pwned. We deleted it quickly.
A lot of our monitoring is done manually, and carefully. We have extremely strict privacy rules, and that actually makes security monitoring a bit more difficult.
These are excellent practices.
Such data is a liability, not an asset and if you dispose of it as soon as you reasonably can that's good. If this is a communications service consider saving a hash of the ID and refusing new sign ups with that same ID because if the data gets deleted then someone could re-sign up with someone else's old account. But if you keep a copy of the hash around you can check if an account has ever existed and refuse registration if that's the case.
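A minimal sketch of that idea, with hypothetical names and an in-memory set standing in for real storage. Note the trade-off: the salt has to be a single site-wide secret (a per-user salt would defeat the lookup), so the retained hashes are only as dictionary-resistant as that secret:

```python
import hashlib

SITE_SALT = b"per-deployment-secret"  # assumption: one fixed secret per deployment

_retired_hashes = set()  # a real service would persist this set

def _digest(user_id):
    """Salted, case-insensitive hash of an ID; the ID itself is never stored."""
    return hashlib.sha256(SITE_SALT + user_id.lower().encode()).hexdigest()

def delete_account(user_id):
    """Delete all account data; retain only the hash of the ID."""
    _retired_hashes.add(_digest(user_id))

def can_register(user_id):
    """Refuse sign-ups that would reuse a previously deleted ID."""
    return _digest(user_id) not in _retired_hashes
```

The point is that no reversible copy of the deleted ID survives, yet an impersonator can't re-register a retired account.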
It would violate our privacy policy.
It's important that "delete all my information" also deletes everything after the user logs in for the first time.
Also, I'm not sure that Apple would allow it. They insist that deletion remove all traces of the user. As far as I know, there's no legal mandate to retain anything, and the nature of our demographic, means that folks could be hurt badly by leaks.
So we retain as little information as possible, even if that makes it more difficult for us to administer, and destroy everything when we delete.
I think you misunderstood my comment and/or fail to properly appreciate the subtle points of what I suggest you keep.
The risk you have here is one of account re-use, and the method I'm suggesting allows you to close that hole in your armor which could in turn be used to impersonate people whose accounts have been removed at their request. This is comparable to not being able to re-use a phone number once it is returned to the pool (and these are usually re-allocated after a while because they are a scarce resource, which ordinary user ids are not).
> I think you misunderstood my comment and/or fail to properly appreciate the subtle points of what I suggest you keep.
Nah, but I understand the error. Not a big deal.
We. Just. Plain. Don't. Keep. Any. Data. Not. Immediately. Relevant. To. The. App.
Any bad actor can easily register a throwaway, and there's no way to prevent that, without storing some seriously dangerous data, so we don't even try.
It hasn't been an issue. The incident that I mentioned, is the only one we've ever had, and I nuked it in five minutes. Even if a baddie gets in, they won't be able to do much, because we store so little data. This person would have found all those connections to be next to useless, even if I hadn't stopped them.
I'm a really cynical bastard, and I have spent my entire adult life, rubbing elbows with some of the nastiest folks on Earth. I have a fairly good handle on "thinking like a baddie."
It's very important that people who may even be somewhat inimical to our community, be allowed to register accounts. It's a way of accessing extremely important resources.
> I provide a lot of documentation
> Some clients follow my instruction, some do not.
So you’re telling me you design a non-foolproof system?!? Why isn’t it fully automated to prevent any potential pitfalls?
lmao you taught your son to not apologize and if he can help it not do anything that gets him caught. maybe this is how we get politicians that never admit they were wrong and weasel out of everything
The prevention paradox only really applies when the bad event has significant costs. It seems to me that getting hacked has at worst mild consequences. Cisco for example is still doing well despite numerous embarrassing backdoors.
Well said, ideally action comes first and then these actions can be communicated.
But in the real world, you have words ie. commitment before actions and a conclusion.
Best of luck to them.
There are millions of companies, even century- or decade-old ones, without a hacking incident involving data extraction. The whole "everyone gets hacked" line is copium for a lack of security standards, or here, a lack of deprecation: keeping unmaintained systems online with legacy client data. Announcing it proudly would be concerning if I had business with them. It's not even a lack of competence... it's a lack of hygiene.
>There are millions of companies even century or decade old ones without a hacking incident with data extraction.
Name five.
The pedantic answer is to point to a bunch of shell companies without any electronic presence. However in terms of actual businesses there’s decent odds the closest dry cleaners, independent restaurant, car wash, etc has not had its data extracted by a hacking incident.
Having a minimal attack surface and not being actively targeted is a meaningful advantage here.
>there’s decent odds the closest dry cleaners, independent restaurant, car wash, etc has not had its data extracted by a hacking incident.
And there's also a decent chance they have. Did we not just have a years long spate of ransomware targeting small businesses?
Most ransomware isn't exfiltrating data. For a small business you can automate the "pay to unencrypt your HDD" model easily, without care for what's on the disk.
There are definitely companies who have never been breached and it's not that hard. Defense in depth is all you need
Isn't defense in depth's whole point that some of your defenses will get breached?
Take the OP. What defenses were breached? An old abandoned system running unmaintained in the background with old user data still attached. There is no excuse.
Not everyone gets hacked. Companies not hacked include e.g.
- Google
- Amazon
- Meta
Amazonian here. My views are my own; I do not represent my company/corporate.
That said...
We do our very best. But I don't know anyone here who would say "it can never happen". Security is never an absolute. The best processes and technology will lower the likelihood and impact towards 0, but never to 0. Viewed from that angle, it's not if Amazon will be hacked, it's when and to what extent. It is my sincere hope that if we have an incident, we rise up to the moment with transparency and humility. I believe that's what most of us are looking for during and after an incident has occurred.
To our customers: Do your best, but have a plan for what you're going to do when it happens. Incidents like this one here from checkout.com can show examples of some positive actions that can be taken.
> But I don't know anyone here who would say "it can never happen". Security is never an absolute.
Exactly. I think it is great for people like you to inject some more realistic expectations into discussions like these.
An entity like Amazon is not, in the longer term, going to escape fate, but they have more budget and (usually) much better internal practices, which rule out the kind of thing that would bring down a lesser org. But in the end it is all about the budget: as long as Amazon's budget is significantly larger than the attackers', they will probably manage to stay ahead. But if they ever get complacent or start economizing on security, then the odds change very rapidly. Your very realistic stance is one of the reasons it hasn't happened yet: you are acutely aware that, in spite of all of your efforts, you are still at risk.
Blast radius reduction by removing data you no longer need (and that includes the marketing department, who more often than not are the real culprit) is a good first step towards more realistic expectations for any org.
Facebook was hacked in 2013. Attacker used a Java browser exploit to take over employees' computers:
https://www.reuters.com/article/technology/exclusive-apple-m...
Facebook was also hacked in 2018. A vulnerability in the website allowed attackers to steal the access tokens for 50 million accounts:
Nah.
The Chinese got into gmail (Google) essentially on a whim to get David Petraeus' emails to his mistress. Ended his career, basically.
I'd bet my hat that all 3 are definitely penetrated and have been off and on for a while -- they just don't disclose it.
source: in security at big orgs
Do you have a source that the Google hack was related to David Petraeus? This page doesn't mention it[1]. Does the timeline line up? Google was hacked in 2009[2]. The Petraeus stuff seems to have happened later.
Disclosure: I work at Google but have no internal knowledge about whether Petraeus was related to Operation Aurora.
> I'd bet my hat that all 3 are definitely penetrated and have been off and on for a while -- they just don't disclose it.
Considering the number of Chinese nationals who work for them at various levels... of course they're all penetrated. How could that possibly fail to be true?
The relevant difference here is that these companies have actual security standards on the level that you would only find in the FAA or similar organisations where lives are in danger. For every incident in Google Cloud, for example, they don't just apologise; they state exactly what happened and how they responded (down to the minute), and you can read up exactly how they plan to prevent this from happening again: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
This is what incident handling by a trustworthy provider looks like.
Google just got hacked in June:
https://cloud.google.com/blog/topics/threat-intelligence/voi...
https://www.forbes.com/sites/daveywinder/2025/08/09/google-c...
That was a Salesforce instance with largely public data, rather than something owned and operated by Google itself. It's a bit like saying you stole from me, but instead of my apartment you broke into my off-site storage with Uhaul. Technically correct, but different implications on the integrity of my apartment security.
It was a social engineering attack that leveraged the device OAuth flow, where the device gaining access to the resource server (in this case the Salesforce API) is separate from the device that grants the authorization.
The hackers called employees/contractors at Google (& lots of other large companies) with user access to the company's Salesforce instance and tricked them into authorizing API access for the hackers' machine.
It's the same as loading Apple TV on your Roku despite not having a subscription and then calling your neighbor who does have an account and tricking them into entering the 5 digit code at link.apple.com
Continuing with your analogy, they didn't break into the off-site storage unit so much as they tricked someone into giving them a key.
There's no security vulnerability in Google/Salesforce or your apartment/storage per se, but a lapse in security training for employees/contractors can be the functional equivalent to a zero-day vulnerability.
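The device-code flow described above can be sketched as a toy simulation. The names and token format here are invented for illustration (this is not Salesforce's or Google's actual API); the point it demonstrates is that the device receiving the token never proves it belongs to the person who approves the code:

```python
import secrets

_pending = {}   # user_code -> device_code
_approved = {}  # device_code -> access token, set once a signed-in user approves

def start_device_flow():
    """The (possibly attacker-controlled) device asks for a pairing code."""
    device_code = secrets.token_hex(16)
    user_code = secrets.token_hex(4)  # the short code shown to a human
    _pending[user_code] = device_code
    return device_code, user_code

def approve(user_code):
    """A signed-in user types the code on the verification page.
    Nothing here checks *whose* device generated the code."""
    device_code = _pending.pop(user_code, None)
    if device_code is None:
        return False
    _approved[device_code] = "token-" + secrets.token_hex(8)
    return True

def poll_for_token(device_code):
    """The device polls until a user has approved its code."""
    return _approved.get(device_code)

# The attack: start the flow on the attacker's machine, then talk a victim
# (over the phone) into entering the attacker's user_code while logged in.
# The access token then lands on the attacker's device, not the victim's.
```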
There's no vulnerability per se, but I think the Salesforce UI is pretty confusing in this case. It looks like a login page, but actually if you fill it in, you're granting an attacker access.
Disclosure: I work at Google, but don't have much knowledge about this case.
Google got hacked back in 2010, lookup Operation Aurora. It wasn't a full own, but it shows that even the big guys can get hacked.
You are joking right?
All of these companies have been hacked by nation states like Russia and China.
Didn't Edward Snowden release documents showing that the NSA had fully compromised Google's internal systems?
Yup. The NSA has every single major US tech company tapped at their server level and are harvesting all their data. Issues them NSLs and there is zero way these companies can refuse the taps.
Fair or not, if their customers get hacked it's still on them to mitigate and reduce the damage. Ex: cloud providers that provide billing alerts but not hard cut-offs are not doing a good job.
Everybody includes Google, Amazon and Meta.
They too will get hacked, if it hasn't happened already.
They also have plenty of domestic and foreign intelligence agents literally working with sensitive systems at the company.
... that we know of. Perhaps some of those "outages" were compromised systems.
"shit it's compromised. pull the plug ASAP"
Meta once misconfigured the web servers and exposed the source. https://techcrunch.com/2007/08/11/facebook-source-code-leake...
I like your stance.
We also have to remember that we have collectively decided to use Windows and AD, QA-tested software, etc. (to name some examples) over correct software, hardened-by-default settings, and the like.
The intent of the South Park sketch was to lampoon that BP were (/are) willingly doing awful things and then giving corpo apology statements when caught.
Here, Checkout has been the victim of a crime, just as much as their impacted customers. It’s a loss for everyone involved except the perpetrators. Using words like “betrayed”, as if Checkout wilfully misled its customers, is a heavy accusation to level.
At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
> At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
I totally agree – You've covered the 3 most important things to do here: Apologize; make it right; sufficiently explain in detail to customers how you'll prevent recurrences.
After reading the post, I see the 1st of 3. To their credit, most companies don't get that far, so thanks, Checkout.com. Now keep going, 2 tasks left to do and be totally transparent about.
No trolling on my side, I think having people who think just like you is a triumph for humanity. As we approach times far darker and manipulation takes smarter shapes, a cynical mind is worth many trophies.
> prevent this scenario from ever happening again.
Every additional nine of not getting hacked takes effort. Getting to 100% takes infinite effort, i.e., it is impossible. Trying to achieve the impossible will make you spin on the spot, chasing ever more obscure solutions.
As soon as you understand a potential solution enough to implement it you also understand that it cannot achieve the impossible. If you keep insisting on achieving the impossible you have to abandon this potential solution and pin your hope on something you don't understand yet. And so the cycle repeats.
It is good to hold people accountable, but demand the impossible only from those you want to drive crazy.
They are donating the entire ransom amount to two universities for security research. I don't care about the words themselves, but assuming they're not outright lying about this, that meant a lot to me. They are putting their (corporate!) money where their mouth is.
Haha, yes, this is entirely what I expected. I was actually pleasantly surprised by the GP because internet commentators always find a reason that some statement is imperfect.
Indeed, an apology is bad and no apology is also bad. In fact, all things are bad. Haha! Absolutely prime.
In attacks on software systems specifically though, I always find this aggressive stance toward the victimized business odd, especially when otherwise reasonable security standards have been met. You simply cannot plug all holes.
As AI tools accelerate hacking capabilities, at what point do we seriously start going after the attackers across borders and stop blaming the victimized businesses?
We solved this in the past. Let’s say you ran a brick-and-mortar business, and even though you secured your sensitive customer paperwork in a locked safe (which most probably didn’t), someone broke into the building and cracked the safe with industrial-grade drilling equipment.
You would rightly focus your ire and efforts on the perpetrators, and not say, ”gahhh, what an evil, dumb business, you didn’t think to install a safe made of 1-meter-thick titanium to protect against industrial-grade drilling!????”
If we want to have nice things going forward, the solution is going to have to involve much more aggressive cybercrime enforcement globally. If 100,000 North Koreans landed on the shores of Los Angeles and began looting en masse, the solution would not be to have everybody build medieval stone fortresses around their homes.
What you request is for them to divulge internal details of their architecture that could lead to additional compromise as well as admission of fault that could make it easier for them to be sued. All for some intangible moral notion. No business leader would ever do those things.
Right. Transparency doesn't mean telling about the attack that already happened. It means telling us about their issues and ways this could happen again. And they didn't even mention the investment amount for the security labs.
The hard line:
"We will pay $500,000 to anyone who can provide information leading to the arrest and conviction of the perpetrators. If the perpetrators can be clearly identified but are not in a country which extradites to or from the United States, we will pay $500,000 for their heads."
Words are cheap, but "We are sorry." is a surprisingly rare thing for a company to say (they will usually sugarcoat it, shift blame, add qualifiers, use weasel words, etc.), so it's refreshing to hear that.
This is a classic example of a fake apology: "We regret that this incident has caused worry for our partners and people." They are not really "sorry" that data was stolen; they only "regret" that their partners are worried. No word on how they will prevent this in the future or how it even happened. Instead it gets downplayed: "legacy third-party", "less than 25% were affected" (which is a huge number), no word on what data exactly.
How would the apology need to be worded so that it does not get interpreted as a fake apology?
In terms of "downplaying" it seems like they are pretty concrete in sharing the blast radius. If less than 25% of users were affected, how else should they phrase this? They do say that this was data used for onboarding merchants that was on a system that was used in the past and is no longer used.
I am as annoyed by companies sugar coating responses, but here the response sounds refreshingly concrete and more genuine than most.
IMO something like:
We are truly sorry for the impact this has no doubt caused on our customers' and partners' businesses. This clearly should never have happened, and we take full responsibility.
Whilst we can never put into words how deeply sorry we are, we will work tirelessly to make this right with each and every one of you, starting with a full account of what transpired, and the steps we are going to be taking immediately to ensure nothing like this can ever happen again.
We want to work directly with you to help minimise the impact on you, and will be reaching out to every customer directly to understand their immediate needs. If that means helping you migrate away to another platform, then so be it; we will assist in any way we can. Trust should be earned, and we completely understand that in this instance your trust in us has understandably been shaken.
"Up to 25% of users were affected." "As many as 25% of users were affected."
"A quarter of user accounts were affected. We have calculated that to be 7% of our customers."
an effective apology establishes accountability, demonstrates reflection on what caused the problem, and commits to concrete changes to prevent it from reoccurring
> How would the apology need to be worded so that it does not get interpreted as a fake apology?
"We regret that we neglected our security to such degree that it has caused this incident."
It's very simple. Don't be sorry I feel bad, be sorry you did bad.
They stated clearly in the article:
> This was our mistake, and we take full responsibility.
I wonder how much of the negative sentiment about this is from a knee jerk reaction and careless reading vs. thoughtful commentary.
I always presume the "We are sorry" opens up to financial compensation, whereas the "we regret that you are worried" does not.
In my country, this debate is being held WRT the atrocities my country committed in its (former) colonies, and towards enslaved humans¹. Our king and prime minister never truly "apologized". Because, I kid you not, the government fears that this opens up possibilities for financial reparation or compensation and the government doesn't want to pay this. They basically searched for the words that sound as close to apologies as possible, but aren't words that require one to act on the apologies.
¹ I'm talking about The Netherlands. Where such atrocities were committed as close as one and a half generations ago still (1949) (https://www.maastrichtuniversity.nl/blog/2022/10/how-do-dutc...) but mostly during what is still called "The Golden Age".
If you are unwilling to say "We are sorry" because "that opens you up to lawsuits" then you are not sorry.
Letting business concerns trump human empathy is exactly the damn problem and exactly why these companies still deserve immense ire no matter how they word their "We don't want to admit fault but we want you to think we care" press release. This is also true of something like the Dutch crown, or the USA having tons of people extremely upset at the suggestion of teaching kids what the US has actually done in its history.
This was our mistake, and we take full responsibility.
That preceding line makes it, to me, a real apology. They admit fault.
Seems a bit harsh to leave out the rest of the apology and only focus on the part that is not much of an apology.
> No word on how they will prevent this in the future and how it even happened.
Because these things take time, while you need to disclose that something happened as fast as possible to your customers (in the EU, you are mandated by the GDPR, for instance).
Agreed. It's just a classic way to manipulate the viewers. They just wanted to sound edgy for not paying a ransom, which is definitely a good thing. Never pay these crooks, but you left a legacy system online without any protections? That's serious.
> We are fully committed to maintaining your trust.
We are fully committed to rebuilding your trust.
Refreshing to not see "due to an abundance of caution". Kudos to the response in general, they pretty much ticked all boxes.
Since when did owning up to a data breach become such a noteworthy event? "Less than 25%" sounds more like exactly 25% of customers were impacted.
Like you, I like this. For me it’s close but fails in the word selection in the last sentence: “maintaining” trust is not what I would say their job is at this point, it’s “restoring” it.
One places the company at the center as the important point of reference, avoiding some responsibility. The other places the customer at the center, taking responsibility.
If I was a customer I'd be pissed off, but this is as good a response as you can have to an incident like this.
- timely response
- initial disclosure by company and not third party
- actual expression of shame and remorse
- a decent explanation of target/scope
I could imagine being cynical about the statement, but look at other companies who have been breached in the past. Very few of them do well on all points.
If we just let companies get away with 'we are sorry' and say that is as good as it gets, then this industry is up for far more catastrophic situations in the future. Criminal liability, refunds to customers, and requirements from regulators might move things in the right direction, but letting companies keep shitty practices by hoarding data they don't need and putting customers at risk is definitely something that should be looked at with more scrutiny.
It depends on the crime though, right? This was all legacy data, and from the description the worst thing they got was contact information that's at least five years old ("internal operational documents and merchant onboarding materials at that time.").
For that level of breach their response seems about right to me, especially waving the money in ShinyHunters' face before giving it away to their enemies.
I agree, it depends, but this wouldn't be the first time a company underplayed (or simply lied about) the extent of a breach. I am sure even if it was current data or a more serious breach, the messaging would be similar from their side.
You bring up a good point, in that they don't really _say_ what data leaked. "Just some old stuff nobody uses."
> - timely response
Timely in what way? Seems they didn't discover the hack themselves, didn't discover it until the hackers themselves reached out last week, and today we're seeing them acknowledging it. I'm not sure anything here could be described as "timely".
I have been doing a self Have I Been Pwned audit, and reading many company blog posts, it wasn't uncommon to see disclosure months after incidents.
Yeah, that sucks, and I wouldn't call those "timely" either. Is your point that "timely" is relative and depends on what others are doing? Personally, "slow" is slow regardless of how slow others are, but clearly some would feel differently, that's OK too.
"Timely" is relative, right? If I build a house in a week, that was done in a timely fashion, as it was done faster than average,
If I build a house of cards in a week, that took way longer than the average house of cards, and it would not be fair to call it "timely".
In a world where most companies report breaches months after the fact, yes, I think "last week we found out about it and we're now confirming it" is fair. You need to work with Law Enforcement, you need to confirm the validity of the data and the hacker's claims, and that the data they are ransoming is all they actually took. You need to check the severity of the data they took. Was it user/passes? Was there any trademarked processes, IP, sensitive info? You need to ensure the threat actor is removed from your environment, and the hole they got in with is closed.
If you choose to pay the ransom, you may need to work even closer with LE to ensure you don't get flagged for aiding and funding criminals.
With them choosing not to pay, I'm sure they need to clear that with legal still. Finance needs to be on board. Can you actually call it a charitable donation for a tax write-off if it's under this sort of duress? (And I'd assume there are other sorts of questions a SysAdmin can't be expected to come up with examples for.)
While ALL of this is happening, you can't announce your actions. You can't put out a PR until you know for sure you were compromised, what the scope was, and that any persistence has been removed.
If one week is slow and three months is also slow, why should a company switch from three months to one week?
To borrow from a different context, if eating meat every day is being an evil animal abuser and being vegetarian but liking cheese sauce on your pasta is being an evil animal abuser, why should anyone consider eating less meat?
Warning: not very well thought-out generalisation ahead
We need to be able to express nuance, otherwise everything turns into a shitshow like, for example, the current state of political and social discourse. Americans will vote for privatisation because public healthcare is "literally communism" and "communism is the devil". Twitter users will vote for white supremacists because they get called "literal nazis" for the big nose jokes they occasionally make.
> as good as a response you can have to an incident like this.
From customer perspective “in an effort to reduce the likelihood of this data becoming widely available, we’ve paid the ransom” is probably better, even if some people will not like it.
Also to really be transparent it’d be good to post a detailed postmortem along with audit results detailing other problems they (most likely) discovered.
No, that would not help me as a customer. Because I would never believe that that party would keep their word, besides, it can't be verified. You'll have that shadow hanging around for ever. The good thing is that those assholes now have less budget to go after the next party. The herd is safe from wolves by standing together, not by trying to see which of their number should be sacrificed next.
There’s a very real difference between the data possibly still being saved in some huge storage dump of a ransomware group and being available for everybody to exploit on a leak site.
It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
Also, the uncomfortable truth is that ransomware payments are very common. Not paying will make essentially no difference, the business would probably still be incredibly lucrative even if payment rates dropped to 5% of what they are now.
If there was global co-operation to outlaw ransom payments, that’d be great. Until then, individual companies refusing to pay is largely pointless.
> It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
No, it pushes you in a more comfortable direction, and I'm not you.
Yes, but your concerns are less rooted in reality and more in the fact that you find the idea of paying ransomware groups repulsive. That’s fine, but there’s rational analysis to be done here, and it often leads to paying being the best option.
If your company gets hit by one of these groups and you want to protect your customers, paying is almost always the most effective way to do that. Someone who isn’t particularly interested in protecting their customers probably wouldn’t pay if the damage from not paying would be lower than the cost of paying.
A third possibility is that you simply feel uncomfortable about paying, which is fine, but it isn’t a particularly rational basis for the decision.
I think we can also fairly assume that the vast majority of people have no strong feelings about ransomware, so there’s likely going to be no meaningful reputational damage caused by paying.
Never pay the ransom.
The extortionist knows they cannot prove they destroyed the data, so they will eventually sell it anyway.
They will maybe hold off for a bit to prove their "reputation" or "legitimacy". Just don't pay.
If this is actually frequently happening, your claim should be pretty easy to prove. Most stolen databases are sold fairly publicly.
The ransom payments tend to be so big anyway that selling the data and associated reputational damage is most likely not worth the hassle.
Basic game theory shows that the best course of action for any ransomware group with multiple victims is to act honestly. You can never be sure, but the incentives are there and they’re pretty obvious.
The big groups are making in the neighbourhood of $billions, earning extra millions by sabotaging their main source of revenue seems ridiculous.
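The incentive argument above can be sketched as a toy expected-revenue model. All figures (ransom size, payment rates, decay factor) are hypothetical and purely illustrative, not taken from the thread:

```python
# Toy model: a ransomware group extorts a stream of victims. Leaking paid
# victims' data earns a small one-off sale price but erodes the fraction
# of future victims willing to pay. All numbers are hypothetical.

def expected_revenue(victims, ransom, pay_rate, leak_after_payment,
                     sale_price, trust_decay):
    """Total expected revenue over a sequence of victims."""
    total = 0.0
    rate = pay_rate
    for _ in range(victims):
        total += rate * ransom
        if leak_after_payment:
            total += rate * sale_price  # one-off sale of the stolen data
            rate *= (1 - trust_decay)   # reputation damage: fewer future payers
    return total

honest = expected_revenue(100, 5_000_000, 0.4, False, 0, 0.05)
dishonest = expected_revenue(100, 5_000_000, 0.4, True, 100_000, 0.05)
assert honest > dishonest  # keeping promises dominates under these assumptions
```

Under these made-up parameters the honest strategy earns several times more over 100 victims, which is the "incentives are there and pretty obvious" point: the one-off sale price is tiny next to the ransom stream it jeopardises.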
Do you think ransomware groups do referrals to their satisfied customers who paid and didn't have their data leaked?
Probably? They have pretty professional customer service pages.
However they don’t really need to because there are plenty of documented cases, and the incident response company you hire will almost certainly have prior knowledge of the group you’re forced to deal with.
If they had a history of fucking over their “customers”, the IR team you hired would know and presumably advise against paying.
> reputational damage
Whoa. You're a crime organization. The data may as well "leak" the same way it leaked out of your victim's "reputable" system.
We’re talking about criminal organisations that depend on a certain level of trust to make any money at all.
Yes, the data might still leak. It’s absurd to suggest that it’s not less likely to leak if you pay.
There’s a reason why businesses very frequently arrive at the conclusion that it’s better to pay, and it’s not because they’re stupid or malicious. They actually have money on the line too, unlike almost everyone who would criticise them for paying.
I strongly disagree. Paying the ransom will put everyone in danger.
I would totally agree with you if we lived in a hypothetical world where ransomware payments aren’t super common anyway.
Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
> Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
Paying the ransom is not exactly legal, is it? Surely the attackers don't provide you with a legitimate invoice for your accounting. As a company you cannot just buy a large amount of crypto and randomly send it to someone.
Paying the ransoms is almost always legal in basically all western countries unless the recipient has been sanctioned.
> As a company you cannot just buy a large amount of crypto and randomly send it to someone.
You can totally do that, why wouldn’t you be able to?
Because it's fraud. You cannot just take money out of the company, you have to put something in your books.
So you obviously put “ransomware payment” in the books.
Most of the time the company doesn't pay directly.
They hire a third party, sometimes their cyber insurance provider, to "cleanup" the ransomware. That third party then pays another third party who is often located in a region of the world with lax laws to perform the negotiations.
At the end of the day nobody breaks any laws and the criminals get paid.
Depends. Not paying ransom decreases the likelihood of being attacked in the future.
Probably not that significantly; these are primarily crimes of opportunity. An attacker isn't likely to do much research on the company until they already have access, and at that point they might as well proceed (especially since getting hit a second time would be doubly awkward for the company, presumably dramatically increasing the chances of payment).
And selling the data from companies like Checkout.com is generally still worth a decent amount, even if nowhere close to the bigger ransom payments.
You mean as a customer you'd feel better if the company victim of ransom would help fund the very group that put the business and your data in jeopardy?
Of course, it makes my data and my customers data less likely to end up public on the internet.
It’s not great, but it’s the least shitty option.
What makes you think they won't get the money and sell the database in the dark web?
This is like falling victim to a scam and paying more on top of it because the scammers promised to return the money if you pay a bit more.
I see no likelihood game to be played there because you can't trust criminals by default. Thinking otherwise is just naive and wishful. Your data is out in the wild, nothing you can do about that. As soon as you accept that the better are your chances to do damage reduction.
Their incentives are well known. You don’t have to trust them to assume that they will act rationally.
Picking up hundreds of thousands at best (very few databases would be worth so much) when your main business pays millions or tens of millions per victim simply isn’t worth it, selling the data would jeopardise their main business which is orders of magnitude more profitable.
Absolutely no IR company will advise their clients to pay if the particular ransomware group is known to renege on their promises.
Did some research and indeed there is a sort of "honor among thieves" kinda vibes when it comes to ransom attacks.
Still, it's illegal or quite bureaucratic in some places to pay up.
And idk... It still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with getting the ransom money and then selling the db later.
It's also not granted that even with the decrypt tools you'd be able to easily recover data at scale given how janky these tools are.
I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only so it can strike others, and also risk legal litigations.
> Still, it's illegal or quite bureaucratic in some places to pay up.
I can’t think of anywhere it would be illegal, but the bureaucracy is usually handled by the incident response company who are experts at managing these processes.
> It's also not granted that even with the decrypt tools you'd be able to easily recover data at scale given how janky these tools are
Most IR companies have their own decryption tools for this exact purpose; they've reversed the ransomware groups' decryptors and plugged the relevant algos into their own much less janky tools.
> And idk... It still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with getting the ransom money and then selling the db latter
Very few databases will be worth even $100k; ransoms tend to run in the millions and sometimes tens of millions. There have been individual payments of over $30M. Selling the data just isn't worth it, even if you could get away with it without sabotaging your main business. It'd be like getting a second job as a gas station attendant while working for big tech in SF: possible, but ridiculous.
> I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only so it can strike others, and also risk legal litigations.
The UK government even has a website where they basically say “yeah we understand you might need to make a payment to a sanctioned ransomware group, it’s totally fine if you tell us”. The governments accept that these payments are necessary, to the point that they’ll promise non-enforcement of sanctions. I can’t think of anywhere you’d really be risking legal repercussions if you have some reasonable IR company guiding you through the process.
I totally get the concern about funding these groups, but unfortunately the payments are so common at this point (the governments even publish guidelines! That common) that it simply doesn’t make a difference if a few companies refuse to pay.
Makes a lot of sense. Thanks for taking the time for this well thought-out response.
Ah yes let's fund literal criminal groups so they have an incentive to keep hacking people
Completely useless take in the real world where these payments are common; it makes no difference whatsoever if an individual or even the vast majority of victims stop paying. Ransomware will remain incredibly lucrative until payments are outlawed.
The cost of an attack like this is in the thousands of dollars at most, the ransom payments tend to be in the millions. The economics of not paying just don’t add up in the current situation.
How do you know it isn't illegal when you pay the ransom?
You could very well be making a payment to a sanctioned individual or country, or a terrorist organization etc.
There are best practices for this, you normally hire a third party to handle the negotiations, payment process and the necessary due diligence.
For example the UK government publishes guidelines on how to do this and which mitigating circumstances they consider if you do end up making a payment to a sanctioned entity anyway https://www.gov.uk/government/publications/financial-sanctio...
They directly state as follows:
> An investigation by the NCA is very unlikely to be commenced into a ransomware victim, or those involved in the facilitation of the victim’s payment, who have proactively engaged with the relevant bodies as set out in the mitigating factors above
i.e you’re not even going to be investigated unless you try to cover things up.
This is a solved problem, big companies with big legal departments make large ransomware payments every day. Big incident response companies have teams of negotiators to work through the process of paying, and to get the best possible price.
> The attackers gained access to a legacy, third-party cloud file storage system.
I think the answer is ok but the "third-party" bit reads like trying to deflect part of the blame on the cloud storage provider.
For all their boasting, I can't help but wonder how their response would have been different if the attackers actually had gotten their hands on sensitive data.
The whole codebase & tools at whatever company I ever worked at was using 99% legacy stuff. Its wild...
Oftentimes it would have been easier to rebuild the whole project than to try to upgrade 5-6 year old dependencies.
Ultimately the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then afterwards it's business as usual, with that one thing fixed and still rolling legacy stuff for everything else.
> Often times it would have been easier to rebuild the whole project
Sure buddy, sure
The company that bought mine spent two years trying to have Team A rewrite a part of our critical service as a separate service to make it more scalable and robust and to enable it to do more. They wanted to do stupid things like "Lets use GRPC because google does!" and "Django is slow" and "database access is slow (but we've added like six completely new database lookups per request for uh reasons)"
They failed so damn bad and it's hilariously bad and I feel awful for the somewhat competent coworker who was stuck on that team and dealt with how awful it was.
Then we fired most of that team like 3 times because of how value negative they have been.
Then my coworker and I rebuilt it in java in 2 months. It is 100x faster, has almost no bugs, accidentally avoided tons of data management bugs that plague the python version (because java can't have those problems the way we wrote it) and I built us tooling to achieve bug for bug compatibility (using trivial to patch out helpers), and it is trivially scalable but doesn't need to because it's so much faster and uses way less memory.
If the people in charge of a project are fucking incompetent yeah nothing good will ever happen, but if you have even semi-competent people under reasonable management (neither of us are even close to rockstars) and the system you are trying to rewrite has obvious known flaws, plenty of time you will build a better system.
but the issue wasn't python or django, RPC or REST
it was the ORM and the queries themselves
I inherited a few codebases as solo dev and I am confident in my abilities to refactor each of them in 1-2 months without issues.
I can imagine that in a team that might be harder, but these are glorified todo apps. I am well aware that complete rebuilds rarely work out.
The donation is more or less virtue signaling rather than actual insight.
The problem cannot be helped by research against cybercrime. Proper practices for protection are well established and known; they just need to be implemented.
The amount donated should rather have been invested into better protections / hiring a person responsible for this in the company.
(Context: The hack happened on a not properly decommissioned legacy system.)
> The donation is more or less virtue signalling rather than actual insight.
I see it more as a middle finger to the perps: “look, we can afford to pay, here, see us pay that amount elsewhere, but you aren't getting it”. It isn't signalling virtue as much as it is signalling “fuck you and your ransom demands” in the hope that this will mark them as not an easy target for that sort of thing in future.
It also serves as a proxy for a punishment. They are, from one perspective, paying a voluntary fine based on their own assessment of their security failings.
For customers it signals sincerity and may help dampen outrage in their follow up dealings.
Yes but I think it's a good virtue to signal considering the circumstances. If they paid the ransom that would signal that ransoming this company works, incentivizing more ransoms. If they refuse to pay the ransom it might signal that they care more about money than they do integrity. Taking the financial hit of the ransom, but paying it to something that signals their values, is about the best move I can imagine.
Virtue signaling is an insult that you can for example use against greenwashing or against someone who pledged to donate a lot of money to some charity but actually donated none or much less. Hypocrisy is also a form of virtue signaling.
It's also a term you can use against political opponents because it's much easier to speak well than to actually do good.
Refusing to negotiate with criminals and helping fund security seems like the proper long-term reaction for everyone.
At the stage we're at, I would far prefer virtue signalling to the more widespread vice signalling.
Requiring everyone to implement proper practices is one way of addressing the problem, I might call it Sisyphean & impossible.
Making it illegal to pay ransom is likely a much easier to implement and more effective solution.
And this isn’t virtue signaling - they literally did the virtuous thing that is better for society at the expense of their bottom line. That is just virtue.
It is virtue signaling, especially considering the fact that doing the hard to swallow thing of paying the ransom would probably be the best outcome from a customer perspective.
Yes there are negative externalities in funding ransomware operations, not paying is still much more likely to hurt your customers than paying.
Doing the positive externality thing at expense of your bottom line is to be praised. It is not ‘virtue signaling’ - it is actually doing a virtuous thing.
Very small positive externality at the expense of their customers. Probably doesn’t even come close to balancing out.
Besides, if they were genuinely interested in positive externalities they would be spending the money lobbying for a ransomware payments ban and not donating to universities.
Paying ransomware demands is never the smart move unless you happen to trust what cyber criminals tell you.
You send them the payment, they tell you they deleted the data, but they also sell the data to 10 other customers over the dark-web.
Why would you ever trust people who are inherently untrustworthy and who are trying to screw you? While also encouraging further ransomware crimes in the future.
It’s a sliding scale.
If you don’t pay, the odds they will publish your data are closer to 100%. If you do pay, the odds have historically been much closer to 0% than 100%
You aren’t paying to be sure, but to improve your chances.
What is the problem with virtue signaling? By all means signal virtue! Perhaps you are concerned by cheap virtue signals, which have little significance.
The point here is that this is an expensive virtue signal. Although, it would be more effective if we knew how expensive it was.
> Proper practices for protections are well established and known
Endpoint security is a well-known open problem for which no sufficient practices and protections exist.
They should have watched Ransom (1996).
I was just thinking of this scene as I read their report.
I don't know what virtue signaling means. I think you mean they just did it out of spite.
There is not much to research. If companies want security, they should pay for security.
> If companies want security, they should pay for security.
Or just properly follow best practice, and their own procedures, internally.⁰
That was the failing here, which in an unusual act of honesty they are taking responsibility for in this matter.
--------
[0] That might be considered paying for security, indirectly, as it means having the resources available to make sure these things are done, and tracked so it can be proven they are done making slips difficult to happen and easy to track & hopefully rectify when they inevitably still do.
Security is an arms race. Don't expect a leap; do your part to stay ahead.
I don't understand some of the cynicism in this thread. This is a bold move and I support it. It is impossible not to have incidents like this, and until there's a proper post mortem we won't really know how much of it can be attributed to carelessness. They could have just kept it hush-hush, but I appreciate that they came forward with it and also donated money to academia. The research will be open and everybody benefits.
It’s hacker news, people feel that cynicism elevates them in some way.
> The threat actors do not have, and never had, access to merchant funds or card numbers.
> The system was used for internal operational documents and merchant onboarding materials at that time.
Ah so just all of your KYC for founders, key personnel, and the corporation to impersonate business accounts
> We estimate that this would affect less than 25% of our current merchant base.
Yikes, this affects 25% of their current merchant base.
Interesting spin: a core infrastructure provider that deals with the most sensitive part of most businesses tries to bury the lede of getting hacked with a tale of its virtuous refusal to pay a ransom. Is this supposed to make them attractive, or just have people skip the motivating events? Swing and a miss in my books.
"The system was used for internal operational documents and merchant onboarding materials at that time"
To me it seems most likely that this is data collected during the KYC process during onboarding, meaning company documents, director passport or ID card scans, those kind of things. So the risk here for at least a few more years until all identity documents have expired is identity theft possibilities (e.g. fraudsters registering their company with another PSP using the stolen documents and then processing fraudulent payments until they get shut down, or signing up for bank accounts using their info and tax id).
Passport or ID card scans would never be stored alongside general KYB information, e.g. the standard forms PSPs use.
If you read between the lines of the verbiage here, it looks like a general archived dropbox of stuff like PDF documents which the onboarding team used.
Since GDPR etc, items like passports, driving license data etc, has been kept in far more secure areas that low-level staff (e.g. people doing merchant onboarding) won't have easy access to.
I could be wrong but I would be fairly surprised if JPGs of passports were kept alongside docx files of merchant onboarding questionnaires.
> Passport or ID card scans would never be be stored alongside general KYB information
How do you qualify this statement? Did you mean “should never”? Even then, you’re likely overstating things. Nothing prevents co-locating KYC/KYB information. On the contrary, most businesses conducting KYB are required to conduct UBO and they’re trained to combine them both. Register as a director/officer with any FSI in North America and you’ll see.
Fair point! Yeah, it could be. Although Europe tends to be stricter about those things, i.e. where PII is stored. I was trained way back in like 2018 about ensuring I never have any PII stored on my PC and around the requirements of the GDPR in terms of access to information and right to delete etc.
docx files of merchant onboarding questionnaires
Why would merchants fill out docx files? They would submit an online form with their business, director and UBO details, that data would be stored in the Checkout.com merchants database, and any supporting documents like passport scans would be stored in a cloud storage system, just like the one that got hacked.
If it was just some internal PDFs used by the onboarding team, probably they wouldn't make such a big announcement.
Another person wrote a good response to this, but yeah, I would say, as someone who has worked in fintech, you will almost always have some integrations with systems that require Microsoft Word format, as well as, obviously, PDFs, CSVs, etc.
Every country you operate in has different rules and regulations and you have to integrate with many third party systems as well as governmental entities etc, and sometimes you have to do really really technically backwards things.
Some integrations I remember were stuff like cron jobs sending CSV files via FTP which were automatically picked up.
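That kind of batch integration is simple enough to sketch. This is a minimal, illustrative example only: the CSV schema, filenames, and FTP credentials are all hypothetical, not anything from Checkout.com's actual systems.

```python
import csv
import io
from ftplib import FTP


def export_settlements_csv(rows):
    """Serialize settlement rows to CSV bytes (hypothetical schema)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["merchant_id", "amount", "currency"])  # header row
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")


def upload_via_ftp(host, user, password, filename, payload):
    """Drop the CSV into a partner's FTP folder, as a nightly cron job might."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {filename}", io.BytesIO(payload))
```

A cron entry like `0 2 * * * /usr/bin/python3 /opt/jobs/send_settlements.py` would then run the export nightly, and the partner's system polls the folder and picks the file up. Plain FTP (no encryption, shared credentials, files lingering in a drop folder) is exactly the kind of "technically backwards" legacy integration that tends to outlive its decommissioning plan.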
If you are dealing with financial services (and a payment provider most certainly is), you will be forced to interface with infuriating vendor vetting and onboarding questionnaire processes. The kind that would make Franz Kafka blush and the CIA take notes for their enhanced interrogation techniques.
The sheer amount of effectively useless bingo sheets with highly detailed business (and process) information boggles the mind.
Some time ago I alluded to existence and proliferation of these questionnaires in another context: https://bostik.iki.fi/aivoituksia/random/crowdstrike-outage-...
While a nice gesture, I'm not so certain that if I were one of their "less than 25%" of customers impacted that I'd be so pleased. Why not compensate them instead?
I don't think they meant OXCIS, that seems to be a centre for Islamic Studies https://en.wikipedia.org/wiki/Oxford_Centre_for_Islamic_Stud...
I can't quite work out who they donated to - it seems there are a number of Oxford Uni cybersec/infosec units. Any idea which one?
I guess it just means this: https://www.cybersecurity.ox.ac.uk/
"Cyber Security Oxford is a community of researchers and experts working under the umbrella of the University of Oxford’s Academic Centre of Excellence in Cyber Security Research (ACE-CSR)."
Probably, I'm not sure it's not https://gcscc.ox.ac.uk/
I don't think it's https://www.infosec.ox.ac.uk/
There's also this AI security research lab, https://lasr.plexal.com/
It looks like Oxford are quite busy in this space.
So, I used to work in the fintech world, and it looks to me like what was hacked was merchant KYB documents. I.e. when a merchant signs up with a PSP, they have to provide various documentation about the business so the PSP can underwrite the risk of taking on this business. E.g. some PSPs won't deal with porn companies, travel companies, or companies from certain regions.
This sort of data is generally treated very differently to the actual PANs and payment information (which are highly encrypted using HSMs).
So it's obviously shitty to get hacked, but if it was just KYB (or KYC) type information, it's not harming any individuals. A lot of KYB information is public (depending on country).
Fair play on them for being open about this.
It's not just business data though - usually it will include ultimate beneficial owner and directors' passports, tax ID, etc. So there is a risk of identity theft there of potentially some very wealthy individuals.
When they say "The episode occurred when threat actors gained access to this third party legacy system which was not decommissioned properly," it sounds to me like a disk that wasn't properly wiped got into the bad guys' hands. It would be interesting to know more, to be better prepared for proper decommissioning of hardware.
sounds like an S3 bucket that wasn't deleted
Or a cloud server which was never turned off.
They're "sorry", they want to be "transparent" and "accountable", they want your "trust", but not enough to publicly explain what happened or what kind of data got taken (is a full CRM backup from 6 years ago considered "legacy" "internal operational documents"?). There's not even a promise to produce more information about their mistake.
> Jimmy, where did the cookies go?
> Something that was on the counter is gone! I don't know how! It might not even be my fault! But I'm sorry!
What kind of an apology is that? It's not. It's marketing for the public while they contact the "less than 25% of [their] current merchant base" whose (presumably sensitive) information was somehow in "internal operational documents".
Oh, but they also took some of what they charge their customers and gave that (undisclosed?) sum to a university. They must be really sorry.
Bravo - I find this incredibly courageous and will consider being their customer in the future.
Sometimes cyber insurance will come to the rescue. That's why companies don't pay.
Isn't it illegal in many countries to pay a ransom?
(If not, why not?)
(Imho, it would make sense if only the state can pay ransoms)
Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Instead, you would pay (exorbitant) consulting fees to a foreign-based "offensive security" entity, and most of the time get some sort of security report saying that if you'd simply plug this and that hole, your systems would be reasonably safe.
> Typically, companies wouldn't really pay an actual ransom like unmarked bills stacked in a paper bag and thrown out from a bridge onto a passing barge.
Yes, that's why cryptocurrencies are a gift from heaven for these hacker groups.
Therefore, even if paying ransoms must (somehow) remain legal, maybe it should be illegal to use crypto for it. You don't want to make it too easy to run this type of criminal business.
Could this be AWS S3?
I’m thinking an SFTP or file-sharing gateway. Think MOVEit, GoAnywhere, ShareFile, etc.
IMO, these aren’t safe to use anymore.
I was guessing it's OneDrive, Google Drive, Dropbox or something.
Probably someone was phished and they still had access to an old shared drive which still had this data. Total guess but reading between the lines it could be something like this.
yeh, I am skeptical about "third party"
They are downplaying the severity of the data theft, which most likely includes user identification documents: the most dangerous type of breach, since it directly enables identity theft.
Reading between the lines reveals the severity they're obfuscating, with contradictions:
> This incident has not impacted our payment processing platform. The threat actors do not have, and never had, access to merchant funds or card numbers.
> The system was used for internal operational documents and merchant onboarding materials at that time.
> We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators
They stress that "merchant funds or card numbers" weren't accessed, yet acknowledge contacting "impacted" users. That raises the question: how can users be meaningfully "impacted" by mere onboarding paperwork?
Yeah, they keep repeating what wasn't accessed but never say what actually was.
Giving me MBA vibes. Will they close up shop and go when it's the remaining 75% of their infrastructure next time?
If everyone refused to pay, such incidents would drop sharply.
> Checkout.com hacked, refuses ransom payment, donates to security labs
This submission's edited title reads like the "target headline" from The Office (US):
> Scranton Area Paper Company - Dunder Mifflin - Apologizes - to Valued Client - Some Companies - Still Know - How - Business - is - Done
At this point I think we all understand that we will never be able to trust any company in this world with our data.
In most cases they can get away with "We are sorry" and "Trust me, bro" attitude.
This should be law. Any company that is hacked should be required by law to make a sizeable investment in a third-party security research company.
Security research lab during the day, ransomware org at night: conspiracy coming soon.
"Firefighter arson is a persistent phenomenon involving a very small minority of firefighters who are also active arsonists ... It has been reported that roughly 100 U.S. firefighters are convicted of arson each year."
Interesting, that number is much higher than I would expect.
It wouldn't require a conspiracy for these companies to 'invest' in security companies they have ties to. Throw in tax incentives and loopholes and whatnot and it turns out not to hurt the original company at all.
I have checkout.me domain, and it is for sale. email me if you want to get it.