Any pentester who downloaded more data than necessary to prove a bug exists would be fired.
I don't know whether this person downloaded 57M user records. Maybe it was a 500 KB zip file, which would be totally reasonable to grab in a pentest. And then once you realize what it is, you run `srm` on it to ensure it's wiped, then immediately call the client so they can deploy an emergency hotfix and perform forensic analysis to see if the data had already been nabbed.
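A rough sketch of that wipe-on-discovery step (the function name and file path are hypothetical, and overwrite-based wiping is best-effort only, which is why dedicated tools like `srm` or full-disk encryption exist):

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Best-effort only: on SSDs, copy-on-write filesystems, and
    journaled filesystems the old blocks may survive the overwrite,
    so for truly sensitive material prefer `srm` or keeping the data
    on an encrypted volume in the first place.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)
```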
We know he contacted Uber. We don't know whether he said "Give me $100k and I'll delete the data" or "Give me $100k and I'll keep my mouth shut" or if Uber offered him the $100k or any details at all. In this scenario, it's better to assume the best until proven otherwise. And by "proof" I mean "emails showing what was said," not a second-hand news report that attempts to spin it into an easily clickable form.
But I suppose it could have gone the other way too, and maybe he did. The point is, it's totally reasonable for Uber to raise the price until he was willing to keep quiet about it. If I ran into this bug in the wild, I would be ethically obliged to report it to you, dear readers, after a reasonable time period. But I suppose certain ethics would take a back seat to $100k in my pocket, and I'm not ashamed of that.
But it depends on the data. If we're talking SSNs, that could really screw up people's lives, so I don't think I'd be able to be bought. CC numbers I'd probably overlook, since you can't really mess with someone's life by stealing their CC number. They just get refunded. (edit: On the other hand, businesses eat the cost of fraud, so this would probably need to be reported.)
The point is, it's a big complex topic and there are a bunch of things we don't know. But above all, if you are ever holding data hostage and demanding money to destroy it, you're not a pentester, you're a chump.
edit2: it occurs to me that maybe the 20yo wanted to hold the data in order to prove to the world that the breach really did happen, i.e. his intent was altruistic. I could picture myself doing something misguided like that back when I was 20. But the trick is to keep only a few records at most, and redact everything sensitive. Then tell the truth. The company can't lie and say it didn't happen, since they don't know whether you can prove that it did. And no one is at risk because the data is gone.
If you're not getting paid up front, regardless of the outcome, you're not a 'pentester' in the first place - you're just doing free work for a company that will then (in some cases) not hesitate to sue you just for telling them they have a problem. Fuck that. You're taking some sort of moral high ground here, which is to be expected, as you're in the industry, or as we say in Dutch - "whose bread one eats, whose word one speaks". But just because a bunch of people with security services to sell say it's so, doesn't make it true.
"Responsible disclosure" is bullshit. These "bug bounty" programs are just a cover-up - a way to retroactively say "but we had procedures!". The real chumps are the people spending days or weeks looking for issues and then being sent off with a $250 Starbucks coupon and a pat on the head. The days of 'playing nice' or doing the 'ethical' (rolls eyes) thing are over. Today's internet is an all-out, free-for-all warzone (security-wise). "Responsible disclosure" is a PR scapegoat, a smokescreen devised by companies unwilling to spend what it takes to make our eye-wateringly bad state of infrastructure seem... if not good, then at least less crap.
I don't have a horse in this race; I stopped caring about infosec 15+ years ago when the full-contact spirit of the scene began to wane. I just assume that anything with a keyboard I type on is compromised and adapt my behavior to match. But it does still make me angry that so many people bought into this whole spiel of blaming whoever finds the issue, instead of holding those who caused it in the first place responsible. It's morally equivalent to the GFC bailouts, except that there's not even a 'too big to fail' argument to be made.
It's just a matter of having a clear objective and guidelines scoped out in a contract.
This doesn't happen in my part of the world (an Eastern European country), where most people still use debit cards. Card cloning (https://en.wikipedia.org/wiki/Credit_card_fraud#Skimming) is a frequent crime around here, and it absolutely and negatively affects those who end up on the wrong side of it (i.e. the card-holders). A former friend of mine had money stolen from her bank account when someone cloned her card, and the procedure for getting it back was to go to the police, file a police report, then go to the bank and complain with said report in hand, hoping she'd receive a refund within the next 6 months.
In the Netherlands, banks are still responsible, no matter whether it's a debit or credit card. It's pretty common for the bank to have refunded someone before the customer notices anything.
The skimming bit only worked super easily with the swipe system; the chip reduced the amount of fraud considerably. Plus, by default the card can only be used in Europe (you can easily change this setting at any bank).
People now moved to social engineering (pretending to be the bank and telling people to transfer money).
Nowadays EC is mostly dead, though, except for Germany.
It's dead in Germany as well.
For quite a long time the banks rebranded their scheme as "electronic cash" and reused the logo. But a few years back they changed their brand to "girocard" and MasterCard bought the famous "ec" trademark.
I especially like the consorsfinanz card, which is just a regular Debit MasterCard.
: https://die-dk.de/zahlungsverkehr/zulassungsverfahren/electr... (in German)
A 2017 study found that while over 80% of Germans have an EC card, less than 6% have a regular debit or credit card. As a result, the entire payment infrastructure is still aimed at EC.
Mostly because only 2-3 banks offer free CCs or Debit cards, and the others have 30-90€ fees per year.
And now that the EU has forced CCs (which previously had fees between 3 and 7% (!)) down to the same fees as EC (between 0.125% and 0.25%), even the existing free CC offerings are moving toward high fees.
It should be noted that what you call an "EC card" is a Germany-only thing from its inception. There was another, totally different scheme under the name "Eurocheque" used internationally in Europe, but that has been completely dead for a few decades. It never relied on cryptography but rather on holograms. The logo was then reused for the "Electronic Cash" system, which was never supported outside Germany.
(I would also dispute the 6% number. Yes, the scheme of the German banks is really popular, but 6% is too low and doesn't match the numbers I know. Which study would that be?)
I don’t have it on hand, but it was a study comparing the costs of cash, and prevalence of different payment methods, from Steinbeis I think, for either the Bundesdruckerei, or another federal agency.
> but 6% is too low and doesn't match the numbers I know
Remember that until 2015, REWE was the only retailer in most of DE to even accept CCs, and even today, most CCs still have fees above 40€/year.
A card that no retailer accepts, which is useless in German online stores, and which costs you money, is only interesting to those who travel internationally.
There is also the 2013 Steinbeis Cost of Cash study: http://www.steinbeis-research.de/images/pdf-documents/CFP_Co... It cites EHI numbers, about 30% with a credit card.
It's pretty good and has more or less killed cloning.
A hundred grand for a few weeks or a month of work is way, way above that level. This is a jackpot, not a reward. Whether Uber threw out the number or the guy demanded it probably won't ever be known, but I know where I'd put my bet.
For comparison, Apple's bug bounty program says they'll pay $100k for a hack to extract data from the secure enclave.
(Bug bounty programs are often probably underpaying compared to what something would be worth on the black market, but clean money is worth lots more than dirty money with a prison risk)
1. Vulnerabilities in the individual websites of specific companies typically have little to no salable value on an “open market”, black or otherwise.
2. People who manage bug bounty programs know this, and those programs are not designed to compete with a shadowy underworld. They’re just an incentive for reporting security vulnerabilities.
3. Apple’s security vulnerabilities actually do compete with a market for the sale of exploits, but this is because vulnerabilities in iOS or macOS represent vulnerabilities in deployed operating systems and software for which they are not the sole arbiter of an update decision.
> Bug bounty programs are often probably underpaying compared to what something would be worth on the black market, but clean money is worth lots more than dirty money with a prison risk
I say the following as someone who has: 1) managed a bug bounty internally as a security engineer, 2) managed bug bounties as a consultant for various tech companies of various sizes, 3) reported security vulnerabilities in bounty programs for companies you’ve heard of, 4) spoken professionally with engineers at tiny, small, medium and large companies running programs and 5) sold vulnerabilities for various reasons:
Bug bounty programs are emphatically not underpaying relative to a black market. Black market exchanges exist for vulnerabilities which impact operating systems, widely used open source software and languages. A key component of the value of a vulnerability is its half-life - that is to say, how long it can be expected to be useful. A vulnerability in Ubuntu has a half-life of years, perhaps decades. A vulnerability in Uber’s web applications has a half-life of one week. In 15 years, you will reliably find servers on the internet chugging along with a horribly misconfigured, vulnerable version of Windows or Debian and an open service written in Python 2.7. In contrast, Uber’s web applications will scarcely look the same in 15 years, and the company can deploy a hotfix to the entire landscape of the vulnerability (their centralized servers) in 24 hours.
Can you concoct a scenario in which a hypothetical saboteur manages to weaponize and capitalize on an exploit in Facebook Ads Manager, or some random Uber server with sensitive data, within a week? Sure, but it’s contrived. The risk/reward ratio just isn’t really there.
I’ve continually crusaded against what you’re claiming on HN for literally years now. It’s simply not true. I don’t mean to be harsh on you in particular, but the confident repetition of incorrect claims becomes frustrating.
This may apply to regular random companies, but does it really apply to very known, rich brands like Uber?
> Can you concoct a scenario in which a hypothetical saboteur manages to weaponize and capitalize on an exploit in (...) some random Uber server with sensitive data, within a week? Sure, but it’s contrived. The risk/reward ratio just isn’t really there.
This is Uber we're talking about. It's not exactly a universally loved company. I can easily imagine someone interested in profiting from the extra bad press and attention caused by a data breach.
I would pay 10BTC for a recent copy of Uber user db containing at least the emails, names and phone numbers of all users.
Sure, kill the credit cards, who gives a fuck.
Knowing where Uber users live, when they're typically home on a Friday or Saturday night, and whether or not they're throwing up with a chance of not remembering anything (I stole) would be fantastic. Oh, I could also sell this to anyone doing ANY data mining to easily enrich their data set.
This is 10 seconds worth of thought, do you really think the Uber data set has so little value?
No. I’m saying that a vulnerability in Uber’s software has very little value.
More precisely, I’ve sold data (and analysis thereof) to the financial sector. I’ve even sold unique data on Uber and UberEats specifically (not gained through a security vulnerability). Data and vulnerabilities are distinct products with separate buyers. Companies interested in data like this are mostly interested in it being sourced, at worst, through scraping or mining. They’re usually skittish about outright vulnerabilities, and have a sense of how likely it is data was obtained in a legally defensible manner.
On the other hand, buyers of vulnerabilities are mostly not using them for interesting dataset acquisition. They weaponize the vulnerabilities themselves instead of buying any single output from a vulnerability, and they mostly use them for developing botnets or constructing online “holes” for identity and credit card harvesting on an ongoing basis.
The point of purchasing vulnerabilities is gaining a privileged position for ongoing compromise that replenishes for a reasonably long time. No one is saying these vulnerabilities are bad; I’m specifically telling you the vulnerabilities are not generally salable, because the parties interested in them have little to no overlap with the parties interested in data. Furthermore, those two markets have separate intentions, processes and risk/reward ratios.
A dataset and a vulnerability that can lead to a dataset are simply not comparable. I believe someone would probably be willing to purchase this particular data, but I do not believe you could weaponize this data on an open market with any regularity, and bug bounty programs would not take this into account when calibrating their payouts.
Finding an organization willing to buy a legally sourced, unique dataset is comparatively easy. So is finding an organization willing to buy a vulnerability that can be weaponized towards a significant number of servers on the internet. But finding an organization willing to buy a vulnerability just for its data value, or an organization willing to use illegally sourced data, is hard. Not impossible, but rarer than either of the other two examples. There is not a regular market for it.
To take your FB Ad Manager example... just recently there was a bug that allowed people to start campaigns for free (by somehow charging the bill to unrelated accounts). A bug like that has a short half-life, but if it lets someone run a million dollars' worth of targeted ads over one weekend for free, I would think you could still get quite a bit for it on a black market.
But instead of taking that argument I’m going to say this: I have been heavily involved in bug bounty programs during my career thus far and I’ve sold data to hedge funds for companies like Uber (not using security vulnerabilities, however). In fact, I have specifically used unique, originally sourced and curated UberEats data to forecast GrubHub’s revenue and market share over time. I’ve even managed bug bounties for several companies and participated in them. I’ve had people threaten me with taking a vulnerability public in the hopes of receiving a payout, etc, etc.
I’m harping on all of this because vulnerabilities and data are very different products, and have very different markets. A vulnerability is salable to a black market under specific conditions; the people interested in acquiring unique data are, to a first approximation, none of the people interested in buying a vulnerability. One of these markets is interested in illegally generated, extremely high profit on an ongoing basis which they can mostly control end to end. The other market is very risk-averse, and is interested in profitable analysis of the data.
You could certainly find some counter-party willing to buy this kind of data eventually, but it would not be a traditional blackhat organization, and it would not be a routine occurrence as it is with vulnerability exchanges. Further, bug bounty programs still would not calibrate their prices based on this, because it would be rarer than vulnerability sales.
"Just throw money at it" seems to be a more integral strategy of their corporate playbook than most companies.
To a first approximation, no bug bounty for web application or mobile application software competes with any black market. In fact there is no black market for those vulnerabilities. These vulnerabilities exist in a centralized system, and therefore have virtually no half-life, which means little to no value. They’re not salable.
Much to the chagrin of people who speculate one way or another about bug bounty payouts on message boards like this one, bug bounty programs do not calibrate their program payouts to compete with a real or perceived black market.
The reason for this is obvious: bug reports have value to a company, but dumps of their own internal data do not. The only buyers of vulnerabilities on a white hat market are the companies who are victims of the vulnerabilities. The bug reports are valuable to those companies because it allows them to patch previously unknown vulnerabilities. But data breaches are not valuable to them in the same way. Why would they pay you for their own data that they already have? It’s not analogous to buying vulnerabilities, because the company gains nothing new.
Therefore the argument that the stolen data would be worth more on the black market than the white market is a moot point, because there is no white market for stolen data. If you’ve stolen data, you’ve overstepped the bounds of any reasonable bug bounty program, at a greatly increased risk of prosecution. Your options are to 1) stop and do nothing, 2) sell the data on the black market, 3) attempt to responsibly disclose the breach, knowing you are in a vulnerable legal position having downloaded the data, or 4) extort the company.
(Notice that “sell the data on a white hat market” is not an option.)
What we do not know in this case is whether the hacker chose #3 or #4. It seems like he used social engineering to get the GitHub credentials, which would normally fall out of bounds for bug bounty programs (never mind the data breach itself). That seems to support the speculative conclusion that the hacker went into this with malicious intent. So does the fact that he resorted to HackerOne seemingly post facto, as the article mentions, so Uber could “verify his identity.” But perhaps he was just naive. We don’t know.
I second what others have said. The fact that this is Uber makes me inclined to believe they initiated the offer of $100k.
Such reward schemes are set up as a sort of competition, or bet. You invest time not knowing if you will find anything worthy of a reward. If you expect to have a 10% chance of finding a vulnerability, the reward needs to be 10x the value of the work for it to be a worthwhile use of your time.
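That break-even argument is just an expected-value calculation; a quick sketch with illustrative numbers (the dollar figures here are assumptions, not anything reported about the Uber case):

```python
def break_even_reward(work_value: float, p_success: float) -> float:
    """Minimum reward that makes speculative bug hunting worthwhile.

    If each hunt costs `work_value` of your time and pays out with
    probability `p_success`, the expected payout p * reward must at
    least cover the cost, so reward >= work_value / p_success.
    """
    return work_value / p_success

# e.g. a month of work worth $10k with a 10% hit rate
# needs a ~$100k payout just to break even:
assert break_even_reward(10_000, 0.10) == 100_000
```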
I disagree; sticking to that would be foolish even in a full-scope pentest. It depends on what the data is. I would, and have, downloaded any data of operational value using a bug that was found. If I can dump the entire company's employee password hashes, I'm going to. And after the engagement, a company-wide email is going out to roll those credentials, which, whether I personally viewed them or not, are now considered compromised. There are some exceptions if there is reason to have high confidence that no one else ever exploited the bug, but that's taking a big chance.
Data of no operational value would be difficult to explain, and customer data or credentials would be off limits beyond proving the bug. Even though it's likely legally fine in most cases (note: a work-for-hire contract or an internal pen test team have very different rules of engagement than bug bounty spec work), it would be highly unprofessional. You're going to be putting the customer in a very awkward position at best. But if you're hired to perform services on behalf of the company, breach disclosure laws don't apply any more than if you are hired as a contractor to migrate their database from one system to another.
It's routine during a netpen to download very large unlabeled files, especially VM images. You don't know whether they're a security threat until you check, and your goal is to escalate as much as possible. Even if it's named backup-20141120, you don't know what it's a backup of, or whether it's encrypted. You need to at least start downloading it to check the file headers.
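That "check the file headers" step amounts to reading the first few bytes and comparing them against known magic numbers. A minimal sketch (the `sniff` helper is hypothetical, and real tools like `file` know thousands of signatures, not this handful):

```python
# A few well-known magic numbers; deliberately non-exhaustive.
MAGICS = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"QFI\xfb": "qcow2 disk image",
    b"KDMV": "VMDK disk image",
    b"LUKS\xba\xbe": "LUKS encrypted volume",
}

def sniff(path: str) -> str:
    """Guess a file type from its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(8)  # longest magic above is 6 bytes
    for magic, name in MAGICS.items():
        if head.startswith(magic):
            return name
    return "unknown"
```

In practice you'd run something like this against the partial download, so an encrypted or uninteresting blob can be abandoned before pulling the whole thing.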
A good pentester will try to suck down as many different things as possible and sort ruthlessly for anything that could be useful: keys, passwords, notes, bash histories, logs, everything. But that's why we do so on an encrypted, isolated drive. The data never leaves the partition, and it's deleted at the conclusion of the test.
People working on HackerOne don't have that kind of discipline, so it's important to err on the side of caution. But it's totally valid to grab a VM snapshot and look through it to check how to pivot elsewhere.
But the moment you realize it's super sensitive data, you want to wipe it and contact them immediately. (Perhaps after checking whether there's anything you can use to escalate privileges.) If it's named "ssns.txt", you probably still want to download it just to check it really is SSNs before you go running to them about their exposed text file.
The point is, it sounds very dramatic to say "Hacker downloaded 57M user records in 1GB of data", but sometimes you don't know they're user records until you look, and sometimes you can't look until it's fully downloaded. And your goal is to safely simulate what a real hacker would do. That's the point of a pentest, and why ethics and trust are so important.
I've snagged several VM image files during pentests at various companies. I don't remember whether any of them turned out to be useful, but I do remember poking through them in VMware Fusion to see what the devs had littered around.
Now: I was under strict NDA. That isn't true for HackerOne finders. Every company has different rules. Some tell you up front not to do this, e.g. https://hackerone.com/deptofdefense ("You do not exfiltrate any data under any circumstances.") https://hackerone.com/square ("Never attempt to view, modify, or damage data belonging to others.")
Crucially, Uber does not: https://hackerone.com/uber
Searching for "data" shows that everything is in scope. So it's really tricky to say there was malicious intent here.
edit: Usually it's the other way around, though: You find an exploit that gives you a little drip of data, so you know that you could technically enumerate the entire dataset if you wanted to. Obviously, don't do that, because you already know from the first drip whether there will be anything useful if you keep going.
You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)
(note: clear conjecture, I only read half of the article before becoming more interested in the HN discussion)
Edit: According to the disclosure process on https://www.hackerone.com/disclosure-guidelines, there's nothing about disclosures lasting this long.
And they were currently negotiating with the FTC over a different prior undisclosed data breach.
This doesn't add up.
In short, the issue people are taking with Uber is that they tried to pass off a security breach as having taken proactive measures, while this was a case of ransom.
The problem isn't that Uber paid him for finding the vulnerability, but that Uber kept it secret for so long.
It's a good thing no hacker would ever think to make a backup copy of the file on a USB stick or upload it to some cloud provider.
I think that I might be able to cover my tracks, but I'm definitely not certain enough to stake my freedom on it. There's always a chance I'd make some mistake and they'd happen to be more thorough than I am, and the same applies to everyone (e.g. including authors of APTs employed by the major intelligence services around the world); a 90% chance of getting some extra money on top of what he got isn't worth a 10% chance of criminal prosecution. Knowing that the machine is going to be analyzed by someone with a lot of resources is a sufficient deterrent, IMHO.
That said, Windows does keep a whole lot of information on activities in the registry and filesystem.
In what world is the FL man (and his 2nd person) not a felon?
Perhaps Uber will face the same penalty.
if there was no evidence that any data was actually compromised, I'm not sure I see a reason why they would need to disclose this to the public.
Doesn't sound like a typical bug bounty to me.
The difficulty for Uber is that the existence of this a bug was kept a secret from the public, whose information may have been stolen. Nobody knows that this bug was not exploited by other parties.
Long-time security researcher here... most smart companies do not bribe; they simply hire, post-exploit. An employee or even a consultant under NDA can't disclose very easily. In fact, many Fortune 500s will seek out up-and-coming analysts for some fluff project with little other reason than to get that NDA.
Uber's PR response at the time was that it was the user's fault for not choosing a complicated password, vs. owning up that there's a problem and/or being concerned for their customers. What a great company!
Buying and selling Netflix, porn, and Uber accounts in this way is a very common and popular practice.
If your account was affected, there's a very good chance it was via this method rather than a broader Uber breach.
Scroll to the middle of the article.
How was the money stolen? Is there some way to transfer money or vouchers from Uber to someone else?
Can we trust anything they say? I just know my account was hacked in 2014... money stolen from me, and then seeing tons of others suffering the same fate. All the while the company that let it happen was laughing at its users... blaming them.
So he's doing corporate espionage stuff now?
The test for substance is a lot like it is for links. Does your comment teach us anything? There are two ways to do that: by pointing out some consideration that hadn't previously been mentioned, and by giving more information about the topic, perhaps from personal experience. Whereas comments like "LOL!" or worse still, "That's retarded!" teach us nothing.
Plus it makes boring reading.