NHacker Next
Data isn't just being collected from your phone, it's being used to score you (chron.com)
Eyght 1356 days ago [-]
I feel like the Chinese-style social score system won't have enough support in government in most countries, but we'll receive a decentralized version anyway through corporate overreach.

Also, it's always amusing to arrive at an article about data collection and be greeted by this: https://i.imgur.com/d4Z4sdd.png

cactus2093 1356 days ago [-]
The credit system in the US is already a pretty horrific version of this, and nobody really seems to care and it’s been like this for decades. And it’s not even just used for credit anymore, renting an apartment now usually requires a credit check.

I had thought credit karma and similar services were at least adding transparency to the industry but was recently in for a rude awakening when trying to get preapproval for our first mortgage. It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit. Services like credit karma are just making up their own score based on the report which can be drastically different than what the creditor decides it is. And credit pulled for different uses somehow ends up with different scores, e.g. for a credit card approval your score will often show up higher than for a mortgage. You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.

It’s insane, and the fact that this fairly old, low-tech institution has not been regulated successfully or made fair to the average person bodes very poorly, IMO, for regulation of privacy and fairness based on future technology.

lotsofpulp 1356 days ago [-]
> You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.

There is no “the credit score”. Any lender is free to use whatever scoring algorithm they like. And they do. Some lenders may choose to use certain scores for certain things, but underwriting has no obligation to use one specific credit score.

Therefore there is no reason for people to care about their credit score, whether it be from FICO or credit karma or whoever.

The only thing you should do is make sure information on the credit report is accurate. And what is your source for claiming you can pay to get things removed from credit score? It doesn’t even make sense, as it just shows the status of your lines of credit. You can’t just make a line of credit disappear as no credit reporting agency or financial institution is going to want to commit fraud for any amount of money that an average person might offer.

njarboe 1356 days ago [-]
Credit reporting agencies and financial institutions commit fraud all the time. They call it identity theft so that you think it is your fault instead of their fault. No one would care about a bank giving a loan to a person that was impersonating them except for the fact that the bank (credit card, mattress store, car dealership, etc) commits fraud and reports to the credit agency that you have defaulted on your loan when you have not. Then the credit reporting agency also commits fraud when they sell this false and damaging information to others.

This is the big lie. That "identity theft" should be the problem of the person who was impersonated. Pass laws that heavily fine entities that give false information to credit bureaus and fine credit bureaus who give out false information. "Identity theft" would no longer be something that people would worry about.

lotsofpulp 1356 days ago [-]
I agree that the consequences of identity theft should fall on the financial institutions and credit reporting agencies.
mindslight 1356 days ago [-]
That's not fraud. The tort you're looking for is libel. The "Fair" Credit Reporting Act explicitly immunizes the surveillance bureaus against the tort of libel. This is actually another instance of regulatory capture.
njarboe 1356 days ago [-]
Yes, you are right. It's libel.

Wow, I did not know that the surveillance bureaus are exempt from libel. What an incredible law against the interests of the people. That explains why I've never heard about class action suits against them. I'll have to read up on it. Thanks for the info.

gruez 1356 days ago [-]
>Credit reporting agencies and financial institutions commit fraud all the time. They call it identity theft so that you think it is your fault instead of their fault. No one would care about a bank giving a loan to a person that was impersonating them except for the fact that the bank (credit card, mattress store, car dealership, etc) commits fraud and reports to the credit agency that you have defaulted on your loan when you have not.

That's literally not what fraud is.

>In law, fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right

If some guy walks into a bank and claims he's you, and the bank believes it, the bank isn't gaining anything. If anything, it lost money. Wrongly reporting the default to the CRAs doesn't benefit the bank either; it's not like the CRAs compensate banks based on how many default reports they send in. Finally, it's missing the "intentional" part. Lax security practices are negligence at best.

njarboe 1356 days ago [-]
Maybe a better way of saying this is that the bank is committing libel when they report to a credit bureau that you have defaulted on a loan when you have not. They say you did something that you did not do, and the fact of them declaring that lie does you damage. The important thing is that "identity theft" should be a problem for the entity that made the loan to a criminal and not the person that was impersonated. Call it "bank libel" when someone's credit score is ruined by a bank that gives a loan to the wrong person, and one is closer to describing the truth of the situation.

Edit: mindslight mentions in this thread that: "The 'Fair' Credit Reporting Act explicitly immunizes the surveillance bureaus against the tort of libel". Amazing.

bsanr2 1356 days ago [-]
>If some guy walks into a bank and claim they're you and they believe it, the banks aren't gaining anything.

They would have just sold a loan.

>If anything, they lost money.

Can't they sell the defaulted loan to collections?

>Lax security practices are negligence at best.

Systemic, known, and long-standing negligence speaks to intention.

gruez 1356 days ago [-]
>They would have just sold a loan.

Yeah, that's what their books say, but that doesn't necessarily match reality. In reality they gave $1000 in cash to someone, and their expected return on that is a fraction of that. Same logic as if you paid $800 for bonds with a face value of $1000, but unknown to you, the bonds are actually junk bonds with an expected value (factoring in repayment and default) of $500. In that case your paper gains are $200, but in reality you actually lost $300.

>Can't they sell the defaulted loan to collections?

They gave the bad guy a loan for $1000. They're out $1000. They sell the loan to collections for $300, they're still out $700.
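A rough sketch of that expected-value accounting in Python (all figures are hypothetical, taken from the examples above):

    # The "paper" view versus the expected-value view of the same position,
    # using the illustrative bond numbers from the comment above.
    price_paid = 800        # what you paid for the bonds
    face_value = 1000       # what your books say they are worth
    expected_value = 500    # expected recovery once defaults are factored in

    paper_gain = face_value - price_paid       # +200 on paper
    real_result = expected_value - price_paid  # -300 in expectation

    # Same shape for the fraudulent loan: lend $1000, later sell the defaulted
    # debt to collections for $300, and the bank is still out $700.
    loan_loss = 1000 - 300

    print(paper_gain, real_result, loan_loss)  # 200 -300 700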

>Systemic, known, and long-standing negligence speaks to intention.

Security exists on a spectrum, and there are trade-offs to be made. Clearly the banks don't have an interest in selling fraudulent loans, and putting up too many barriers when it comes to authentication also has costs.

bsanr2 1355 days ago [-]
Does... interest stop being a factor? They gave an x-year loan for $1000 at x% APR (was it subprime?). Also, penalties. Collections can go after you for the full amount. They buy it from the bank for the original $1000+.

Or, the loan hasn't defaulted yet. The bank suspects it will. They load it into a bundle, get it highly-rated, sell the bundle. They make a profit. A whole bunch of other people get hosed.

>Clearly the banks don't have an interest in selling fraudulent loans

This is the wrong decade to be making that statement.

lotsofpulp 1356 days ago [-]
> Can't they sell the defaulted loan to collections?

Why would anyone pay face value for a defaulted loan?

awslattery 1356 days ago [-]
"Pay for delete" in collections reported has transitioned from a rarity, to most RMA agencies offering it directly in their initial letter and websites. None of the credit bureaus have pushed against this publicly in the last 3 years of it being more widely adopted.
lotsofpulp 1356 days ago [-]
If the owner of the debt accepts a lower cash payment in exchange for canceling the debt, I don’t see how that’s a problem for the public. It’s a problem for lenders, but seems like they’ve caught onto it: https://www.nerdwallet.com/article/finance/pay-for-delete
awslattery 1356 days ago [-]
In my experience, I've been able to settle for less than the full balance owed (the collection agencies buy them by the bundle for pennies on the dollar) and have all negative tradelines removed (original charge-off and collection account) under new (2018ish) publicly listed policies of doing so.

This approach is mainly used by third-party debt buyers, like Encore/Midland/Cavalry. And doesn't apply to public judgments, if they decide to pursue that route. Thankfully, I have managed to avoid that in my credit recovery journey.

First-party (original creditor) will laugh or tell you it isn't possible if you ask for a PFD settlement, but there are documented instances of it happening as early as 2010 in my research for "making the case" before these policies started to become more widespread.

pxeboot 1356 days ago [-]
> And what is your source for claiming you can pay to get things removed from credit score?

It is definitely possible to do a "pay for delete" to have negative info removed from a credit report in exchange for payment.

lotsofpulp 1356 days ago [-]
It’s not the credit reporting agency that’s doing it though, it’s the lender.

https://www.nerdwallet.com/article/finance/pay-for-delete

I don’t see what the problem is if the owner of the debt agrees to erase the debt.

syshum 1356 days ago [-]
>> And it’s not even just used for credit anymore, renting an apartment now usually requires a credit check.

Renting an apartment is credit, I am not sure why you view it as something other than credit?

The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments

How is that not credit?

A better example is employers using it for hiring choices, which does happen as well, but using an apartment as an example of bad uses of credit is, I think, misguided.

>> It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit.

Yes the individual or organization that is loaning you a large amount of money can choose how they use the credit report they obtain for you. Again here I am not sure why this is a revelation or a bad thing.

There are also many different and competing credit scores; no person has "a credit score". There are at least 5, if not more, credit scores out there, and different institutions will use them in different ways, FICO being the most common but not the only one.

>You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.

Yeah, I believe these institutions should also have to release your personal score with your annual free report; Congress should fix that omission in the law.

gruez 1356 days ago [-]
>Renting an apartment is credit, I am not sure why you view it as something other than credit?

>The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments

Not really. In most places you have to pay first and last month's rent, which means you're paying for the service before it's rendered. Therefore they're not extending credit, as you have already paid for the service in advance.

syshum 1356 days ago [-]
Well there are a few problems with this

1. I would like you to define "most places", as for the first 35 years of my life I was a renter, and in that time I only ever had to pay "first and last" in one instance. Most of the time it was the first month's rent and a security deposit (which was often something small like $100 or $200), with no "last month's rent", so around here it was not "most places".

2. Even if you use that as a metric, it is more like a down payment than "services paid in advance", unless you are on some kind of month-to-month with no lease. Every lease I have ever signed shows the TOTAL of all payments, which are paid in 12 installments just like a loan; if in month 6 you just move, well, you still owe the other 6 payments (less any down payment, aka the last month you prepaid).

The owner is absolutely extending you credit for the use of the property. If you sign a lease for an apartment at $1,500 a month for 12 months, you are agreeing to pay $18,000 to the owner for the use of the property, and you have both agreed to pay that over 12 equal payments; in your hypothetical, the owner has asked for a $3,000 down payment on that loan and in exchange adjusted the terms to 10 equal payments.
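Spelled out with those illustrative numbers:

    # The lease-as-loan view from the comment above, with the same
    # illustrative numbers.
    monthly_rent = 1500
    term_months = 12
    total_obligation = monthly_rent * term_months  # $18,000 owed once the lease is signed

    upfront = 2 * monthly_rent                     # "first and last" = $3,000
    remaining = term_months - 2                    # 10 equal installments left
    print(total_obligation, upfront, remaining * monthly_rent)  # 18000 3000 15000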

gruez 1356 days ago [-]
I'm not sure I follow here. The transaction might be structured as the landlord giving you 12 months of housing service at the time of signing of the contract (or when the lease starts) and you paying him in installments of 12 payments (with the first one or two upfront), but you don't "take delivery" of the service all at once. It's given to you on a continuous basis. Therefore, at any moment during the 12 months, you don't owe the landlord anything.
syshum 1356 days ago [-]
The second you sign the lease you owe the landlord the full amount. It is a debt you owe.

It does not matter whether you realize the use of the property or not. For example, if you signed a lease on an apartment in January and then, due to a global pandemic, could not move in until July (through no fault of the landlord), you would still owe the rent for those 6 months even though you did not occupy the unit.

cactus2093 1356 days ago [-]
Renting is not credit because you don’t get the thing all upfront and then pay for it later. In fact, if anything, the renter is giving the landlord up to one month of rent as credit because rent is usually due on the first of the month in order to be able to live there that month. But that’s just sort of an artifact of the discretization of the payment; as far as the arrangement goes, there is an ongoing equal trade of values between both parties.

As far as banks being able to make their own credit decision, that’s all fine and good, but then they pretend like that’s not what’s happening. If I ask a bank why they denied my loan, they won’t tell me about the specific activities I’ve done that prove me uncreditworthy (sometimes I can press to ask what they can see on my report and if they’re feeling nice they may tell me, but they don’t have to). The fact that there’s no party you can ask to just evaluate ahead of time what the result will be, or to see what I can do to get myself above the threshold, means the individual is always at a disadvantage of information asymmetry, and that is what makes the system bogus. And then on top of that, just the act of seeing if you qualify for the thing lowers your credit score further!

If I’ve been burned by this as a high paid tech worker with just a couple of mistakes in my autopay settings in the past, I can’t even imagine how big of a problem this is for people who have had actual hardships.

syshum 1356 days ago [-]
>>Renting is not credit because you don’t get the thing all upfront and then pay for it later. In fact, if anything, the renter is giving the landlord up to one month of rent as credit because rent is usually due on the first of the month in order to be able to live there that month

You have a fundamental misunderstanding of how a lease works; you are equating it to something like a prepaid phone bill, and that is not at all what a lease is.

You are obligated to pay the full amount of the lease; if you up and move in the middle, you still owe the landlord the full amount.

The property owner would run a credit check for the same reason as a lender would: to judge if you are a responsible person who would repay this obligation under the terms of the agreement. It is far closer to a loan than you seem to want to give it credit for.

Further, the use of credit scores and other background checks only becomes more important the harder it becomes to evict bad tenants.

Something that "there is an ongoing equal trade of values between both parties." would be terminable by either party the second that value proposition changes, this is not the case with Rental property where the interaction is governed not only by the terms of the lease but layers of federal and local laws

>If I ask a bank why they denied my loan, they won’t tell me about the specific activities I’ve done that prove me uncreditworthy

If you scroll to my other comments I advocate for changing that, I am a big advocate of personal data ownership and believe any person should have the right at any time to request all data any company collected about them.

>> (sometimes I can press to ask what they can see on my report and if they’re feeling nice they may tell me, but they don’t have to).

You have the right to get an annual credit report from every credit agency; that is what they would see.

>The fact that there’s no party you can ask to just evaluate ahead of time what the result will be, or to see what I can do to get myself above the threshold, means the individual is always at a disadvantage of information asymmetry, and that is what makes the system bogus.

Yes and no. There is no 100% guarantee of anything in life, but there are several ways you can get a good, fact-based analysis of your general creditworthiness. Can it predict whether a given institution would grant you a loan? No, but it can predict whether you have a good chance of some institution giving you a loan.

The lower your general score, obviously, the less reliable these tools will be. If you have a FICO of 810 plus provable long-term income, then chances are anyone will loan to you; if you have a FICO of 620, it becomes more of a crapshoot and will be based on many other factors than just your credit score. Similarly, if you have a high FICO score but unreliable income (self-employed), it also becomes more of a crapshoot.

>>If I’ve been burned by this as a high paid tech worker with just a couple of mistakes in my autopay settings in the past

I hear stories like this often, but this is not my personal experience. Not saying it can't happen, but companies I do business with do not insta-report you if your autopay fails...

You have to be 60+ days overdue before it shows up on the credit report... and with all the modern alerting and other tools, I fail to believe that a "simple autopay mistake" is what caused one to become 60+ days delinquent on a payment.

gexla 1356 days ago [-]
I don't know if this is still the case, but banks at one time would deny you based on poor credit history also. And so, you get to stay in the "unbanked" category which is just another factor which may keep you in poverty.

The FinTech industry has since grown to the point that being unbanked doesn't need to be a thing anymore. It's easy to pick up a prepaid debit card which you can receive ACH payments on.

Spooky23 1356 days ago [-]
It’s already happening. Georgia had an anti-opioid program where they could score every pregnant woman in the state for likelihood to become dependent on opioids during pregnancy. Babies born with addiction cost the state about $1M each.

There were a few similar programs a few years ago when federal grants were made available. Iirc, they were modeled on the stuff built to identify people vulnerable to becoming extremist terrorist types. Insurance companies have databases of way more lifestyle and other behavior data than people realize. (Everything from sports, politics, to porn and gambling habits — anything for sale) I’m pretty sure Georgia mashed that against Medicaid claim data to build the model.

It’s a reason why we all need to watch “pre-existing condition” debates closely. If insurers know that 45 year old divorced father of 3 who moves every 3 months is a smoker, gambler and drinker, they don’t want to write a policy — the guy is a trainwreck with no support system.

harlanji 1356 days ago [-]
As a healthy homeless former professional working as a grocery bagger now, I can feel management watching me for signs of drug abuse to explain my ratty clothes and spotty hygiene. Nobody can guess why I’m not working a good job but seem very smart and thorough (slander from boss destroyed reputation). I’m glad to have a job and sorry to show up looking trashy, but it’s the best I can do right now. I am just thankful that they don’t have any data or scores for me yet to help them see a drug addict or boozer, because I know that I’d raise a lot of flags there and it’d be enough to color the suspicion. Metrics are scary, I know my current scores will make things harder for years to come, starting with credit having bombed right down to these corporate/insurer metrics whose systems I was in as a $200K/yr professional. Lots of explaining the data to do if I make it back to pro life.
erikerikson 1356 days ago [-]
Glad you're doing what sounds to be healthier.

I suspect many think differently here, but I don't think the problem lies in the knowledge but in how people with power are currently using the knowledge. Restricting the knowledge is simply our current best mitigation of the current conventions. Another world might use that knowledge to better support you and others who have seen hard times, restore what sounds like a lack of justice, and help you find an environment where you could thrive, and perhaps even that boss of yours. [edit: so that they could thrive but also have reduced negative impact]

In other words, rather than focusing on mitigating risk a sufficiently high quality system could help us maximize our lives. Unfortunately, the probability of something so pro-social being the outcome seems low.

syshum 1356 days ago [-]
The biggest problem today, be it credit scoring, social media censorship, or anything else that uses these vast databases of personal info, is the complete lack of transparency.

Denied a loan, denied a job, kicked off a platform: in none of these situations is the company required to justify its actions or be transparent about the policies and processes it used to reach that conclusion.

This black box leaves people feeling powerless and out of control because they are.

One way to combat that is stronger data ownership laws, and the ability for people to get ALL information a company has collected about them.

So for example if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) they used to make that determination

inetknght 1355 days ago [-]
> The biggest problem today ... is the complete lack of transparency

While that's the biggest problem right now, transparency by itself isn't useful. I have transparency into my bills but I don't have the ability to change them. Negotiating power is being taken away from consumers. When's the last time you've seen anything which didn't have some form of liability limitation clause? When's the last time you've been able to negotiate that liability limitation?

bsanr2 1356 days ago [-]
"Fail fast and opaquely, good luck iterating ya bozo." Good advice, guys, thanks. /s

>So for example if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) they used to make that determination

The reason why that doesn't happen is because they're almost certainly discriminating in ways they shouldn't be.

icedistilled 1356 days ago [-]
I mean, it's already standard practice (All states except CA) to use credit score in things like pricing your auto insurance.

It's only a tiny leap to incorporate a similar magic number some company comes up with. Actually due to competition, if it correlates to risk, all companies will literally be forced to use the magic numbers or go into an adverse selection death spiral.

Unless there is regulation against it, like in CA. Not all regulation is bad.

TeMPOraL 1356 days ago [-]
I wonder at which point it becomes a self-fulfilling property - at which point decisions based on data-driven pigeonholing actually lock people on the paths "discovered" in the numbers?

E.g. if a young adult gets classified as "disorderly, drunk, unsuitable for reproduction, suitable only for low-skill work" based on their history of college partying, and then consequently denied work and social opportunities (as everyone doing background checks sees that summary), the prediction essentially becomes a sentence.

(The third season of Westworld, despite bad writing and even worse gunfights, was very good at bringing this point up.)

lmkg 1356 days ago [-]
Read Weapons of Math Destruction by Cathy O'Neil. The book explores several ways that's already happening. Her main premise is that there's a feedback loop in many data-driven policies. You only get success results for the things that you try, and you only try the things you already think are likely to succeed. As a result, algorithmic policies tend to reinforce the status quo.

Loan risk algorithms will favor people "similar to" those who have paid back loans before, a sample group biased towards people that banks have already loaned to before. As a result, a lot of the factors are biased towards "from a white upper-middle-class suburban background."

And recidivism estimators, which are used as jail sentencing guidelines in some places.

Screening algorithms for job resumes, and college applications.

Algorithms send police to where crimes are reported. Crimes are reported because the police are there to witness them. The area gets designated a high-crime area. Regular people are arrested more often because regular activity is suspicious in a high-crime area, affecting their future prospects. The higher arrest rate is used to justify this.

It's a continuous spectrum rather than a single point. But if I were to pick a single "point" where it became a self-fulfilling prophecy? 1994, due to the widespread passage of three-strikes laws.
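A toy illustration of that reinforcement loop (the two areas, the rates, and the allocation rule are all made up, purely to show the mechanism):

    # Two areas with identical true crime rates; area B starts with slightly
    # more patrols. Recorded crime is proportional to patrols present, and
    # next year's patrols are allocated by share of recorded crime.
    true_rate = [0.10, 0.10]
    patrols = [10.0, 12.0]
    total_patrols = sum(patrols)

    for year in range(10):
        recorded = [r * p for r, p in zip(true_rate, patrols)]
        shares = [c / sum(recorded) for c in recorded]
        patrols = [total_patrols * s for s in shares]

    print(patrols)  # still ~[10.0, 12.0]: the initial gap is locked in, never corrected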

gruez 1356 days ago [-]
Can't this be solved by randomly giving out the wrong prediction and seeing how it turns out? E.g. for 1% of applicants, pretend to give them an 800+ credit score, then check the outcome compared to the "expected" score.
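A minimal sketch of such a randomized hold-out, assuming a 1% exploration rate (the function name, the epsilon parameter, and the placeholder 800 score are all invented for illustration):

    import random

    def score_with_exploration(model_score: int, epsilon: float = 0.01):
        """For a small random slice of applicants, ignore the model and report a
        'good' score anyway, so unbiased outcome data exists to audit the model."""
        if random.random() < epsilon:
            return 800, True       # exploration case: pretend the applicant scores well
        return model_score, False  # normal case: use the model's prediction

    # Later, outcomes for the exploration cohort can be compared against what
    # the model would have predicted for them.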
inetknght 1355 days ago [-]
You're playing with peoples' lives here. That's not funny at all.
zozbot234 1356 days ago [-]
Yup, this is what "structural" social prejudice and discrimination are all about, once you strip away all the pointless and meaningless rhetoric that's somehow supposed to be "about" these issues. It's a self-perpetuating equilibrium of basic social conditions and superstructure (viz. discourse, supporting ideas, commonly-held worldviews etc.) that create a nearly unescapable "trap" of invisible oppression.
joejerryronnie 1356 days ago [-]
A Brave New World did it easier, simply sort each person into a genetically pre-determined path prior to birth. No reason to go through all that messy data collection and analysis.
rayhendricks 1356 days ago [-]
Yes, this pattern of data-driven decisions on our lives is troubling. However, if we understand what these corporations are looking for, it is easy to exploit them for fun and profit. If you get a decent credit score, say hello to multiple $500 credit card opening balances, free plane tickets, free hotels.
api 1356 days ago [-]
This is kind of the meta plot of Gattaca. You get what you measure, so if we try to measure "potential" we end up getting only the potential our measurements say we have.
white-flame 1356 days ago [-]
One difference there is that people do have the right to view and challenge credit reports, while all this other social scoring is hidden from the targeted people.
jacques_chester 1356 days ago [-]
> Actually due to competition, if it correlates to risk, all companies will literally be forced to use the magic numbers or go into an adverse selection death spiral.

To expand on this: adverse selection is where a consumer has hidden information about the cost they can inflict on a provider. Usually this is talked about in terms of insurance (especially health insurance), but risk is risk and so the principles are the same.

My recollection of what economists predict is that there are two stable equilibria that achieve Pareto efficiency. The first is that discrimination (in the general yes/no sense) is completely forbidden and risks are totally pooled. The second is that complete discrimination is possible without limitation.

The worst outcomes are all found in attempted compromises. Not only do you have the costs of whatever tradeoff you chose between pooling and discrimination being less efficient than the Pareto points, but you also introduce a great deal of dead weight due to complex regulation and oversight, plus efforts made to evade regulation and oversight. Collectively we are worse off, even if individuals think otherwise.

I don't think allowing total discrimination is a viable option in this day and age. Which means banning it wholesale and encouraging the formation of universal risk pools.

Hoasi 1356 days ago [-]
> Also, it's always amusing to arrive at an article about data collection and be greeted by this: https://i.imgur.com/d4Z4sdd.png

Meta amusement: https://i.imgur.com/E5bFeb3.png

RealStickman_ 1356 days ago [-]
That's actually one of the best cookie banners I have seen. You don't even have to deselect everything, just 3 boxes.
simonebrunozzi 1356 days ago [-]
It would not be "decentralized". It will still be centralized, however the "master" will not be a government, but a private company.
trustmeimdrunk 1356 days ago [-]
Last I checked, data collection from Western-bloc companies is pretty centralized. Extensive records of your online behavior are being used (hence the money being made in their collection and sale) by HR departments, insurance companies, and loan desks to determine whether you qualify for a top-tier salary for the same work, affordable coverage, and approval for a loan.
cosmodisk 1356 days ago [-]
People are complex. If some piece of code is used to predict my employability based on my facial expressions, there's not much to add. It's already as bad as it gets: you can't say/write anything non-vanilla in public because 10 years later some HR snooping app will analyze the sentiment of the post and outright reject you. And we're all kind of blindly walking into this, no questions asked.
annoyingnoob 1356 days ago [-]
At least the Chinese system is out in the open.

Many private companies surveilling everyone without anyone's knowledge is terrible. Companies using these scores are really only targeting a group of people that allow the collection to happen.

My car insurance company would sure love to put a tracker in my car but there is no way in hell that is going to happen. I'm sure my driving style would qualify me for higher rates, yet I've never caused an accident or made a claim where I was at fault.

icedistilled 1356 days ago [-]
Yes they would, but the flip side of them not being able to price people based on actual driving behavior is by charging all men higher, all single people higher than married people, and poor people higher based on credit score.

Plus, why single out insurance companies and ban them from creepily tracking you wherever you go, and not all the other companies who do that with your cell phone?

Also, to be pedantic, they don't want to put trackers in the car; that is expensive. They want to use your cell phone like all the other apps tracking you, or they want the car manufacturers to let them in on their data, since most modern cars have the ability to track and broadcast location.

>made a claim where I was at fault.

That's a big caveat. There are plenty of accidents that are truly not one party's fault, but according to law even being judged as 49% at fault is still "not at fault". And even in cases of 0% at fault as determined by the insurance adjusters, there's a big chance one still had contributing factors.

Also, there's a big chance not at fault claims will be taken into account when pricing you and raise your prices. Just right??

On the other hand, not-at-fault accidents do correlate with higher risk, and it's easy to see why. For example, people who get rear-ended are not at fault. But following the car ahead too closely leads to needing to brake harder, increasing the chance of being rear-ended. Following too closely also increases the chance of rear-ending the vehicle in front.

annoyingnoob 1356 days ago [-]
I was rear-ended in stop-and-go traffic. The same person was behind me for miles. He was looking at his phone and ran into me - totally and completely not my fault. He refused to give me his insurance info. I made a claim against my own insurance. I'm pretty sure the insurance company went after him and recovered costs. How exactly should that influence my rates? This is exactly why I pay for insurance, a service I already pay for that is already factored into the costs. I pay extra for the zip code I live in already.

I have no driving record and I've caused no accidents. If I put a tracking device in my car today my rates would go up because I accelerate quickly and brake hard when conditions allow. That is not right.

shuntress 1356 days ago [-]
Your insurance company would prefer to cover people who notice the reckless driver then pull over to get out of their path.

You did not do that, so they raise your rates.

Edit: not that you were at fault or did anything wrong.

My point is that the insurance company's strongest incentive is to pay out as little as possible. This includes payments made where their client was not at fault. If it were up to them, everyone would pay all their premiums on time and never drive.

annoyingnoob 1356 days ago [-]
They recovered their costs. I already pay for the service. Tracking is just another excuse to raise rates, which as you note is the goal (bring money in, don't let it out). When rates have nothing to do with liability is it really insurance?

To quote wikipedia: If the likelihood of an insured event is so high, or the cost of the event so large, that the resulting premium is large relative to the amount of protection offered, then it is not likely that the insurance will be purchased, even if on offer. Furthermore, as the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer. If there is no such chance of loss, then the transaction may have the form of insurance, but not the substance (see the U.S. Financial Accounting Standards Board pronouncement number 113: "Accounting and Reporting for Reinsurance of Short-Duration and Long-Duration Contracts"). https://en.wikipedia.org/wiki/Insurance

shuntress 1356 days ago [-]
You pay either way.

It's not about covering costs to provide a needed service. That would be a public service.

This is a private company looking for profit. They want to raise your rates.

annoyingnoob 1356 days ago [-]
What you are talking about would no longer be considered insurance.

> the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer

The vast majority of automobiles do not hold their value; they depreciate over time. If I've had my car for, say, 5 years and I've paid the premiums for 5 years, I may have already paid the full value of my car as it is today due to depreciation. I probably pay for one minor incident per year in premiums. Unless I total my car within the first 5 years, I'm not really getting insurance; what I'm really getting is a payment plan for a payout I most likely will never get.
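A rough sketch of that comparison (the premium, car value, and depreciation rate are all invented; whether the lines cross depends entirely on the actual numbers):

    # Cumulative premiums versus the depreciated value of the car they insure.
    # All figures are hypothetical.
    annual_premium = 1500
    value = 15000
    depreciation = 0.20   # assumed 20% loss of value per year

    paid = 0
    for year in range(1, 6):
        value *= (1 - depreciation)
        paid += annual_premium
        print(year, round(value), paid)
    # With these numbers, cumulative premiums pass the car's depreciated value
    # around year 4-5; with slower depreciation they never do.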

chordalkeyboard 1356 days ago [-]
The payout is the risk you transferred to the insurance company. You’re not “supposed” to get all your premiums back, because then why would anyone start an insurance company? It's not a bank account; it's more like a lottery ticket. The insurer assumes the risk of covering an accident in exchange for the guaranteed income of your monthly premiums. In return, you exchange a fixed regular payment for protection against an unlikely event. In order for this arrangement to work, the insurer must make a profit.

You might understand this but many people today do not, which is why they expect medical insurance to cover 100% probability events like an annual checkup with the primary care physician.

annoyingnoob 1356 days ago [-]
I'm not asking for my premiums back and don't expect them. Just saying that after 5 years of premiums the insurance company would break even on that policy. If premiums are high enough, it's not really insurance. Not that everyone has the same insurance company, but generally, if everyone has paid premiums that meet the value of the car after 5 years, then all of the insurance company's risk is with cars less than 5 years old. The insurance company will keep pushing for more intrusion into your life to 'give you the best rate', but really you'll just pay more. Like you said, if it wasn't profitable then no one would do it. Making apps and integrating with telematics doesn't come for free, and it's not coming out of the insurance company's bottom line.
chordalkeyboard 1355 days ago [-]
Well, how much capital are you investing for 5 years at zero percent return? I think if that were required of an insurance company, there would be no insurance companies.

> Making apps and integrating with telematics doesn't come for free and its not coming out of the insurance company bottom line.

Yes, it is. Their bottom line has to make room for that by increasing prices. In insurance, as in anything, you get what you pay for.

shuntress 1356 days ago [-]
I believe most insurance also covers the damage done by your vehicle to other property. So the upper limit on the total collected through premiums is somewhere between 0 and the cost of an expensive house.
sidewndr46 1356 days ago [-]
It's actually more along the lines of whatever a person's life is worth. Also where I live the ~3 cars around me in traffic could each cost more than a nearby home. You can pretty easily destroy $400,000 of property here in an otherwise uneventful accident.
annoyingnoob 1355 days ago [-]
Automobile deaths have been declining for years. https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Cars are safer than ever. https://one.nhtsa.gov/nhtsa/timeline/index.html

The folks with those fancy cars are paying into the system too, insurance is about pooling after all.

It seems you have a point about bodily injury claims. https://www.insurance-research.org/sites/default/files/downl...

However, it looks like that is mostly due to more lawyers being involved in bodily injury claims. More lawyers involved points to a failure of insurance companies' ability to properly adjudicate claims. When insurance is only about profits and finding tools to further squeeze your existing clients, no one benefits. Insurance companies do not need more ways to rate me; they just need to do the job they get paid to do.

sidewndr46 1350 days ago [-]
I don't really understand the point you are trying to make. You state that "insurance is about pooling" and then correctly observe that the dollar amount they charge you is just some sort of risk rating assigned to the individual.

Also, if you ever have a large claim paid, you'll quickly discover it isn't about pooling at all. Every insurer will simply decline to quote you or quote you some insane price. Even without accidents, I once got a quote that was obviously a response of "please do not purchase this policy" when I tried to get basic liability insurance.

Judgmentality 1356 days ago [-]
> This is a private company looking for profit. They want to raise your rates.

And this is the problem with insurance being mandatory. Their business model is to force everyone to buy an expensive subscription, and then increase costs every time everyone actually uses their service in order to recoup costs.

Yes, I am oversimplifying, but holy fuck do I hate insurance companies since I have never gotten one to pay out a claim without suing first.

steffan 1356 days ago [-]
> If I put a tracking device in my car today my rates would go up because I accelerate quickly and brake hard when conditions allow. That is not right.

If statistically other drivers who do that are more likely to, on average, result in a loss for the insurer, they are right to raise your rates. "Past Performance is no guarantee of future results"

With enough data, maybe it would support your assertion that you are, in fact, a very safe driver, by realizing that you only drive fast at certain times, in certain locations that are statistically lower risk.

dependenttypes 1356 days ago [-]
> If statistically other drivers who do that are more likely to, on average, result in a loss for the insurer, they are right to raise your rates.

This argument could as well be applied regarding certain protected characteristics of the driver (such as race, gender, sexual orientation, etc).

chordalkeyboard 1356 days ago [-]
This argument would only be valid if there was a causal relationship between the “protected characteristic” and the increased risk. If the relationship was not causal, then the insurer would be mispricing the risk and thereby leaving profit on the table.
jfk13 1356 days ago [-]
> my rates would go up because I accelerate quickly and brake hard when conditions allow

All it takes is for you to misjudge the conditions one time (are you claiming to be infallible?), or for another road user to do something you were unable to anticipate, and suddenly this driving habit of yours does contribute to a higher risk of an incident that results in a claim. (Even if the incident still isn't considered to be your fault; your driving style reduces safety margins for everyone.)

annoyingnoob 1356 days ago [-]
You cannot operate safely within the limits of your vehicle and yourself if you do not know what those limits are. My driving style contributes to safer driving because I know when I might exceed the limits. As I mentioned, I adjust my driving style for conditions. If my driving were so egregious, I would have a driving record.

Giving the insurance company more ways to judge me only enables them to charge me more, without my actual level of risk changing at all.

goatinaboat 1356 days ago [-]
> Yes they would, but the flip side of them not being able to price people based on actual driving behavior is by charging all men higher

This is illegal in the UK and EU, fortunately. A real victory for gender equality, even if it means women paying more.

joejerryronnie 1356 days ago [-]
> My car insurance company would sure love to put a tracker in my car but there is no way in hell that is going to happen.

If you take your cell phone in the car with you, they are already tracking you.

annoyingnoob 1356 days ago [-]
How so? My insurance company offers programs with apps or use of telematics. I don't use any apps and don't allow access to any telematics. After the location data scandal last year, my cell phone company claims to have stopped selling location data. I don't knowingly allow any access to my location data.
kofejnik 1356 days ago [-]
A modern car very likely already has a built-in GSM module and possibly GPS as well. If so, this data is collected and could be subpoenaed by whoever.
annoyingnoob 1356 days ago [-]
That is different than monitoring it over time as a way to set rates.
joejerryronnie 1356 days ago [-]
Ok so I was being a little hyperbolic, but I wouldn’t trust that there is not some loophole of getting to your data - if not now then in the near future. Even very basic data (if you have enough of it) can be used to derive surprisingly complex patterns of behavior.
amoshi 1356 days ago [-]
>And we're all kind of blindly walking into this, no questions asked.

It's already happening, now, unfortunately. All your tweets (posted & liked) etc. are analyzed and flagged for bad language, drug/alcohol mentions, bigotry, etc., and a report is generated for HR.

https://news.ycombinator.com/item?id=22211363

The company in question is used by Sterling and HireRight, so this isn't some unknown/niche company.

TeMPOraL 1356 days ago [-]
Holy shit.

The tweet thread under the link is well worth seeing.

Taken from the company in question's site (https://fama.io/product/):

> What Fama finds...

> Our machine learning technology has flagged hundreds of thousands of instances of misogyny, bigotry, racism, violence and criminal behavior in publicly available online content.

If you go and compare what the product actually seems to do - tracking down your Twitter account and grepping every tweet you interacted with against a list of "thoughtcrime" words (like "hell" or "ass") - you can almost feel the next AI winter coming. How many more bullshit companies calling their fake, broken and trivial technology "AI" or "Machine Learning" will it take until the whole field of ML gets derailed by bad reputation? At least (as far as I know the history), the last AI winter involved companies trying but failing at AI. This time around, they're not even trying.
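To make concrete how little "machine learning" that takes, here is a deliberately naive sketch of the keyword-grep approach described above (the word list, labels, and function name are invented for illustration):

    import re

    # Hypothetical flag list; a real product presumably uses a much longer one.
    FLAG_WORDS = {"hell": "profanity", "ass": "profanity", "beer": "alcohol"}

    def flag_tweet(text: str) -> list[str]:
        """Return the category labels of any flagged words found in the tweet."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return sorted({label for word, label in FLAG_WORDS.items() if word in tokens})

    print(flag_tweet("What the hell, this beer is great"))  # ['alcohol', 'profanity']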

Swizec 1356 days ago [-]
> If some piece of code is used to predict my employability based on my facial expressions

Fun fact: when I first moved to the USA I would practice emoting in the mirror so Americans could read my face and wouldn’t penalize me during interactions. Now people back home say I “grin like an American”.

> can’t say anything in public

Ever noticed how Gen Z is switching back to private chatrooms and message groups? Public stuff is all polished and curated.

joejerryronnie 1356 days ago [-]
Millennials just keep getting screwed - the generation that was born just in time to post every thought and action to the online public, but also born too early to realize what a bad idea that was.
Solstinox 1356 days ago [-]
The kind of people who make and buy these technologies say, “you just need more data/a better algorithm.”

They don’t get that you could have 99.99999% of all possible information on a person AND that the missing 0.00001% could change everything you know about them.

kebman 1356 days ago [-]
I remember a party that was very interested in facial measurements. They used it to discriminate too. Why should corporations suddenly get away with this now?
dcewcrrec 1356 days ago [-]
We've entered an awkward twilight zone where companies can do what they want a la Laissez-faire capitalism, which means they can also do exactly the same thing a government would do at exactly the same scale and with similar levels of authority. It is a grey area that hasn't yet been stopped by laws because sometimes companies are so big they influence the laws themselves.
chordalkeyboard 1356 days ago [-]
Everything you have said, except for the last phrase "sometimes companies are so big they influence the laws themselves", is false. There are a fantastic number of regulations in every Western country today, many of which are applied arbitrarily or with enough discretion that they are essentially arbitrary. Even the most massive corporations' market caps are dwarfed by the monthly spending of government. The only cases where corporations exert the powers of the state over detention and violence are occasions where they do so at the bidding of the state.

It is true that some corporations are so big that they influence lawmakers just as ultra wealthy people always have. Indeed, if corporations were able to do the same things governments did and with the same authority, this process of influencing the government would be unnecessary.

Shared404 1356 days ago [-]
Just enough regulations to get the worst of unregulated and regulated capitalism, at least if my understanding is correct.
chordalkeyboard 1355 days ago [-]
yes, and whether this reflects poorly on capitalism, democracy, regulations, the corporate structure in a given country, the people in a given corporation, or the system as a whole depends to a large extent on your priors.
coding123 1356 days ago [-]
Well, I wouldn't say it was blind. About every 2 years you'll find a movie about how insanely stupid it is to post info online. And those types of movies go back pretty far. The problem is how dense people are.
dijksterhuis 1356 days ago [-]
I remember once being in a branch of HSBC trying to sort out a loan. I saw on the screen for my account they had a score of something like "customer behaviour", which was like 54 something.

I had drunkenly called HSBC and ended up ranting at the person on the phone a few times (lost cards etc) so I think this was a rating of how well behaved I was towards their staff.

The manager I was speaking to changed his tone fairly quickly after that screen came up!

So this is nothing new, but I guess the scale and opportunities for data points are new.

Although, having said that, a lot of marketing-based scoring data is woefully inaccurate. I was once responsible for distributing a data set to company clients which covered interests and personal info for the entire UK population.

Not only was most of the data about my dad wrong, the things that were accurate were years out of date.

My information didn't exist. A colleague's email was completely wrong, and his record indicated he liked going on holiday even though he had never been outside of the UK.

So these scores might be useless anyway.

edit: got rid of "even if they exist".

Karunamon 1356 days ago [-]
I think that people wind up worried about the wrong thing on these systems. The worst case scenario, where some all-knowing system negatively impacts you because it's rating your behavior on some opaque scale, isn't the one we should be worried about.

That by itself is scary enough, but the much more likely case is the one where this system is rating you based on wrong information, and given Finagle's law..

Especially concerning when the justice system starts using this info; you really, really don't want to be the false positive.

justanotherc 1356 days ago [-]
The justice system has built into it the requirement to justify allegations against you, though. By design, everything in the justice system is transparent or it's not admissible. So an opaque "score" of any sort wouldn't ever be acceptable evidence.
sroussey 1356 days ago [-]
You are thinking about evidence in court. Police have other data that can flag you for greater scrutiny. Then your chances of even being in court go up.
justanotherc 1350 days ago [-]
But they have to establish reasonable suspicion, which they also have to justify in court in order to investigate you further.

They can't just say "he had a bad score, so I pulled him over and found weed". If they did that, the stop would be unjustified and the whole case would be thrown out.

inetknght 1356 days ago [-]
> By design everything in the justice system is transparent or its not admissible.

https://en.wikipedia.org/wiki/Parallel_construction

justanotherc 1350 days ago [-]
Evidence constructed in parallel is not admissible, therefore my point stands. So what's your point?
patmorgan23 1356 days ago [-]
Predictive policing
elliekelly 1356 days ago [-]
It very well might be garbage data but if companies are acting on that data it’s still highly problematic. Maybe even worse than if the data were accurate.
air7 1356 days ago [-]
The possible silver lining is that companies will quickly realize the data is garbage (which is causing them losses) and stop using it.
usrusr 1356 days ago [-]
Data that appears "good enough" under the purely economic lens can leave an awful lot of people unfairly out of service.

In a low margin business a single bad customer can easily cost more than what is earned from ten good customers. In that situation, an oracle that rejects eight good customers per rejected bad customer would already be good enough.
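The arithmetic behind that claim, with the cost asymmetry made explicit (the 10:1 ratio is the illustrative figure from the comment above):

    # With a 10:1 loss-to-profit ratio, even a very imprecise filter pays off.
    profit_per_good = 1   # margin earned from one good customer
    loss_per_bad = 10     # cost inflicted by one bad customer

    # An "oracle" that wrongly rejects eight good customers for every bad one it catches:
    forgone_profit = 8 * profit_per_good
    avoided_loss = 1 * loss_per_bad
    print(avoided_loss - forgone_profit)  # +2: still worth using, despite the false positives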

vladvasiliu 1356 days ago [-]
Depends on how quick is "quickly".

If anything, they seem to be doubling down on this. From my discussions on the subject with people "in marketing", they are really convinced that not only is the data "accurate enough", but that collecting and exploiting it is actually for the client's benefit.

I wonder how one would go about quantifying the usefulness of all this tracking, because I doubt it's cheap. The best I could get was "we're able to see changes in sales which correlate to marketing campaigns". I can totally buy that, and it seems a somewhat more advanced answer than the more common one of "if it didn't work they wouldn't be doing it".

What I wonder though is how they would go about quantifying how much better a tracking-based campaign worked than a "traditional" one would have.

kiba 1356 days ago [-]
What if most of the marketing is useless in the first place? And most of the data are useless too.

How would they know?

mindslight 1356 days ago [-]
That's a pretty strong assumption. Data that is garbage isn't necessarily going to cause losses. In fact, making the market more opaque (less efficient) will add to profit margins. If all companies are using the same data/algorithm, then even incorrect decisions form a focal point. If all the companies think someone is a bad risk, then that person will end up paying more regardless. Even if one company defects and accurately prices their risk, that company will just lower their rate a little and that customer still won't be getting the rate they would have if full market competition were taking place.
m463 1356 days ago [-]
Does "risk avoidance" mean they have to act on the data?
dijksterhuis 1356 days ago [-]
Yeah that's probably the bigger problem!
kgwxd 1356 days ago [-]
Useless, but being used. A divining rod in the hands of people deciding the big steps in your life.
usrusr 1356 days ago [-]
There's a very good reason why organizations should be allowed to keep "fool me once/twice/thrice" records: it's what enables them to be nice to customers who don't abuse that niceness.

But there's no process to appeal and the data might be laughably wrong. Outlawing cross-organizational aggregation might be a reasonable middle ground. It gives a further advantage to giants like Amazon, but all the problems compound at interorganizational integration.

PopeDotNinja 1356 days ago [-]
In 2006, I spent some time in the hospital after a bike accident. Someone who shared my recovery room was a guy with kidney failure & an extremely poor attitude. I learned he was running out of places to get dialysis because he kept treating the staff poorly & getting kicked out of the dialysis centers. Imagine having some poor newly hired receptionist trying to help this near-death, abusive asshole & reading a note on the computer screen that said “do not treat this person under any circumstances”.
tomcooks 1356 days ago [-]
Precisely what the Hippocratic Oath states, take care of all who have the green smiley icon and 70 or more happy points
hrwl 1356 days ago [-]
I've had a similar experience. Had access to a dataset about which many breathless articles were written when it was leaked/breached a couple years later. Was able to find very little data on family members in the set and what was there was quite stale.
beervirus 1356 days ago [-]
The fact that it’s inaccurate doesn’t make it more acceptable.
hnhg 1356 days ago [-]
Shouldn't you be able to demand an explanation of how that score was calculated under GDPR? I'm really curious as to what it means now!
klyrs 1356 days ago [-]
In Europe, sure. But imho if a customer verbally abuses employees, it's perfectly fair to cease doing business with them.
dane-pgp 1356 days ago [-]
https://gdpr-info.eu/art-14-gdpr/

"2. In addition to the information referred to in paragraph 1, the controller shall provide the data subject with the following information necessary to ensure fair and transparent processing in respect of the data subject:

...

(g) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."

indziektor 1356 days ago [-]
Case in point, I was just recently getting nowhere on an obviously automated decision, so I sent an email quoting GDPR Article 15(1)(h) (instead of Article 14) and got it resolved amicably within an hour. Maybe just coincidence, but their follow-up was a complete 180. I suppose it could be abused to get around certain automatic safeguards, but I made it clear that my intentions were just to ensure that my data is correct and being processed fairly, as I was sure that wasn't the case.

But that requires that the company even cares about following the GDPR correctly, or failing that, that the Data Protection Authority of the relevant EU country does. I know of one small but popular enough data broker that's been operating for years in the EU and flouting the GDPR without any punishment so far. Not sure if this is the place to advertise it. But I suppose somebody whose GDPR rights were violated has to lodge a complaint first.

dijksterhuis 1356 days ago [-]
I'm betting it was just a rating out of 10 that the person answering the phone gave after each call.
rootusrootus 1356 days ago [-]
The article itself is mostly a clickbait puff piece, but setting that aside, the solution to this is regulation. All companies collecting data this way need to be subject to exactly the same requirements as the credit bureaus. Citizens need to be told what data is collected, why, and when it gets used, and they need easy access to their own data, as well as a well-regulated method for correcting it.

And we could just dramatically limit how much of this can even be used for housing and employment decisions, and crank up the penalties for misuse.

perl4ever 1356 days ago [-]
While we're at it, maybe we could enforce easier access to credit information. One report per year with no credit score is absurdly restrictive.
LorenPechtel 1356 days ago [-]
It was reasonable at the time it was put into place. It's now so much easier to deal with information; reform is needed, but it's not going to happen unless the Democrats are in control.

What I would like to see is a rule that if a company collects data for more than in-house use (so it doesn't apply to a company that simply has records of its customers), they must make a reasonable effort to notify you and allow you to examine the data and challenge anything you believe to be incorrect.

perl4ever 1356 days ago [-]
>It was reasonable at the time it was put into place

I don't think so. It's only been around for about 15 years or so. Internet infrastructure wasn't as advanced then, but neither were there as many people who would've used it.

The only reason I can see for the state of things is that the credit bureaus wanted to nickel-and-dime people for credit monitoring, and because they could water down the regulations, they did.

Another thing that was sorely needed was to make the domain a .gov or something other than a .com to prevent so many imposters.

And I just now read on Wikipedia a list of companies excluded from the requirements, like ChexSystems.

PeterStuer 1356 days ago [-]
Both China and the US are oligarchies. You can argue that in China it is the government that owns the enterprises, and in the US it is the enterprises that own the government, but that does not change much.
drtillberg 1356 days ago [-]
A central premise of this article is not correct. If an employer, for example, is using tens of thousands of background data points to deny employment to a person, the employer would need to disclose that to the applicant. [1] If the things in this article are secret then the law may well be violated. Also, not really the best idea to apply secret proprietary algorithms to these kinds of decisions, because you're eventually going to have to disprove improper bias, which can easily and "systemically" be baked into a background screening tool.

[1] https://www.eeoc.gov/laws/guidance/background-checks-what-em...

dragonwriter 1356 days ago [-]
> Also, not really the best idea to apply secret proprietary algorithms to these kinds of decisions because you're eventually going to have to disprove improper bias

You only have to disprove that if the applicant can first show the existence of bias, which with a secret proprietary algo that they don't have access to either the inputs or outputs of or know exists, is pretty hard to do.

zoomablemind 1356 days ago [-]
It seems that such close attention to detail and blind trust in third-party scores will decrease the chances of filling positions and getting the needed work done, which is supposedly the main business objective. This again reduces hiring success to good old chance, or the buddy-referral chance.
trustmeimdrunk 1356 days ago [-]
Ya large corps with the tools and resources are beholden to the most rigid of ethical standards and furthermore wholly unable to commit crimes systematically for very long, precisely because of their scale, amiriteamirite?! Yukyuk where my high fives at, guys? Yukyukyuk
gmantg 1356 days ago [-]
No need to disclose anything: plausible deniability is enough. And "bias" can be fixed by ad-hoc quotas.
vorpalhex 1356 days ago [-]
Why wouldn't an HR person or hiring staffer immediately out the company or even sue them?
dragonwriter 1356 days ago [-]
> Why wouldn't an HR person or hiring staffer immediately out the company or even sue them?

They wouldn't out the company because hurting the company for no private gain doesn't help them, and they wouldn't sue the company because they’d be a beneficiary, not an injured party, and so would have no damages to claim.

And also because both acts would destroy their future employability in the field.

gmantg 1356 days ago [-]
What do you mean? Why wouldn't an HR person sue their own employer? Because of binding arbitration, and because the employer will make sure to destroy that person's reputation.
trustmeimdrunk 1356 days ago [-]
We can answer that question by rephrasing it: "why haven't HR employees sued their employers, time and time again?" Then, if it interests you, personally pursue the precedents.

Or better yet, "Why don't employees commit career suicide over relatively minor offences that don't affect them personally and inflict no direct harm?"

goatinaboat 1356 days ago [-]
> Why wouldn't an HR person or hiring staffer immediately out the company or even sue them?

Why would they? HR is not your friend.

FalconSensei 1356 days ago [-]
they wouldn't be employed anywhere else?
grazhero99 1356 days ago [-]
Do you guys think there's any realistic hope today (not accounting for what kind of horrible stuff might be possible in the future) of being able to keep your 'genuine' online activities separated from your public facing, sanitized ones? Such that no company's HR department would be able to tie your real life identity to your real online activities? I imagine it could be possible if you're serious and rigorous about it.

Or am I being naïve for even entertaining such a thought?

dcewcrrec 1356 days ago [-]
The more energy you put into being anonymous, the more anonymous you are, but it's not an on-or-off switch; it's something that can constantly be improved. You can even go as far as changing the way you speak to avoid stylometrics.

The question to ask yourself is 'what can reasonably be found out about me?' You can be found on the smallest trace of evidence, but is it reasonable to expect someone to do that? You're not exactly a government spy. Just take obvious steps like not using your real name and you should be fine.

orangecat 1356 days ago [-]
Depends on your threat model. I expect the NSA can easily associate my real identity with my HN account. I don't expect most HR departments to have the ability or inclination.
t0mmyb0y 1356 days ago [-]
Yes. Ditch being connected when you don't need to be, and think several times before you connect: is it worth it? That's what I have done for a very long time. Almost no data about me exists. I am also very heavy-handed with permissions and apps on the device. I don't even allow Google to install the Google app on my device.
unnouinceput 1356 days ago [-]
Yes, it's possible. Keep your normal public activities boring and straight. For everything else use TOR/VPN/Proxies/etc.
joejerryronnie 1356 days ago [-]
This exact scenario is what makes Tokyo so exciting.
trustmeimdrunk 1356 days ago [-]
HR arguably has the most immediate potential gain, and from its new policies those in the employee pool have the most to lose.

There's a lot of use in applying insurance analysis techniques to filter out bad hires much better than conventional practices. Instead of interviews, vetting, tests, headhunters, you can adopt the latest in data analysis to cut costs and improve your KPIs across the board.

Predict an employee's productivity by analyzing online browsing behavior. Fixed qualities like attention span and drift; escapism and procrastination; pleasure-seeking vs productive, curious, prosocial browsing habits. How do these overlap in a typical workday? Are you focused or dispersed? Does your attention cycle? Do you complete tasks to the end, what happens when you get stuck on a problem?
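A minimal sketch of what such a browsing-based score might look like; every feature name and weight below is invented for illustration and not taken from any real product:

    # Hypothetical browsing-behaviour "productivity" score. All weights and
    # feature names are made up; real vendors keep theirs opaque.
    WEIGHTS = {
        "median_session_minutes":   0.4,   # longer focused sessions score higher
        "task_completion_rate":     2.0,   # fraction of started tasks finished
        "entertainment_share":     -1.5,   # fraction of browsing that is leisure
        "context_switches_per_hr": -0.3,   # tab/app switches per hour
    }

    def productivity_score(features: dict) -> float:
        """Weighted sum of behavioural features, clamped to a 0-100 scale."""
        raw = sum(WEIGHTS[name] * value for name, value in features.items())
        return max(0.0, min(100.0, 50.0 + raw))

    candidate = {
        "median_session_minutes": 25,     # minutes
        "task_completion_rate":   0.7,    # 70% of tasks finished
        "entertainment_share":    0.4,    # 40% leisure browsing
        "context_switches_per_hr": 30,
    }
    print(round(productivity_score(candidate), 1))   # 51.8

The opacity is the point: neither the weights nor the inputs are ever disclosed to the person being scored.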

This doesn't even get into friendship networks, purchasing behavior, public displays of attitudes.

Can you game the system? Probably not. These days that requires you to play a losing game of tweaking your personality down to the smallest meticulous detail, essentially going against the grain of your natural flow and ordering all aspects of your identity to suit an opaque and almost certainly fault-laden model.

People flip out about the Chinese social score, without realizing that it was pioneered in the US private-sector.

Say no to mandatory medical intervention.

Written from my Android 2.2

surajs 1356 days ago [-]
I'd give the author of this clickbait of a link about a 0
vorpalhex 1356 days ago [-]
Not a single source or citation for some extremely broad claims.
DavidPeiffer 1356 days ago [-]
When I was getting insurance quotes, there was a disclosure that a credit check had been conducted; it listed the top 4 items affecting my Risk Score and gave a link for questions. I found the guide to all the possible reasons and was amazed at some of the elements. [1]

* They prefer 84 years of credit history to put you in a lower risk category. Having 2 lines of credit paid every month versus 1 would make a huge difference in this regard.

* Oil company credit cards are seen as very high risk, and even having 1 is seen as a negative.

* Department store credit cards are also viewed negatively.

The weightings and mixing of variables are the secret sauce...but reading through it all made me very happy with my decision to turn down an offer from one of the major reporting agencies.

[1] PDF Warning

https://consumer-solutions.custhelp.com/ci/fattach/get/29038...
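For what it's worth, a reason-code score like the one described above is usually just a weighted sum plus a ranked list of the most negative factors. A minimal sketch, with factor names, weights, and the base score all invented for illustration (they are not taken from the linked guide):

    # Toy credit-based insurance risk score with adverse-reason disclosure.
    # All factor names, weights, and the base score are hypothetical.
    FACTORS = {
        "years_of_credit_history": ( 0.8, "Length of credit history"),
        "open_revolving_lines":    ( 1.5, "Revolving accounts paid as agreed"),
        "oil_company_cards":       (-4.0, "Presence of oil company credit card"),
        "department_store_cards":  (-3.0, "Presence of department store card"),
    }

    def score_with_reasons(applicant: dict, top_n: int = 4):
        contributions = {
            name: weight * applicant.get(name, 0)
            for name, (weight, _desc) in FACTORS.items()
        }
        score = 600 + sum(contributions.values())   # arbitrary base score
        # "Top reasons": the factors dragging the score down the most.
        negatives = sorted(
            (value, FACTORS[name][1])
            for name, value in contributions.items() if value < 0
        )
        reasons = [desc for _value, desc in negatives[:top_n]]
        return score, reasons

    score, reasons = score_with_reasons({
        "years_of_credit_history": 12,
        "open_revolving_lines": 2,
        "oil_company_cards": 1,
        "department_store_cards": 1,
    })
    print(score)    # 605.6
    print(reasons)  # the two negative factors, worst first

The "secret sauce" is exactly the part that stays hidden: the weights, the interactions between variables, and the base score.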

gruez 1356 days ago [-]
Article title: Data isn't just being collected from your phone [...]

I skimmed the article and there's little mention of phone data collection. There's one mention of insurers using phones to collect driving data, but that's it. I was expecting some vast surveillance network using your phone to score you.

scarface74 1356 days ago [-]
This entire article was a puff piece with no details and definitely didn’t say anything about phones.
mnm1 1356 days ago [-]
This should be illegal. The credit bureaus are bad enough, but this is egregious. Software should not be allowed to affect humans without a complete explanation of how it works, how it reached its decision, and a human with authority to appeal to. Frankly, a bunch of the uses here already seem illegal, like denying people the ability to rent or to get a job based on some black-box software that provides hidden information on people. How can we even know the software isn't something like: if (race === black) denyApplication(); Artificial intelligence indeed. From 1820.
jermier 1356 days ago [-]
There are people out there resisting these efforts. For example, I know people who are against smartphones and use a so-called 'dumbphone' or feature phone for their main number. If they need to buy groceries, they refuse to use a loyalty card and always pay in cash. They typically have a secure and private laptop with something like Ubuntu on it, and use Firefox with all the anti-tracking features enabled, uBlock Origin installed, JS turned off by default, etc. These are the typical steps people take to minimize their footprint against these predictive algos.
xwdv 1356 days ago [-]
Well then these people are fucked. Much like how using no credit means you have no credit history and a low credit score, you will have a very hard time trying to buy a house.

If you spend your whole life being anonymous, be prepared to be treated as a high risk consumer who cannot participate in certain things, because of a low social score.

partyboat1586 1356 days ago [-]
When I graduated high school ~10 years ago, I remember a member of our school's leadership gave a speech on making a good record of yourself on social media rather than avoiding it. This was when companies first started looking up people's Facebook profiles. He was way ahead of his time, but he was missing a key part, which is the expansion of data collection into every aspect of our lives. It's not just social media you need to worry about.
rhn_mk1 1356 days ago [-]
That may well be, but that's not the spirit. Instead, "if we all resisted more, we'd be free of that crap".

Granted, this is the tragedy of the commons. By acting in one's own interest (getting a good score), people poison the well for everyone: everyone must work and self-censor to keep a good score.

srveale 1356 days ago [-]
What percentage of people do you think are willing to take even one of the steps mentioned in OP's comment? I'm pessimistic.
t0mmyb0y 1356 days ago [-]
Very much describes me. I use cash for almost everything and have a credit score of 0 (zero)! I don't use or have a use for credit. 18 months ago the company I worked for had a background check done on me. When the results came in I smiled, knowing I had been doing the right thing. The results: this person barely exists. They went to school but there are no records of what happened, etc, etc, etc. It is how some of us live our lives, always have, always will. I am over 50 y/o, born, raised, and still living in Silicon Valley.
rapjr9 1356 days ago [-]
One thing that could end this is someone hacking into the databases and changing scores. Say, for all the US senators and a few CEOs and celebrities. It would basically prove that the scores are no longer trustworthy. Not that they are anyway, but if they start affecting "important" lives, things might change.

I've been wondering if all this data is being used for stock market manipulation or timing the market. What exactly are those quants using for inputs?

amelius 1356 days ago [-]
> One thing that could end this is someone hacking into the databases and changing scores.

While they're at it, perhaps they can change some US senators' bank balances just for fun.

speby 1355 days ago [-]
We're already "scored" all the time ... all types of insurance, mortgages, credit ratings, past criminal record, and so on. Now, we may wish to control the use of data being used on us or against us, and I support that, but as for the degree to which we are being "scored" generally, we are already being scored all the time across tons of other dimensions and commercial and even governmental applications.
techlaw 1356 days ago [-]
There are several comments here about the lack of background info or citations.

The authors have a site which includes their letters to the FTC, etc which provide greater detail and references: https://www.representconsumers.org/surveillance-scoring/

gmantg 1356 days ago [-]
Surveillance capitalism is to the USA what heroin is to an addict: he thinks he can give up at any time, that he can manage the addiction and find the balance, until he realises that he is a slave to that addiction and can't change anything. Congress could destroy this "big data is the new oil" business model, but they don't want to give up the money from big tech. They think they can manage this addiction and find a balance between greed and freedom; they find excuses that surveillance is a matter of national security, but in the end the USA will repeat the fate of that addict.
vasili111 1356 days ago [-]
OK. Now let's look at this from another perspective. Now that we know this, how do we make the system score us high? How do we show the system what it scores as high and hide what it scores as low? How do we benefit from this system?
tobyhinloopen 1356 days ago [-]
Funny how the cookie wall is the first thing that pops up.

It’s not a bad one though, to be fair

hedora 1356 days ago [-]
Huh. I bought a few nice bottles of whiskey and gin at BevMo recently, and they swiped my drivers license.

Now I get ads for whiskey in the mail. I wonder how much that reduced my future employment prospects.

t0mmyb0y 1356 days ago [-]
Why in the world would you allow them to swipe your id? Craziness.
throwaway_pdp09 1356 days ago [-]
The article is about smartphones, not dumbphones. This matters as some people won't own smartphones for this reason. Not many though, I suppose.
d23 1356 days ago [-]
Perhaps we can find out what our personal scores are. That would help us figure out whether we support this or not.
aww_dang 1356 days ago [-]
If this is true, it could be reverse engineered to increase credit worthiness.