The Real Problem with Equifax

This July 21, 2012, photo shows Equifax Inc. offices in Atlanta. Credit monitoring company Equifax says a breach exposed Social Security numbers and other data from about 143 million Americans. The Atlanta-based company said Thursday, Sept. 7, 2017, that "criminals" exploited a U.S. website application to access files between mid-May and July of this year. (AP Photo/Mike Stewart)

It now seems clear that the massive data breach at Equifax was caused not simply by aggressive hackers but by clear and potentially negligent security errors by Equifax itself. But fundamentally, this isn’t a security problem. It’s a market failure and a legal and regulatory failure.

There are many businesses in which the cost and assumed liability of taking possession of certain goods – real or intellectual – are quite high. Indeed, that is often a major part of the business model itself – companies are paid to take on that liability. An extreme example is transporting dangerous or volatile chemicals. This may be the biggest personal data security breach yet. But breaches that are nonetheless quite large happen basically all the time, and the costs to the company are usually negligible. Yes, there’s a big PR hit and there’s usually some fine. But the costs in fraud and disruption in the lives of affected consumers totally dwarf the financial cost to the company. On the most basic measure, the costs are not great enough to prevent companies like Equifax from making really basic mistakes like failing to install new security patches in a timely manner. It’s a cost of doing business.

Now, I know Internet security is a cat and mouse game. There’s no perfect security. You can’t just do the job right and then there’s no risk. But the credit rating industry is a classic example of privatizing the gains and socializing the risks. Taking possession of all this private data, which can facilitate fraud, creates a magnet for hackers. It creates a huge potential danger. Consider an analogy. What if one company were responsible for stockpiling most of the high explosives in the country in one place? Maybe in a major population area? It’s obvious that would create a huge amount of danger and liability. Stockpiling fraud-enabling personal data creates the need for massive amounts of security, double-layering and vigilance. That’s inherent in the business itself. But that risk and most of the eventual damage is absorbed by consumers while the profit goes to the company. That disconnect between profit and risk is the central problem. It’s like the old days when a big company could manufacture paint for profit and dump the toxic waste into the local river. Again, it’s privatizing the profit and socializing the risks or externalities.

In some cases consumers would rebel. That would solve the problem. But that’s actually a key part of the problem: consumers aren’t Equifax’s customers. They’re the product. You’re the product. Banks and other lenders like credit agencies because they offer a systematized and standardized way of evaluating risk. The banks are the customers. Credit rating agencies would prefer never to deal with consumers at all. They only do so when forced to or, more recently, as they’ve developed a secondary business in selling consumers services to help them protect themselves against errors or security breaches by credit rating agencies. That’s an amazing business!

Once you look at the issue from this perspective, the problem is obvious, even if the specific solution isn’t. If you were in the business of securing huge amounts of explosives but didn’t face any substantial costs in the event of a horrific accident, you’d never do all the stuff necessary to prevent accidents. Never. It’s human nature. It’s the nature of markets and incentives and liability.

Now, after I wrote a sketch version of this argument on Twitter, TPM Reader JO wrote in and told me that’s not precisely the problem. The value of getting access to all that fraud-enabling data is just too great. Hackers will find some way to break in no matter what, no matter how great the security. You may delay the problem but not fix it.

Here’s the crux of JO’s argument.

The real problem is that social security numbers, credit card numbers, and other information that you are supposed to keep secret were designed for identification, not authentication. How many different institutions have your social security number? Dozens? Hundreds? That’s not much of a secret. What we need is authentication (proof of identity), and we’re using identifiers for that purpose. A credit card’s CVV number is intended for authentication, but come on, how many times have you typed it into a website, or given it over the phone? It’s turned into just one more piece of identification, and an unscrupulous or incompetent vendor can collect these numbers, compromising the associated credit cards.
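JO’s distinction between identification and authentication can be made concrete in code. The sketch below (a hypothetical illustration, not any real system’s design) contrasts the two: an identifier like an SSN is a static value that anyone who has ever seen it can replay, while a challenge-response scheme proves possession of a secret that never leaves the holder.

```python
import hmac
import hashlib
import os

# Identification: a static number. Any party that has ever stored it
# can replay it -- which is JO's point about SSNs and CVVs.
ssn = "123-45-6789"  # hypothetical example value

def verify_by_identifier(claimed: str) -> bool:
    # "Authentication" by identifier: any copy of the number passes.
    return claimed == ssn

# Authentication: prove possession of a secret without revealing it.
# The key stays with the legitimate holder; only a one-time response
# to a fresh challenge goes over the wire.
secret_key = os.urandom(32)  # held only by the person/device

def respond(challenge: bytes) -> bytes:
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()

def verify_by_challenge(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A fresh challenge is issued per login, so recorded traffic is useless.
challenge = os.urandom(16)
assert verify_by_challenge(challenge, respond(challenge))

# A response captured from an old session fails against a new challenge:
old_response = respond(challenge)
new_challenge = os.urandom(16)
assert not verify_by_challenge(new_challenge, old_response)
```

The point of the contrast: leaking the identifier database compromises everyone permanently, while leaking a transcript of challenge-response exchanges compromises no one, because the secret itself was never transmitted.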

This is a very good argument, though it leaves a lot of complexities unanswered. What are the authenticators? And how will we store them? Nonetheless it’s a good point. But I think it still goes back to a market failure and a failure to properly align liability with profits/gains. It’s possible that once you make the credit rating bureaus – and other companies that collect this much data – take on the liability for the dangers inherent in their business no one will be willing to do it. The whole enterprise may cease to be economic or the profits may decline significantly. Since our whole society is built on credit and the need to evaluate creditworthiness, that will almost certainly spur efforts to build or transition towards the kind of system JO envisions.

For now, there’s no reason for that to happen. Because companies can make tons of money evaluating and storing and selling potentially fraud-enabling personal data. If they screw up and it gets stolen, there’s no substantial penalty. It’s your problem.
