Data from hundreds of thousands of users of the Google+ social network was exposed for years before Google discovered the problem this spring. According to a Monday Wall Street Journal report, Google chose not to disclose the exposure to users for fear of damaging the company’s reputation and incurring more government regulation.
The breach reportedly gave outside developers access to users’ full names, email addresses, birth dates, places lived and occupations, among other things.
Per the Wall Street Journal, Chief Executive Sundar Pichai was briefed on the plan to not disclose the data breach.
An internal memo says that disclosing the breach would probably result in “us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal” and that it “almost guarantees Sundar will testify before Congress.”
Pichai has agreed to testify before Congress in the coming weeks.
In a delayed response, Google is now shuttering Google+. The bug has reportedly existed since 2015, and Google does not know whether the information was ever misused.
Based on my understanding of the situation, there is a mistake in your article. A breach is understood to be a situation where a third party was able to access or retrieve data by exploiting a vulnerability. In this case, the Google team found the vulnerability internally and was able to determine (probably by reviewing audit logs) that nobody had accessed the data. While the vulnerability existed, Google discovered it before any malicious parties did and fixed it before any data was exposed.
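The kind of audit-log review described above can be sketched roughly as follows. Everything here is hypothetical (Google’s internal log format, field names, and the endpoint name are not public); the key caveat is the retention window: entries older than the retained period simply cannot be checked, so absence of in-window evidence is not proof that no access ever occurred.

```python
from datetime import datetime, timedelta, timezone

# Google reportedly kept only two weeks of API logs for Google+.
RETENTION = timedelta(days=14)

def suspicious_accesses(log_entries, vulnerable_endpoint, internal_clients, now):
    """Return entries showing a non-internal client hitting the vulnerable
    endpoint within the retention window. Entries older than the window are
    unknowable, not cleared."""
    cutoff = now - RETENTION
    return [
        e for e in log_entries
        if e["endpoint"] == vulnerable_endpoint
        and e["client_id"] not in internal_clients
        and e["timestamp"] >= cutoff
    ]

# Purely illustrative data: one internal access, one third-party access
# that falls outside the retention window and so cannot be evaluated.
now = datetime(2018, 3, 15, tzinfo=timezone.utc)
logs = [
    {"timestamp": now - timedelta(days=3), "client_id": "internal-audit",
     "endpoint": "people.get"},
    {"timestamp": now - timedelta(days=30), "client_id": "thirdparty-app",
     "endpoint": "people.get"},
]
hits = suspicious_accesses(logs, "people.get", {"internal-audit"}, now)
print(len(hits))  # prints 0: no in-window third-party access found
```

This is why the two-week retention matters: the review can only ever assert “no misuse observed in the logs we still have,” which is weaker than “no misuse occurred.”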
That is a highly naive assessment of a multi-billion dollar company covering its ass so as not to take a hit to its stock price and expose itself to government scrutiny and regulation.
See here: Google+ Shutting Down After Bug Leaks Info of 500k Accounts
Note: “As Google only keeps two weeks of API logs for its Google+ service, it was impossible for them to determine if the bug was ever misused.”
Well, that’s kind of you to say. It’s not highly naive; I am adequately qualified to talk about this.
First, as your own link states: this was discovered internally. Frequently, a material breach is only traced back as part of an investigation into misuse of the stolen information. You may be surprised to learn that breaches may have to be disclosed, but vulnerabilities discovered internally and fixed without a breach are treated differently.
Second, the scope of this vulnerability was small and its severity low: 0.5M affected accounts, and the data exposed was merely personal metadata and social-graph links. Equifax’s breach exposed 150M SSNs, dates of birth, and names, and in some cases credit history (and that was a confirmed breach). The Target and Home Depot breaches each exposed on the order of 50M credit records. The OPM breach affected 20M people and exposed entire personnel records.
You are actually being naive in thinking that Alphabet (which is much closer to a trillion-dollar company than “multi-billion”) would be materially affected at all. A billion-dollar settlement would be less than one month of net profit to them, and even that is unthinkable: no evidence exists that the data was stolen, it is unlikely that it was, and known breaches of far greater severity and scope resulted in much smaller settlements.
The threats to Alphabet were (1) Congressional investigation/action and (2) public distrust. A lawsuit and potential class-action liability were, as you note, the last things on their minds. As the internal memo makes clear, if this had come out amid the heightened scrutiny over Facebook/Cambridge Analytica, it would have been far more impactful and drawn far more distrust of Google than coming out now, when the public’s eye is not on large advertising companies’ mishandling of the data they have been collecting on us.
All that said, I think Google handled this pretty well. The CYA lack of immediate disclosure reeks, but (1) they correctly saw that a small issue they found by happenstance (an in-production code review is three stages beyond where security flaws should be found) could have become a company-destroying event, so (2) they decided this was not a business they needed to be in. Focusing on “enterprise” customers seems odd, but enterprise contracts likely identify potential risks like these and indemnify Google and/or limit its potential liability. Most importantly, removing the “free public service” component removes Google from likely Congressional and regulatory actions, as well as severely limiting potential public hysteria when the next minor security oversight happens.
The important lesson here is that security is hard, even for a company with virtually limitless resources like Google/Alphabet. Putting all our private data in one location, any location, is foolish. It will be exposed. It is a lesson we should have learned quite recently from Equifax, then from Facebook, but here is Google with yet another object lesson in what security experts have been saying forever. Maybe someday we as a public will take security seriously. Google (or Facebook or Amazon or Apple or …) won’t do it for us.
From a security perspective, Google was exemplary here. It really doesn’t get much better than this in transparency, operations, or policy. They found this via internal red-teaming, which is a sadly rare practice.
I think you identified the real risk: the conservative movement has, for largely inexplicable reasons, chosen Google as an enemy. Google believed grandstanding hearings by antagonistic parties could harm its reputation, which it actually does care about (witness the AI and cloud DoD contract withdrawals and the Boston Dynamics sale). I think they correctly saw a Trump administration potentially collaborating with parties hostile to them as the risk, and shuttering G+ and the (unnecessary) disclosure as defensive moves in that respect.
PS: security is impossible, not hard. Google, Apple, Amazon, and even Facebook (which had the famed Alex Stamos as CISO) each invest billions of dollars per year in it. This story is an example of security success, not failure…