Top 3 Security Blunders of 2019 and How to Avoid Them in 2020.

Jon McLachlan
Dec 30, 2019

In tech, we build things that change people’s lives. Things that make this world a better, more fair, and more efficient place for all of us. But anything worth building is worth hacking.

Security expectations are always present, explicit or not.

There are two types of security expectations: the ones explicitly set forth by the market, and ones implied by user trust.

Explicit Security Requirements
Addressing explicit security requirements set forth by a highly regulated market is time-consuming but at least straightforward. GDPR in Europe, HIPAA in the healthcare vertical, FIPS in the defense vertical, and Common Criteria in financial and other enterprise verticals are all examples of explicit security requirements. Even though compliance is challenging, these are at least well-defined regulations. As such, we will not focus on them here.

Implicit Security Expectations
So then, what are these elusive implicit security expectations?

It’s everything else. It’s the “please don’t get hacked” category. People might call these expectations obvious, but that attitude trivializes the subtleties that led to the most significant security blunders of 2019. Surprisingly, our mistakes were not always technical. We’ll focus on what happened, peel back why the parties responded the way they did, and explore how we might build a better future in 2020.

#3: Poor User Authentication

Ring had over 3,000 users’ login credentials compromised, allowing hackers to gain unauthorized access to Ring accounts. It was perhaps not the widest hack of 2019 (3,000 is not much compared to the whopping 162M records compromised in the Dubsmash data breach, also in 2019), but it was one of the worst invasions of personal privacy, giving anyone with those credentials a view directly into your home.

If you read the Ring response, you’ll notice that there was no direct attack against Ring itself. Odd. The working hypothesis is that other data breaches (see #1 below) allowed hackers to gain access to Ring when users re-used passwords, credentials, or recovery emails. These attacks are called credential stuffing.

Of course, Ring’s systems were not directly compromised, but there is an evasion of responsibility for the compromised accounts, not just by Ring, but by the entire security community. And we all know that shared responsibility is almost always just dropped on the floor. Our users pay the price.

The password itself is perhaps one of the absolute worst inventions to have come out of early tech, which can be summarized in one simple statement: you are not a password. When companies cling to archaic security mechanisms, they gamble with the trust of their customers.

So, who do we hold accountable? Well, those in a position to solve this problem should be held responsible.

How can consumers protect themselves?
For starters, we can stop creating new accounts with passwords. Whenever I find a new app or service that wants me to pick a password, especially with no MFA option, I stop. I don’t care what it is. Whoever built that system doesn’t know what they’re doing in terms of security. Asking for a password is asking for trouble.

Some of you might say, “Can’t you just use a password manager?” Look, the future has nothing to do with passwords, and I expect new technology to take concrete steps towards that future. So I am drawing the line: in 2020, passwords have no place in new technology stacks.

So, how do I log in to stuff? I have selected a few high-security accounts for myself: Google, LinkedIn, and GitHub. I enable MFA on them, disable password reset via SMS, and use them to log in to everything else. Using OpenID Connect, every startup, app, game, or IoT device can, and usually does, integrate with one of these identity providers.

In the end, all we’re doing here is trusting Google and Microsoft to protect login credentials, and limiting the scope of “whom to trust as an identity provider” down to the most capable few.

How can tech leaders protect their products and services?
First, we need to admit that user authentication is a hard problem and that most of us are not going to do a better job than Google, Microsoft, Okta, Auth0, or any of the mainstream identity providers. Then, breathe a sigh of relief, because integrating with OpenID Connect is far more accessible, faster, and more secure than building something from the ground up. For particularly sensitive services, be sure to invite customers to add MFA.

This way, customers are not re-using a crappy password, and your site doesn’t get hacked, not even through credential stuffing attacks.
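To make that concrete, here is a minimal sketch of an OpenID Connect login using Flask and the Authlib library, with Google as the identity provider. The client ID, secret, and route names are placeholders; any mainstream OpenID Connect provider works the same way.

```python
# A minimal "Log in with Google" sketch via OpenID Connect, using Flask and
# Authlib (pip install flask authlib). Client ID/secret are placeholders.
from flask import Flask, session, url_for
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

oauth = OAuth(app)
oauth.register(
    name="google",
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    # Authlib discovers the authorize/token/JWKS endpoints from this document.
    server_metadata_url="https://accounts.google.com/.well-known/openid-configuration",
    client_kwargs={"scope": "openid email profile"},
)

@app.route("/login")
def login():
    # Redirect to the identity provider; no password ever touches your servers.
    return oauth.google.authorize_redirect(url_for("callback", _external=True))

@app.route("/callback")
def callback():
    # Exchange the authorization code for tokens; Authlib validates the ID token.
    token = oauth.google.authorize_access_token()
    session["user"] = token["userinfo"]  # verified claims: sub, email, ...
    return f"Hello, {session['user']['email']}"
```

Because the identity provider holds the credentials, there is no password table in your database for attackers to steal, and nothing for a credential stuffer to stuff.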

If you’re in a regulated vertical, you might want to take it a step further and integrate with a full-stack Know-Your-Customer solution, such as Passbase.

And finally, to kill the password entirely, it will need a replacement. Soon, expect to see more behavior-based implicit and continuous authentication mechanisms, such as UnifyID’s authentication systems, which use a variety of behavioral factors to continuously authenticate users in the background.

#2: Monetization of Data-Privacy

Facebook’s business model is entirely dependent on monetizing end-user data. Leaked documents suggest that Facebook developed two faces to its privacy controls: one for social sharing, and another for advertisers, partners, and developers. The social controls have settings that govern what other Facebook users see, such as “only friends” or “only me”; we’ve all seen that stuff. The latter, however, controls how Facebook monetizes your data. Your privacy settings have nothing to do with how Facebook monetizes your data.

To top it off, Facebook also stands accused of engaging in anti-competitive behavior. It maintained market dominance by limiting competitors’ access to its APIs while providing easier access to partners, advertisers, and “friendly” developers who were not competing with Facebook. One flagged app was MessageMe: Facebook perceived it as taking too much value from the Facebook APIs without reciprocating.

This security blunder lies not in the grit of a technical hack but in business models and practices. Facebook’s entire business model is in direct tension with end-user data privacy. Every time it makes a dollar, it does so at the expense of someone’s privacy. To reinforce a sense of trust (and therefore a sense of security), consent must be explicit, not hidden.

How can consumers protect themselves?
It may be helpful to realize that businesses that “give you access to free services” are not being charitable: they’re trying to make money off your data. As long as you’re OK with trading your privacy for whatever it is you’re doing on Facebook, Instagram, Google, or any other data-monetizing business, then fine. You’re free to make that choice.

A typical response is, “I don’t mind if they advertise stuff aligned with my interests,” but keep in mind that this data can be used against you if it falls into the wrong hands. For a free thinker caught in China or Belarus, for example, it could be awful news. On a more macro scale, your data could be used to sway national elections, as was the case with Cambridge Analytica.

I am excited to see consumer innovations that reinforce end-user privacy by aligning business value with end-user data privacy. For example, the Puma Browser protects end-user privacy and enables content providers to collect compensation directly from consumers instead of going through middleman advertisers.

How can tech leaders protect their products and services?
When a technology trades user privacy for a service, explicit and unambiguous consent is needed. If you’re building a service like this, make those trade-offs explicit. For example, the Basic Attention Token in the Brave Browser replaces the implicit service-for-advertising trade-off of conventional advertising with an explicit one.

There are also alternative monetization platforms today, such as XRP (a digital asset built for global payments) and Coil (a new way for consumers to compensate content creators directly). For example, Cinnamon Video (a YouTube alternative) and the Puma Browser (a Chrome alternative) both enable users to pay content creators directly. These alternative business models allow business value to align with end-user data privacy instead of competing with it.

#1: Negligent Data-Security

Billions of records were stolen, or discovered to have been stolen, in 2019.

The data included 108M records from an online casino, 139M records from Canva, 202M records on Chinese job seekers, 275M Indian resumes, 540M records in the Cultura Colectiva Facebook app dataset, 617M accounts for sale on the dark web’s Dream Market, 773M records in Troy Hunt’s Collection #1, 808M records from verifications.io, 885M records from First American Financial Corp., 1B+ SMS-related records from TrueDialog, 2B+ records from Orvibo, and many others.

Your data tells a story. And when it’s your data, it tells your story.

But data has a life of its own, too. It is born in the user’s client. Once it leaves the client, it belongs to the server. Then it gets chopped up and re-organized. Some copies might go off to third-party services, such as Google Pub/Sub, Twilio, Algolia, AWS, Azure, Snowflake, Salesforce, and others. Some copies end up in logs, which are backed up and distributed across multiple availability zones. Some of it ends up in hot caches for reduced response latency. Some of it ends up in a database. Some of it ends up in a giant immutable index for future search capabilities.

Are the operating systems and auxiliary software all running up-to-date versions? Are all these services authenticating and authorizing each other’s requests? How many databases end up holding a copy of your data? It takes only one weak link in all this cloud activity to cause a data breach.

How can consumers protect themselves?
Unfortunately, there are not many ways to protect yourself, aside from being more conscious and deliberate about where you put your data.

Services that provide full end-to-end (E2E) encryption are a great start. End-to-end encryption means that only the intended recipients can read the data, so the data can safely pass over untrusted portions of the system (or third-party services or networks). For example, Signal is a chat application that supports end-to-end encryption; the intended recipient of a message should be the only other party in the system with access to the decryption keys. This way, a data breach in any of the intermediate systems does not compromise your data.
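As a toy illustration of that principle (not Signal’s actual protocol, which adds ratcheting and forward secrecy on top), here is a sketch using the PyNaCl library: only the recipient’s private key can decrypt, so every server in the middle relays nothing but ciphertext.

```python
# Toy end-to-end encryption sketch with PyNaCl (pip install pynacl).
# This is NOT the Signal protocol; it only illustrates the core idea:
# intermediate systems see ciphertext, never plaintext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The server only ever stores and relays `ciphertext`.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_sk, alice_sk.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```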

How can tech leaders protect their products and services?
Taking data security seriously is a great start. Investing in layers of data security will mitigate your risk. These layers should almost always include:

( a ) Use firewalls to drop unnecessary traffic.

Prohibit any network traffic that your collection of services does not require. For example, a database should only be accessible from the service that uses it. Services should only be accessible from other services. An exposed API should only be publicly available behind an API gateway such as NGINX or HAProxy, and even these should live behind a load balancer.
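As one example of what this looks like in practice, here is a sketch that locks a database down to its app tier using AWS security groups via boto3. The group IDs are placeholders, and the same principle applies to any firewall or cloud provider.

```python
# Sketch: allow Postgres (5432) into the database security group ONLY from
# the app tier's security group, never from 0.0.0.0/0. Requires boto3
# (pip install boto3) and AWS credentials; group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db-placeholder",  # the database's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        # Source is another security group, not an IP range: only instances
        # in the app tier's group may connect. Everything else is denied
        # by default.
        "UserIdGroupPairs": [{"GroupId": "sg-0app-placeholder"}],
    }],
)
```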

( b ) Require strong service-to-service AuthN and AuthZ.

Services must authenticate all privileged traffic (answering the question, “Who are you?”) and authorize the requester (answering the question, “What are you allowed to do?”). Open Policy Agent is a great way to tackle AuthZ with policies and fine-grained access controls. A token exchange service can be used in tandem with OpenID Connect to help solve AuthN, with both in-house and external trusted identity providers.
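For illustration, here is a sketch of a service asking a local Open Policy Agent sidecar for an authorization decision through OPA’s REST data API. The policy package name and input fields are made up for this example; the actual rules would live in a Rego policy loaded into OPA.

```python
# Sketch: query a local OPA sidecar for an allow/deny decision
# (pip install requests). The policy path "httpapi/authz/allow" and the
# input fields are illustrative placeholders.
import requests

def is_authorized(user: str, method: str, path: str) -> bool:
    resp = requests.post(
        "http://localhost:8181/v1/data/httpapi/authz/allow",
        json={"input": {"user": user, "method": method, "path": path}},
    )
    resp.raise_for_status()
    # OPA omits "result" when the rule is undefined; treat that as deny.
    return resp.json().get("result", False)

# Usage: deny by default unless the policy explicitly allows.
if not is_authorized("alice", "GET", "/salary/alice"):
    raise PermissionError("request denied by policy")
```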

( c ) Run up-to-date software in your cloud.

Spinnaker is an excellent CI/CD solution for continuously deploying newly updated software to any cloud environment. Policies that re-deploy production environments regularly help mitigate risk. Clair or Snyk can be used to ensure that your software does not contain known vulnerabilities.

( d ) Audit and log what happens with your data.

Plenty of tools provide visibility into where your data ends up. When something bad happens, we need clarity into the details of exactly what happened, and how. Perhaps more importantly, we need to be able to detect anomalies in real time to identify malicious actors in our systems. One such tool is Ambitrace, which provides both API and device data access audit logging and tracking.
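A minimal sketch of the underlying idea: emit a structured, append-only audit event for every data access, so you can trace where data went and feed the stream into anomaly detection. The event fields here are illustrative, not any standard schema.

```python
# Sketch: structured audit events for data access (standard library only).
# Field names are illustrative; in production, ship these to a log pipeline.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler())

def audit_event(actor: str, action: str, resource: str, **extra) -> None:
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who touched the data
        "action": action,      # read / write / delete / export
        "resource": resource,  # which record or dataset
        **extra,
    }))

# Usage:
audit_event("svc-billing", "read", "user:42/email", request_id="req-123")
```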

( e ) Integrate Application Layer Encryption.

Application layer encryption is the layer closest to the data, and therefore the final layer of defense in your quest for data security. It’s also the most robust layer of protection, assuming key management is in order.

It sounds simple: encrypt the sensitive data. But now your engineers are forced to manage secrets (the encryption keys), and secret lifecycle management is a hugely difficult problem. Several SaaS solutions can help reduce the burden of key management, such as AWS KMS or Google Cloud KMS. And why build anything custom? To minimize the overhead even further, you might also consider a full-stack application layer encryption solution, such as Peacemakr.
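For a sense of what the KMS route looks like, here is a sketch of envelope encryption with AWS KMS, using boto3 and the cryptography package. The key alias is a placeholder: KMS issues a fresh data key per record, you encrypt locally, and you store only the ciphertext plus the wrapped key.

```python
# Sketch: envelope encryption with AWS KMS (pip install boto3 cryptography).
# The key alias is a placeholder. The plaintext data key is used once and
# discarded; only the KMS-wrapped copy is persisted alongside the ciphertext.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

def encrypt_record(plaintext: bytes, key_id: str = "alias/app-data-key") -> dict:
    # Ask KMS for a one-time data key (plaintext + KMS-encrypted copy).
    dk = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    nonce = os.urandom(12)  # AES-GCM needs a unique nonce per encryption
    ct = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    return {"ct": ct, "nonce": nonce, "wrapped_key": dk["CiphertextBlob"]}

def decrypt_record(record: dict) -> bytes:
    # KMS unwraps the data key, enforcing access policy and audit logging.
    key = kms.decrypt(CiphertextBlob=record["wrapped_key"])["Plaintext"]
    return AESGCM(key).decrypt(record["nonce"], record["ct"], None)
```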

The trick when evaluating these services is to pick one that will meet all your needs. For example, if you or a customer wants to own their own keys on-prem, can the provider accommodate that requirement? Another critical feature to consider is crypto-agility: can the SaaS enable your software to quickly pivot off compromised encryption algorithms and keys without you having to release new clients? Finally, consider risk segmentation: does the key lifecycle management automatically segment risk over the three critical axes of time, space, and use?

Do these things, and the data thieves of the world will hate you.

In 2020, take small, imperfect steps toward a better world.

Do not be distracted by your end goal. Arm yourself with knowledge. Seek out small and imperfect steps forward. Align your direction with your values, and step bravely into 2020.


Jon McLachlan

Founder of YSecurity. Ex-Apple, Ex-Robinhood, Ex-PureStorage. Lives in Oakland. Athlete.