As big a deal as the web is, phones and tablets are even bigger. The number of mobile devices out there already dwarfs the number of desktops. Smartphones are getting so useful and so cheap, it’s clear we’re heading to a world where every family and then every person on the planet has one. That means right now, every mobile app maker shares in the responsibility for the future privacy of everyone on the planet. The good news? Mobile apps are in a position to make mass surveillance extremely difficult for government spies.
If we make the right moves now, the explosive growth of smartphones can become the NSA’s worst nightmare. Here’s how.
1. Make security & privacy a priority
Mobile security is arguably more important than web security. There’s more surveillance data at stake. Your phone is always on, it’s always in your pocket, and it has access to all kinds of data (your calls, your real-time location, etc.) that your laptop usually wouldn’t.
But this fast-growing space is extremely chaotic. Apps explode in popularity overnight, and app makers are still figuring out how to build them well and make money. Data-collection tricks that would never fly in a mature market are still common in mobile apps, and security has lagged behind. That’s a shame because, for a few technical reasons, apps are in a much better position than websites to protect user privacy and block mass surveillance.
This section is less about any single technical step and more about moral framing. When you’ve got millions of users, it’s a simple question of your responsibility to all those people.
2. Don’t send data unprotected. Use SSL
SSL is the basic technology we use to send data securely over the Internet. It’s what makes HTTPS websites secure, and you use it every day without noticing, whenever you buy something online or visit a site that takes security seriously.
Some apps keep all your data safe on your phone. But if a mobile app talks to a server—to send a message to a friend, post a photo, or share your high score—that data gets sent over the open Internet. If the app doesn’t use SSL to protect that connection, anyone can spy on that data. Police can do it. The FBI can do it. Really clueless governments can do it. Even random creepy dudes on the same public wifi network as you can do it. For the NSA, it’s a freebie.
Even a game like Angry Birds might send a ton of information about you back to servers with no protection, including your real-time location. Does the NSA really grab that data? You bet.
But if an app uses SSL for all of its data, spying on that data is hard. When apps take another precaution called “cert pinning” (keep reading), it’s really hard. Mobile apps should probably be much more careful about what data they collect and send in the first place, but that’s a balancing act. One thing is certain: mobile apps must use SSL to protect the data they send from prying eyes.
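Here’s a minimal sketch of what this looks like in code, using Python’s standard `ssl` module (the host name is a stand-in for your app’s API server). The key point: use the default, verifying TLS context, and never disable certificate checks to “fix” an SSL error.

```python
import socket
import ssl

# Stand-in for your app's API server.
HOST = "example.com"

# create_default_context() turns on certificate verification and
# hostname checking by default -- exactly what you want.
context = ssl.create_default_context()

def send_securely(payload: bytes) -> None:
    """Send app data over TLS instead of the open Internet."""
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(payload)  # encrypted on the wire
```

The same principle applies on Android (`HttpsURLConnection`, OkHttp) and iOS (`URLSession`): the platform defaults verify certificates, so the main job is simply to use `https://` endpoints everywhere and never override the checks.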
3. Apps can be more secure than websites, with “cert pinning”.
In a lot of ways, mobile apps are in a better position to secure the data they send than web browsers. Certificate or “cert” pinning takes advantage of that, to make SSL much harder to break.
The easiest way to break HTTPS or SSL encryption is a man-in-the-middle attack. You type “facebook.com” into your browser, but somebody else tells your computer “I’m facebook.com”. You may think you’re talking to Facebook, but really you’re talking to the NSA.
Cryptography provides some rock solid ways to know who you’re talking to, but to make that easy to use and update, you have to trust somebody. On the web, we trust the certificate authority system to tell us who’s who. If a CA says “yes, this site is really Facebook” we’ll trust it.
The problem is, the CA system is terrible, and it is really easy for government spies to hand out fake certificates. They can hack CAs, and many governments are CAs themselves. The US Department of Homeland Security is a CA. China is a CA.
On the web, we’re a little screwed right now (though good people are working on it and it will get fixed). But on mobile, the outlook is brighter: you can stick the certificate in your app (hence “pinning”). That way, the app only trusts the certificate it’s supposed to.
Cert pinning can go wrong if something like Heartbleed (or worse) forces the cert you pinned to be revoked. So developers implementing pinning need a plan to make sure their certs are revocable and that revocation won’t cause an outage in their app. But there are well-understood ways to handle this, like forcing an update or including a backup key. See this guide.
Another way to look at it is: it is absolutely brain dead for apps to trust the broken CA system used on the web. Apps don’t need to, and doing so is a huge gift to every government spy agency.
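A rough sketch of the idea, again in stdlib Python: ship the SHA-256 fingerprint of your server’s certificate inside the app, and refuse the connection if the served certificate doesn’t match. (The fingerprint below is a hypothetical placeholder; real deployments often pin the public key, SPKI, rather than the whole cert, and should include a backup pin.)

```python
import hashlib
import socket
import ssl

# Hypothetical pin: the SHA-256 hex digest of your server's
# DER-encoded certificate, computed at build time and shipped in the app.
PINNED_SHA256 = "<sha256-of-your-server-cert>"

def cert_matches_pin(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the served certificate's fingerprint to the pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def connect_pinned(host: str, pinned_hex: str) -> ssl.SSLSocket:
    context = ssl.create_default_context()  # normal CA checks still apply
    sock = socket.create_connection((host, 443))
    tls = context.wrap_socket(sock, server_hostname=host)
    der = tls.getpeercert(binary_form=True)  # raw DER bytes of the cert
    if not cert_matches_pin(der, pinned_hex):
        tls.close()
        raise ssl.SSLError("certificate does not match pinned fingerprint")
    return tls
```

With this check in place, a fake certificate signed by a hacked or coerced CA fails the fingerprint comparison, even though the CA system would have vouched for it.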
4. High standards for third-party code (like ads & analytics).
Say a developer reads this article and is convinced. She carefully adds SSL & cert pinning to her app, pushes an update, and thinks, “Awesome, I just made the NSA’s life a little harder.”
But along the way, she used a bunch of third-party code: code for displaying ads, say, or analytics code to tell her how well each feature works. Even though her own code is now secure, these third-party tools are still cranking along without good security, leaking piles of user info to the NSA.
This means two things. First, we need to hold developers of code meant to be used by many apps (like advertising and analytics providers) to the highest standard on security. Their tools have a really wide reach, and they’re often the weakest link.
Second, app developers need to pick third-party tools that do security right, and ask tough questions when shopping around. If you want your app to respect user privacy, the libraries and third-party tools you choose have to respect it too.
5. Don’t forget Perfect Forward Secrecy (PFS).
The security of any SSL connection depends on the security of private keys. Once the NSA or any attacker gets these keys, they can break SSL encryption until the site switches to new keys.
Losing control of private keys is a catastrophe every app tries to avoid. But the security version of Murphy’s Law says it will happen eventually. Recently, the “Heartbleed” bug meant that a huge number of servers could have leaked their private keys.
So it’s good to think about how to limit the damage when it happens. Perfect Forward Secrecy (PFS) does exactly that, by protecting all the data sent before the keys leaked.
To government spies, an HTTPS connection looks like a bunch of encrypted, incomprehensible gobbledygook. But they can still collect and store it, hoping to get the keys later. Without PFS, a leaked key unlocks that entire stored backlog. With PFS, the old data stays safe; only data sent after the compromise is vulnerable.
PFS for mobile apps isn’t our main focus, since there are much bigger privacy weaknesses in the mobile app world. Still, any app developer who wants to do things right should look into it.
6. Bring out the big guns: end-to-end encryption.
Another place where mobile apps are way better positioned to support privacy is in end-to-end encryption.
“End-to-end” means encryption from you, all the way to the person you’re talking to. When done right, it means anyone trying to listen in is out of luck, unless they hack your phone, or break into your house and bug it.
On the web, end-to-end encryption is hard and weird for users. But mobile apps can do it really well. In fact, some of the easiest and most NSA-resistant ways to communicate are mobile apps (see: Textsecure).
Now, just because it’s a cool project doesn’t mean developers should write their own end-to-end encryption code. It is well documented that 99.9% of coders will make subtle but fatal mistakes when they try to implement encryption. Look to established projects like Textsecure and The Guardian Project for guidance and well-tested libraries to use.
End-to-end encryption with free, open source code is the gold standard for user privacy. When done right it’s really hard for the NSA to crack, and effectively impossible for nearly any other government. Plus it’s great for business: if the FBI or NSA forces you to hand over all your app’s data, the content of those messages stays safe.
Why this is so important.
Mobile is the most important battleground, and it’s the one where we’re best positioned to win. The web depends on the weak CA system, but mobile apps can easily use cert pinning. End-to-end encryption on mobile is way more practical than on the web.
There are big problems with mobile security, but phones are still way more secure than, say, Windows was in its heyday.
In many ways, we’re already winning. When people made calls and sent texts on old feature phones, their home governments saw *all* of it, and so did the NSA; cellphone carriers were happy to hand it over. Now, when people use Facebook chat, Hangouts, iMessage, or FaceTime, all that data is encrypted on its way to the companies’ servers. To do something as simple as a wiretap, the NSA has to wrangle with tech companies, who are much less compliant than phone companies (and who will usually ignore sketchy requests from foreign governments!).
End-to-end encryption is already taking off on mobile. We can accelerate it. There’s no reason why the next Snapchat or Whatsapp can’t have NSA-resistant privacy built-in. And there’s no reason why it can’t be verifiably secure free software. That’s the NSA’s worst nightmare. Let’s make it happen!
See anything here that you would change? Email firstname.lastname@example.org with feedback.