Monday, May 25, 2015

Why changes to Wassenaar make oppression and surveillance easier, not harder

Warning to EU readers: EU writing culture lays out the arguments first and draws a strong statement as the conclusion; US writing culture prefers the strong statement up front, followed by the arguments. This article follows the US convention.

Adding exploits to Wassenaar was a mistake if you care about security


The addition of exploits to the Wassenaar Arrangement is an egregious mistake for anyone who cares about a more secure and less surveilled Internet. The negative knock-on effects of the agreement include, but are not limited to, the following:

  • It provides governments with a massive coercive tool to control public security research and disadvantage non-military security research. This coercive power need not be exercised in order to chill public research and vulnerability disclosure.
  • It tilts the incentive structure strongly in favor of providing all exploits to your host government, and makes disclosure or collaborative research across national boundaries risky.
  • It provides a way to prohibit security researchers from disseminating attack tools uncovered on compromised machines.
  • It risks fragmenting, balkanizing, and ultimately militarizing the currently existing public security research community.

The intention of those who supported the amendment to Wassenaar was to protect freedom of expression and privacy worldwide; unfortunately, their implementation achieved almost the exact opposite.

With friends of such competence, freedom does not need enemies. The changes to Wassenaar need to be repealed, along with their national implementations.

A pyrrhic victory with unintended consequences


In December 2013, activists worldwide celebrated a big success: Intrusion Software was added to the list of technologies regulated by the Wassenaar Arrangement. In the cyber activist community, people rejoiced: Finally, the people they call "cyber arms dealers" would no longer be able to act with impunity. Oppressive regimes would no longer be able to buy the software that they use for mass surveillance and repression. A victory for freedom and democracy, no doubt.

Unfortunately, the changes to the regulation have horrible knock-on effects for security research, privacy, and society at large. The change to the Wassenaar Arrangement achieves the exact opposite of what it was intended to do.

The difficulties of being a security researcher


To discuss the many ways in which this regulation is flawed requires some background on the difficulties faced by security researchers worldwide:

Security research is an activity fraught with many difficulties. There are few historical precedents where a talented 20-year-old can accidentally conceive a method capable of intruding into and exfiltrating information out of hundreds of well-fortified institutions. The best (if very cheesy) analogy I can come up with for the difficulties of young researchers is the X-Men comic books -- where teenagers discover that they have a special ability, and are suddenly thrust into a much bigger conflict that they have to navigate. One week you're sitting in your room doing something technically interesting; a few weeks later people in coffee shops or on trains may strike up a conversation with you, trying to convince you that government X is evil or that you could really be helpful in fighting terrorist organisation Y.

Security researchers face a fundamental problem: In order to prove exploitability, and to be 100% sure that they are not crying wolf, they need to demonstrate beyond any doubt that an attack is indeed possible and reliable. This means that the researcher needs to build something that is reliable enough to be dangerous.

Once successful, the researcher is in a very difficult spot -- with no evident winning move. What should he/she do with the exploit and the vulnerability?

Different people will posit different things as "the right behavior" at this point. Most people argue the researcher should provide the vulnerability (and sometimes the exploit) to the software vendor as quickly as possible, so that the vulnerability can be fixed. This often comes with risk -- for many closed-source programs, the researcher had to violate the EULA to find the vulnerability, and many vendors perceive vulnerability reports as attention-seeking at best and blackmail at worst. In the best case, reporting simply involves some extra work that will not be compensated, in the worst case the researcher will face legal and/or extralegal threats by the party he/she is reporting the vulnerability to.

After the researcher hands over the vulnerability and exploit, he/she is often made to wait for many months -- wondering if the people he/she provided the code to will fix the issue as swiftly as possible -- or if they are silently passing the information on to third parties. In many cases, he/she will receive little more than a handshake and a "thank you" for the effort.

At the same time, various parties are likely to offer him/her money for the vulnerability and the exploit -- along with a vague promise/assurance that it will only be used for "lawful purposes". Given the acrobatics and risks that responsible disclosure carries, it is unsurprising that this offer is tempting. Anything is better than "legal risk with no reward".

Partly to alleviate this imbalance, more mature vendors have begun introducing bug bounties -- small payments that are meant to encourage disclosing the bug to the vendor. The sums are smaller than in the grey market, but -- by and large -- enough to compensate for the time spent and to offer the researcher positive recognition. Talented researchers can scrape out a living from these bounties.

Security researchers are in an odd position: In order to check the validity of their work, they need to create something with the inherent potential for harm. Once the work is shown to be valid, the result becomes the object of desire of many different military and intelligence organisations which, given the general scarcity of "cyber talent", would love to get these researchers to cooperate with them. The software industry has grudgingly accepted the existence of the researchers, and the more mature players in that industry have understood the value of their contributions and tried to re-shape the playing field so that getting security issues reported and fixed is incentivized.

The researcher has to balance a lot of difficult ethical deliberations -- should the exploit be sent to the vendor? Can the people that wish to buy the exploit on the grey market be trusted when they claim that the exploit will save lives as it will be used to prevent terrorist strikes? What is a reasonable timeframe for a vendor to fix the issue? Can disclosure accelerate the process of producing a fix and thus close the window of opportunity for the attacker more quickly? Is fixing the issue even desirable?

There are no 100% simple answers to any of the above questions -- each one of them involves a long and difficult debate, where reasonable people can disagree depending on the exact circumstances.

Adding exploits to Wassenaar, and shoddily crafted regulation


The Wassenaar Arrangement is not a law by itself -- it is an agreement by the participating countries to pass legislation in accordance with the Wassenaar Arrangement, which then stipulates that "export licenses" have to be granted by governments before technology listed in the agreement can be exported.

From an engineering perspective, it is good to think of Wassenaar as a "reference implementation" of a law -- different legal systems may have to adapt slightly different wording, but their implementation will be guided by the text of the arrangement. The most damaging section in the new version reads as follows:
Intrusion software:
"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network capable device, and performing any of the following:
a. The extraction of data or information, from a computer or network capable device, or the modification of system or user data; or
b. The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

This formulation is extremely broad -- any proof-of-concept exploit that defeats ASLR or a stack canary (or anything else, for that matter) and achieves code execution falls under this definition. Even worse, by using a formulation such as "standard execution path" without properly defining how it should be interpreted, it casts a shadow of uncertainty over all experimentation with software. Nobody can confidently state how this will be interpreted in practice.
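
To see how little it takes to brush against this wording, consider a deliberately harmless, classroom-style sketch (my own illustrative example in C, not taken from the regulation or from any real exploit): externally supplied input overwrites a function pointer and thereby diverts the program's "standard execution path".

/* Illustrative only: a textbook-style control-flow hijack with no shellcode
   and no real target. Even this arguably "modifies the standard execution
   path of a program ... to allow the execution of externally provided
   instructions" under the new wording. */
#include <stdio.h>
#include <string.h>

static void expected(void) { puts("expected path"); }

struct frame {
    char buf[16];          /* fixed-size buffer */
    void (*next)(void);    /* stands in for a return address */
};

int main(int argc, char **argv)
{
    struct frame f;
    f.next = expected;

    /* Unchecked copy of external input: anything longer than the buffer
       overwrites f.next, so the address that gets called is chosen by
       whoever supplied argv[1]. In practice this toy just crashes, which
       already demonstrates the control-flow diversion. */
    if (argc > 1)
        strcpy(f.buf, argv[1]);

    f.next();
    return 0;
}

Whether a snippet like this "defeats protective countermeasures" is exactly the kind of question the text leaves open -- and the answer determines whether mailing it to a colleague abroad requires an export license.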

Legal uncertainty and coercive power to nation states


Legal uncertainty is almost always to the benefit of the powerful. Selective enforcement of vague laws that criminalize common behavior is a time-honored technique -- perhaps best embodied in Beria's famous statement "Find me the man and I will find you the crime".

The principle is simple: As long as a person conforms to the expectations of society and the powerful, the laws are not stringently applied -- but as soon as the person decides to challenge the status quo or not cooperate with the instruments of power, the laws are applied with full force.

I live in Switzerland. Most of the other security researchers I like to collaborate with are spread out all over the world, and I routinely travel across international borders. Occasionally, I carry working exploits with me -- I have to do this if I wish to collaborate with other researchers, discuss new discoveries, or perform investigations into reliable methods of exploitation. Aside from the physical crossing of borders, it is routine to work jointly on a shared code repository across borders and timezones.

The newly created legal situation criminalizes this behavior. It puts me personally at huge risk -- and gives local governments a huge coercive tool to make me hand over information about any zero-days I find, ahead of time: If I do not apply for an export license before visiting another researcher in Germany or France or the US, I will be breaking export control regulations.

The cyber activists who celebrated Wassenaar have mainly ensured that every security researcher who regularly leaves his or her country to collaborate with others can easily be coerced into cooperating with the host government. They have made it easier for all governments to obtain tools for surveillance and oppression, not harder.

A story of Elbonia and Wassenaar


Let us imagine a young researcher, Bob, in a country named Elbonia. Elbonia happens to be a Wassenaar member, and otherwise mostly under the rule of law, with comparatively solid institutions.

Bob finds a number of vulnerabilities in a commonly used browser. He reports these to the vendor in the US, and the vendor publishes fixes and awards bug bounties.

The domestic intelligence service of Elbonia has a thorny problem on the counterterrorism side -- separatists from one of its provinces have repeatedly detonated bombs over the past few years. The intelligence service could really use some good exploits to tackle this. Unfortunately, it has had difficulties hiring in recent years and does not have the tooling or expertise -- but it does see the security bulletins and the bug bounties awarded to Bob.

They decide to have a coffee with Bob -- who does not want to work with the intelligence service and would prefer to get the problems fixed as quickly as possible. The friendly gentlemen from the service then explain to the researcher that he has been breaking the law, and that it would be quite unfortunate if this led to any problems for him -- but that it would be easy to get permission for future exports, provided that a three-month waiting period is observed in which the Elbonian intelligence service gets to use the exploits for protecting the national security and territorial integrity of Elbonia.

What would the researcher do?

Balkanising and nationalising an international research community


The international security research community has greatly contributed to our understanding of computer security over the last 20+ years. Highly international speaker line-ups are standard at conferences, and cooperation between people from different nations and continents is the norm rather than the exception.

The criminalization of exporting exploits risks balkanising and nationalising what is currently a thriving community whose public discussion of security issues and methods for exploitation benefits everybody.

The implementation of Wassenaar makes it much easier and less risky to provide all exploitable bugs and associated exploits to the government of the country you reside in than to do any of the following:
  • report the vulnerability to a vendor and provide a proof-of-concept exploit
  • perform full disclosure by publishing an exploit in order to force a fix and alert the world of the problem
  • collaborate with a researcher outside of your home country in order to advance the state of our understanding of exploitation and security

Wassenaar heavily tilts the table toward "just sell the exploit to your host government".

Making "defense" contractors the natural place to do security research


With the increased risk to the individual conducting research, and the fragmentation of the international research community, what becomes the natural place for a security researcher to go to pursue his or her interest? What employer is usually on great terms with its host government, and can afford to employ a significant number of other researchers of the same nationality?

The changes to Wassenaar make sure that security researchers, who in the past few years have been recruited in large numbers into large, private-sector, consumer-facing companies, will have much less attractive prospects outside of the military-industrial complex. A company that does not separate people by nationality internally, and is unused to heavy classification and compartmentalisation, will simply not want to run the risk of violating export-control rules -- which means that the interesting jobs will be offered by the same contracting firms that dominate the manufacturing of arms. These companies are much less interested in the security of the global computing infrastructure -- their business model is to sell the best methods of knocking out the civilian and military infrastructure of their customers' opponents.

Will the business of the defense contractors be impacted? This is highly doubtful -- the most likely scenario is that host governments for defense contractors will grant export licenses, provided that the exploits in question are also provided to the host government. The local military can then defend their systems while keeping everybody else vulnerable -- while keeping good export statistics and "protecting Elbonian jobs".

Everybody that cares about having more secure systems loses in this scenario.

Weakening defensive analysis


The list of knock-on effects that negatively affect the security of the Internet can be extended almost arbitrarily. The changed regulation also threatens to re-balance the scales to make the public analysis of surveillance and espionage toolkits harder and riskier.

The last few years have seen a proliferation of activist-led analysis of commercial surveillance tools and exploit chains -- publicly accessible analysis reports that disassemble and dissect the latest government-level rootkit have become a regular occurrence. This has, no doubt, compromised more than one running intelligence-gathering operation, and in general caused a lot of pain and cost on the side of the attackers.

Some implants / rootkits / attack frameworks came packaged with a stack of previously-unknown vulnerabilities, and most came with some sort of packaged exploit.

It is very common for international groups of researchers to circulate and distribute samples of both the attack frameworks and the exploits for analysis. Quick dissemination of the files and the exploits they contain is necessary so that the public can understand the way they work and thus make informed decisions about the risks and capabilities.

Unfortunately, sending a sample of an attack framework to a researcher in another country now risks being made illegal.

On export controls and their role in protecting privacy and liberty


There is a particular aspect of the lobbying for adding exploits to Wassenaar that I personally have a very hard time understanding: How did anyone convince themselves that the amendments to Wassenaar were a good idea, or that export control would be helpful in preventing surveillance?

Any closer inspection of the agreement and its history brings to light that it was consistently used in the past to restrict the export of encryption -- in the late 1990s, Wassenaar restricted the export of any mass-market encryption with key sizes in excess of 64 bits.

Everybody with any understanding of cryptographic history knows that export licenses were used as a coercive mechanism by governments to enhance their surveillance ability -- "you can get an export license if you do key escrow, or if you leak a few key bits here and there in the ciphertext". To this day, the security of millions of systems is regularly threatened by the remains of "export-grade encryption".
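
As a back-of-the-envelope illustration of why those export-grade key lengths mattered (the guessing rate below is my own assumption, not a figure from any source), a few lines of C are enough to compare the keyspaces:

/* Rough illustration with an assumed guessing rate; real attacks scale
   this out across many machines or use cryptanalytic shortcuts. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double rate = 1e9;                 /* assumed key guesses per second, per device */
    const int sizes[] = { 40, 56, 64, 128 }; /* historical export-grade vs. modern key lengths */

    for (int i = 0; i < 4; i++) {
        double keys  = pow(2.0, sizes[i]);
        double years = keys / rate / (3600.0 * 24.0 * 365.0);
        printf("%3d-bit key: %.2e keys, roughly %.2e device-years to exhaust\n",
               sizes[i], keys, years);
    }
    return 0;
}

At that assumed rate, a 40-bit keyspace falls in well under an hour on a single device, a 64-bit keyspace is a matter of throwing hardware at the problem for a well-funded adversary, and a 128-bit keyspace is out of reach entirely -- which is exactly why capping exportable key sizes was so attractive to the agencies doing the surveilling.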

Even today, there are plenty of items on the Wassenaar restriction lists whose free availability would have great pro-privacy and anti-surveillance implications -- encrypted, frequency-hopping radios, for example.

I have a difficult time understanding how anyone who claims to support freedom of expression and wishes to curb surveillance by governments would hand governments additional coercive power, instead of advocating that encrypted, frequency-hopping radios should be freely exportable and available in places of heavy government surveillance.

Summary


While the goal of restricting intrusive surveillance by governments is laudable, the changes to Wassenaar threaten to achieve the opposite of their intent -- with detrimental side effects for everybody. The changes need to be repealed, and national implementations of these changes rolled back.

(Thanks to all pre-reviewers for useful comments & suggestions :-) 

1 comment:

John Thacker said...

The proposed US implementation seems particularly bad. It has a clause that implies that sharing exploits with the NSA is required for export approval.

Wassenaar makes perfect sense if you agree that the problem is the "bad guys" as defined by the US and other Western governments, terrorists and criminals and a few rogue nations that don't participate in Wassenaar. If you think that your own signatory country is potentially part of the problem, then it's a big mistake.