Monday, July 20, 2015

Commentary on the BIS proposal regarding the Wassenaar Arrangement

The Bureau of Industry and Security (BIS) has proposed rules related to the Wassenaar Arrangement, a set of agreements intended to limit the exchange of weapons and related research. As cyber security gains attention, the Wassenaar Arrangement has been expanded to cover cyber security research. Specifically, the BIS proposes to require export licenses for products and documentation related to network and software vulnerabilities. These rules have the potential to severely restrict the sort of work I and my peers in the industry do. The BIS is taking public comment through today. Below are my comments to the BIS, taken in large part from a previous post on Security Shades of Grey.

I frequently write about malware, spam, credit card fraud, and various computer crimes. In my and others' writing it may seem as though there is an easy distinction between the legitimate and the malicious. The reality is, the world of online security is not always black and white. More often, it is filled with shades of grey.

The same behavior may be perfectly legitimate in one context, and purely criminal in another. The same program or tool can be used for benevolent purposes by one person, and for malicious gain by another. In fact one person may use technology tools for good by day, and for evil by night: Brian Krebs wrote in his book Spam Nation the tale of Pavel Vrublevsky, a Russian who simultaneously ran a widespread pharmaceutical spam program and served as chairman for the anti-spam working group in the Russian Ministry of Telecom.

With the BIS rules regarding the Wassenaar Arrangement, there is a danger that well-meaning legislators who don't understand what they are legislating could cause more harm than good. The BIS proposes to restrict the transfer of software and knowledge related to intrusion software, penetration testing products, and IP network communications surveillance systems or equipment.

I am not an "elite hacker" by any definition, but I do research vulnerabilities in the products I use. Under certain interpretations of the BIS proposal, several of my projects and vulnerability disclosures could needlessly fall under these restrictions.

About a year ago, I found that my wireless router was not running the latest available firmware, even though its update check said it was up to date. Being a curious soul, I set out to find out why. Eventually I discovered that my router relied on a file stored on the manufacturer's website, which listed the latest firmware version for every router model; that file had not been updated, so as far as my router knew, it had the latest version.
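For illustration, here is a rough sketch in Python of the kind of "am I up to date?" check the router appeared to perform. The URL, file format, and model name below are placeholders I made up; the manufacturer's actual mechanism is proprietary.

    # Hypothetical sketch of a firmware "am I up to date?" check.
    # The URL, file format, and model name are assumptions for illustration.
    import urllib.request

    VERSION_LIST_URL = "http://updates.router-vendor.example.com/firmware_versions.txt"
    MODEL = "WR-1234"            # placeholder model name
    INSTALLED_VERSION = "1.0.2"  # version currently running on the router

    def latest_published_version(model):
        # Fetch the vendor's version-list file (assumed format: one
        # "MODEL VERSION" pair per line) and return the entry for this model.
        with urllib.request.urlopen(VERSION_LIST_URL, timeout=10) as resp:
            for line in resp.read().decode("utf-8").splitlines():
                parts = line.split()
                if len(parts) == 2 and parts[0] == model:
                    return parts[1]
        return None

    latest = latest_published_version(MODEL)
    if latest is None or latest == INSTALLED_VERSION:
        # If the vendor never updates the version-list file, this branch
        # runs forever: the router reports "up to date" even when newer
        # firmware exists.
        print("Firmware is up to date.")
    else:
        print("Update available: " + latest)

The failure mode is plain once you see it: nobody updated that one file, so every router checking against it believed it was current.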

My research was completely aboveboard, with neither malicious intent nor malicious use. In fact, that research led to an informal relationship with the product team at this manufacturer, such that I've been able to beta test several new products and recommend changes to make them more secure upon public release. Along the way I have discovered a few more serious flaws, which the company fixed before I published my research. Under the proposed rules, though, I accessed the website in a manner that was not intended by the manufacturer, and thus exceeded the intended authorization. My blog posts describing the flaws could enable a malicious hacker to gain access to devices whose owners have not updated to the fixed firmware. My beneficial research - which has resulted in more secure routers used in hundreds of thousands of homes and small businesses - could instead have been interpreted as proprietary research into vulnerabilities and exploitation of network-capable devices.

As another example, I use a variety of software and devices to protect my home network from viruses, malware, and attacks. A recent addition was an IDS, or intrusion detection system, using the open-source Snort software on a Raspberry Pi running Kali Linux. I wrote some custom rules to detect undesired activity by looking at the responses OpenDNS gave to domain name queries. OpenDNS is like a smart phone book: for most websites it responds with the correct network address, but for known undesirable sites (whether malicious or blocked by our family policy), it instead responds with the address of a page that says "you don't really want to go here."
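To give a concrete sense of what my rules look for, here is a rough sketch in Python (using the dnspython library) of the same signal: ask OpenDNS for a name and check whether the answer is the block-page address. My actual setup does this passively, with Snort rules watching DNS responses on the wire; the block-page address below is a placeholder, so substitute whatever address OpenDNS returns for a blocked domain on your own network.

    # Rough sketch of the signal my Snort rules key on, written with the
    # dnspython library (pip install dnspython, version 2.0 or later).
    # The OpenDNS resolver addresses are real; BLOCK_PAGE_IP is a
    # placeholder assumption -- substitute the address OpenDNS actually
    # returns for a blocked domain on your network.
    import dns.resolver

    OPENDNS_RESOLVERS = ["208.67.222.222", "208.67.220.220"]
    BLOCK_PAGE_IP = "146.112.61.104"  # placeholder block-page address

    resolver = dns.resolver.Resolver()
    resolver.nameservers = OPENDNS_RESOLVERS

    def is_blocked(domain):
        # Return True if OpenDNS answers with the block-page address.
        try:
            answers = resolver.resolve(domain, "A")
        except dns.resolver.NXDOMAIN:
            return False
        return any(rr.to_text() == BLOCK_PAGE_IP for rr in answers)

    # Placeholder test names -- try domains you know your policy blocks.
    for name in ["example.com", "ads.example.net"]:
        print(name, "is blocked" if is_blocked(name) else "is allowed")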

Shortly after turning on the system, I noticed that my teenage son's laptop was frequently making DNS queries that triggered alerts - at a rate orders of magnitude higher than any other device on the network. On investigating, I found that almost all of these alerts were for requests for advertising domains. The culprits were two browser "helpers" that had been installed on his computer - one known as "Jollywallet" and the other as "LPT Monetizer." Both are programs that hook into a web browser and display advertisements, presumably to earn money for those controlling the ad network. More advertising impressions equal more revenue.
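The tally that pointed the finger was nothing fancy. Something like the following Python sketch, which assumes Snort's one-line "fast" alert log format, is enough to see which device is noisiest; adjust the log path and regular expression for your own setup.

    # Quick tally of Snort alerts per source address, assuming the
    # one-line "fast" alert output format. Adjust ALERT_LOG and the
    # regular expression for your own configuration.
    import re
    from collections import Counter

    ALERT_LOG = "/var/log/snort/alert"  # typical location; adjust as needed
    SRC_RE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})(?::\d+)? -> ")

    counts = Counter()
    with open(ALERT_LOG) as log:
        for line in log:
            match = SRC_RE.search(line)
            if match:
                counts[match.group(1)] += 1

    # The noisiest devices float straight to the top.
    for src, total in counts.most_common(10):
        print("%-15s %d" % (src, total))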

Why did my anti-virus program not detect and block these programs? Strictly speaking, they are not malware. They don't steal passwords or break into bank accounts. They don't delete files or destroy hard drives. They don't seek out other computers to infect, or databases to hack. Somewhere along the line, they probably came as a hidden "benefit" of a game or other program my son intentionally installed.

The implementation of Snort to inspect DNS responses, and my subsequent description of the project and public release of my custom Snort rules, could be interpreted as communication with intrusion software.

Google, Cisco, and others with legions of legal counsel have written their own comments on the proposed rules. Many of my peers and I have no such counsel: we are individuals doing our part to make the Internet a safer place, often by researching vulnerabilities and exploits -- and ways to counter such exploits. When even a legion of legal experts cannot be certain whether a given activity falls under export controls, individuals like me have no chance of understanding the ins and outs of the rules. Many of us are likely to stop such research rather than risk running afoul of federal law. And when the good guys quit, all that will be left will be the criminal hackers.

Security research is shrouded in shades of grey. Black-and-white laws with no room for interpretation, and no exemption for good-faith research and sharing, risk squashing an industry of good guys. The research we do - often on our own time with no expectation of being paid - results in better security for everyone. The bad guys will continue researching and exploiting vulnerabilities regardless of the law. My "hacker" peers and I just want to find and fix flaws first. Don't discourage us.

Update:

For clarity, here are the scenarios I am most concerned with:

  1. If I conduct research on a network-capable device that I own (such as a wireless router) and which was manufactured by a non-US company, and find exploitable vulnerabilities, does reporting those vulnerabilities to the manufacturer in exchange for a "bug bounty" require an export license?
  2. If I conduct intrusion detection research and publish how to reproduce a product or configuration on my blog, which has a documented readership from all parts of the world, does that require an export license?