The old Cold War export control alliance, now known as the Wassenaar Arrangement, hasn't exactly been a hotbed of new controls since Russia joined the club. But according to the Financial Times, the 41-nation group is preparing a broad new set of controls on complex surveillance and hacking software and cryptography. I suspect that the move is a response to concerns about the use of such tools -- from deep packet inspection to zero-day attacks -- by rogue states like Syria and Iran.
It's an unusual step in several respects. First, the European Union seems to be at least as enthusiastic as the United States about the controls. Usually, Europeans have let the US take the lead (and the economic hit) when it comes to controlling exports. Second, it is not clear that these controls will work. Wassenaar doesn't include China or Israel, both major producers of surveillance and hacking tools. So the new control regime could turn out to be an exercise in moral preening, as Europe and the United States sacrifice technology sales to China and Israel for the sake of political correctness.
The latest Snowden leak story is in the Huffington Post. It says that NSA thought about exposing the hypocrisy of Islamic extremist recruiters by revealing their financial greed or predatory sexual habits. I'm quoted in support of considering such tactics, but the backstory of the interview may be more interesting.
When one of the authors, Ryan Grim, called me for comment, he said that while Glenn Greenwald was transitioning to his new Omidyar-funded venture he was temporarily publishing his Snowden leaks with HuffPo. So when he asked for my take on the NSA story, pretty much the first words out of my mouth were, "Why wouldn't we consider doing to Islamic extremists what Glenn Greenwald does routinely to Republicans?" The story quotes practically everything I said to Grim except that, although I returned to the point a couple of times and emphasized that it summed up my view.
I don't think HuffPo cut the quote because they ran out of electrons. The article itself is so tediously long that I defy anyone to read every word in a single go.
Nor because my remark was inaccurate. It turns out that Glenn Greenwald has written an entire book devoted to exposing the contradiction between Republicans' ideology and their private lives. In Greenwald's words, "While the right wing endlessly exploits claims of moral superiority ... virtually its entire top leadership have lives characterized by the most decadent, hedonistic, and morally unrestrained behavior imaginable ...[including] a string of shattered marriages, active out-of-wedlock sex lives, and highly 'untraditional' and 'un-Christian' personal lives [endless detail omitted]." His book certainly makes the NSA memo sound restrained and cautious, but both are motivated by the same idea.
Grim and Greenwald very likely cut the quote because it would have undermined the narrative of the piece, which combines solicitude for the poor Islamists whose sexual and financial hypocrisy might be exposed with outrage at the NSA for even considering such a tactic. The quote would have made them look like, well, hypocrites.
The incident makes me wonder what else Greenwald leaves out of his stories. And why we should continue to trust snippets of documents selected by someone who thinks that the difference between Islamist extremists and Republicans is that one is an enemy that deserves no quarter and the other is sort of like Martin Luther King, except for the part about trying to kill us.
The US-China Economic and Security Review Commission has issued its annual report. It reminds us that, while press and privacy campaigners have been hyperventilating over US intelligence programs, there are, you know, actual authoritarian governments at work in the United States -- breaking into the networks of activists whom they dislike, newspapers whose sources they want to discover, and companies whose secrets they want to steal, all without (gasp!) court orders or Jim Sensenbrenner's consent.
Perhaps even more interesting, the Commission offers moral support and an open Overton window to those who support much more active defenses than the Justice Department has been willing to countenance under the Computer Fraud and Abuse Act. Among the policy options it treats seriously are watermarking and beaconing of documents for evidentiary purposes (a rough code sketch of the watermarking idea follows the excerpt), as well as authorizing private victims to conduct a host of active responses to intrusions:
Encourage the U.S. government, military, and cleared defense contractors to implement measures to reduce the effectiveness of Chinese cyber operations and increase the risk of conducting such operations for Chinese organizations. For example, the IP Commission recommends measures such as ‘‘meta-tagging, watermarking, and beaconing,’’ because they can help identify sensitive information and code a digital signature within a file to better detect intrusion and removal. These tags also might be used as evidence in criminal, civil, or trade proceedings to prove data was stolen.
Clarify the legal rights of companies, and the types of action that are prohibited, regarding finding and recovering intellectual property that is stolen through cyber intrusions. Mr. Kamphausen said U.S. companies ‘‘need the right tools that afford them the protections, legal and otherwise, so that they can do what’s in their own interest.’’
Pass legislation permitting U.S. companies to conduct offensive cyber operations in retaliation against intrusions into their networks. Such operations could range from ‘‘actively retrieving stolen information’’ to ‘‘physically disabling or destroying the hacker’s own computer or network.’’
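For readers wondering what ''meta-tagging, watermarking, and beaconing'' might look like in practice, here is a toy sketch of the watermarking half of the idea -- wrapping a document with a keyed tag that ties it to its owner, so that a copy recovered from an intruder's server can later help prove theft. The construction is mine, not the IP Commission's, and real products are far more sophisticated (beaconing, the phone-home variant, raises exactly the active-defense questions the report flags):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"owner-held signing key"  # hypothetical; known only to the document owner

def watermark(document: bytes, owner_id: str) -> dict:
    """Wrap a document with a keyed tag tying it to its owner. The HMAC
    covers the owner ID and the document body, so the tag can later help
    prove that this exact file came from this owner's systems."""
    tag = hmac.new(SECRET_KEY, owner_id.encode() + document, hashlib.sha256).hexdigest()
    return {"owner": owner_id, "tag": tag, "body": document.decode("utf-8")}

def verify(marked: dict) -> bool:
    """Check a recovered copy against the owner's key."""
    expected = hmac.new(
        SECRET_KEY,
        marked["owner"].encode() + marked["body"].encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, marked["tag"])

if __name__ == "__main__":
    doc = watermark(b"quarterly design schematics (decoy copy)", "acme-defense")
    print(json.dumps(doc, indent=2))
    print("verifies:", verify(doc))  # True; any tampering flips this to False
```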
The Administration has set a goal for fixing the troubled Obamacare website, healthcare.gov. By November 30, according to the Washington Post, the government's goal is that 80% of users will be able to buy healthcare policies online. The 80% target moves the goalposts back from the President's more confident statement earlier this month: “By the end of this month, we anticipate that it is going to be working the way it is supposed to.”
But it is a concrete, measurable goal.
Unfortunately, everyone involved in that measurement, from the contractors to HHS to the White House, has a strong interest in reporting success. And a track record of handling data in a way that masks failure. For example, the administration refused to provide any numbers about enrollments for more than a month and then released numbers that mix actual enrollments with plans that are simply sitting in the consumer's online “shopping cart.”
You don't have to be very cynical to think that we'll only hear about enrollment statistics on November 30 if the 80% goal is met, or can be spun. Which leads me to the point of this post.
We don't actually have to wait for the administration to release the numbers. Because the administration has chosen a target that can be measured by all of us. All we need is for a large enough group of consumers to go through the enrollment process on November 30 and report whether they succeeded or failed. Call it crowd auditing, or crowditing for short. In fact, done right, it's a better measure of success or failure than anything that can be measured by site administrators. And it will be available in something close to real time.
There are obviously problems with crowditing in such a politically charged atmosphere. Participants will be tempted to game the system, claiming failure or success without actually using the site. But it seems to me that many of those problems can be overcome by requiring participants to use real identities and to take screenshots when they start and periodically as they move through the process, sending these shots, along with a shot of their final successful or unsuccessful registration screen, to a central location for verification and tabulation. I'm sure there are reputable news outlets that would provide some neutral oversight to the tabulation process, and I'm guessing that they'd spot-check the participants, if only to get local color and quotes.
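To make the tabulation half of the idea concrete, here's a minimal sketch in Python of the kind of script the central location might run. Every detail is an assumption of mine -- the field names, the rule that a report needs a few screenshots before it counts -- but it shows that once the reports are collected and spot-checked, the arithmetic is trivial:

```python
from dataclasses import dataclass

@dataclass
class Report:
    participant: str   # real identity, so reports can be spot-checked
    screenshots: int   # number of screenshots submitted with the attempt
    succeeded: bool    # did the enrollment actually complete?

def tabulate(reports: list[Report], min_screenshots: int = 3) -> float:
    """Compute the measured success rate: one verified report per
    participant, discarding submissions too thin to verify."""
    verified: dict[str, Report] = {}
    for r in reports:
        if r.screenshots >= min_screenshots:
            verified[r.participant] = r  # one report per real identity
    if not verified:
        return 0.0
    return sum(r.succeeded for r in verified.values()) / len(verified)

if __name__ == "__main__":
    sample = [
        Report("alice", 5, True),
        Report("bob", 4, False),
        Report("bob", 1, True),   # too few screenshots; discarded
        Report("carol", 6, True),
    ]
    print(f"measured success rate: {tabulate(sample):.0%}")  # 67%, vs. the 80% goal
```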
This is a little outside my usual beat, so I'm opening comments for evaluations of whether this is feasible, how to prevent abuse, and most importantly for volunteers. If we're going to organize this, we'll have to work fast, since November 30 is only six days away.
If you want to volunteer services, funding, or web design/software/technology help, or you want to make a comment that remains confidential, please send your message to email@example.com. If this works out, we'll likely want to work closely with, and perhaps give an exclusive to, the news outlets that are most helpful in pulling the effort together (h/t Glenn Greenwald), so news outlets interested in helping should also send a message to the same account.
The Leahy-Sensenbrenner USA FREEDOM Act puts the Foreign Intelligence Surveillance (FIS) court in charge of shaping, overseeing, and enforcing minimization guidelines in connection with section 215, pen/trap orders, and section 702, largely taking the Attorney General out of the process of writing minimization guidelines.
I'm appalled, because the FIS court has taken control of minimization before, with disastrous consequences; it built a "wall" between intelligence and law enforcement without any legal basis for doing so, and enforced the wall so aggressively that the FBI couldn't use its best counterterrorism assets to track down the hijackers in late August and early September 2001.
In a very real sense, it was the FIS court's legal error combined with a self-righteous use of its contempt power that thwarted the country's last, best chance to stop the attacks.
That the court made terrible errors in 2001 is perhaps understandable. Repeating those errors is not.
But the more closely I observe the FIS court, the more concerned I become that the peculiar role we have created for it makes a repetition all too likely. I'm testifying to the Judiciary Committee tomorrow on the USA FREEDOM Act, and I took the opportunity in this post to do a bit more thinking about why the FIS court seems to have learned so little from its discreditable performance in 2001.
It may be that the problem is best seen as a constitutional failure. That is, practical politics are pushing the FIS court out of an Article III role and into an Article I role. And the FIS court's failings may be best seen as a problem in separation of powers.
At the outset, the separation of powers issue isn't obvious. The FIS court’s principal statutory role is to approve or deny intercept and discovery orders involving foreign intelligence. This sounds like a role any court might play; judges approve warrants and wiretaps every day in a criminal context.
In practice, though, the FIS court’s role is quite different. Sitting on the court pulls judges into some of the most sensitive intelligence programs the United States has. They suddenly see the many terrible things that other nations and movements hope to visit on Americans; they see how much the government must do just to keep our enemies at bay. They cannot help wanting the government to succeed.
But service on the FIS court also exposes judges to some of the most sustained and unidirectional political criticism they are likely to experience in their careers on the bench. The court is routinely mocked as nothing but a rubber stamp, and it's clear that the mockery stings. In fact, the court recently announced that it was keeping statistics to show how often it forces modifications of FISA orders. See Letter from the Honorable Reggie B. Walton, Presiding Judge, the United States Foreign Intelligence Surveillance Court, to the Honorable Charles E. Grassley, Ranking Member, Committee on the Judiciary, United States Senate (Oct. 11, 2013), available at http://www.uscourts.gov/uscourts/courts/fisc/ranking-member-grassley-letter-131011.pdf.
This suggests that the political criticism is hitting home, and perhaps affecting the court’s ability to apply the law with an even hand. After all, no one would want to be judged by a court that goes out of its way to publicize a scorecard of how often it rules against him.
These conflicting pressures, I suspect, push the court into a nit-picking overseer’s mentality toward the intelligence agencies. Feeling quite legitimate pressure to grant surveillance requests, the court also feels pressure to show its independence.
As the court’s “scorecard” and its occasional public statements suggest, the result is a court that flyspecks FISA orders to a fare-thee-well, demanding many modifications that may or may not be required by a strict reading of the FISA statute.
Ordinarily, of course, if a judge asks the government for things that go beyond his authority, the government appeals. But in the close confines of the FIS court, this is not an easy option. Neither the Justice Department nor the intelligence community wants to alienate the FIS court by suggesting that its demands have no basis in law. Instead, it is more comfortable for all if the intelligence community adopts as many of the court’s suggestions as it can and explains why it can't adopt the others.
And so the FISA process stops being a judicial one of argument and ruling; it becomes more of a negotiation, in which the government is tempted to accept any doable measure that the court asks for, whether justified by law or not, and the court does not press for changes that the government persuasively argues it cannot make. Once the court has negotiated minimization guidelines, it owns them, pet rocks and all. The FIS court necessarily feels responsible for ensuring that they are carried out as intended. To make sure that happens, the court plays an increasingly managerial role in the operation of intelligence agencies.
But the FIS court is not a manager. Real managers have many administrative tools to make sure their policies are carried out. The FIS court has only two: legal rulings and contempt findings. As the court becomes more familiar with the agency, it grows more invested in the implementation of particular measures and policies. The temptation to declare that its favored measures are required by law is very great.
Similarly, when the court is disappointed or surprised by how the agency has implemented its measures, the temptation to brandish the contempt power is strong.
In short, I suspect that the disaster of 2001 was not the result of one judge’s bad temperament or faulty legal judgment. It is an institutional temptation, inherent in the managerial role that the FIS court has gradually assumed. Whether that role is consistent with the constitution looks more and more like a difficult question.
NOTE: My full testimony is here: Baker Testimony to Senate Judiciary - 11-21-13.
I reviewed Juan Zarate's Treasury's War for the Wall Street Journal. If you have a subscription, here's the paywalled link. For cheapskates, here's the gist:
Treasury has attacked money laundering by big banks, imposing fines up to $2 billion on institutions around the world. As a result, banks have toughened their compliance regimes. Under the slogan “know your customer,” they now feel obliged to run checks on their customers’ reputations and to shun even faintly suspicious transactions.
In such a climate, it’s easy to become a customer no one wants to know. And the easiest way of all is to be officially labeled a “primary money laundering concern.” A bank that has been tarred with that brush quickly becomes a pariah to every bank with a compliance program. Because a pariah can’t perform normal financial transactions under such conditions, its solvency is immediately drawn into question. And, boom, within 24 hours, even a bank with no direct ties to the United States is effectively out of business, brought down by a Treasury-induced run. Treasury’s designation turns out to be a remarkably effective weapon—the Predator drone of financial sanctions—killing instantly, without warning, far from home.
In one of his better stories, Mr. Zarate shows how Treasury’s new weapon struck even North Korea, a veteran sanctions-buster that had sheltered comfortably in China’s lee for decades.
China’s diplomats stood by their client as usual, but its banks did not. Rather than risk its access to world financial markets, even the state-owned Bank of China in Macau froze North Korean accounts. Later, after many ceremonial toasts at a session of the international talks on nuclear proliferation, one inebriated North Korean negotiator leaned in to his American counterparts and admitted: “You Americans have finally found a way to hurt us.”
Mr. Zarate brings verve and the joy of combat to this and other tales. I served with him in government, and “Treasury’s War” certainly speaks with his hard-nosed, in-it-to-win-it voice. He is indeed a warrior-bureaucrat (and, truth be told, one after my own heart). In Mr. Zarate’s hands, what could have been a dry series of think-tank papers becomes a lively narrative filled with heroes, villains, and fools. ...
Mr. Zarate’s enthusiasm for his new weapon is heartfelt. Since leaving government, though, I’ve had a chance to see Treasury’s war from the other side. And like a Predator strike, it looks a little different on the ground. If you own a bank, Treasury’s designation can wipe out your investment overnight. Yet because the decision is fatal, challenges are rare. Treasury never sees its errors or the collateral damage it has caused. And what begins as an awesome and sobering responsibility soon becomes a routine part of the bureaucracy’s toolbox, prone to overuse and free from oversight.
Read the whole thing.
I'll be testifying tomorrow before the House Intelligence Committee. This post is an excerpt from that testimony. The full version is here: Download Baker - HPSCI testimony - Oct. 29 2013.
I fear that the campaign by Glenn Greenwald and others who control the Snowden documents has forced the executive branch into a defensive crouch. Other nations are taking advantage of the moment to demand concessions that the White House is already halfway to granting. If so, we will regret them as a country long after the embarrassment of fielding angry phone calls from national leaders has faded into a short passage in President Obama's memoirs.
European and other nations see the prospect for enormous gains at the expense of the U.S., in part because President Obama seems genuinely embarrassed and unwilling to defend the National Security Agency. Instead, he is offering assurances to select world leaders that they are not targets, and his homeland security adviser is declaring that “the president has directed us to review our surveillance capabilities, including with respect to our foreign partners. We want to ensure we are collecting information because we need it and not just because we can [and that] we are balancing our security needs with the privacy concerns all people share.”
Administration sources have begun criticizing the NSA for putting the President in this bind, and they are hinting at the possibility of negotiating reciprocal deals with other countries that will bar espionage directed at each other while sharing intelligence.
In short, we face the prospect that foreign nations will capitalize on President Obama's defensive crouch to extract diplomatic and intelligence concessions that would have been unthinkable a year ago. At the same time, I note, these nations have asked China, which is subjecting them to the most notorious and noisy computer hacking campaign on the planet, for, well, for nothing at all.
The reason for that reticence is simple. They know that China will give them nothing.
And that, it seems to me, is where Congress comes in. Sometimes an American negotiator's best friend is an unreasonable Congress. As far as European negotiators are concerned, the United States Congress is almost in China's league.
If Congress sets limits on what the executive branch can concede to its foreign counterparts, those limits will be observed. And if Congress specifies consequences for threatening U.S. industry, threatening U.S. industry will be much less attractive.
That's why I suggest that any legislation addressing the domestic intelligence program also address the international campaign to weaken U.S. intelligence capabilities. What would that legislation say? Let me suggest a few possibilities, any one of which would provide U.S. negotiators with useful limits and leverage:
To play the role it has played in the world for the last 70 years, the United States must be able to gather intelligence anywhere in the world with little or no notice. We never know where the next crisis will erupt, where the next unhappy surprise is coming from. It’s the intelligence community’s job to respond to today’s crises, but its agencies live in a world where intelligence operations take years to yield success. That makes it a little hard -- and very dangerous -- to create “intelligence-free zones." ...
Even the countries we usually see as friends sometimes take actions that quite deliberately harm the United States and its interests. Ten years ago, when the U.S. went to war with Iraq, France and Germany were not our allies. They were not even neutral. They actively worked with Russia and China to thwart the U.S. military’s mission. Could they act against U.S. interests again in the future – in trade or climate change negotiations, in Syria, Libya or Iran? ...
That’s just life and international politics, as German Chancellor Angela Merkel knows quite well. She visited China right after public disclosures that the Chinese had penetrated her computer network, yet she managed to be “all smiles” while praising relations between the two countries as “open and constructive.” ...
The United States can’t stop gathering intelligence without running the risk of terrible surprises. So it won’t.
NIST has revised the draft cybersecurity framework that it released in August. What it published today is a "preliminary cybersecurity framework." After comments, a final framework will be released in February.
I've been very critical of the draft released in August. NIST clearly worked to address the criticisms.
The result is a mixed bag, but the document is still a net loss for security.
What's improved? First, in an effort to introduce flexibility into the document, NIST deleted all the “should” language from the privacy standards.
Second, it added a paragraph that asserts the “flexibility” that organizations have to implement the privacy provisions:
Appendix B contains a methodology to protect privacy and civil liberties for a cybersecurity program as required under the Executive Order. Organizations may already have processes for addressing privacy risks such as a process for conducting privacy impact assessments. The privacy methodology is designed to complement such processes by highlighting privacy considerations and risks that organizations should be aware of when using cybersecurity measures or controls. As organizations review and select relevant categories from the Framework Core, they should review the corresponding category section in the privacy methodology. These considerations provide organizations with flexibility in determining how to manage privacy risk.

Third, NIST responded to my concern that the “governance” section of the appendix would smuggle into the rules governing private companies all of the fair information practice principles, or FIPPs, that govern federal agencies. NIST narrowed the scope of the governance section by tying it to the actual PII being used for cybersecurity. See the bold language below.
Old version: Organizations should identify policies and procedures that address privacy or PII management practices. Organizations should assess whether or under which circumstances such policies and procedures: [followed by a list of FIPPs, many with dubious relationship to cybersecurity]
New version: Identify policies and procedures that address privacy or PII management practices for the PII identified under the Assets category. In connection with the organization’s cybersecurity procedures, assess whether or under which circumstances such policies and procedures: [followed by the same list]
That's a substantial improvement.
What's wrong with the new version? Well, the first change, dropping the "should"s, is well-intended but largely cosmetic. In fact, it arguably makes the rules harsher, not more flexible. That’s because, instead of telling companies what they “should” do to protect privacy, the appendix now just commands them to do those things. You can see that in the example above. Also in this one:
Old version: “When performing forensics, organizations should only retain PII that is relevant to the investigation.”
New version: “When performing forensics, only retain PII or communications content that is necessary to the investigation.”
(As an aside, note the other change in the new version, which is pretty clearly the result of privacy groups’ comments. It tells companies to protect communications content, not just PII. But that change is only needed if the companies are sharing content that can’t be traced to a person. So it seems to mean that companies who share information about spam should minimize the amount of spam they quote when trying to tell other companies which messages to block. That's dumb. More broadly, why should such a mandate be added to a standard that insists that it’s about PII?)
That brings me to my biggest concern. Despite NIST’s claim that it has left companies lots of flexibility, you can’t really find flexibility in the language of the privacy appendix. So I continue to fear that the net result of the package will be to impose a "privacy tax" on cybersecurity, adding to the cost of security measures by tying those measures to expensive privacy obligations whose value is unproven. For example:
Old: “When voluntarily sharing information about cybersecurity incidents, organizations should ensure that only PII that is relevant to the incidents is disclosed.”
New: “When voluntarily sharing information about cybersecurity incidents, limit disclosure of PII or communications content to that which is necessary to describe or mitigate the incident.”
The new language is slightly less demanding, but it still calls on companies that share information about malware and intrusions to make determinations about which information is “necessary” to describe or mitigate the incident. If the company guesses wrong about a couple of bits of information, and someone later decides that those bits weren’t strictly necessary to mitigate the incident, then the standard has been violated and liability is much more likely. At a minimum, lawyers have to review every category of data that is being shared and write rules for when it is necessary and when it isn’t. It takes heroic ignorance to believe that a requirement like that won’t reduce the sharing that’s already occurring, even among private enterprises.
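To see how much judgment the “necessary” standard demands, consider a bare-bones scrubber of the kind a sharing company might run over an incident report. This is a sketch built on my own assumptions -- the regex, and the policy call that attacker infrastructure is “necessary” while victim email addresses are not -- and every one of those choices is exactly what a plaintiff's lawyer could second-guess after the fact:

```python
import re

# Hypothetical policy: attacker IPs and malware hashes are "necessary to
# describe or mitigate the incident"; employees' email addresses are not.
# Reasonable people -- and regulators -- may disagree with that line.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_for_sharing(report: str) -> str:
    """Redact PII judged unnecessary before an incident report is shared."""
    return EMAIL.sub("[redacted email]", report)

if __name__ == "__main__":
    incident = ("Phishing mail from 203.0.113.7 hit jdoe@example.com and "
                "asmith@example.com; payload sha256 begins deadbeef.")
    print(scrub_for_sharing(incident))
    # The IP and hash survive as indicators; the victims' addresses don't.
    # Whether that split was "necessary" gets decided later, by someone else.
```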
Finally, NIST took a further step that has heightened my concern that this appendix is going to impose the FIPPs on the entire US private sector. That’s because the only “reference” standard offered by NIST to explain and implement the appendix is a document that is plainly written for government agencies trying to implement federal privacy standards. In the absence of any other reference, the pressure will be great to follow the government rules.
So, to return to the example above, suppose you’re a company that wants to implement privacy-compliant information sharing. You consult the “reference” standard, and here’s what you’re told:
MINIMIZATION OF PERSONALLY IDENTIFIABLE INFORMATION
Control: The organization:
a. Identifies the minimum personally identifiable information (PII) elements that are relevant and necessary to accomplish the legally authorized purpose of collection;
b. Limits the collection and retention of PII to the minimum elements identified for the purposes described in the notice and for which the individual has provided consent; and
c. Conducts an initial evaluation of PII holdings and establishes and follows a schedule for regularly reviewing those holdings [Assignment: organization-defined frequency, at least annually] to ensure that only PII identified in the notice is collected and retained, and that the PII continues to be necessary to accomplish the legally authorized purpose.
Supplemental Guidance: Organizations take appropriate steps to ensure that the collection of PII is consistent with a purpose authorized by law or regulation. The minimum set of PII elements required to support a specific organization business process may be a subset of the PII the organization is authorized to collect. Program officials consult with the Senior Agency Official for Privacy (SAOP)/Chief Privacy Officer (CPO) and legal counsel to identify the minimum PII elements required by the information system or activity to accomplish the legally authorized purpose.
Organizations can further reduce their privacy and security risks by also reducing their inventory of PII, where appropriate. OMB Memorandum 07-16 requires organizations to conduct both an initial review and subsequent reviews of their holdings of all PII and ensure, to the maximum extent practicable, that such holdings are accurate, relevant, timely, and complete. Organizations are also directed by OMB to reduce their holdings to the minimum necessary for the proper performance of a documented organizational business purpose. OMB Memorandum 07-16 requires organizations to develop and publicize, either through a notice in the Federal Register or on their websites, a schedule for periodic reviews of their holdings to supplement the initial review. Organizations coordinate with their federal records officers to ensure that reductions in organizational holdings of PII are consistent with NARA retention schedules. By performing periodic evaluations, organizations reduce risk, ensure that they are collecting only the data specified in the notice, and ensure that the data collected is still relevant and necessary for the purpose(s) specified in the notice. Related controls: AP-1, AP-2, AR-4, IP-1, SE-1, SI-12, TR-1.
(1) MINIMIZATION OF PERSONALLY IDENTIFIABLE INFORMATION | LOCATE / REMOVE / REDACT / ANONYMIZE PII
The organization, where feasible and within the limits of technology, locates and removes/redacts specified PII and/or uses anonymization and de-identification techniques to permit use of the retained information while reducing its sensitivity and reducing the risk resulting from disclosure.
Supplemental Guidance: NIST Special Publication 800-122 provides guidance on anonymization.

None of this is good for quick and easy cybersecurity information sharing. It seems to suggest that each sharing company has to evaluate its cybersecurity data, minimize (perhaps even anonymize) the data it keeps, and get rid of anything it isn't sure it needs. The data will have to be scrubbed for accuracy and completeness. To make that decision, the guidance creates a committee that includes not just the lawyers but top officials and a privacy officer, further clogging and bureaucratizing what should be an instantaneous exchange of threat data. All of this raises the cost of information sharing, and raising the cost of something is what you do only if you want less of it.
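For what the “anonymization and de-identification” control quoted above might mean on the ground, here is a toy illustration: replacing user identifiers in retained security data with salted hashes, so analysts can still correlate events without holding raw PII. The technique and names are my own, not NIST's, and whether a keyed hash even counts as de-identification is itself contested:

```python
import hashlib
import os

SALT = os.urandom(16)  # per-dataset salt; discard it and the mapping is gone

def pseudonymize(identifier: str) -> str:
    """Swap a user identifier for a salted hash that still lets analysts
    tell whether two security events involve the same account."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

if __name__ == "__main__":
    events = [("jdoe@example.com", "failed login"),
              ("jdoe@example.com", "password reset"),
              ("asmith@example.com", "failed login")]
    for who, what in events:
        print(pseudonymize(who), what)
    # jdoe's two events share a pseudonym; no raw address is retained.
```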
There's a lot of talk in the press these days about how hard it is for the federal government to do IT right and how the blame for the failures of the healthcare.gov website should fall on the federal procurement system, not the federal managers.
As someone who advocated enthusiastically for federal use of relatively advanced IT while in government, I agree that the procurement process makes it hard to produce IT that works on budget and on time. There have been plenty of expensive IT failures in recent administrations.
That said, it isn't impossible, even with stiff political opposition, to manage big public-facing federal IT projects successfully. I can think of three fairly complex IT projects that my old department delivered despite substantial public/Congressional opposition in the second half of George W. Bush's administration.
They weren't quite as hard as the healthcare problem, but they were pretty hard and the time pressure was often just as great. Putting together the list from memory, which may be faulty on some details, they are:
These programs aren't directly comparable to the healthcare challenge, but they're in the ballpark; as I remember they were delivered without serious schedule or cost overruns, and they worked when delivered. So it can be done with careful management, and to be frank, if your administration's entire legacy depends on delivering a working healthcare IT system, managing the IT process should be a pretty high priority.
For that reason, I am surprised at the management problems that the Obamacare website has suffered from. They can't be blamed entirely on the IT procurement process.
I'm a much bigger fan of Girl Talk, whom I've blogged about before, than of current copyright law. So it's hard to resist a chance to talk about both. Girl Talk (actually a fellow named Gregg Gillis) produces delightful mashups of hip-hop and classic rock that shed new light on both. Since Girl Talk relies on a claim of fair use for his sampling and doesn't seek the original label's authorization, he has trouble selling his albums through the usual channels. (His stuff is available here.)
Now Michael Schuster, another Girl Talk lawyer-fan, has produced a law-review-style study of All Day, Girl Talk's latest album, arguing that the songs it samples actually had higher sales in the year after the sampling than in the year before. For those of us who think copyright law is too protective of plaintiffs, the article is comforting. It suggests that current law may actually be hurting the authors it purports to help by discouraging musicians from introducing their fans to our pop-cultural heritage. And that's how it's being covered by a largely sympathetic press and blogosphere.
Actually, though, I think the article is a little too comforting. I am always skeptical of scholarly research that reinforces academic prejudices, since scholars tend to adjust their standards of proof to fit their prejudices. And Schuster's article, it seems to me, does exactly that.
Schuster achieves his results by playing with the sample, dropping nine songs from a sample of about 200 because they completely wreck his argument. His reason for dropping the songs is that they were hits in the 30 months prior to the release of Girl Talk's album, so their sales were bound to decline and it isn't appropriate to charge Girl Talk with the natural rhythm of pop music sales.
If he didn't drop those songs, though, Schuster's data would show a 50% drop in sales for songs that Girl Talk samples -- exactly the opposite of the effect he claims. Schuster says he's just correcting for noise in the data. Maybe so, but once you start making big after-the-fact adjustments to a sample of 200, you can prove pretty much anything.
At best, then, Schuster has developed an interesting hypothesis that ought to be tested by a new experiment untainted by data cherry-picking.
So how about it, Gregg Gillis? We need a new Girl Talk album, and soon.
For science's sake.
PS How's that for an evergreen blog title? If the FTC enforced honesty in blogging, at least half the content of the blogosphere would appear under this heading.
I've been critical of the claim that European privacy law offers more protection against government surveillance than American law. Apparently not critical enough. An Ars Technica reporter with a pro-privacy inclination decided to seriously investigate using a German email system to get the benefits of European privacy law.
His tale of disillusionment revealed three privacy deficits in European law that even I hadn't noticed when I trashed the myth of European privacy superiority. First, unlike their US counterparts, German email providers are unable to issue transparency reports of the sort that US companies have been publishing:
“German law forbids providers to talk about inquiries for user data or handing over user data,” Löhr added. “We are currently investigating a possible way with our lawyer to issue a transparency report about questions from police like Google, Microsoft, and [many] other US providers do, but we can not promise we will be able to do so. We try hard.” Indeed, the German Telecommunications Act of 2004 (PDF) states very clearly, “The person with obligations shall maintain silence vis-à-vis his customers and third parties about the provision of information.” In other words, German communications services would be under a gag order by default.

Of course, given their other disadvantages on the government-privacy front, maybe European providers aren't exactly eager to issue transparency reports. For example, in the US, authorities have to get a specific "gag" order to prevent subscribers from getting notice that their mail has been seized; while gag orders are common in the US, they often expire after a time and can usually be challenged. It appears that Europe simply doesn't make disclosure an option. Silence, not disclosure, is the law's default.
Second, US providers can often tell their customers that the government has sought their data; German providers apparently cannot:

[A]n American provider could notify its customer that he or she is the target of a judicial investigation. Google has a user notification policy, for instance, that stands unless the court forbids it from disclosing that information. ... German court orders, by contrast, appear to be sealed automatically.

And finally, it appears that European mail providers cannot challenge government discovery orders before turning over the data. In Germany and the Netherlands, the only jurisdictions the writer examined, providers turn over the data first, and then argue about whether they should have to do so:
Löhr also added that Posteo could challenge a secret court order after the fact, unlike in the case of the United States, where such challenges can be made before such a handover. "If we think the order was not right, we can complain afterwards—and we would do so," Löhr told Ars.

The same is true elsewhere in Europe:
“There is an option to challenge that request [in the Netherlands], but only after it has [been] given the data,” Ot van Daalen, the director of Bits of Freedom, a Dutch digital rights group, told Ars. “A successful challenge leads to an order of the court to destroy the data. In the case of possible privileged communication, in practice the data is sealed in an envelope pending challenge and only opened after the data is deemed to be unprivileged by the court.”

NOTE: I'm experimenting with comments, hoping to get a higher ratio of wheat to chaff. Today's experiment: If you have comments that I am likely to find supportive, clarifying -- or entertainingly abusive -- please send them to vc.comments[at]gmail.com.
I’d like to offer readers a short quiz on judicial independence.
Imagine a field where liability is common but damages vary widely -- patent law, perhaps, or disability claims. In this field, there is a specialized court that has attracted Congressional and press criticism because it rules for the plaintiff 99% of the time. Stung by relentless criticism based on this statistic, the chief judge of the court finally writes a public letter to Congress, saying, in essence,
“You don’t understand how this court works. The court conducts detailed pretrial settlement negotiations and in at least 25% of its cases, the judge tells the plaintiff that he is likely to lose unless he reduces his claim to an amount the court considers more reasonable, and the plaintiff almost always does. The court on occasion tells the plaintiff that his chances are so poor that the case should be dropped, and it usually is. In order to correct the misimpression created by the 99% success rate figure, from now on this court will keep track of every case in which we force the plaintiff to reduce or abandon his claim and will publish those statistics regularly.”
Based on those facts, I offer two multiple-choice questions:
1. The court’s letter is (choose one):
a. A breach of the tradition that courts do not enter the political arena to justify their decisions.
b. A prudent and factual response to public misunderstandings about the court’s decisions and role.
2. Which statement about the court's collection of statistics is most accurate?
a. It improperly encourages the court's judges to "improve" their track record by negotiating for reductions and withdrawals of plaintiffs' claims even when those reductions and withdrawals are not required by law.
b. It is a valuable public service countering an inaccurate public impression; no reasonable person could believe that the publication of such statistics would ever influence the court’s execution of its judicial responsibilities.
If you were even tempted to choose “a” as the answer to either question, you should be troubled by these two letters, written by Chief Judge Walton of the FISA court. They say, in essence, that the government’s 99% success rate before the FISA court fails to take into account the frequent and intense negotiations between the court and the government, negotiations that result in the modification or withdrawal of roughly a quarter of all FISA applications. The most recent letter promises that in the future the court will keep track of all the modifications or withdrawals of FISA orders that the court negotiates and will report the court's track record to Congress and the public.
In my view, nothing better illustrates the error behind the popular bien-pensant meme that the FISA court is just a rubber stamp. The reverse is true, and for obvious reasons. Saying no makes the court a civil liberties hero; saying yes makes it a civil liberties goat. Which do you think the judges want to be? You don't have to dig deep into these letters to guess the answer.
The current political and press climate inevitably strains the FISA judges’ impartiality and encourages the court to demand concessions from the government that have no basis in law. A similar climate before 9/11 might explain Chief Judge Lamberth’s legally unjustified and fateful imposition of "the wall" in the months before the attacks, echoes of which may be found in Judge Walton’s over-the-top attack on the government in the section 215 telephone metadata case.
But even those who don’t share that view must surely wonder what role the FISA court and its staff are playing when they negotiate changes in a quarter of the warrants brought before them. Why are permanent staffers, apparently accountable only to a shifting array of judges, holding meetings up to three times a week and maintaining daily phone contact with the government to pursue questions that originate in at least some instances with the staff and not the judges? Why does the FISA court receive draft (essentially negotiation) copies of planned government pleadings before they have been reviewed and approved by executive officials?
To anyone who has served in government, these are familiar tactics. Any good executive branch bureaucrat seeking to expand his turf insists that his staff have early access to other agencies’ plans and reserves the right to negotiate those plans before they’re final. By the same token, any good executive branch bureaucrat works hard to influence how the press and Congress view his agency. All this elbowing and delegating and schmoozing and corresponding, though, has a distinctly unjudicial air.
In fact, the more the FISA court seeks to justify itself to Congress and the press, the less like a court it sounds.
From Foreign Policy:
Recently, Heritage refused to publish two papers about the National Security Agency's surveillance programs written by a prominent conservative attorney. Why? Because he concluded that the programs were legal and constitutional, according to sources familiar with the matter. It was a surprising move for a think tank that has supported extension of the Patriot Act -- which authorizes some of NSA's activities -- and has long been associated with right-of-center positions on national security and foreign policy.
But the paper's conclusions did not sit well with DeMint, the sources said, who worried about offending or alienating more libertarian lawmakers such as Sen. Rand Paul, a DeMint ally and leading critic of NSA's collection of Americans' phone records, as well as Tea Partiers, who according to a recent poll think that government counterterrorism policies have gone "too far" in restricting civil liberties. It's those groups that brought DeMint his greatest influence as a lawmaker and made him a national political heavyweight.
It turns out that at least one Washington mugger is a little too well informed about current affairs:
An attempted mugging on Capitol Hill was thwarted Monday night by a quick-thinking victim — one who apparently keeps an eye on national security news.
The victim, who weighs a petite 95 pounds, explained to the assailant she was an intern with the National Security Agency. ...
The victim elaborated further, warning the would-be mugger that the phone she held in her hand — complete with a pink-and-blue Lilly Pulitzer case — would be tracked by the NSA if she were to turn it over.
"I told him that the NSA could track the phone within minutes, and it could cause possible problems for him," the victim recounted.
You left some critical facts out of your lengthy October 7 article on European government efforts to encourage European “cloud” computing services. While the article dwells on a perceived U.S. intelligence threat to European users’ privacy, it fails to ask a question of greater importance to Times readers: What will happen to personal data, American and European, that is stored in a European cloud?

Nothing good, it turns out. European law requires that Internet service providers and telephone carriers store personal metadata for up to two years so that it will be available to European law enforcement and security agencies – a privatized and more comprehensive version of the NSA’s domestic telephone metadata collection. (NSA gave up its domestic Internet metadata collection a few years ago; Europe did not.)

Not only does the “data retention” requirement in European law cover more personal information, it comes with far fewer safeguards. In Europe, unlike the United States, the authorities need only ask for stored data; companies can and do “volunteer” their data without any court order or other legal process. See Statement of Stewart A. Baker before the Committee on the Judiciary, United States Senate, July 31, 2013.

And it shows in the surveillance statistics. Residents of Italy and the Netherlands are more than 100 times more likely to be the subjects of government surveillance than Americans, according to a study by the Max Planck Institute. See Hans-Jörg Albrecht, et al., Legal Reality and Efficiency of the Surveillance of Telecommunications, Max Planck Institute 104 (2003). The same will be true for Americans whose data is stored in a European cloud.

I have testified before Congress on the one-sided nature of Europe’s focus on privacy threats to the cloud. I’m disappointed to see the same one-sided focus on the news pages of The New York Times. Is all snooping on Americans a bad thing, or have you decided that it’s OK when foreign governments do it?
In my first post about NIST’s draft cybersecurity framework I explained its basic problem as a spur to better security: It doesn’t actually require companies to do much to improve their network security.
My second post argued that the framework’s privacy appendix, under the guise of protecting cybersecurity, actually creates a tough new privacy requirement for industry by smuggling the Fair Information Practice Principles into the law. In doing so, it clearly goes beyond the scope of the cybersecurity executive order, which is focused on protecting critical infrastructure. When was the last time lost PII caused “catastrophic regional or national effects on public health or safety, economic security, or national security?”
This post takes up a third problem: the privacy appendix is likely to make cybersecurity itself worse. The reason is simply stated. If you want more of something, you don’t raise its cost. But by grafting strong privacy mandates onto its weak cybersecurity standards, the privacy appendix raises the cost of putting cybersecurity measures in place. It’s like a ship design that requires the builder to pay for the installation of barnacles before launch.
That disincentive will be easy to heed. Taken as a whole, the message of the framework is, “You don’t have to implement any particular cybersecurity measures, but if you do, you’d better implement a bunch of privacy measures along with them.” That tempts network professionals to do less security, sparing themselves the hassles that the framework’s privacy appendix piles on.
There are a lot of examples. Let’s start with network audits and monitoring. These are absolutely essential cybersecurity tools in today's environment. They give a detailed picture of everything – and everyone – operating on the network. But for that reason, the NIST privacy appendix treats them as suspect -- measures to be strictly limited. They are to be used only if their effectiveness is regularly demonstrated and they are regularly scrubbed to bring their privacy impact to a minimum: "When performing monitoring that involves individuals or PII, organizations should regularly evaluate the effectiveness of their practices and tailor the scope to produce the least intrusive method of monitoring." If I’m right about the legal effect of these standards, the failure to observe this rule will lead to negligence or regulatory liability. But a lawyer asked to avoid that liability will be appalled at the requirement to produce the “least intrusive method of monitoring.” Lawyers understand that, with hindsight, plaintiffs and regulators can often point to some method of monitoring that would have been less intrusive and that might have worked just as well. Avoiding liability under such a rule is more a matter of luck than planning.
Audits get the same suspect treatment under the appendix. Companies that record personal data as part of a network audit are told to consider "how such PII could be minimized while still implementing the cybersecurity activity effectively." Again, it will always be possible after the fact to find a way to reduce the personal data used in an audit a little more. Lawyers can flyspeck the audit plan forever without eliminating the risk.
The privacy appendix also prescribes yet more privacy assessments for cybersecurity detection and filtering. Companies “should regularly review the scope of detection and filtering methods to prevent the collection or retention of PII that is not relevant to the cybersecurity event.” Instead of poring over logs, looking for intruders, cybersecurity professionals are to pore over them for personal data that “is not relevant.” In another liability magnet, companies are instructed to adopt policies “to ensure that any PII that is collected, used, disclosed, or retained is accurate and complete.” That language will give employees who violate network rules new ways to challenge disciplinary actions.
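Here is what tailoring the scope of monitoring might look like in code: a sketch, on my own assumptions about which fields matter, of a filter that strips monitoring records of data "not relevant to the cybersecurity event." Note the bind it illustrates: every field dropped is one less clue for the incident responder, and every field kept is one more thing a regulator can later call irrelevant:

```python
# Hypothetical field policy: connection metadata is retained for intrusion
# detection; fields that reveal a user's identity or reading habits are
# dropped as "not relevant." Drawing that line is the liability magnet.
RETAIN = {"timestamp", "src_ip", "dst_ip", "dst_port", "bytes_out"}

def minimize(log_record: dict) -> dict:
    """Strip a monitoring record down to fields deemed security-relevant."""
    return {k: v for k, v in log_record.items() if k in RETAIN}

if __name__ == "__main__":
    record = {
        "timestamp": "2013-11-24T03:14:00Z",
        "src_ip": "10.0.0.5",
        "dst_ip": "198.51.100.9",
        "dst_port": 443,
        "bytes_out": 48213,
        "username": "jdoe",          # dropped: PII
        "url_path": "/mail/inbox",   # dropped: reading habits
    }
    print(minimize(record))
```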
Even in the middle of responding to a breach, the NIST appendix expects security staff to prioritize privacy: “When considering methods of incident containment, organizations should assess the impact on individuals’ privacy and civil liberties,” and “when PII is used for recovery, an organization may need to consider how to minimize the use of PII to protect an individual’s privacy or civil liberties.”
Perhaps worst of all, the privacy appendix imposes a heavy new legal and practical burden on cybersecurity information-sharing. It calls on companies to scrub any forensic data they may collect before they share it with others: “When voluntarily sharing information about cybersecurity incidents, organizations should ensure that only PII that is relevant to the incidents is disclosed”; and “When performing forensics, organizations should only retain PII that is relevant to the investigation.” Today, companies quickly share information with each other about new threats, including “personal” data like the IP addresses or the email accounts that are spreading malware. They face no real risk of liability for such sharing, at least as long as they keep the government out of the sharing arrangement. Once the NIST privacy appendix takes effect, though, even such private cybersecurity sharing will slow to a crawl as lawyers try to anticipate whether every piece of data has been screened for PII and for relevance.
In short, under the NIST framework, pretty much every serious cybersecurity measure in use today will come with new limits and possibly new liability. This is especially troubling because the framework does not prescribe any particular security measures, which means that companies that want to escape the new liabilities can simply decide not to implement the security measures. Rather than deal with the barnacles, they can just scuttle the ship.
Let’s hope that NIST scuttles the privacy appendix instead.
Following up on my earlier NIST post, it's fair to ask why I think the NIST Cybersecurity Framework will be a regulatory disaster. After all, as I acknowledged in that post, NIST's standards for cybersecurity are looking far less prescriptive than business feared. There's not a “shall” or “should” to be found in NIST's August draft.
At least not until you get to the privacy appendix. Then, suddenly, "should" blossoms in practically every sentence. The appendix says that it's just telling companies what methodology they should use to protect privacy while carrying out cybersecurity measures. In truth, it is setting out a detailed and comprehensive set of prescriptions for companies handling personally identifiable information (PII).
Right off the bat, the NIST privacy "methodology" shows remarkable ambition, telling companies that they “should identify all PII of employees, customers, or other individuals that they collect or retain, or that may be accessible to them.” Why critical infrastructure cybersecurity should require a comprehensive census of PII -- but not of other sensitive corporate information -- is not explained.
The cybersecurity executive order asked NIST to produce a methodology to "identify and mitigate" the cybersecurity framework's impact on privacy, but in fact, many of the privacy provisions in NIST's appendix have only a nodding acquaintance with cybersecurity. For example, the NIST privacy appendix tells companies that they should “limit [their] use and disclosure of PII to the minimum amount necessary to provide access to applications, services, and facilities” and that they “should securely dispose of or de-identify PII that is no longer needed.” That may or may not be a good practice, but its connection to protecting the cybersecurity of critical infrastructure is tenuous. Later, the document goes even further, calling for companies to designate a privacy officer -- a particularly remarkable requirement given that it doesn't call for designation of a cybersecurity officer.
The NIST appendix's disconnection from cybersecurity is most clear when it says that companies should identify their privacy policies and assess whether those policies do the following:
"i) provide notice to and enable consent by affected individuals regarding collection, use, dissemination, and maintenance of PII, as well as mechanisms for appropriate access, correction, and redress regarding use of PII;
"ii) articulate the purpose or purposes for which the PII is intended to be used;
"iii) provide that collection of PII be directly relevant and necessary to accomplish the specified purpose(s) and that PII is only retained for as long as is necessary to fulfill the specified purpose(s);
"iv) provide that use of PII be solely for the specified purpose(s) and that sharing of PII should be for a purpose compatible with the purpose for which the PII was collected; and
"v) to the extent practicable, ensure that PII is accurate, relevant, timely, and complete."
Not one of these quasi-requirements has anything to do with the objectives of the executive order. But they have everything to do with smuggling comprehensive privacy regulation into a cybersecurity initiative. In fact, the provisions are more specific and demanding than the twenty-year privacy consent decrees imposed on technology companies like Google that have been caught up in FTC enforcement actions.
The provisions are drawn from the so-called Fair Information Practice Principles that the US government adopted for itself in the 1970s -- and that Europe's data protection laws incorporated around the same time. The United States has never applied them across the board to its private sector, in part because they turned into such a free-floating instrument of selective enforcement in Europe. Taken literally, the principles are either fatally ambiguous or impossible to fully comply with, leaving privacy bureaucrats with authority to impose harsh penalties on anyone they choose.
Not surprisingly, that sounds like a great idea to the United States' foremost practitioner of selective enforcement, the Federal Trade Commission. For more than a decade, the FTC has begged Congress to enact something like the Fair Information Practice Principles as a way of giving the Commission some legislative support for its claim to be the nation's chief privacy enforcer. To no avail. So for the Commission, the NIST proposal is a bonanza of new authority, or at least topcover. Indeed, it is a godsend for every regulatory agency that wants to add privacy to its list of regulatory requirements.
That's because of how the cybersecurity executive order treats NIST's work product. Once NIST has finished the framework, next January, the administration plans to use a wide range of incentives to get industry to adopt the framework. But the document's effect will be felt as soon as a preliminary draft is issued in October. The executive order instructs every regulatory agency in the federal government to review the preliminary NIST framework and report to the President on whether the agency has authority to impose NIST's framework on the industries it regulates. If an agency lacks authority, it will almost certainly be invited to go ask for it. This means that the privacy appendix, which made its first appearance in public in the dead of August, will have a potentially irreversible effect as early as October 10, when NIST is due to issue the preliminary framework.
In short, if the NIST framework keeps this appendix, the FTC and every other regulator in town will have plenty of topcover to impose the Fair Information Practice Principles on the private sector. The excuse for doing so will be the need for better cybersecurity, but adoption of the NIST framework as written will likely be a net loss for cybersecurity. That's the subject of a third and final post that I'll offer shortly.
NOTE: I tried to reach NIST officials to get their response. But the shutdown means that many are not available. I did get a clear sense that the preliminary framework will not be released on October 10. It will likely be delayed for as long as the shutdown lasts, plus some time for interagency clearance. So the bad news from the shutdown is that we can't get to NIST's website, and the good news is that every day of shutdown is a day of delay for this unfortunate standard. All things considered, I think I can live without the website.
Business and conservatives have been worried all year about the cybersecurity standards framework that NIST (the National Institute of Standards and Technology) is drafting. An executive order issued early this year, after cybersecurity legislation stalled on the Hill, told NIST to assemble a set of standards to address cyber risks. Once they're adopted, the order says, other agencies will encourage private companies, especially those running critical infrastructure, to use the standards. Regulatory agencies are expected to establish requirements based on the standards. And it is widely expected that the standards will drive negligence liability in the wake of a breach, since the courts are always glad to find government-endorsed definitions of “reasonable” security measures.
Now that NIST has released a discussion draft of its preliminary framework, business's worries are looking a bit overblown. And they're distracting from a much more serious threat buried in the NIST draft – the stealth imposition of a European-style privacy regime on the U.S. private sector.
Why? Let's look at the draft. (I would ordinarily link to the NIST webpage, which posted the framework weeks ago, but the administration's passive-aggressive shutdown strategy means that NIST took its website offline – very likely at greater cost than leaving it up. Luckily, Lawfare again proves itself the indispensable Blog of Record by preserving a copy of the framework here.)
The cybersecurity standards that everyone has been worried about turn out to be more taxonomic than prescriptive. You won't find a “shall” or a “should” anywhere in the appendix that sets out the framework. Instead, the framework is procedural to its core. The cybersecurity mission is divided into five steps: identify your network assets, put protections in place, detect breaches of your protective measures, respond to the breaches you detect, and finally recover and learn from the breaches. The five steps are in fact a loop, rather like painting the Golden Gate Bridge: Paint until you reach the end, then start again from the beginning, incorporating any lessons you learned along the way.
That doesn't guarantee a good paint job – lazy painters can still skip spots, work too slowly, or use cheap, unsuitable paint -- but it does describe how a conscientious painter might do his job. The framework, in short, depends on the motivation and good judgment of the painter.
To be fair, the framework tries to give a bit of content to the Five Steps by defining each of them more precisely and by adding two additional layers of detail below each step. Thus, Detection is broken into three categories – screening for anomalies and events, continuously monitoring processes, and setting up detection processes to make sure breaches aren't ignored. And each of these categories is itself divided into four to eight subcategories. For example, Continuous Monitoring includes subcategories like “perform network monitoring” in response to the detection of a breach. All in all, there are nearly a hundred subcategories in the draft framework.
The framework then drills down one more layer, identifying actual industry standards that correspond to each of the subcategories. At this point, you might think that the framework has identified several hundred tasks relevant to cybersecurity. In fact, though, the framework cross-references the same five or six standards in every one of the nearly 100 subcategories it identifies.
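The shape of the draft is easier to see in miniature. Below is a toy model of the hierarchy, written in Python. It is purely illustrative -- the category and subcategory names are paraphrased rather than quoted, and the list of cited standards is my recollection of the draft's informative references -- but it shows why a hundred apparent roads collapse into the same handful of destinations:

```python
# Toy model of the draft framework's hierarchy: functions -> categories ->
# subcategories -> informative references. Names are paraphrased, not quoted.

# The same few industry standards are cited under every subcategory.
COMMON_REFS = ["NIST SP 800-53", "ISO/IEC 27001", "COBIT", "CCS CSC", "ISA 99"]

framework = {
    "Detect": {
        "Anomalies and Events": {
            "Detect and analyze anomalous activity": COMMON_REFS,
        },
        "Continuous Monitoring": {
            "Perform network monitoring": COMMON_REFS,
        },
        "Detection Processes": {
            "Test and maintain detection processes": COMMON_REFS,
        },
    },
    # ... "Identify", "Protect", "Respond", and "Recover" follow the same pattern
}

# However many subcategories you enumerate, the set of distinct standards
# cited across all of them stays tiny.
distinct_refs = {
    ref
    for categories in framework.values()
    for subcategories in categories.values()
    for refs in subcategories.values()
    for ref in refs
}
print(len(distinct_refs))  # -> 5
```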
So the NIST framework is certainly open to criticism. It offers less choice to industry than first appears. The framework may point to a hundred roads, but they all lead to the same five places. And when you get to one of those places, there's no certainty that you're safe. The framework tells industry only what boxes should be checked, not how carefully the job should be done. In a way it's a shiftless painter's best friend. And, perhaps, a good painter's worst enemy, since just skipping a box could lead to tort liability.
Still, the framework largely avoids substantive mandates, and its structure rebuts any suggestion that the subcategories are really requirements by providing several different standards that offer several different ways of interpreting each subcategory. I'm not personally convinced that this is a good thing, given the shiftless painter problem, but business groups that feared substantive mandates may be mollified.
If so, they'll be missing the real danger in this document. Because, while business has been concentrating its fire on the risk of cybersecurity regulation, it looks as though enthusiasts for sweeping privacy regulation of industry have stolen a march on everyone. I'll cover that risk in a second post.
Two straws in the wind for the Snowden flap:
1. When Silicon Valley corporate leaders are grilled over their views on NSA by an outraged Michael Arrington, he uncovers a remarkably diverse set of views and ends up complaining, “I’m not getting anyone to care so far on stage.”
2. When Joseph Menn of Reuters does a deep dive to calculate Snowden-related losses suffered by US companies he finds ... nada:
Despite emphatic predictions of waning business prospects, some of the big Internet companies that the former National Security Agency contractor showed to be closely involved in gathering data on people overseas - such as Google Inc. and Facebook Inc. - say privately that they have felt little if any impact on their businesses.
Insiders at companies that offer remote computing services known as cloud computing, including Amazon and Microsoft Corp, also say they are seeing no fallout.
UPDATE: Fixed link; thanks, Jeffrey!
According to the MIT Technology Review, a short-lived security flaw in the anonymous Tor network allowed researchers to analyze and categorize the traffic that Tor was protecting. The results weren't pretty:
The Tor network is an online service that allows users to surf the web anonymously. Its main benefit is to reduce the chances of network surveillance discovering a user’s location or web usage. For that reason it is championed as an important tool for promoting free speech and protecting personal privacy, especially for people under authoritarian regimes such as that in China.
However, Tor is also often criticised for carrying illegal, shady or controversial content such as pornography and “Silk Road” traffic for illegal goods. So an interesting question is what kind of traffic prevails?
Today, we get an answer thanks to the work of Alex Biryukov, Ivan Pustogarov and Ralf-Philipp Weinmann at the University of Luxembourg. And the results are not as eye-sparklingly freedom-protecting as you might imagine.
These guys conclude that the Tor network is dominated by botnet traffic and that much of the rest is adult content and traffic related to black market and illegal goods.
First up, if Tor is so anonymous, how did these guys get their data? It turns out that until recently, the Tor protocol contained a flaw that allowed anybody in the know to track users back to their origin.
This flaw was actually discovered by Biryukov, Pustogarov and Weinmann earlier this year and immediately corrected by Tor. However, before the flaw became public, these guys took the opportunity to analyse Tor traffic to see where it came from and what it contained.
Of the top twenty most popular Tor addresses, eleven are command and control centres for botnets, including all of the top five. Of the rest, five carry adult content, one is for Bitcoin mining and one is the Silk Road marketplace. Two could not be classified.
The FreedomHosting address is only the 27th most popular address while DuckDuckGo is the 157th most popular, according to this analysis.
PHOTO credit: Will Swan
I've been giving speeches lately on cyberespionage, the attribution revolution, and how it helps corporate boards and general counsels to think about the cybersecurity problem without trying to do the Chief Information Security Officer's job. Video of a recent speech is embedded below:
I'm still working my way through all the FISA court material that was declassified today, and acquiring a new appreciation for how hard a journalist's job can be. But I've gotten far enough to start worrying, seriously, about the role we've given to the FISA court and what it does to the court and NSA.
There's an old saying that megalomania is an occupational hazard for district court judges. While Chief Judge Walton's opinion doesn't quite succumb to megalomania, there is a distinct lack of perspective in his approach that makes me wonder whether the FISA job slowly distorts a judge's perspective in unhealthy ways.
That was certainly true of Judge Lamberth, who spent most of 2001 persecuting a well-regarded FBI agent for not observing the "wall" between law enforcement and intelligence. That's the wall that the court of appeals found to be utterly without a basis in law but that Chief Judge Lamberth nonetheless enforced with an iron hand. Judge Lamberth forced FISA applicants to swear an oath that they were observing the wall, a tactic that allowed him to sanction the applicants for misrepresentation if they didn't live up to his expectations. He was so aggressive in this pursuit that by August of 2001 he had sidelined the most effective FBI counterterrorism teams. The bureau knew by then that al Qaeda had terrorists in the United States, but it couldn't use its best assets to find them because Judge Lamberth had made it clear that he was willing to wreck their careers if they breached the wall.
I fear that Chief Judge Walton is going down the same road -- that the FISA court is the only agency of government not humbled by its failures on the road to 9/11 and is therefore the only agency that will repeat those failures. My concerns are best illustrated by the court's opinion of March 2, 2009, about which I offer three thoughts:
1. In widely covered language, the judge claims that the government engaged in "misrepresentations" to the court. This is one of the three alleged misrepresentations mentioned by Chief Judge Bates in an opinion released last month. Since that opinion was released, commentators have widely assumed that NSA has been lying to the court. Because, frankly, that's what "misrepresentation" usually means. But the other filings declassified today show pretty persuasively that there was no intentional misrepresentation. Here's what seems to have happened, in brief. Back in 2006, scrambling to write procedures for the metadata program, a lawyer in NSA's Office of General Counsel wrote in a draft filing that a certain dataset of phone numbers always met the "reasonable articulable suspicion" standard. Turns out that that wasn't true; only some of the numbers did. The lawyer circulated his draft for comment, suggesting that he wasn't absolutely sure of his facts, but no one flagged the error, which turned out to be surprisingly difficult to verify. From then on, NSA and Justice simply copied the original error, over and over, in all of their submissions. A mistake for sure. But a "material misrepresentation"? Only to a judge with a very warped view of the world, and the NSA.
2. How about the other headline-grabbing statement in the opinion, that the government's position "strained credulity"? Here, I think the court is on even shakier ground. The debate is about the court's minimization order, which declared that "any search or analysis of the [phone metadata] archive" must adhere to certain procedures. NSA dutifully imposed those procedures on analysts' ability to search or analyze the archive. The problem arose not from giving analysts access to the archive but from some pre-processing NSA performed as the data was flowing into the archive.
If I'm reading the filings properly (and I confess to some uncertainty on this point), NSA keeps an "alert" list of terror-related phone numbers of interest to individual analysts. Since new data shows up at NSA every day, the agency has automated the job of scanning to find those numbers as they show up in the agency's daily take. The numbers on the alert list are compared to the day's incoming intercept data, and each analyst gets a report telling him how many times "his" numbers appear in which databases.
This alert list was run against data bound for the telephone metadata along with all the other incoming data. The difference was that an analyst who got a "hit" on that database couldn't access it without jumping through the hoops already set up by the FISA court -- reasonable articulable suspicion, special procedures, etc. This must have seemed quite reasonable to the techies at NSA. They knew what it meant for an analyst to "access" the database, and an automated scanning system that yielded only pointers was not the same as giving an analyst access. In the end NSA's office of general counsel came to the same conclusion: the court's orders regulated actual archive access, not scanning against a list for statistics and pointers.
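To make the two readings concrete, here is a caricature of the distinction as the techies apparently understood it. Everything in this sketch is hypothetical -- invented numbers, invented names, a radically simplified data model, nothing NSA actually runs -- but it captures why automated scanning and analyst "access" could look like entirely different operations:

```python
# Hypothetical sketch of the distinction described above. The alert-list scan
# runs automatically over incoming data and returns only hit counts; querying
# the archive itself is gated by the court-ordered RAS requirement.

alert_list = {"555-0101", "555-0142"}  # analysts' numbers of interest (invented)
ras_approved = set()                   # numbers cleared under "reasonable
                                       # articulable suspicion"

def scan_incoming(records):
    """Automated daily scan: compare incoming metadata against the alert
    list and report hit counts only -- no record contents are returned."""
    hits = {}
    for record in records:
        number = record["number"]
        if number in alert_list:
            hits[number] = hits.get(number, 0) + 1
    return hits

def query_archive(archive, number):
    """Analyst access to the archive itself: blocked unless the number has
    a RAS determination, per the court's minimization order."""
    if number not in ras_approved:
        raise PermissionError("RAS determination required before access")
    return [record for record in archive if record["number"] == number]
```

On this model, nothing in scan_incoming ever touches the archive, which is why the engineers concluded that the court's order didn't reach it.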
But that's not how Chief Judge Walton saw it. He held that it "strained credulity" to say that alert list scanning was different from "accessing" the archive. Maybe he just didn't understand the technology (the opinion offers some reason to think that). Or maybe he just thought about the question like a judge, always alert to slippery slopes and unintended consequences: "If you can lawfully search this data without limit before the data gets into the archive, you will make meaningless all the limits I've set. Why would you think I'd let you undermine my order in so transparent a way?"
Unfortunately, Judge Walton wasn't thinking like a techie. The techies who implemented the court's order thought they'd been told to restrict access to the database, and they did. They weren't told to restrict the use of statistical tools that scanned incoming data automatically, so they didn't. They certainly didn't believe they were undermining the court's order. Quite the contrary, they had designed the system to make sure that the alert list was just a starting point. Analysts who learned they had a hit in the database couldn't get any further information without meeting the FISA court's "reasonable articulable suspicion" requirement.
It's hard not to see this as a misunderstanding, perhaps exacerbated by the difference between legal and technical cultures. But that's not how Judge Walton sees it. His opinion dismisses the possibility that this was a good-faith misunderstanding. It's an outrage, he fumes, and efforts to explain it "strain credulity." Frankly, if anything strains credulity in this case, it's that line in the opinion.
3. The chief judge is so sure there's evil afoot that he calls for briefing on "whether the Court should take action regarding persons responsible for any misrepresentations to the Court or violations of its Orders, either through its contempt powers or by referral to appropriate investigative agencies." For anyone steeped in the disaster caused by Chief Judge Lamberth's witch-hunt for violators of the wall, this is tragically familiar ground. It's almost exactly how the FISA court drove the wall deep into the FBI.
I'm sure we'll be told by the press that this opinion brings to light another scandal and an agency out of control. But that's not how I see it. It looks to me as though NSA was doing its best to implement a set of legal concepts in a remarkably complex network. All complex systems have bugs, and sometimes you only find them when they fail. NSA found a bug and reported it, thinking that it was one more thing to fix. Then the roof fell in.
The interesting question is why it fell in. I think a fair-minded judge encountering the issue for the first time in the courtroom would not likely say that NSA's interpretations were disingenuous or the result of bad faith or misrepresentation. Yet Judge Walton went there from the start.
I suspect that it's because we've unfairly given FISA judges a role akin to a school desegregation master -- more administrator than judge. Instead of resolving a setpiece dispute and moving on, FISA judges are dragged into a long series of linked encounters with the agency. In ordinary litigation, judges misunderstand things all the time and reach decisions anyway, and they rarely discover all that they've misunderstood. The repetitive nature of the FISA court's contacts with the agency means that the judges are always discovering that they only half understood things the last time around. It's only human to put the blame for that on somebody else. And so the judges' tempers get shorter and shorter, and the presumption of agency good faith gets more and more frayed. Meanwhile, judges who are used to adulation, or at least respect, from the outside world, keep reading in the press that they are mere "rubber stamps" who should show some spine already. Sooner or later, it all comes together in a classic district judge meltdown, with sanctions, harsh words, and bad law all around.
If I'm right about the all too human frailties that beset the FISA court, building yet more quasi-judicial, quasi-managerial oversight structures is precisely the wrong prescription. We'll be forcing judges to expand into a role they are utterly unsuited for, and we'll put at risk our ability to actually collect intelligence. In fact, the more adversarial and court-like we make the system, the more weird and disorienting it will become for the judges, who will surely understand that at bottom they are being asked to be managers, not judges.
The further we go down the road, the more likely we are to turn FISA into the Uncanny Valley of Article III.
UPDATE: Typo correction: not instead of now. Thanks Raffaela!
I've been struck by an aspect of the Snowden affair that hasn't been covered so far -- the Guardian's troubling decision to destroy its UK trove of Snowden documents rather than let the UK government see them. Court filings in the UK tell the government's side of that story, and they don't make the Guardian look good.
The filings make clear that the UK government wanted the documents back, and that it persuaded the newspaper that it could not keep the files in the UK. Why then did the Guardian destroy them instead of returning them?
Ordinarily, that would be an easy question; journalists don’t disclose their sources.
But that answer won’t wash here. Snowden had already outed himself. Nor would turning over the files have affected the Guardian’s access to the data. By destroying the files, the Guardian was making them unavailable to its UK reporters, but its reporters in other countries still had copies. The same would be true if the Guardian gave its files to the UK government.
From the Guardian’s point of view, either choice had the same effect on its reporting. But from the UK government’s point of view, the choice was momentous. Turning over the documents, rather than destroying them, would have helped the UK government evaluate and mitigate the harm likely to be caused when foreign governments get access to the Snowden files.
So it appears that the Guardian deliberately chose a route that harmed UK national security, even though it helped journalism not at all. Why?
The decision began to take shape earlier this summer, when the UK government approached the Guardian's editors and gave them briefings to show that the paper could not hope to protect Snowden’s files from foreign intelligence services:
We made clear to The Guardian from the outset that we were extremely concerned by their possession of our sensitive information and that they should not be holding it. We informed them that we had no confidence in their ability to keep the material safe. Nor could they understand the damage that might flow from its further compromise. We made clear that the information would be targeted by any number of hostile actors and could cause further damage to UK counter terrorism operations….
The Guardian appeared to accept our assessment that their continued possession of the information was untenable. The Guardian continued to refuse to hand over the material and would not move on this point.
The UK government may not have been quite as convincing as it thinks; the Guardian seems to have held on to the (forlorn) hope that an "air gap" would keep foreign spies at bay. But eventually the Guardian was persuaded that it could not keep the Snowden files in the UK.
That left only two choices: Give them back to the UK government or destroy them. As we know, it chose to destroy them. Since it could have given them back without compromising its reporting, my question is, “Why?”
Guardian editor Alan Rusbridger has answered that question himself:

“I would rather destroy the copy than hand it back to them …. I don’t think that we had Snowden’s consent to hand the material back, and I didn’t want to help the UK authorities know what he had given us, so to me I was not going to hand it back to the government, and I was happy to destroy it because it was not going to inhibit our reporting. We would simply do it from America and not from London.”
The line about Snowden’s consent is surely a throwaway. Journalists may have an obligation to protect their sources, but they certainly don’t have an obligation to follow their source’s wishes in how they use the source’s information, a distinction the Guardian has no difficulty grasping when it gets quasi-official leaks from government sources. I suspect that the “consent” rationale only sounds plausible to Rusbridger because it fits his preferences, which he candidly states: “I didn’t want to help the UK authorities know what he had given us.”
Now that we’ve heard the government’s side of that conversation, Rusbridger’s comfortable assertion betrays a breathtaking willingness to sacrifice UK intelligence sources and methods for what sound like ideological preferences.
Thomas Rid, a thoughtful commentator known mainly for his skepticism about claims that cyberwar is imminent, recently called on the Guardian and other journalists to destroy the remainder of the Snowden files because the national security damage of further disclosures does not justify the likely contribution to informed debate about intelligence oversight.
I expect hell to freeze over before the Guardian takes Rid's advice. But perhaps we can at least get the Guardian's answer to a different question: "When it would have cost you nothing to protect British security, why did you kick it to the curb instead?"
I'll be testifying tomorrow, September 11, about DHS's progress in over a decade of existence. A copy of the full testimony is available here: Download Baker testimony to Senate Homeland Sep 2013. I suspect that the most interesting section concerns cybersecurity, which I've excerpted below.
"Sometimes it's easier to persuade the team to give you the ball than to actually run with it after you get it. That is DHS's problem right now.
"DHS seems to have successfully fended off the many agencies and committees that wanted to seize parts of its cybersecurity mission. Recent presidential orders have given DHS a large role in civilian cybersecurity. This is consistent with the Homeland Security Act, which clearly gave DHS authority over those issues, but that Act does not provide specific or explicit authorization for many of the cybersecurity activities that the Department is now carrying out, especially with respect to protecting critical infrastructure. It is reasonable, then, to codify authority for DHS’s existing activities, thereby cementing the Department’s role for the future. This basic step may seem obvious, but this is Washington, and doing the obvious is not easy.
"That’s particularly true when the technology is changing as fast as our attackers change tactics. When I left the Department, it was just getting started on Einstein – an effort to detect malware and other intrusion signatures aimed at the federal civilian agencies. Deployment of Einstein is now widespread, covering perhaps 60% of the federal workforce. Of course, detecting intrusions is not the same as stopping them. Einstein 3A is meant to automate intrusion prevention, and it is just rolling out now. What’s more, as security researchers have realized how hard it is to stop attacks at the edge of the network, watching inside networks has become a higher priority, and DHS has taken responsibility for deploying Continuous Diagnostics and Mitigation (“CDM”) technology to scan civilian networks for flaws and signs of compromise. These are all necessary and very large programs that pose implementation and turf challenges. Not surprisingly, some agencies have questioned whether DHS has the authority to do what is necessary, and providing a statutory basis for DHS’s programs would be a valuable contribution that this committee could make to cybersecurity.
"One problem that should be of particular interest to the committee is the risk of conflict between the Federal Information Security Management Act (“FISMA”) and CDM. In essence, CDM performs many of the functions that FISMA requires. However, FISMA envisions a paper-centered audit process that is far too slow for the current threat, while CDM performs its audits electronically, on a 72-hour cycle. Everyone recognizes that CDM is better than a paper process, and FISMA should be modified to reflect changes in both the threat and the solution, as well as to make clear that DHS has responsibility for implementing the operationally demanding solution.
"These are all complex systems that DHS is essentially running for most of the civilian government. That would be a challenge for an established agency with a veteran workforce, but DHS does not have nearly the number of trained personnel it needs. Finding talented cyberwarriors is a challenge even for private sector firms. Attracting them to the Department has been doubly difficult, especially with a hiring process that in my experience was largely dysfunctional. The Department's biggest challenge is hiring and maintaining a cybersecurity staff that can earn the respect of private cybersecurity experts. There are bright spots. Doug Maughan, in the S&T Directorate, has the respect of his counterparts at NSA and Goldman Sachs. Phyllis Schneck, recently named as the Department’s deputy undersecretary for cybersecurity, has great technical and private sector credibility in the field. DHS is on the right track, but the way is steep. It must keep expanding its technically competent cybersecurity staff, because that is the foundation of all the other things it must do. That likely means that it must have authority to hire workers in ways that do not fit the standard federal process.
"The other challenges for DHS in cybersecurity are many. They include:
"Building a clear relationship with NSA.
"I am one of the few officials who has worked at a policy level for both the National Security Agency (“NSA”) and DHS. There are certainly days and even weeks when I feel like the child of a troubled marriage. But the fact remains that the outlines of a working relationship between DHS and NSA are obvious. As a concerted campaign of leaks has left NSA reeling and mistrusted by the public, it must be clear that on cybersecurity matters affecting the civilian sector, DHS is calling the policy shots. At the same time, DHS must rely heavily on NSA's technical and operational expertise to succeed. This fundamental truth has been obscured by personalities, mistrust, and impatience on both sides. It's got to end, especially in the face of adversaries who must find the squabbling email messages especially amusing because they are reading them in real time.
"Gaining authority to insist on serious private sector security measures.
"DHS has plenty of authority to cajole and convene in the name of cybersecurity. It's been doing that for ten years. The private sector has paid only limited attention. In part that's because DHS had only modest technical expertise to offer, but it's largely because few industries felt a need to demonstrate to DHS that they were taking its concerns seriously. I fully recognize that cybersecurity measures do not lend themselves to traditional command-and-control regulation, and that information technology is a major driver for economic growth. But the same could have been said about the financial derivatives trade in 2007. We cannot allow the private sector to cut costs by vastly increasing risk, whether in cybersecurity or in financial markets.
"Sometimes the businessmen arguing against regulation are wrong – so wrong that they end up hurting their own industries. I believe that this is true of those who oppose even the lightest form of cybersecurity standards. Even on their own terms, the businesses lobbying against a substantive cybersecurity bill are likely to fail. Most of the soft quasi-regulatory provisions business groups rejected last year in talks with the Senate were incorporated into an executive order that they had little ability to influence. Those provisions will in turn become the basis for future, harder regulations, particularly if Congress delays action until we have a cybersecurity meltdown.
"For now, however, it will be up to DHS to use the soft authorities and the mandate conferred by an executive order with energy and wisdom. And, to be candid, that is a big enough job for the near future.
"Action beyond the legislative and executive order.
"The legislative stalemate does not mean that DHS can only improve cybersecurity by pushing the private sector to do things it doesn’t want to do. There are many other steps that DHS could take to improve cybersecurity without touching the regulatory third rail. Here are some:
"Information-sharing. Everyone understands why the targets of cyberattacks need to share information. We can greatly reduce the effectiveness of attacks if we use the experience of others to bolster our own defenses. As soon as one victim discovers a new command-and-control server, or a new piece of malware, or a new email address sending poisoned files, that information can be used by other companies and agencies to block similar attacks on their networks. This is not information-sharing of the “let's sit around a table and talk” variety. In a world of zero-day attacks and polymorphic malware, it must be automated and must occur at the speed of light, not at the speed of lawyers or bureaucrats.
"I supported the Cyber Intelligence Sharing and Protection Act (“CISPA”), which would have set aside two poorly-conceived and aging privacy laws that made it hard to implement such sharing. I still do. But if CISPA is blocked by privacy groups, as seems likely, we need to ask whether the automated system we need can be built without falling foul of those aging privacy laws. A more creative and determined approach to the law is needed.
"To take one example, many of the privacy rules that restrict sharing can be waived if a service’s customers consent to the sharing. Since the purpose of the sharing is to protect the cybersecurity of those same customers, they are highly likely to consent in large numbers. Working with government, service providers could find ways to obtain consent to a data-sharing regime designed to protect both privacy and cybersecurity – all without amending existing law.
"This committee can move information-sharing forward by calling on DHS to lead an interagency effort that would work within existing law to improve information sharing by considering the adoption of statutory interpretations, standard customer terms, and other techniques that serve everyone’s interest in better cybersecurity.
"Emphasize attribution. We will never defend our way out of the cybersecurity crisis. I know of no other crime where the risk of apprehension is so low, and where we simply try to build more and thicker defenses to protect ourselves. We started on this Maginot Line exercise because attribution of cyberattacks seemed too difficult; attackers could hop from country to country and server to server to protect their identities.
"But that view is out of date. Intelligence agencies have stopped trying to trace each hop the hackers take. Instead, they've found other ways to compromise the attackers, penetrating their networks directly, observing their behavior on compromised systems and finding behavioral patterns that disclose much. In short, we can know who are our attackers are. We can know where they live and what their girlfriends look like. That’s because it’s harder and harder for hackers to function in cyberspace without dropping bits of identifying data here and there. The massive amount of data available online makes the job of attackers easier, but it can also help the defenders if we use it to find and punish our attackers.
"Sometimes the best defense really is a good offense; we need to put more emphasis on breaking into hacker networks and gathering information about what they're stealing and who they're giving it to. That kind of information will help us prosecute criminals and embarrass state-sponsored attackers. It will also allow us to tell the victim of an intrusion with some precision who is in his network, what they want, and how to stop them.
"Again, this committee can put DHS at the center of a new emphasis on attribution. Its Computer Emergency Readiness Team and intelligence analysis arms should be issuing more detailed information about the tactics and tools being used by individual attack units and fewer bland generalities for local law enforcement agencies.
"Move from attribution to deterrence. The committee could also perform a service by calling on DHS to take the lead in identifying ways to use attribution more effectively to deter cyberattacks. There are many ways to improve deterrence. While the administration has become more open about identifying Chinese cyberattacks as a particular problem, the Snowden affair has made “naming and shaming” less effective in this context. Instead, we should be looking for other ways to identify individual attackers and their enablers and then bring U.S. legal pressure to bear on them. This is a target-rich environment:
"Use DHS law enforcement authorities more effectively. The law enforcement agency most associated in the public mind with cybercrimes is the Federal Bureau of Investigation (“FBI”). This is a little odd because two DHS law enforcement agencies, the Secret Service and ICE, both have strong cybercrime units and may between them handle as many cases as the FBI.
"My concern is not who gets the credit for these investigations. But we cannot let law enforcement determine our cybersecurity posture. Agencies like the FBI and Secret Service only occasionally solve hacking cases, and even more rarely are they able to actually arrest the hackers. If they are allowed to hoard evidence of cyberintrusions, we may lose valuable intelligence about the intruders’ tactics and targets. This committee should consider legislation calling for a coordinated approach to all computer intrusions to ensure that detailed information sharing occurs across agency lines. At the same time, it is often law enforcement that tells businesses they have been compromised. This is a “teachable moment,” when all of DHS's cyberdefense and industry-outreach capabilities should be engaged, talking to the compromised company about the nature of the intruder, his likely goals and tactics, and how to defeat them. But that happens less than it should, judging by the experience of my clients. A deeper, Congressionally mandated coordination would make these encounters far more useful to the private sector.
"Finally, I fear that letting law enforcement take the lead on a case-by-case basis means that investigations are not being prioritized in ways that would maximize their intelligence value. (Since these investigations rarely lead to prosecutions, using criminal authorities to gather information about attackers should be a particularly high priority – even when there is no prospect of criminally prosecuting the attackers.) While interagency coordination with the FBI can be a challenge, coordination between DHS's cybersecurity offices and the ICE and Secret Service investigators also seems to be equally ad hoc at best. This committee should consider requiring DHS’s law enforcement agencies to work computer crime cases under the coordinating and deconflicting authority of the National Protection and Programs Directorate (“NPPD”) to ensure strategic use of law enforcement authorities and proper sharing of information.
"Recruit private sector resources to the fight. In my private practice, I advise a fair number of companies who are fighting ongoing intrusions at a cost of $50,000 or $100,000 a week. The money they are spending is almost entirely defensive. At the end of the process, they may succeed in getting the intruder out of their system. But the next week, the same intruder may get another employee to click on a poisoned link and the whole process will begin again. It is a treadmill. Like me, these companies see only one way off the treadmill: to track the attackers, to figure out who they are and where they're selling the information, and then sanction both the attackers and their customers. But under federal law, there are grave doubts about how far a company can go in tracking their attackers. I think some of those doubts are exaggerated, but only a very brave company would ignore them.
"Now, there's no doubt that U.S. intelligence and law enforcement agencies have the authority to conduct such an operation, but by and large they don't. Complaining to them about even a state-sponsored intrusion is like complaining to the DC police that someone stole your bicycle. You might get a visit from the police; you might get their sympathy; you might even get advice on how to protect your next bicycle. What you won't get is a serious investigation. There are just too many higher priority attacks.
"In my view, that's a mistake. The United States should do some full-bore criminal and intelligence investigations of private sector intrusions, especially those that appear to be state-sponsored.
"But if we want a solution that will scale, we have to let the victims participate in, and pay for, the investigation. Too many government officials have viewed private countermeasures as a kind of vigilante lynch mob justice. That just shows a lack of imagination. In the real world, if someone stops making payments on a car loan but keeps the car, the lender doesn't call the police; he hires a repo man. In the real world, if your child is kidnapped, and the police aren't making it a priority, you hire a private investigator. And, if I remember correctly the westerns I watched growing up, if a gang robs the town bank and the sheriff is outnumbered, he deputizes a posse of citizens to help him track the robbers down. Not one of those solutions is the equivalent of a lynch mob. Every one allows the victim to supplement law enforcement while preserving social control and oversight.
"DHS very likely has sufficient authority to try that solution tomorrow , as does the FBI. DHS’s law enforcement agencies often have probable cause for a search warrant or even a wiretap order aimed at cyberintruders. But they rarely have the resources to use that authority fully and strategically against the intruders. I know of no legal barrier to relying on private resources to conduct a deeper investigation under government supervision. (The Antideficiency Act, which prohibits acceptance of free services, has more holes than my last pair of hiking socks, including exceptions for protection of property in emergencies and for gifts that also benefit the donor.) If systematic looting of America's commercial secrets truly is a crisis, and I believe that it is, why have we not already done this?
"I understand the concern expressed by some that we cannot turn cyberspace into a free-fire zone, with vigilantes wreaking vengeance at will. No one wants that. Government should set limits and provide oversight for a true public-private partnership, in which the private sector provides many of the resources and the public sector provides guidance and authorities. The best way to determine how much oversight is appropriate is to move cautiously but quickly to find alternatives to the current failed cybersecurity strategy. Again, this committee can move the ball forward by authorizing DHS and its law enforcement agencies to develop a pilot project -- working with hacking victims and their security firms to use government authorities in a cooperative fashion.
"Use existing funds to improve state and local cybersecurity preparedness. There may still be low-hanging fruit in the Department’s budget to improve cybersecurity. For example, we can make it easier for state and local governments to use existing grant funding to beef up their cybersecurity. Over the last decade DHS has provided billions of dollars to state and local governments to fund the purchase of a wide range of security capabilities. Cybersecurity tools – from installing basic firewalls to deploying advanced defenses that rely on virtual “detonation chambers” – are allowable purchases, along with hazmat suits and interoperable communications tools. However, DHS can do more to encourage state and local governments to spend grant funds on cybersecurity, and Congress should support those efforts.