Our guest for episode 63 of the Cyberlaw Podcast is Alan Cohn, former Assistant Secretary for Strategy, Planning, Analysis & Risk in the DHS Office of Policy and a recent addition at Steptoe. Alan brings to bear nearly a decade of experience at DHS to measure the Department’s growth. He explains how it has undertaken and largely delivered a new civilian cybersecurity infrastructure. And, while Congress dithers, it has begun to build an information sharing network quite independent of the legislative incentives now on offer. Alan also offers his insights into emerging technologies and the risks they may pose, including drones, sensors, and cryptocurrencies.
In the news roundup, the consensus story of the week is the return of Jason Weinstein from a five-week absence, only some of it justified by family vacation and other worthwhile endeavors. In second place is the concerted European attack on Google and the rest of the US tech sector. Michael Vatis and I mull over a high-ranking European official’s astonishing "Washington gaffe" -- usually defined as admitting a politically incorrect truth, in this case that Brussels intends to regulate US technology until European industry can compete. Good luck with that.
In the House, Doug Kantor reminds us, it’s cyberweek, so the data breach law has immediately collapsed into such uncertainty that its Dem sponsor even voted to keep it in committee. The bill has gone back to the shop for repairs to its bipartisan credentials, and the Obama administration, which says it supports a bill, seems to be keeping its distance from the messy business of actually legislating.
Meanwhile, Jason explains why cops are paying ransom to cybercrooks to get their data decrypted, Michael tells us a district court has given life to class action Google Wallet privacy claims under a sweeping theory, and I note that Julian Assange’s Wikileaks has hit a new low in offering a searchable database of stolen Sony email messages. Finally, the SEC’s Mary Jo White is taking heat for standing in the way of ECPA amendments, and the Chinese technological autarky movement seems to be alive and well, with a little help from US companies.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
Our guest for episode 61 of the Cyberlaw podcast is Joseph Nye, former dean of the Kennedy School at Harvard and three-time national security official for State, Defense, and the National Intelligence Council. We get a magisterial overview of the challenge posed by cyberweapons, how they resemble and differ from nuclear weapons, and (in passing) some tips on how to do cross-country skiing in the White Mountains.
In the news roundup, Meredith Rathbone explains details of the new sanctions program for those who carry out cyber attacks on US companies. I mock the tech press reporters who think this must be about Snowden because, well, everything is about Snowden. Michael Vatis endorses John Oliver’s very funny interview of Edward Snowden. It’s not just funny; it’s an embarrassment to all the so-called journalists who’ve interviewed Snowden for the last year without once asking him a question that made him squirm. In contrast, Oliver almost effortlessly exposes Snowden’s dissembling and irresponsibility. He hits NSA below the belt as well.
Ben Cooper explains the Ninth Circuit decision refusing to apply disability accommodation requirements to web-only businesses (he filed an amicus brief in the case), and we speculate on the likelihood of a cert grant.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
UPDATE: Corrected link to podcast episode.
The executive order allowing the President to impose OFAC sanctions on hackers is good news. I've been calling on the government for several years to go beyond attribution to retribution. See, for example, this post from 2012 (caution: cleavage is involved), this Foreign Policy article (sadly, no cleavage), and this recent podcast with Juan Zarate (again no cleavage, you'll be relieved to hear). Similar sentiments were expressed in a 2013 report by the American Bar Association.
The good news from the Sony case is how much better and faster we've gotten at attributing network espionage and network attacks. But that won't do much good until we can also punish those we identify.
This order offers a real possibility that we can. Even the hackers don't want to work for government forever; they hope to run startups just like everybody else, but that will be hard with an OFAC sanction hanging over their heads.
And the companies that benefit from stolen trade secrets could also find themselves sanctioned, since the order extends to them as well. Sanctions can be applied to any company that is:
responsible for or complicit in, or to have engaged in, the receipt or use for commercial or competitive advantage or private financial gain, or by a commercial entity, outside the United States of trade secrets misappropriated through cyber-enabled means knowing they have been misappropriated, where the misappropriation of such trade secrets is reasonably likely to result in, or has materially contributed to, a significant threat to the national security, foreign policy, or economic health or financial stability of the United States.
The program is a bit of an empty shell right now: it authorizes but doesn't apply sanctions to any hackers. But if it's used wisely it could be a game changer -- the first real deterrent to cyberspying and cyberattacks.
Episode 60 of the Cyberlaw Podcast features Paul Rosenzweig, founder of Red Branch Consulting PLLC and Senior Advisor to The Chertoff Group. Most importantly, he was a superb Deputy Assistant Secretary for Policy in the Department of Homeland Security when I was Assistant Secretary.
Paul discourses on the latest developments in ICANN, almost persuading me that I should find them interesting. He expresses skepticism about the US government’s effort to win WTO scrutiny of China’s indigenous bank technology rules; he also sees the DDOS attack on GitHub as a cheap exercise in Chinese extraterritorial censorship.
Michael Vatis, meanwhile, fills us in on two new cyberlaw cases whose importance is only outweighed by their weirdness. And I dissect the House cybersecurity information sharing bill, concluding that it has gone so far to appease the unappeasable privacy lobby that it may actually discourage information sharing.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
I fear that the House bill is indeed seriously flawed, but not because it invades privacy. Instead, it appears to pile unworkable new privacy regulations on the private sector information-sharing that's already going on.
The key point to remember is that plenty of private sector sharing about cybersecurity is already going on. There aren't a lot of legal limits on such sharing, unless the government is getting access to the information. If it is, providers of internet and telecom services can't join the sharing because an old privacy law bars them from providing subscriber information to the government in the absence of a subpoena.
The House bill solves that problem by allowing sharing to occur, "notwithstanding any other law." But overriding even a dysfunctional and aging privacy law quickens the antibodies of the privacy lobby. So they've been pressing for a kind of "privacy tax" on information sharing -- specifically, they want assurances that personal data will be removed from any threat information that companies share.
Everyone recognizes, at least in theory, that this can't be a blanket exclusion; some threat data can't be separated from personal data. If an IP address or email account is being used to distribute malware, those things are threat information. And they are also personal data, since some human being is probably tied to the address or account. If personally identifying information about attackers can't be shared under the bill, then the bill won't do much good.
The bill tries to square the circle by allowing companies to share data about attackers; a company sharing information is only required to screen out personal data that is "not directly related to a cybersecurity threat."
So far, so good. But how does a company know that the information it's sharing really identifies only persons "directly related to a cybersecurity threat"? Unfortunately, the kind of intelligence that is routinely shared today does not come with that kind of guarantee. Critical Stack is a startup that aggregates publicly available threat intelligence. A quick look at its sources reveals that much of the threat information is collected with tools that are automated and therefore imperfect. Your IP address can get on the list if you are innocent but happen to act like an attacker -- perhaps by pinging certain ports or because your address was temporarily misused.
So, if I share such imperfect information under the House bill, do I get the benefit of liability protection or not? My guess is not. Under the bill, companies must "take reasonable efforts to ... remove any information [that the company] reasonably believes at the time of sharing to be personal information of, or information identifying, a specific person not directly related to a cybersecurity threat." In the real world, companies will know that the information they're sharing is not perfect, that it flags as suspicious accounts and addresses that turn out not to be a threat.
Knowing that, how can the company say that it "reasonably believes" it has removed all information identifying a specific person except for information about persons "directly related to a cybersecurity threat"? It can't. (I note that this was not a problem under the earlier version of the bill, which required deletion of data about persons a company "knows" not to be a threat; the question is who bears the burden of uncertainty, and the new bill puts it squarely on the sharing company.)
All this means, I think, that lawyers will end up scrubbing any methodologies that generate threat information before their company decides to share. That's expensive, and the lawyers won't give a lot of clean opinions.
End result: under the House bill, the privacy tax is so high that fewer companies will share threat data, and the ones who do will share less.
It's not clear that this bill will do anything to encourage information sharing.
Richard Bejtlich is our guest for episode 59 of the Cyberlaw Podcast. Richard is the Chief Security Strategist at FireEye, an adviser to Threat Stack, Sqrrl, and Critical Stack, and a fellow at Brookings. We explore the significance of China’s recently publicized acknowledgment that it has a cyberwar strategy, FireEye’s disclosure of a gang using hacking to support insider trading, and NSA director Rogers’s recent statement that the US may need to use its offensive cyber capabilities in ways that will deter cyberattacks.
In the news roundup, class action defense litigator Jennifer Quinn-Barabanov explains why major automakers are facing cybersecurity lawsuits now, before car-hacking has caused any identifiable damage. I explain how to keep your aging car and swap out its twelve-year-old car radio for a cool new Bluetooth enabled sound system. Michael Vatis disassembles the “$10 million” Target settlement and casts doubt on how much victims will recover.
Michael also covers the approval by a Judicial Conference advisory committee of a rule allowing warrants to extend past judicial district lines, explaining why it may not be such a big deal. Maury Shenk, former head of Steptoe’s London office and now a lawyer and a private equity investor and adviser, jumps in to discuss the Chinese cyberwar strategy document as well as China’s effort to exclude US technology companies from its market.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
Cyberspies can’t count on anonymity any more.
The United States (and the private security firm Mandiant) stripped a PLA espionage unit of its cover two years ago with a detailed description of the unit’s individual hackers; that report was followed by federal indictments of members of the unit that described them and their activities in great detail. More recently, the President outed North Korea for the attack on Sony. And as if to underscore the growing confidence of the intelligence community in its attribution capabilities, the Director of National Intelligence almost casually tagged Iran for a destructive cyberattack on Sheldon Adelson’s Las Vegas Sands gambling empire.
That’s good news, but it’s only a first step. To make a real difference, attribution has to yield more than talk.
Unfortunately, neither the companies victimized by network intrusions nor their governments have yet found ways to turn attribution into deterrence. No one expects to see members of the PLA in federal court any time soon. The administration’s public sanctions on North Korea were barely pinpricks. And Iran could be forgiven for concluding that its cyberattacks were rewarded by concessions in the nuclear enrichment negotiations.
But that’s not the last word. I attended a recent international conference where a surprising number of European officials signaled their eagerness to confront countries engaged in cyberespionage against their industries. They assumed that they could identify the countries that were stealing corporate secrets.
What they wanted were legal remedies — and remedies of a particular kind. They didn’t want to punish the hackers, who all too often are well protected by government. What they wanted was a way to punish the hackers’ customers — the state-owned companies who were benefiting from the theft of competitors’ intellectual property. Unlike the hackers, those companies can’t hide at home forever. To get the full benefit of their shiny new stolen technology, they have to sell their products globally. Which means they have to submit to the law and the jurisdiction of western nations.
But what law? Does a company victimized by cyberespionage have any legal remedies against the company that received the stolen data? That’s the question European (and American) trade officials were beginning to ask.
Faced with that question, I found three plausible legal remedies for companies that are victimized by hacking aimed at their corporate intellectual property. Here they are.
First, victims of cyberespionage could sue the foreign company benefiting from the theft of trade secrets. A company can be sued under the Uniform Trade Secrets Act (UTSA) if it uses “a trade secret of another without express or implied consent” and it “knew or had reason to know that [its] knowledge of the trade secret was derived from or through a person who had utilized improper means to acquire it.” UTSA § 1(2)(ii)(B)(II). So if the foreign company had reason to believe that it was receiving data stolen from a competitor’s network, it is at grave risk of liability under the UTSA.
The UTSA has been adopted in one form or another in forty-eight states, and plaintiffs can sue for damages, including “actual loss,” “unjust enrichment . . . that is not taken into account in computing actual loss,” and “exemplary damages” for “willful and malicious” violations. UTSA § 3(a), (b). All of those damages would seem to apply where the defendant was complicit in an attack on the plaintiff’s corporate network.
Second, the federal Computer Fraud and Abuse Act (CFAA) allows private suits against anyone who “intentionally accesses a computer without authorization,” obtains information, and causes at least $5,000 of loss. 18 U.S.C. § 1030(a)(2)(C). That certainly applies to the hackers themselves; but what about the recipients of the stolen data? They’re liable too, at least if they can be shown to have “conspired” with the intruders. 18 U.S.C. § 1030 (b). Proving conspiracy poses a higher hurdle than meeting the UTSA’s “reason to know” standard; some courts say that a charge of conspiracy requires “specific allegations of an agreement and common activities.” See, e.g., NetApp, Inc. v. Nimble Storage, Inc., No. 5:13-cv-05058, 2014 WL 1903639, at *13 (N.D. Cal. May 12, 2014). But there will be many times when the evidence strongly suggests both. For example, if the theft of data was more than just a one-off event, there is every reason to believe that the beneficiary of the thefts was actively telling the thieves what to steal.
A third remedy is section 337 of the Tariff Act of 1930. It allows the International Trade Commission (ITC) to bar the importation of goods produced using stolen trade secrets. The ITC may exclude such goods from the United States if they are the result of “unfair methods of competition . . . the threat or effect of which is to destroy or substantially injure an industry in the United States.” 19 U.S.C. § 1337(a), (d). “Unfair methods of competition” includes a federal common law cause of action for the theft of trade secrets, which closely mirrors the provisions of the UTSA. See TianRui Grp. Co. v. Int’l Trade Comm’n, 661 F.3d 1322, 1327–28 (Fed. Cir. 2011). A complaint can be filed in the ITC even if the theft of trade secrets occurred abroad, so long as the theft violated the laws of the place where the secret was stolen. Id. at 1328. Although Section 337 does not allow for the recovery of money damages, a victim of commercial cyberespionage can at least make sure he’s not competing in the United States against products that are produced using his trade secrets and intellectual property.
In short, there are surprisingly robust legal remedies not just against cyberspies but against the companies who benefit from the spies’ intrusions. But that is not the end of the matter. Just having a good legal case does not mean that a victim will bring suit. There are plenty of practical reasons why a lawsuit might not be prudent even with the law on your side. But that’s a topic for another day, and another post.
In episode 58 of the Cyberlaw Podcast, our guest is Andy Ozment, who heads the DHS cybersecurity unit charged with helping improve cybersecurity in the private sector and the civilian agencies of the federal government. We ask how his agency's responsibilities differ from NSA's and FBI's, quote a scriptural invocation of desert jackals to question his pronunciation of ISAO, dig into the question whether sharing countermeasures is a prelude to cybervigilantism, and address the crucial question of how lawyers should organize cybersecurity information sharing organizations (hint: the fewer lawyers and the more clients the better).
In the news roundup, we revisit the cybersecurity implications of net neutrality, and Stephanie Roy finds evidence that leads me to conclude that the FCC has stolen the FTC's playbook (and, for all we know, deflated the FTC's football). This ought to at least help AT&T in its fight with the FTC over throttling, but that's no sure bet.
I explain why Hillary Clinton's email server was a security disaster for the first two months of her tenure – and engage in utterly unsupported speculation that she closed the biggest security gap in March 2009 because someone in the intelligence community caught foreign governments reading her mail.
In news with better grounding, the Wyndham case goes to the Third Circuit and the bench is hot. We explain why this is good for Wyndham. In other litigation news, the feds respond to Microsoft in the Irish warrant case. Michael and I agree that the Justice Department is praying for a cold bench.
Finally, in two updates from earlier podcasts, it looks as though China may have backed down on backdoors, for now, so Silicon Valley can go back to worrying about Jim Comey. And I explain my claim from last week's episode that the FREAK vulnerability is overhyped to support a simplistic civil libertarian morality tale.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
This episode of the podcast features Rep. Mike Rogers, former chairman of the House intelligence committee, Doug Kantor, our expert on all things cyber in Congress, and Maury Shenk, calling in from London. Mike Rogers is now a nationally syndicated radio host on Westwood One, a CNN national security commentator, and an adviser to Trident Capital’s new cybersecurity fund.
The former chairman addresses a host of issues -- gaps in CFIUS, the future of the President’s new cyber threat integration center, the risk of rogue state cyberattacks on US infrastructure – as well as the issues we cover in the news roundup. These include Maury’s take on China’s toughening policy toward US technology, the prospects for a workable bill renewing section 215 (the ex-chairman is not as sanguine as Doug Kantor and I) and the administration’s new privacy bill. (Our take: the bill is ideal for the Twitter age, since you still have 137 characters left after typing “DOA”.)
Maury updates us on the latest reason for delay in adoption of a new European data protection regulation. Doug Kantor and Mike Rogers consider the prospects for an information sharing bill and comment on privacy groups’ goalpost-moving style of congressional negotiation.
And, finally, I respond to Edward Snowden’s claim that he wants to move to Switzerland by reminding him (and the Swiss) what he said about them the last time he lived there. (Said Snowden: “You guys can’t say I look gay any more. I’m living in Switzerland. I’m the straightest-looking man in the country,” Geneva is “nightmarishly expensive and horrifically classist,” and “I have never, EVER seen a people more racist than the swiss.” Apparently a year in Moscow has broadened his horizons.)
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
Our guest for Episode 56 of the Cyberlaw Podcast is Siobhan Gorman, who broke many of the top cybersecurity stories for the Wall Street Journal until she left late last year to join the Brunswick Group, which does crisis communications for private companies. Siobhan comments on the flood of attribution stories in recent days, including the US government’s almost casual attribution of the Sands Las Vegas cyberattack to Iran and the leaked attribution of the Saudi Aramco and US bank attacks to the same nation. She also compares private sector cybercrisis planning to the US government’s coordination (or lack thereof) in responding to the Sony attack.
In other news, Stephanie Roy and I take a deep and slightly off-center dive into the FCC’s net neutrality ruling. I predict that within five years the FCC will have used its new Title II authority to impose cybersecurity requirements on US ISPs. (And in ten years, I suspect, there will be a debate in the FCC over whether to throttle or disfavor communications services that don’t cooperate with the FBI’s effort to deny perfectly encrypted security to criminals.) Stephanie demurs.
Michael Vatis and I chew over China’s “overdetermined” (h/t Mickey Kaus) policy of ousting American tech products in favor of Chinese competitors, the prospects of class action plaintiffs in the Komodia/Superfish/Lenovo flap, and NY financial regulator Benjamin Lawsky’s war on the password.
We finally get listener feedback to read on the air, as Michael Samway congratulates Nuala O’Connor for her masterly handling of, well, me. Those who think they can do a better job of humiliating me will have their work cut out for them, but they’re welcome to try, sending emails to CyberlawPodcast@steptoe.com and voice mails to +1 202 862 5785.
In Episode 55 of the Cyberlaw Podcast, we revive This Week in NSA to explore the claim that GCHQ stole mass quantities of cell phone encryption keys. Meanwhile, Jason explains the complex political battles over Rule 41, Michael explains why so many companies have rallied to Twitter’s first amendment claim against the Justice Department, and both of them explain how Yahoo! managed to beat the government’s indefinite gag order – and why Yahoo! might even be right. After which we melt down into the bottomless hot mess of liability and litigation that surrounds the Lenovo/Superfish/Komodia/Lavasoft flap.
Our interview is with the charming and feisty CEO of the Center for Democracy and Technology, Nuala O’Connor. Nuala and I square off over end-to-end encryption, privacy, and section 215, while managing to find common ground on TLS and even child-rearing.
As always, send your questions and suggestions for interview candidates to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
Episode 54 of the Cyberlaw Podcast features a guest appearance by Lawfare’s own Ben Wittes, discussing cybersecurity in the context of his forthcoming book, The Future of Violence, authored by Ben and Gabriella Blum. (The future of violence, you won’t be surprised to hear, looks bright.) Ben also floats the idea of taping an episode of all the Lawfare-affiliated podcasts in a bar with some of our listeners. More on that idea to come.
In the news roundup, I cover the President’s surprisingly news-light cybersecurity summit in Silicon Valley. Jason comments on state attorneys generals’ predictable sniping at Anthem for delays in identifying all the potential victims of its hack. I note with satisfaction a serious loss by EFF in the Jewel lawsuit over the US government’s access to AT&T traffic. And Jason lays out a report by the New York State Department of Financial Services on insurance company cybersecurity.
We both express concern about two Kaspersky security reports that identify new hacking tactics and new dangers for computer networks. The patient infiltration of large bank networks and the extraction of hundreds of millions of dollars casts doubt on the safety of banking systems around the world. Equally troubling is the discovery that what Kaspersky calls the “Equation” group used firmware exploits to achieve enduring access to a wide variety of hard drives. (Though Kaspersky’s claim that the access depended on having the hard drive makers’ source code looks wrong.)
As always, send your questions, suggestions for interview candidates and offers to stand a round at the Beer Summit to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
In this week’s episode, our guest is Rebecca Richards, NSA’s director of privacy and civil liberties. We ask the tough questions: Is her title an elaborate hoax or is she the busiest woman on the planet? How long will it be before privacy groups blame the Seattle Seahawks’ loss on NSA’s policy of intercepting everything? How do you tell an extroverted NSA engineer from an introvert? And, more seriously, now that acting within the law isn’t apparently enough, how can an intelligence agency assure Americans that it shares their values without exposing all its capabilities?
In the week’s news, Jason Weinstein, Michael Vatis and I explore the DEA’s license plate collection program and what it means, among other things, for future Supreme Court jurisprudence on location and the fourth amendment. We take on the WikiLeaks-Google flap and conclude that there’s less there than meets the eye.
Jason celebrates a festival of FTC news. The staff report on the Internet of Things provokes a commissioner to dissent from feel-good privacy bromides. The FTC data security scalp count grows to 53, with more on the way. We discover that the FTC has aspirations to become the Federal Telecommunications Commission, regulating telecommunications throttling as well as cramming – and apparently forcing the FCC into the business of regulating hotels. To be fair, we find ourselves rooting for the Commission as it brings the hammer down on a revenge porn site.
And Michael finds the key to understanding China’s policies on cybersecurity and encryption.
The Cyberlaw Podcast is now open to feedback. Send your questions, suggestions for interview candidates, or topics to CyberlawPodcast@steptoe.com. If you’d like to leave a message by phone, contact us at +1 202 862 5785.
My latest venture in podcasting features a debate on attributing cyberattacks. Two guests, Thomas Rid and Jeffrey Carr, disagree sharply about how and how well recent cyberattacks can be attributed. Thomas Rid is a Professor of Security Studies at King’s College London and the author of Cyber War Will Not Take Place as well as a recent paper on how attribution should be done. Jeffrey Carr, the founder and CEO of Taia Global, remains profoundly skeptical about the accuracy of most attribution efforts in recent years.
I question both of them, relying heavily on questions supplied by attribution aficionados via Twitter.
Among the questions we dig into:
I also call out the security experts who heaped scorn on the FBI for its initial fingering of North Korea as the source of the Sony attack. Which of them recanted as the evidence mounted, and which ones doubled down? Details in the podcast.
I linger over the evidence that Europe has swung from hating US tech firms for being too cozy with government to hating them for not being cozy enough: the EU’s top counterterrorism official wants to prevent firms from selling unbreakable encryption, and the French government wants them to take down more terror-related online speech. Later, I spike the ball, pointing to a Pew poll showing that NSA is holding its own in American opinion since the first Snowden revelations and that young voters have a far more favorable view of the agency than those over 65.
In US privacy litigation, Jason tells us that the class action over CarrierIQ’s storage of phone records has gotten a haircut, as the court throws out wiretap claims against hardware makers, and that LabMD has lost yet another peripheral battle in its campaign to force the FTC to spell out exactly what security measures it expects from private companies. And we debate the significance of the revelations about DEA's Hemisphere Project.
I'd welcome feedback, either by voicemail (+1 202 862 5785) or email (CyberlawPodcast@steptoe.com).
And special thanks to the Twitterati: @langnergroup, @NateBeachW, @janwinter15, @pwnallthethings, and @marcwrogers, among others.
I occasionally report here on interviews that I’ve been doing for the Steptoe Cyberlaw Podcast. This week’s guest is David Sanger, the New York Times reporter who broke the detailed story of Stuxnet in his book, Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power. His appearance on the podcast is particularly timely because it allowed David to talk about his latest story for the Times. The story recounts how North Korea developed its cyberattack network, and how the National Security Agency managed to compromise that network and attribute the Sony attack. He explains that understanding the Obama White House helped him break a story that seemed to be about NSA and the FBI. I explain why I think North Korean hackers resemble East German Olympic swimmers, and we meditate on the future of cyberwar.
For those who like such things, Michael Vatis and I also cover a news-rich week, beginning with capsule summaries of the President’s State of the Union proposals for legislation on cybersecurity information sharing, breach notification, and Computer Fraud and Abuse Act amendments. We touch on Europe’s new commitment to antiterrorism surveillance, which officially puts a still-Snowden-ridden United States out of step with just about every developed nation. I try to summarize the new National Academy of Sciences study on why there isn’t an easy software substitute for bulk collection. (Short answer: If you want to recreate the past, you have to bulk-collect the present.)
We ask whether the DEA was the inspiration for NSA’s 215 bulk collection program, call out Rep. Sensenbrenner, who evidently skipped the DEA briefings as well as NSA’s, and wonder why Justice didn’t explain to Congress last year that NSA’s program wasn’t that big a leap from the Justice Department’s own bulk collection – instead of quietly trying to bury its program when the heat built up on NSA. (OK, we didn’t really wonder why Justice did that.)
If you judge by their joint press conference, Prime Minister Cameron seems to have done more to convert President Obama to skepticism about widespread unbreakable encryption than Jim Comey did. Save your Clipper Chips, key escrow will rise again!
Finally, Centcom’s public affairs team, which can’t keep ISIS sympathizers out of its Twitter and YouTube feeds, deserves 24 hours of deep embarrassment, which is exactly what it gets.
The Podcast welcomes feedback, either by voicemail (+1 202 862 5785) or email (CyberlawPodcast@steptoe.com).
I've got a short op-ed about returning American jihadis in the Room for Debate section of today's New York Times site. Here's what it says:
Americans returning home from a foreign jihad pose a very real danger to this country, now and for years to come, as the Charlie Hebdo attacks reveal. One of the attackers, Cherif Kouachi, had been caught and convicted of trying to join the war in Iraq, and his brother may have trained with Al Qaeda in Yemen. Despite these warning signs, French authorities lacked the resources to keep watching the brothers.
Our law is even less suited to the threat than France's. We have not made it a federal crime for Americans to join the fight against a U.S. ally. And, like the French, we cannot afford to put 24-hour tails on every returnee. We could afford to conduct electronic surveillance of the returnees, but that would require specific evidence of a new plot here at home. And new plots, the Kouachis showed, are often easy to hide from the authorities.
These are gaps we should fix. It should be unlawful to join a foreign war against the United States or its allies. That doesn't mean that every returnee should go to jail. I like to think that many will find themselves disillusioned and repelled by the reality of life under Islamist rule. Some will become valuable sources of intelligence on their former comrades; others will simply want to live down a profound mistake.
But until we can distinguish the reformed from the continuing threats, the penalty for this new crime should at a minimum include years of probation and electronic surveillance. Under U.S. law, the government has far greater authority to search parolees than ordinary citizens, especially when the purpose of the surveillance is to ensure that the probationer is fully rehabilitated. So even if all prosecutions under the new law were to end in suspended sentences and long paroles, we would greatly cut the risk that the most dangerous of the returnees will evade government monitoring the way the Kouachis did.
Government policymakers have been hoping for twenty years that companies will be driven to good cybersecurity by the threat of tort liability. That hope is understandable. Tort liability would allow government to get the benefit of regulating cybersecurity without taking heat for imposing restrictions directly on the digital economy.
Those who see tort law as a cybersecurity savior are now getting their day in court. Literally. Mandatory data breach notices have led, inevitably, to data breach class actions. And the class actions have led to settlements. And those freely negotiated deals set what might be called a market price for data breach liability, a price that can be used to decide how much money a company ought to spend on security.
So, how much incentive for better security comes from the threat of data breach liability? Some, but not much. As I've been saying for a while, the actual damages from data breaches are pretty modest in dollar terms, and the pattern of losses makes it very hard to sustain a single class, something that forces up the cost of litigation for the plaintiffs.
You can see this pattern in recent data breach settlements. I put this chart together for a talk on the subject at the Center for Strategic and International Studies. While the settlements below all have complications (Sony's settlement was mostly in free game play, for example), they all cap the defendants' total liability. And what's striking about the caps is how low a price these agreements set, especially on an individual basis, where $2.50 per victim looks to set the high end and 50 cents the low. Of course, to determine how much to spend annually to avoid that liability, a company would have to discount the settlement price by the probability of a breach in any given year. Even Sony doesn't have a breach every year, so a probability adjustment cuts the value of avoiding liability to something between a half and a tenth. At those prices, I wouldn't expect much change in corporate cybersecurity budgets.
(I know that these charts don't account for the biggest claims in cases like Target and Home Depot -- banks suing for the cost of reissuing credit cards. That's a very different theory of liability mainly applicable to a limited number of big retailers. In the end I doubt that liabilities to issuing banks will drive much cybersecurity either, not because the claims are low -- they're more likely to be in the $50 per card range -- but because establishing liability will not be all that easy and because things like tokenization will likely prove much cheaper than improving security.)
Maybe so. Compare this study:
A recent study conducted at the Norwegian University of Science and Technology has revealed that being born during a period of heightened solar activity can shorten our lifespan by over five years.
With this one:
The plot below ... shows the size of the biggest individual spots in each year between 1900 and 2000. Notable spots include the Great Sunspot of 1947, which was three times larger than [a 1991 sunstorm].
From an op-ed for the New York Daily News:
there are widespread reports that North Korea launches its cyberattacks from the luxurious Chilbosan Hotel in Shenyang, China. Perhaps a previously unknown cyberarmy should simply take down the hotel's power and telephone service and threaten worse. There's a risk that such tactics would lead to conflict between the U.S. and China, but China can avoid that by closing the haven it has provided for attacks on America.
These are not easy options to contemplate. But flinching from such conflicts will lead to escalation of another kind, as every tin-pot dictator in the world discovers that Americans can be intimidated on the cheap. Like it or not, history is calling.
From my op-ed in the Hollywood Reporter:
North Korea is one of two countries that have pioneered the use of hacking not for spying but for punishment. The North's attack on South Korean banks was aimed at destroying data, not just stealing it. In addition, Iran is suspected of using malware to destroy Saudi oil industry computers and of using botnets to bring down the websites of American banks. To be blunt, these two countries are testing how far they can go in harming U.S. companies without provoking American retaliation. If the attack on Sony is connected to them and goes unanswered, companies and groups whose speech offends these countries — and, soon, Russia and China — will face the same treatment.
It's a serious dilemma for the Obama administration, which is still largely paralyzed by lawyers and diplomats arguing that the U.S. cannot act against these regimes' cyberattacks, either because we don't have proof beyond a reasonable doubt or because a counterattack would be "asymmetric" — a fancy way of saying North Korea can get along without computers a lot better than we can.
Even so, we can't shrug off the Sony attack. Once the evidence is collected and clearly connected to North Korea, we need an innovative way to hurt Kim Jong-un without triggering a full-on hacking war. We need, in short, the kind of creativity that Hollywood has in spades. If this attack was meant to suppress The Interview, perhaps the best way to deter future attacks is to make sure the attack backfires.
Maybe Sony should give the Defense Department 1 million DVDs of The Interview to drop on Pyongyang from balloons. ...
I’ve spent the last couple of days meditating on the mistakes that web journalists make, and how those mistakes differ from mainstream media's errors. The reason for the meditation is a weirdly escalating cycle of misquotation that I experienced last week.
In general, I don't obsess about the mistakes that journalists make when I talk to them. If you get quoted a lot, you can expect to be misquoted a lot too, and it's best to let it go. Reporters are in a hurry; or their editors lack context; mistakes happen. Complaining feels a little whiny, and in any event, readers are likely to forget the story before a correction hits the wires.
But I was struck by the way this particular misquotation bounced around the web, acquiring authority by repetition without ever being verified, and I suspect it tells us something troubling about where the press is going, even for those of us who celebrate the breaking of mainstream media's narrative monopoly.
First, the background. I'm a skeptic about the Silicon Valley movement to increase the use of communications encryption that even the supplier can't undo. I think it's bad policy, and not particularly good business, for reasons I offered recently in a NYT op-ed:
That decision should not be left to Apple alone. And it won't be.
Companies do not want to give their employees the power to roam corporate networks in secrecy. And even if they did, their regulators wouldn't let them. If Apple wants to sell iPhones for business use, it will have to give companies a way to read their employees’ business communications. Corporate IT departments won’t welcome a technology that could help workers hide misdeeds from their employer.
And as a global company, Apple is subject to regulation and market pressure everywhere. If China doesn't like Apple's new policy, it can ban the iPhone or simply encourage China's mobile carriers to slow Apple's already weak sales there. Even democracies like India, and U.S. allies like the United Arab Emirates, have shown the determination and the clout to force changes in phone makers' security choices.
I repeated much the same view last week in Ireland, on stage with a Guardian editor, noting that Blackberry had run into real resistance in selling its end-to-end encrypted products in other markets. The Guardian wrote up the event in a somewhat sloppy story:
“Blackberry pioneered the same business model that Google and Apple are doing now - that has not ended well for Blackberry,” said Baker.
He claimed that by encrypting user data Blackberry had limited its business in countries that demand oversight of communication data, such as India and the UAE and got a bad reception in China and Russia. “They restricted their own ability to sell. We have a tendency to think that once the cyberwar is won in the US that that is the end of it - but that is the easiest war to swim.”
The sloppiness of the story shows in its still-uncorrected “easiest war to swim” error, but also in its framing. The Guardian's summary of my remarks begins with something I didn't say: “Baker said encrypting user data had been a bad business model for Blackberry, which has had to dramatically downsize its business and refocus on business customers.” It's a plausible misunderstanding of my remarks, but it's wrong, as the Guardian could have easily found out during the many hours I spent that day with its reporter, James Ball.
What's striking is what happened next. The error went from plausible misunderstanding to outright mischaracterization. By the next day, several web outlets were using headlines like this one from ZDNet: “Former NSA's chief lawyer: BlackBerry's encryption efforts led to its demise.” This is unequivocally wrong, both as a summary of my remarks and as a matter of fact, not least because Blackberry is far from demise. But it's also wrong more fundamentally; Blackberry's strongest market is selling to enterprises, many of whom are attracted to the product precisely because it offers very strong encryption that is controlled by the company and not by the individual user. It is, if anything, an illustration of why encryption is a lot more complicated than Silicon Valley's technolibertarian engineers seem to think.
That was not the end of the matter. There were soon a dozen or more web stories making similar claims, including from more or less respected outlets like Slate, TechSpot, the Daily Caller, the Register, and the Inquirer. Remarkably, many of them questioned the accuracy of blaming the “demise” of Blackberry on its strong encryption; in fact, they called that view everything from “strange” and “a bit of a stretch” to “absurd” and “laughable.” What they didn't do was ask me whether I had actually made the claim that they considered so absurd and laughable. Not one of those web outlets called or wrote to ask for a followup quote or to confirm that I had made a statement they clearly thought no one in his right mind would make.
Why not? On reflection, I think it's because they liked the idea that someone on the other side of the crypto debate was saying dumb things about Blackberry and its encryption. The dumber the better, in fact. Call it the Twitterization of debate: Anyone we disagree with must first be caricatured as a dolt and then dismissed in 140 characters. Or call it the metastasization of the Huffington Post clickbait stylebook: “You won't believe this story showing how stupid/evil our opponents are!” Whatever, it's an understandable tactic if you're a partisan for a particular view. What's striking is how far that partisan style has infiltrated web outlets that to all outward appearances are engaged in, you know, journalism. (Indeed, it has much the same effect as actual journalism; within two days, reporters were asking Blackberry officials to respond to the still-unchecked quote.)
The “story that’s too good to check” is part of newsroom lore, and an ever-present temptation for journalists. On the web, though, “too good to check” looks more and more like the norm, not the exception. And that's a problem for consumers of news. Sites like Slate and the Register present themselves as opinion journalism. We expect them to give us the facts along with the attitude. But increasingly it looks as though their facts are as open to question as their opinions.
The chill in the air reminds me that it’s time to open the floor to nominations for the annual awards for Dubious Achievements in Privacy Law -- the Privies for short. The prizes are an opportunity to consider why privacy laws, always enacted amid proclamations of the best motives, nonetheless turn out so badly so often.
Last year we nominated candidates in three categories:
Privacy Hypocrite of the Year
Worst Use of Privacy Law to Protect Power and Privilege
Dumbest Privacy Case of the Year
To start things out, it’s hard to find a better candidate for Dumbest Privacy Case of the Year than the recent decision by a Quebec judge, Alain Breault, who awarded a woman $2250 for a Google Street View photo of her sitting on her front stoop in a skimpy top. Maria Grillo claimed to have suffered shock and embarrassment when she saw just how much cleavage Google had caught on camera. Embarrassing? Maybe. Worth $2250? You be the judge. The before and after clips from Google Street View are from the Journal de Montreal.
The judge acknowledged that she was in public when the photo was taken from the street, but he waved all that away as an “American” view of privacy. North of the border, he averred, a more “European” view of privacy applies. Apparently this means that privacy liability can always be imposed on American technology companies for, well, pretty much anything.
Episode 40 of the Steptoe Cyberlaw Podcast is done. Our guest this week is Bob Litt, the General Counsel of the Office of the Director of National Intelligence. Bob has had a distinguished career in government, from his clerkship with Justice Stewart, through his time as a prosecutor in the Southern District of New York and at Main Justice, to more than five years in the ODNI job.

This week in NSA: The latest fad in news coverage of the agency is a hunt for possible conflicts of interest in its leadership. And it’s having an effect. Two high-ranking NSA seniors, the CTO and the head of signals intelligence, have recently left positions that drew scrutiny for getting too close to private industry.

I ask Bob whether we should be pleased or worried about the trend toward individual converts to Islam carrying out random attacks with whatever weapon comes to hand. Prudently, he refuses to be drawn into my comparison of Islamists to the Manson Family. We debate whether the USA Freedom Act has a chance of passage in the lame duck Congress – and whether it should, focusing among other things on how the act’s FISA civil liberties advocates would function and what ethical rules would govern their day jobs. And we explore another ODNI project – implementing the President’s directive on protecting the privacy of foreign nationals while gathering intelligence. Are the nation’s spies really required to wait until a foreign target’s speech goes beyond what the first amendment protects before they collect and analyze the remarks? Will the requirement for advance justification for collection projects institutionalize risk aversion at NSA? And can government officials look forward to intelligence reports that read like this: “[SYRIAN NATIONAL 1] asked [IRAQI NATIONAL 1] to kill [US PERSON 1]”?
Our news roundup begins with the sudden press interest in possible conflicts of interest in NSA’s leadership. The Supreme Court takes another privacy case – one with no obvious federal connection. Lots of city ordinances require hotels to keep guest registries – and to let the police inspect those registries on demand. But the Ninth Circuit recently held en banc that these laws touch the privacy interests of the hotel owner, not just the guests, and that the laws are unconstitutional if they offer no opportunity for prior judicial review of the police demand. Just what we need: another opportunity for the Roberts Court to pad a narrow ruling with a lot of ill-considered dicta about Smith v. Maryland.
Harking back to last week’s interview with Tom Finan about insurance coverage for cyber incidents, we discover that where there’s insurance coverage there are also insurance coverage disputes. The head of Steptoe’s insurance coverage practice explains the P.F. Chang dispute with Travelers Insurance and hints that it’s in the first wave of what could be thirty years of litigation. Not that there’s anything wrong with that.
FBI Director Comey isn’t alone in complaining about Silicon Valley’s reluctance to help law enforcement. Leslie Caldwell, the new head of the Justice Department’s criminal division, has joined the chorus.
According to the Stored Communications Act, companies like Google may not provide the contents of emails in response to subpoenas. So what do civil litigants do when they need access to Gmail accounts in, say, divorce cases? The usual solution is for the court with jurisdiction over the civil suit to order the litigants to “consent” to the disclosure of their email messages. But is court-ordered consent really consent? According to a California appeals court, it is. Michael explains.
Whoa! The FCC really is taking cybersecurity seriously. It’s proposing $10 million in fines for two carriers who stored hundreds of thousands of “Obamaphone” beneficiaries’ personal data on a server accessible by anyone on the internet.
Confusion over when you need a warrant to get third party information continues to roil the courts. The Florida Supreme Court raises the bar for cell-site location data. And the NJ AG plots a counter-attack on a billing record warrant requirement in the Garden State. Michael suggests a new feature to keep all the litigation straight: This Week in Smith v. Maryland.
Lawyers with banks for clients have a new reason to upgrade their cybersecurity. As the banks struggle with increasingly sophisticated intrusions, they’re sharing the pain, demanding that their contractors and suppliers adopt stronger cybersecurity. Law firms are expressly included, since they’ve been targeted frequently for what inevitably will be called “bank shot” intrusions.
As I mentioned, I have been doing a weekly podcast on security, privacy, government and law with a couple of my partners, Michael Vatis and Jason Weinstein. This week, in episode 39, our guest is Tom Finan, Senior Cybersecurity Strategist and Counsel at DHS’s National Protection and Programs Directorate (NPPD), where he is currently working on policy issues related to cybersecurity insurance and cybersecurity legislation. Marc Frey asks him why DHS, specifically NPPD, is interested in cybersecurity insurance, what trends they are seeing in this space for carriers and other stakeholders, and what is next for their role in this space. He is forthcoming in his responses and even asks listeners to email him with their feedback.
This week in NSA: The House and Senate Judiciary chairs call for action on USA Freedom Act. And nobody cares. We conclude that the likelihood of action before the election is zero, and the likelihood of action in a lame duck is close to zero. But next week we’ll be interviewing Bob Litt, one of the prime negotiators for the intelligence community on this issue, and he may have a different view.
The Great Cable Unbundling seems finally upon us, as several content providers announce that they’re willing to sell content direct to consumers over the Internet. Does that mean more support for net neutrality? Not necessarily. Stephanie Roy explains.
Are parents responsible for what their adolescent kids do and say on Facebook? That makes sense, if you’ve never had adolescent kids. Maybe that explains why Michael Vatis sees merit in the Georgia appellate court decision finding potential liability. It reversed the trial court, which had granted summary judgment in favor of the parents of a kid who set up a fake and defamatory Facebook page in the name of a classmate he hated. The facts are a little odd. The kid who set up the page never took it down, even after he’d been caught and punished by school and parents. The appeals court thought that the parents had a “supervisory” obligation to make their child delete the fake account, and that they could be held liable for negligently failing to do so. It’s quite possible, though, that everyone in this case is a Privacy Victim; the issue could have been hashed out with a phone call from the parents of the victim to the parents of the perpetrator, but according to the press, “the child’s parents didn’t immediately confront the boy’s parents because their school refused to identify the culprit.” Because privacy.
FBI Director Comey comes out swinging for CALEA reform, saying in a speech at Brookings that the law needs to be updated to require cooperation from makers of new communications systems when the FBI has a court order granting access to those systems.
When it comes to regulating on other topics, though, the Justice Department is a little less restrained; it has opened the door to a round of new disability claims against websites, offering a roadmap to what it thinks the law requires.
The right to be forgotten is attracting more flak in Europe, as the BBC announces a competing “right to remember” website devoted to publicizing stories that Google has delinked. It’s Auntie BBC v. Nanny Europe. Cue popcorn. Unhappily, a “progressive” group most famous for relentlessly sliming Google on privacy issues has urged the search engine to bring the right to be forgotten to the United States. Sigh.
In breach news, TD Bank pays $850,000 to the state AGs over a “breach” that may never have happened. TD lost a backup tape in transit, and the data wasn’t encrypted. Was anyone’s data actually compromised by the loss of the tape? The AGs don’t say. They just want their money. And they get it.
The Russians are getting sloppy, or maybe they’re taking a leaf from China’s book – figuring it doesn’t matter if they get caught. And caught they have been, by iSight Partners, which reports that Russian hackers used a Microsoft zero-day to target Western governments and Ukraine. Meanwhile, the FBI is warning about another and even more sophisticated set of Chinese government hackers. And hackers are now adding a new form of targeted attack to their arsenal: a tactic that combines spearphishing with watering hole attacks. They’re targeting ads at users that take them to a compromised website that serves malware.
And, in good news for privacy skeptics, the Video Privacy Protection Act gets a narrow reading.
We remind everyone that the Steptoe Cyberlaw Podcast welcomes feedback, either by email (CyberlawPodcast@steptoe.com) or voicemail ( +1 202 862 5785) and that the views expressed by the participants are their own, not the firm's.