Edward Snowden cleared up a lot when he appeared on Vladimir Putin's "town hall" video program. https://www.youtube.com/watch?v=w1yH554emkY
His question for Putin was familiar to anyone who's followed Snowden's remarks in recent months: spying isn't bad, but "the mass surveillance of online communications and the bulk collection of private records" is evil. He trashes the US for programs that "unreasonably intrude on the private lives of ordinary citizens" and asks, "Does Russia intercept, store or analyse in any way the communications of millions of individuals?"
I've prepared and answered a lot of questions at hearings, and a compound question like that is almost always a setup: It begs for a categorical "No." And that's what it got. It sure looks as though Snowden is playing the Kremlin's game here, serving up a pre-arranged softball on demand.
Equally interesting is the Russian government's implicit endorsement of the Snowden "mass surveillance" talking point. This television program is tightly scripted, and Snowden's question must have been approved at the highest levels of the Russian government to get past the screeners. So this is clearly a message that the Russian government wants to promote.
I've suspected for a while that Snowden's "mass surveillance" talking point was a phony. It doesn't explain most of the stories Snowden has fathered or most of the documents that Snowden has compromised. Is it mass surveillance for NSA to monitor the communications of the Syrian military, or to join with Norway in scrutinizing Russia's activities in the Arctic, or to modify a USB cable so it can extract the secrets of a single computer -- to name just three programs that the Snowdenistas have disclosed?
Now we can see not just that the "mass surveillance" justification for Snowden's leaks is false but where the falsehood came from: it was almost certainly manufactured by the same Russian government that has now embraced it.
Why does Russia want this particular lie in circulation? Putin's answer tells us that too. After making the laughable claim that Russian surveillance is controlled by Russian law and Russian courts, Putin lets his mask slip just a bit: "there is no mass scale .... We do not have as much money and as many devices as the US to do that."
The Russians can't match NSA in money or technology (or in allies, he might have added). So Russia wants to drastically erode the American advantage in these things. And that, of course, is exactly the effect that Snowden's disclosures have had. If he persuades Americans to turn against NSA's foreign intelligence methods, or if he induces our allies to trim NSA's wings, or if he gets American technology companies to refuse to help their country, then Russia's lack of money, allies, and technology won't matter as much.
To sum up, for the last several months, while living in Russia, Snowden has been putting forward a justification for his acts (a) that he knows is not true, since it doesn't explain his actions, (b) that is approved at the highest levels by the Russian government and (c) that gravely harms the US and helps Russia in its confrontations with the US around the world.
I've said for a while that I thought the jury was out on whether Snowden is a traitor.
Now I think I hear it filing in.
Who says you can't learn anything watching Russia's propaganda programs?
An army of researchers recently published a short study of a weakness that NSA is alleged to have introduced into a public security standard. Joseph Menn of Reuters gave the study lengthy and largely uncritical coverage; here's the gist:
Security industry pioneer RSA adopted not just one but two encryption tools developed by the U.S. National Security Agency, greatly increasing the spy agency's ability to eavesdrop on some Internet communications, according to a team of academic researchers. Reuters reported in December that the NSA had paid RSA $10 million to make a now-discredited cryptography system the default in software used by a wide range of Internet and computer security programs. The system, called Dual Elliptic Curve, was a random number generator, but it had a deliberate flaw - or "back door" - that allowed the NSA to crack the encryption. A group of professors from Johns Hopkins, the University of Wisconsin, the University of Illinois and elsewhere now say they have discovered that a second NSA tool exacerbated the RSA software's vulnerability.
The allegation that NSA weakened the dual elliptic curve random number generator has been floating around for some time, and it has already had some policy impact. The President’s Review Group was reacting to the story when it declared that the US Government should "fully support and not undermine efforts to create encryption standards [and] not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software."
A careful reading of the actual study, though, suggests that there’s been more than a little hype in the claim that NSA has somehow made us all less safe by breaking internet security standards. I recognize that this is a technical paper, and that I’m not a cryptographer. So I welcome technical commentary and corrections.
With that disclaimer, however, it seems to me that the paper makes two points that take a lot of the air out of the "NSA wrecks internet security" balloon:
1. If there’s a backdoor in the standard, no one has found it.
It’s an article of faith among academic cryptographers (and something the Reuters article just assumes) that there is a backdoor in the dual elliptic curve standard. In 2007, some Microsoft researchers explained how a backdoor might have been implanted in the standard. Researchers have been looking for ways to exploit the backdoor – and thus prove its existence – ever since. Yet the paper concedes that the researchers can’t confirm the existence of a flaw. Instead, the researchers had to make up a different flawed protocol and show how quickly they could exploit that vulnerability. The artificiality of that exercise probably should have made Reuters a little more skeptical about the study's results, but there's a more important point in the researchers' concession.
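The shape of the suspected trapdoor is easier to see in a toy model. The sketch below is emphatically not the real Dual EC standard: it substitutes modular exponentiation for elliptic-curve point multiplication, skips the output truncation, and invents every constant. But it shows the core of the allegation: whoever chooses the two public constants so that one is a secret power of the other can convert a single output into the generator's next internal state.

```python
# Toy analogue of the alleged Dual EC trapdoor. NOT the real standard:
# modular exponentiation stands in for elliptic-curve point multiplication,
# and every constant below is invented for illustration.

P_MOD = 2**61 - 1     # a Mersenne prime standing in for the curve
Q = 3                 # first public constant
d = 123456789         # trapdoor scalar, known only to whoever set the constants
P = pow(Q, d, P_MOD)  # second public constant, secretly P = Q^d

def drbg_step(state):
    """One generator step: update the state with P, emit output with Q."""
    new_state = pow(P, state, P_MOD)   # analogue of s = x(s * P)
    output = pow(Q, new_state, P_MOD)  # analogue of r = x(s * Q)
    return new_state, output

# An honest user draws two outputs.
state = 42424242
state, out1 = drbg_step(state)
state, out2 = drbg_step(state)

# An attacker who knows d sees only out1:
#   out1^d = Q^(s*d) = (Q^d)^s = P^s, i.e. the generator's next internal state.
recovered_state = pow(out1, d, P_MOD)
predicted_out2 = pow(Q, recovered_state, P_MOD)
assert recovered_state == state and predicted_out2 == out2
```

In the real standard the output is truncated before release, so an attacker would also have to brute-force the dropped bits before applying the trapdoor -- which is essentially the exercise the researchers timed, using substitute constants they generated themselves.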
Seven years is a lifetime in cryptanalytic attacks, so it’s quite a surprise that no backdoor has been proved in all this time. It raises the possibility that there really is no flaw – or that NSA has introduced a flaw that only NSA can exploit. That’s important because the press and a lot of cryptographers have been saying that NSA weakened internet security for everyone. But if there is no flaw, or if it’s a flaw only NSA can exploit, then at worst internet security has been weakened for adversaries and intelligence targets of the United States.
Call me old-fashioned, but that sounds like a good thing to me. Of course, academic cryptographers may still argue that it's not, but only by flirting with a moral relativism that most Americans don’t share.
2. If there’s a backdoor in the standard, it’s had no discernible effect on internet security.
Talk about burying the lede. After measuring how fast their fake standard’s contrived flaw could be exploited, the researchers decided to go looking for examples of the flawed elliptic curve standard in the wild. What they found seems to cast doubt on the news value of the whole flap.
It turns out that you can scan more or less every public-facing server on the internet in less than an hour, and an open-source tool called ZMap will do it for free. The researchers used ZMap, and they found a total of 21.8 million servers offering secure http connections of the sort that the controversial elliptic curve standard is accused of subverting. And how many of those 21.8 million servers were clearly using the controversial standard? Just 720.
Let me say that again: 720 out of 21,800,000 secure servers used the standard that is accused, without conclusive proof, of weakening security on the internet.
In a fit of understatement, the researchers note that this is “much less than 1%.” Well, yes. In fact, it is less than one percent in the same way that the weight of your cat is less than that of a bull African elephant – three orders of magnitude less.
Put another way, only about .003% of the secure servers on the internet were identified as running code that is subject to the famous flaw, if it is a flaw. And it’s likely that the vast majority of those servers are of no interest to the United States government, so the backdoor would never be used for them. If you assume that NSA has a real interest in maybe 1% of internet traffic, that’s about seven servers on the internet whose security might be put at risk by the standard -- and then only if they harbor information of intelligence interest to the United States government.
Big whoop. That's not even table stakes in the world of computer security.
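For anyone who wants to check the back-of-the-envelope arithmetic (the 1%-of-interest figure is my own assumption, not the researchers'):

```python
# Sanity-checking the scan numbers reported in the study.
total_https_servers = 21_800_000   # servers ZMap found offering secure http
dual_ec_servers = 720              # servers clearly using the contested standard

share = dual_ec_servers / total_https_servers * 100
print(f"{share:.4f}% of secure servers")  # prints "0.0033% of secure servers"

# Assume, generously, that 1% of those servers interest an intelligence agency.
of_interest = dual_ec_servers * 0.01
print(f"roughly {of_interest:.0f} servers of possible concern")  # roughly 7
```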
When other researchers went looking for devices on the internet that were open to attack because of flawed plug and play protocols, they found 40 or 50 million online devices with the security flaw, a flaw that some manufacturers have simply refused to fix. And there are between 300 and 500 million computers running Windows XP that will get their last security updates from Microsoft this weekend; after that, it's open season on those machines.
So when it comes to weakening internet security, there are a lot of people and companies that are way, way ahead of NSA. Though you wouldn't know it from the credulous press coverage given to academic cryptographers' attack on the elliptic curve number generator.
Academic cryptographers have seen NSA as their adversary for fifty years, and press coverage so far has simply treated their worst assumptions about the agency as received truth. Despite that, the academic cryptographers' campaign against NSA's role in standards has not attracted widespread public support or serious legislative proposals. Nor did the recommendation of the President’s Review Group gain much traction inside the administration.
If I’m right about the two lessons to be learned from this academic paper, that is just about the right response.
Notes: When I did my calculations, I didn’t count SChannel servers, which account for 12% of secure servers. That’s because the researchers admit that, while the controversial protocol is an option in SChannel, it is not the default. Similarly, ZMap could only identify servers running the Java version of the controversial protocol, not the C++ version. But even assuming that there are twice as many, or ten times as many, C++ implementations as Java implementations, the possible flaw in the protocol is dwarfed in its impact by many known security flaws that no one seems to be especially exercised about – suggesting that the flap over NSA’s role in the standard grows out of an agenda other than security.
UPDATE: Dropped an erroneous zero from my percentage calculation. There's no greater honor than having Dorothy Denning correct your math.
According to the New York Times, the President has decided to kill the existing NSA phone metadata program and come up with a substitute that leaves the metadata with the phone companies. The decision will limit the government's ability to find older connections, since few companies hold records for three or more years; it will also be hard to construct a social graph that combines customers of different carriers.
This may have been inevitable when large swaths of the Republican party decided to treat NSA as though it were an arm of Organizing for America. But even so, the President's decision is disappointing for other reasons. The key passage for the future is this one from the NYT story:
In recent days, attention in Congress has shifted to legislation developed by leaders of the House Intelligence Committee. That bill, according to people familiar with a draft proposal, would have the court issue an overarching order authorizing the program, but allow the N.S.A. to issue subpoenas for specific phone records without prior judicial approval.
The Obama administration proposal, by contrast, would retain a judicial role in determining whether the standard of suspicion was met for a particular phone number before the N.S.A. could obtain associated records.
The administration’s proposal would also include a provision clarifying whether Section 215 of the Patriot Act, due to expire next year unless Congress reauthorizes it, may in the future be legitimately interpreted as allowing bulk data collection of telephone data.
The House intelligence committee has been working to produce a bipartisan replacement for the metadata program. The President had a chance, rare for him, to embrace bipartisanship and work with the House committee. This certainly looks doable, since it appears from press coverage that the differences between the White House and the House approach are modest.
Instead, the White House just couldn't resist sniping at the House and positioning itself as a hair more privacy-protective than the bipartisan House approach. This is a sadly familiar story; the White House did the same thing on CISPA, the cybersecurity information sharing bill. There the White House tacked left at the last minute, threatening to veto a bipartisan House bill because it lacked privacy protections that the President's own bill hadn't included.
So which approach is better? Looking at the press coverage, the White House is highlighting two differences in approach. One seems completely symbolic -- deciding how section 215 should be interpreted between the time the new bill passes and the time section 215 expires. But there may be no such interim, since legislation takes a long time to pass, and in any event the new bill is likely to repeal the current program.
The other difference, requiring the FISA court to evaluate each request for phone data, is a bigger deal. It's also problematic. First, it is inconsistent with criminal practice, where subpoenas are routinely served by investigators without court involvement. Does the administration think that stopping cross-border terror attacks is less urgent than investigating bank robberies?
Second, I'm not aware of any circumstances where judges make "reasonable articulable suspicion" determinations in advance. In fact the whole point of the "articulable" part of that test is that the government needs to be able to explain itself later to a judge. What does judicial review of such a standard look like? Do the judges have to decide that the phone number also looks suspicious to them or just that it's reasonable for the government to be suspicious?
Third, the metadata program is needed mainly to speed up a cumbersome process of mapping contacts more or less by hand, but the administration's proposal adds new delays by injecting the court into the front end of the process. No one knows how or whether that will work, because we've never put the courts into that stage.
Finally, there is at least some reason to worry that the administration is going to inject the court into every request for data from the carriers. I hope not, because that would be completely unworkable. Remember, in the new system, all the data remains with the phone companies, so assembling one suspicious character's social graph means first assembling a list of all the people he calls, which is easy -- just serve his phone company with the request -- and then assembling a list of his contacts' contacts. That's the second hop. To collect second-hop records means obtaining records from every carrier whose customers showed up on the first hop. Right now, NSA can move from the first hop to the second with the click of a mouse. But under the proposed new system, every hop requires a batch of new subpoenas to a batch of carriers. That's going to slow the process quite a bit. Adding the courts to the process, though, will turn it into a morass. I hope that's not what the administration has in mind.
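The bookkeeping problem is easy to sketch. In the toy model below (the phone numbers, carrier names, call graph, and carrier-assignment rule are all invented), a single suspect's two-hop query already fans out into a separate legal request for every carrier that serves one of his contacts:

```python
# Toy model of two-hop contact chaining once records stay with the carriers.
# All numbers, carriers, and the assignment rule are invented for illustration.

carriers = ["CarrierA", "CarrierB", "CarrierC"]

def carrier_of(number):
    # hypothetical assignment: the last digit of the number picks the carrier
    return carriers[int(number[-1]) % len(carriers)]

# What the suspect's own carrier returns for hop one.
hop1_contacts = ["555-0101", "555-0102", "555-0103", "555-0104"]

requests = 1  # hop one: a single request to the suspect's carrier

# Hop two: one new request per distinct carrier serving a hop-one contact.
hop2_targets = {carrier_of(n) for n in hop1_contacts}
requests += len(hop2_targets)

print(requests, sorted(hop2_targets))  # 4 requests across three carriers
```

With a centralized database both hops are a single query; here even a four-contact first hop takes four separate requests -- before any per-request court review is layered on top.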
At best, this is an opportunity missed. The President seems genuinely convinced that his efforts to build bridges to Republicans have failed because of right-wing intransigence. Sorry, Mr. President, it's stupid point-scoring by your staff, like this leak, that makes you look like someone who either can't do Congress or doesn't care to.
For some reason, debates about Snowden are thick on the ground these days, and I've joined a couple of them. The most fun was the Oxford Union, which has been preparing future Parliamentarians (and Prime Ministers) all around the British Commonwealth since 1823. The Oxford Union debate was "This House would call Edward Snowden a Hero." My argument to the contrary is here:
Highlights of the debate included the arguments of Jeffrey Toobin, with whom I agree on nothing but Snowden, and P.J. Crowley, lately of the Clinton State Department -- both of them well worth watching. I also thought Chris Huhne and Chris Hedges did particularly well in support of the motion. And Charlie Vaughan, the Aussie student who stepped in to support our side, already shows signs of being a formidable politician. They can all be found here.
The motion carried, but narrowly (something like 212-175), which I thought a moral victory with a university audience outside the United States. (And an audience that thinks very highly of itself; even at Harvard I would have expected a laugh when I declared that being a toady was the key to debating success and then immediately told the audience that it was the most intelligent I had ever appeared before. At Oxford, no one saw anything remotely humorous in the suggestion.)
UCLA also held a debate, on "Snowden -- Patriot or Traitor," a choice I wasn't fond of, since I think there's an element of intent in being a traitor that is hard to judge from this distance. Luckily the school left room for a third choice, "Neither," so I encouraged the audience to vote for anything but patriot. I was paired with Judge James Carr of the N.D. Ohio, formerly of the FISA court. Our opponents included Jesselyn Radack and Trevor Timm. Bruce Fein argued for "neither," though his attack on the government was unrelenting.
UCLA took two votes, one before and one after the debate. Gratifyingly, the room flipped after hearing the argument. The vote was 43-33 in favor of "Patriot" at the outset, but it declined to 34-51 when the debate was done. Here's the (rather long) UCLA debate from beginning to end. (I show up at 29:00 and again at 1:26:20.)
I've also started to take straw polls of audiences on the question "Snowden, Good or Bad?" Snowden doesn't do well in that binary choice. He lost about 10:1 at a Suits and Spooks conference for civil liberties and security researchers three weeks ago, and he lost about 4:1 at a conference of minority corporate counsel where I spoke a week ago.
All this suggests that Snowden is wearing out his welcome with the American public as he compromises intelligence program after intelligence program without producing anything more shocking than the fact that NSA is an aggressive, effective collector of intelligence in a dangerous world.
A French court has upheld a government agency's order requiring that Google post a notice on its famously clean home page. The notice draws attention to the agency's ruling that Google violated French privacy law when it collected personal information under a consolidated privacy statement rather than using several different statements for its different business lines.
Translated loosely from the French, then, the ruling is:
"You have learned facts that the government did not want you to learn without first saying words that the government wanted you to say. To make sure you never do that again, the government will now require you to say other words that the government has written for you."
And all in the name of human rights.
You've got to hand it to the Turks. Just when it seemed that the European Union would never see how abusive privacy laws can be, the Turkish Parliament adopted a privacy bill that caused even the EU to choke. According to the Wall Street Journal, the law is a prime candidate for a Privy -- a genuinely Dubious Achievement in Privacy Law:
The law, which must be approved by President Abdullah Gül to take effect, would allow the agency charged with monitoring telecommunications to block access to Internet sites within four hours of receiving complaints about privacy violations. ...
"The approach that the Internet is being banned, is being censored is wrong," Transport, Maritime Affairs and Communications Minister Lutfi Elvan said Thursday. The measure will prevent infringement of personal rights by bypassing lengthy court procedures that failed to protect privacy in a timely manner, he said.
Shortly after the bill passed, the European Union, which Turkey seeks to join, criticized it for introducing restrictions on freedom of expression. Turkey has an estimated 40 million Internet users.
"The Turkish public deserves more information and more transparency, not more restrictions," said Peter Stano, spokesman for the European Commissioner for Enlargement Stefan Füle. "The law needs to be revised in line with European standards."
Meeting "European standards" for privacy law? That'll be tough. I'm guessing the Turkish Parliament could choose between renaming the law as "The Reding Right to Be Forgotten, Faster, Act" or simply amending it so it applies only to American corporations.
The press is still after James Clapper, Director of National Intelligence, for his statements in response to a question from Sen. Wyden (D-OR) in March of last year. Wyden asked whether NSA was collecting data on millions of Americans. “Not wittingly,” Clapper responded.
CNN's Jake Tapper asked President Obama on Friday whether he had concerns about Clapper's answer. Tapper got the Presidential equivalent of a shrug:
"I think that Jim Clapper himself would acknowledge, and has acknowledged, that he should have been more careful about how he responded," Obama said. "His concern was that he had a classified program that he couldn't talk about, and he was in an open hearing in which he was asked, he was prompted to disclose a program, and so he felt he was caught between a rock and a hard place."
The press keeps wondering why Clapper's response hasn't wrecked his career. Maybe a parable will help explain his survival.
Imagine that the Senate is preparing to confirm the nomination of a well-known woman to an important administration job. The committee chairman loathes the nominee and her policies. But his investigators have turned up nothing against her – until they discover that she had an affair with a foreign national fifteen years ago, about a year before the birth of her only son.
The chairman calls the official into his office and confronts her with the evidence.
“It's true,” she says. “It was a terrible mistake. I ended it almost immediately. Then I discovered I was pregnant. The biological father doesn't know. Neither do my husband or my son.”
“This affair was as reckless as your policy judgments,” thunders the chairman. “The committee and the American people deserve to know your true character.”
“Please,” she pleads. “I will tell every member of the committee about it, and if they want to vote against my confirmation, so be it. But I beg you not to disclose this publicly. My son and husband will find out, and it will wreck their lives.”
“Oh, I won't disclose it," says the chairman. “You will. Because one of my first questions at the hearing will be 'You and your husband have had one biological child together, is that correct?' “
He smiles. “You can answer that question honestly and disclose the affair, or you can commit perjury. Your choice.”
At the hearing, the chairman asks the question.
“Yes,” the official answers, “My husband was there when I gave birth to my son, and he's been there for us every day since.”
So here's my question: Who is the hero of this story and who the villain?
If you can't bring yourself to condemn the official or to praise the chairman, well, now you understand the executive branch's view of the exchange between Clapper and Wyden.
I interviewed David Medine this week in the course of Steptoe's latest podcast on technology, security, privacy, and government. The interview yielded a good overview of the Board's report, and not an uncritical one. I questioned the Board's decision to write a legal brief on the 215 program, as well as the Board's remarkable claim that it had found the unambiguous "plain meaning" of section 215 -- despite the fact that 15 judges disagreed. David is a fine lawyer, and he gave as good as he got.
The exchange is interesting, and I think it digs deeper into the report than most news stories have.
Almost immediately after the Republican National Committee adopted an error-filled resolution attacking the NSA and its telephone metadata program, current and former GOP officials took a strong stand against the RNC resolution:
[T]he RNC resolution threatens to do great damage to the security of the nation. It would be foolhardy to end the program without ensuring that we remain safe from attack. This database provides a uniquely valuable capability for discovering new phone numbers associated with international terrorist organizations, including numbers that may be used by terrorist cells within the United States. Former Deputy Director of the CIA Michael Morell has testified that having this capability might have prevented 9/11 and could help to prevent the next 9/11.
This is not a Democratic or a Republican program. Protecting Americans from terrorism should not be a partisan issue. The program was first launched under President George W. Bush. It was approved by Congressional leaders of both parties. And for good reason. It helps to keep Americans safe.
It may be appropriate to modify the program in certain respects, if that can be done without a significant loss in effectiveness, but abolishing it without any idea how to close the intelligence gap that 9/11 exposed is not a recipe for partisan advantage. It is a recipe for partisan oblivion.
Count us out.
Signatories included a current intelligence committee member, Rep. Mike Pompeo, and a host of former Bush administration officials: Attorney General Mukasey, Homeland Security Secretary Chertoff, CIA Director Hayden, Homeland Security Adviser Wainstein, DOD Under Secretary Edelman, OLC head Bradbury, and me.
Former Homeland Security committee chair Peter King expressed similar views even more colorfully.
In other contexts, I've called it Obama Derangement Syndrome, where suspicion of the President begins to distort GOP views of even the least politicized national security elements of government.
That really is a dead end.
In my experience, privacy law produces a remarkable number of foolish outcomes. The reason, I suspect, is that our notions of "privacy" evolve too quickly to be reduced to law. It's like writing a law codifying good manners. Over time, as our definition of good manners or privacy changes, the old code starts producing irrational results -- or it is enforced only selectively, to punish those who offend the powerful. That observation led to annual awards for Dubious Achievement in Privacy Law -- the Privies for short. The nominees from last year can be found here.
It's a new year, but privacy law is already living down to my expectations, throwing off stupid or venal results at a rapid clip. It's time to open nominations for the 2015 Privies. Here is the first:
Worst Use of Privacy Law to Serve Power and Privilege: University of North Carolina at Chapel Hill
There's nobody more powerful at UNC than the big athletic programs. So when Mary Willingham, a UNC researcher, disclosed that 60% of the Tar Heel student-athletes she studied were reading at a level between the fourth grade and the eighth grade, she was asking for trouble.
She got it. An angry UNC-Chapel Hill chancellor put four counter-researchers to work attacking Willingham's research and then denounced it as "a travesty."
But this is academia, where it's not enough to debate your opponents. They have to be crushed.
And what better weapon to use against inconvenient speech than federal privacy law? Before she knew it, Willingham's approval to conduct her research was suspended. Why? Because she was using individual students' names to correlate their test scores with their grades. The UNC Institutional Review Board, which regulates human subject research to protect student privacy, declared that it hadn't approved her collection of student names, so the research had to be shut down, now. It didn't matter that "Willingham ... thought she was following IRB rules because as the primary investigator she never released names to anyone" at least until a hostile UNC administrator demanded them. Just keeping the names in a file drawer was a violation, according to IRB administrators at UNC.
Of course the administrators also denied that Willingham had been singled out for punishment or that the controversy over UNC's academic standards had triggered their action. She was free, they insisted, to apply for approval to continue embarrassing the Tar Heels.
Fat chance. Willingham's only hope of fighting this abuse of power is to find some equally powerful ally.
Maybe she should seek approval for her research from Duke's Institutional Review Board.
I've been doing a regular weekly podcast with Michael Vatis and Jason Weinstein, two of my partners who share an interest in security, privacy, and technology, as well as a background in government.
More recently, we've started inviting newsmakers to join us for a half-hour interview.
Earlier this week, I interviewed Chris Inglis, the recently retired Deputy Director of the National Security Agency. It's a wide-ranging interview that touched on everything from NSA's morale to the changes in its culture that this crisis will demand. Chris Inglis flagged the Snowden disclosures he finds most disturbing and unjustifiable even on Snowden's terms but refused to accuse Snowden of working with Russia, saying he hadn't seen evidence of that. It's a useful contribution to the debate by an insider who is now free to be a bit more candid than before, within the limits imposed by classified information rules.
Next week, I'll be interviewing David Medine, chairman of the Privacy and Civil Liberties Oversight Board, about the Board's report, which I've already panned here. It should be a civil but vigorous exchange of views! If you want to subscribe to the podcasts, the RSS feed is here.
I've now had a chance to look at the report of the Privacy and Civil Liberties Oversight Board on section 215 and the telephone metadata program. What a disappointment.
The PCLOB declares by a bare majority that the program is unlawful and should be shut down. The report's 45-page (!) statutory analysis reads like an opinion written by a court that is bound and determined to reach a favored outcome.
Elsewhere the PCLOB expresses enthusiasm for adversarial briefing and argument: "Our judicial system thrives on the adversarial presentation of views." The PCLOB majority, though, would apparently prefer to thrive without the hassle of, you know, briefs and arguments and stuff, especially if they might get in the way of its preferred legal determination.
Rachel Brand in dissent gives the entire 45-page exegesis the back of her hand, and with justification:
This legal question will be resolved by the courts, not by this Board, which does not have the benefit of traditional adversarial legal briefing and is not particularly well-suited to conducting de novo review of long-standing statutory interpretations.
The other dissenter, Elisabeth Cook, similarly devotes only a sentence to the statutory issue and the Board's effort to play judge. I don't think it's because the dissenters lacked ammunition to rebut the majority's labored and tendentious statutory argument. I suspect that they thought the whole thing was pointless and largely self-rebutting.
I feel the same way, but I can't help pointing out a few of the flaws in this part of the report. First, the Board argues that all the phone records in the country can't be deemed "relevant" to an FBI investigation of terrorism. That has some plausibility, since the vast majority of phone records aren't going to be relevant to any investigation.
The problem for the Board is that the law has never required that discovery orders exclude all irrelevant data. In fact, courts have routinely approved civil, criminal, and administrative orders that sweep up lots and lots of utterly irrelevant information about perfectly innocent parties. The best you can say about the law in this area is that it allows the government to subpoena information in buckets, even if only a few spoonfuls of clearly relevant information can be found in each bucket.
The courts have struggled with exactly how many spoonfuls of relevant data in how big a bucket of irrelevant data can still be obtained in discovery. As the majority admits:
To be sure, the case law regarding civil discovery, grand jury subpoenas, and administrative subpoenas shows that relevance is interpreted broadly, and that incidental production of unrelated materials is accepted as essential to enable fulsome investigative efforts. Standards of relevance thus permit parties and the government to engage in a degree of fishing, so long as it is not arbitrary or in bad faith. But the case law makes equally clear that the definition of relevance is not boundless.
And here's the problem with the majority analysis: It tries to talk about the program as though the government were actually searching every piece of metadata in the database. But we all know by now that the order requiring production of the data was matched by an order greatly restricting searches to a few hundred a year, searches that are relevant to terror investigations under the most demanding standard imaginable.
Viewed as a whole, the 215 metadata program is like a discovery order telling a party to put a mass of records into a court-supervised escrow, where the mass will be searched for a few bits of relevant data that are then supplied to the other party. The Board majority is willfully blind to the direct connection between the production order and the minimization requirements that accompany it.
There's an old joke that to think like a lawyer you have to be able to treat two intimately connected facts as though they were completely unrelated. If so, the Board's majority opinion is the most lawyerly thing I've read in years.
One more point. Section 215 was renewed twice by Congress after the FIS court approved the current interpretation of "relevant." Since Congressional action re-enacting a statute is usually viewed as approving the administrative and judicial interpretations adopted before reenactment, this is kind of a bad fact for the Board majority.
They respond with a flurry of argument (never an indication of confidence). Extending section 215's sunset date isn't the same as re-enacting it, they say. And the rule on reenactments doesn't apply if the language of the statute is clear; since the Board majority is sure that its one-eye-closed reading of section 215 is plainly right, it can ignore the reenactment rule (a particularly ballsy statement given that the three Board members' interpretation has so far lost 15-1 in front of actual judges).
Finally, the Board majority says the reenactment doctrine doesn't apply because, while the FIS court's interpretation of 215 was known to the intelligence and judiciary committees of both houses and to many other members as well, it was still classified and so not known to all members or the public.
This argument is also willfully blind, this time to the ruling Supreme Court precedent, Lorillard v. Pons, 434 U.S. 575 (1978). That case held that jury trials were available in private enforcement actions under the Age Discrimination in Employment Act (ADEA), even though the act said nothing about jury trials. Why? Because the ADEA said that it would "be enforced in accordance with the 'powers, remedies, and procedures' of the Fair Labor Standards Act (FLSA)." Now, the FLSA doesn't say anything about jury trials either, but the courts interpreting that Act had all decided that it did allow them in private suits. So the Supreme Court presumed that Congress understood that when it adopted the "procedures" of the FLSA it was adopting the jury trial interpretation of courts applying the FLSA: "[W]here, as here, Congress adopts a new law incorporating sections of a prior law, Congress normally can be presumed to have had knowledge of the interpretation given to the incorporated law, at least insofar as it affects the new statute."
So here's my question: How many members of Congress had any idea that they were incorporating those FLSA decisions into the ADEA, let alone what the decisions said? One? Five? If more than a handful of committee chairmen and floor managers were even vaguely aware of the cases the Supreme Court presumed they fully grasped, I'll eat my hat.
The members of Congress who understood the interpretation of section 215 when they voted on its extension probably outnumber by ten or twenty to one the Congressmen who understood that the ADEA required a jury trial when that law was adopted.
The Board majority claims that "it is not a legitimate method of statutory construction to presume that these legislators, when reenacting the statute, intended to adopt a prior interpretation that they had no fair means of evaluating." The problem with that statement is that it could have been made with equal justice about the ADEA and the Supreme Court's statutory construction in Pons. (Pons also puts a hole below the waterline of the claim that the presumption only applies to "real" reenactments, since the Pons Court applied the presumption to interpretations of the FLSA, a statute that wasn't being reenacted at all.)
So the Board majority in the end stumbles into overturning the entire re-enactment doctrine in its zeal to kill an important national security program. Life is hard when you try to make law without briefs and arguments and stuff.
Since the report's recommendation to abandon the 215 program has already been rejected by President Obama, much of the Board's report thus boils down to an unpersuasive amicus brief aimed at undermining the argument the President's lawyers will be making in the Second and DC circuits.
I would have expected a more serious and useful work product from the Board, especially in its first outing.
According to Charlie Savage at the NYT, the Privacy and Civil Liberties Oversight Board will issue today a report declaring that the NSA's telephone metadata program is illegal and should be ended. That is the conclusion of the three Democrats on the board; the two Republicans dissented. If you were wondering why it took the Obama administration three years to fill the board, you now have the answer. How does the board get around the fact that the statute was reauthorized by Congress twice after the metadata program began? The story hints at the PCLOB's view:
Defenders of the program have argued that Congress acquiesced to that secret interpretation of the law by twice extending its expiration without changes. But the report rejects that idea as “both unsupported by legal precedent and unacceptable as a matter of democratic accountability.”
I find it hard to believe that this position withstands analysis but I'll wait to see the full report.
Randy Barnett argues that NSA's metadata program is bad because the government will use the information to target people for their political views and because the program invites mission creep.
His solution is to leave the metadata in the hands of the phone company. But really, what good would that do?
Suppose that, as Randy fears, Congress wakes up one day and decides to use phone metadata to suppress dissent and gun ownership across America. The fact that the data is stored in four or five phone companies' databases rather than NSA's will forestall the Dark Night of Fascism for, oh, about 90 minutes. For the sake of that speedbump, we should give up our ability to identify cross-border terror plots?
Randy's solution to that problem is to overrule a line of Supreme Court cases (Smith v. Maryland) holding that no one has a reasonable expectation of privacy in information they've disclosed to a third party. With Smith v. Maryland set aside, the government would need a search warrant to see the metadata.
Overruling existing Supreme Court precedent is a law professor's prerogative, but the rest of us don't have to go along. And in fact the Smith v. Maryland doctrine makes sense, especially compared to Randy's solution. We all learned no later than the third grade that secrets shared with another are not really secrets. They can be revealed at times and in ways we never expected. It hurts, but it's a fact of life.
Randy's solution is a fiction; he wants the courts to deny the facts of life and pretend that we still control information we willingly gave away. And considering how many slippery slopes Randy has to invoke to make metadata collection scary, he hasn't given much thought to the slipperiness of the doctrine he wants to create. Data gets cheaper to collect and to share all the time. Exactly which kinds of data would he leave under our fictional control after we have given it up, and for how long?
After the fictionalizing and overruling is done, though, all Randy achieves is to require a warrant before the government can see phone metadata. That rule would break the NSA program, for sure, and it would recreate the gap that existed on September 10.
So what benefit offsets that high cost? After all, courts too are government agencies staffed by human beings; the ex parte process of obtaining warrants is hardly a guarantee against the Dark Night of Fascism. It's just a bigger speedbump – and probably a less effective protection than we now have.
The metadata program, unlike a warrant, is reviewed by members of both parties, both houses of Congress, and the judiciary. It includes audits and oversight that search warrants never get.
It seems to me that Randy's approach is the equivalent of knocking down a house because the roof may leak some day and erecting in its place a lean-to made of sticks.
Ars Technica has published an article highlighting a recently declassified FIS court opinion. The opinion says in a footnote that "NSA expects that it will continue to provide on average approximately three telephone identifiers per day to the FBI." Earlier opinions say NSA is providing two identifiers a day. The opinions stop putting a number on NSA's referrals in 2009. This story is accurate up to a point, but it then veers off into weirdness and paranoia:
Some experts speculated that this system of the NSA tipping off the FBI may be an unusual arrangement—analogous to the NSA’s giving information to the Drug Enforcement Agency to prosecute criminal cases. “I am not sure it tells us anything new but rather adds more confirmation to a widely suspected and occasionally confirmed technique of law enforcement following intelligence leads and then reverse-engineering a paper trail to use in court," Fred Cate, a law professor at Indiana University, told Ars. ... However, others pointed out that in the absence of further information as to how exactly the NSA’s information is sent to the FBI, and under what circumstances, it’s impossible to know precisely what’s going on. “Furthermore, given how broadly it's possible to define the word ‘tip,’ we have no information on how useful those thousand tips were,” Brian Pascal, a research fellow at the University of California Hastings College of the Law, told Ars. “Both intelligence and law enforcement organizations receive many, many tips, and a large part of their job is separating the signal from the noise. “As far as parallel construction goes, the only thing I can say for certain is that if one records a sufficiently large number of dots, then it's possible to connect them to draw any number of pictures. This is not always the result of nefarious intentions—it can happen unintentionally too. Think about all the people who were improperly placed on watchlists due to conclusions reached by some opaque algorithm.”
Huh? We don't need any of this speculation to understand why the FBI is getting tips from NSA. We just need a refresher on how the 215 program works: NSA gets a suspicious number in the US and does a link analysis to see what other numbers might be tied to that number and are themselves suspicious. If it finds a suspicious set of numbers, NSA gives them to the FBI to check out.
This means, of course, that NSA doesn't actually know even the names that are associated with the metadata it is analyzing, a fact that a fair-minded observer might be expected to know, since it's part of NSA's explanation for why the metadata program isn't "spying" on all Americans.
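The contact chaining described above is, in essence, a bounded breadth-first search over a graph of phone numbers. Here's a minimal sketch in Python; the toy call records, the `contact_chain` function, and the hop limit are all illustrative assumptions of mine, not a description of how NSA's systems actually work:

```python
from collections import defaultdict

def contact_chain(calls, seed, max_hops=3):
    """Return all numbers within max_hops of the seed in the call graph.

    `calls` is an iterable of (caller, callee) pairs -- a toy stand-in
    for telephone metadata; note there are no names and no call content.
    """
    graph = defaultdict(set)
    for a, b in calls:
        graph[a].add(b)
        graph[b].add(a)  # treat each call as an undirected link

    frontier, seen = {seed}, {seed}
    for _ in range(max_hops):
        # expand one hop: every number linked to the current frontier
        frontier = {n for num in frontier for n in graph[num]} - seen
        seen |= frontier
    return seen - {seed}

# Toy chain: 0100 called 0101, who called 0102, and so on.
calls = [("555-0100", "555-0101"),
         ("555-0101", "555-0102"),
         ("555-0102", "555-0103"),
         ("555-0103", "555-0104")]
print(sorted(contact_chain(calls, "555-0100", max_hops=3)))
# → ['555-0101', '555-0102', '555-0103']  (0104 is four hops out)
```

The point of the sketch is the scoping: the analysis starts from one suspicious identifier and stops at a fixed number of hops, which is why the output is a short list of numbers rather than the whole database.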
In fact, Ars Technica doesn't seem to realize that the FBI tips it's getting exercised about have been part of the public explanation of the 215 program for months. Despite all the hyperventilating about how NSA's search of three hops' worth of calls could lead to scrutiny of millions of subscribers, it turns out that, at its peak, the program was leading to scrutiny of maybe a thousand actual subscribers a year. I say "at its peak" because we also know that the number of tips to the Bureau has declined since 2007. By 2012, the number was down to 500 tips a year.
So, really, the headline should be "NSA cut surveillance by 50% before Snowden leaks."
But I won't hold my breath waiting for that entry to appear on Drudge.
The Committee on Foreign Investment in the United States, or CFIUS, reviews foreign investments for national security risks. It is now beyond doubt that Chinese investment is getting much closer scrutiny from CFIUS. A total of ten transactions failed to survive review in 2012, according to a just-released Treasury report. That may not sound like a lot, but in 2011, only two deals failed to make it through the process. At the time, two was a lot of deals to kill in a year, since CFIUS has sometimes gone a decade or more without deep-sixing any. When in government, I had a reputation as a CFIUS security hawk, but I doubt I ever recommended killing more than two deals in a year.
This crowd is tough.
Matt Blaze, a well-known public cryptographer and NSA critic (but I repeat myself), offered what seemed like a modest concession in the relentless campaign against NSA intelligence gathering:
The NSA's tools are very sharp indeed, even in the presence of communications networks that are well hardened against eavesdropping. How can this be good news? It isn't if you're a target, to be sure. But it means that there is no good reason to give in to demands that we weaken cryptography, put backdoors in communications networks, or otherwise make the infrastructure we depend on be more "wiretap friendly". The NSA will still be able to do its job, and the sun need not set on targeted intelligence gathering.
Don't get me wrong, as a security specialist, the NSA's Tailored Access Operations (TAO) scare the daylights out of me. I would never want these capabilities used against me or any other innocent person. But these tools, as frightening and abusable as they are, represent far less of a threat to our privacy and security than almost anything else we've learned recently about what the NSA has been doing.
TAO is retail rather than wholesale.
A day later he revealed just how modest this olive branch was, making clear that he wants to take away the NSA's best hacking tools. He told the Washington Post today that NSA should be required to surrender any undiscovered vulnerability it finds:
Among the weapons in the NSA’s arsenal are “zero day” exploits, tools that take advantage of previously unknown vulnerabilities in software and hardware to break into a computer system. The panel recommended that U.S. policy aim to block zero-day attacks by having the NSA and other government agencies alert companies to vulnerabilities in their hardware and software. That recommendation has drawn praise from security experts such as Matt Blaze, a University of Pennsylvania computer scientist, who said it would allow software developers and vendors to patch their systems and protect consumers from attacks by others who may try to exploit the same vulnerabilities.
Matt tries to square that circle by saying that NSA can keep exploiting the vulnerability at the same time that it reports it. So at least we'll have good intelligence on really stupid targets who don't update their software. That's some compromise.
The zero-day problem is a thorny one, to be sure. There are times when it's in the country's interest to patch rather than exploit a hole, but a policy requiring that holes always be patched will not stop hacking by anyone other than NSA.
Sen. Bernie Sanders (I-VT) has written a letter to NSA's director, asking whether the agency has spied on members of Congress. It sounds like he's uncovered a scandal, until you read the fine print. It turns out that Sen. Sanders is simply asking whether NSA collects Americans' telephone metadata, and every sentient American already knows that answer: NSA's program collects metadata for all US calls. So Sen. Sanders's letter isn't an inquiry, it's a stunt.
The Guardian is an enthusiastic participant in the stunt, with Spencer Ackerman writing that NSA "did not deny collecting communications from legislators of the US Congress." Well, duh. Unfortunately, it looks as though Ted Cruz, who so far has avoided the worst fever swamps of NSA paranoia, also fell for the stunt, tweeting "@SenSanders asks ? millions of Americans would like answered: Are any law-abiding citizens safe from NSA spying?"
At the risk of being repetitive, Sen. Cruz, we've all known the answer for months. NSA's 215 program collects all domestic call metadata, and it protects all that data by requiring that any search of the data be based on a reasonable suspicion of terrorism. All means all. All Americans' metadata is collected. All Americans' privacy is protected by the minimization requirements. Sen. Sanders's stunt adds precisely nothing to what we know about the program, or to the debate.
But as long as the press covers the stunt as though it were a story, I think we can predict the next batch of letters that Sen. Sanders will send to NSA.
The New Yorker has a remarkably thought-provoking article on what some call the "neurobiology" of plants. That's a deliberately edgy way of pointing out just how much communicating and sensing and adapting plants do, all without anything resembling a brain. Some samples:
Plants have evolved between fifteen and twenty distinct senses, including analogues of our five: smell and taste (they sense and respond to chemicals in the air or on their bodies); sight (they react differently to various wavelengths of light as well as to shadow); touch (a vine or a root “knows” when it encounters a solid object); and, it has been discovered, sound. In a recent experiment, Heidi Appel, a chemical ecologist at the University of Missouri, found that, when she played a recording of a caterpillar chomping a leaf for a plant that hadn’t been touched, the sound primed the plant’s genetic machinery to produce defense chemicals. Another experiment, done in Mancuso’s lab and not yet published, found that plant roots would seek out a buried pipe through which water was flowing even if the exterior of the pipe was dry, which suggested that plants somehow “hear” the sound of flowing water....
Mimosa pudica, also called the “sensitive plant,” is that rare plant species with a behavior so speedy and visible that animals can observe it; the ... mimosa also collapses its leaves when the plant is dropped or jostled. Gagliano potted fifty-six mimosa plants and rigged a system to drop them from a height of fifteen centimetres every five seconds. Each “training session” involved sixty drops. She reported that some of the mimosas started to reopen their leaves after just four, five, or six drops, as if they had concluded that the stimulus could be safely ignored. “By the end, they were completely open,” Gagliano said to the audience. “They couldn’t care less anymore.”
Was it just fatigue? Apparently not: when the plants were shaken, they again closed up. “ ‘Oh, this is something new,’ ” Gagliano said, imagining these events from the plants’ point of view. “You see, you want to be attuned to something new coming in. Then we went back to the drops, and they didn’t respond.” Gagliano reported that she retested her plants after a week and found that they continued to disregard the drop stimulus, indicating that they “remembered” what they had learned. Even after twenty-eight days, the lesson had not been forgotten. She reminded her colleagues that, in similar experiments with bees, the insects forgot what they had learned after just forty-eight hours. ...
Time-lapse photography is perhaps the best tool we have to bridge the chasm between the time scale at which plants live and our own. This example was of a young bean plant, shot in the lab over two days, one frame every ten minutes. A metal pole on a dolly stands a couple of feet away. The bean plant is “looking” for something to climb. Each spring, I witness the same process in my garden, in real time. I always assumed that the bean plants simply grow this way or that, until they eventually bump into something suitable to climb. But Mancuso’s video seems to show that this bean plant “knows” exactly where the metal pole is long before it makes contact with it. Mancuso speculates that the plant could be employing a form of echolocation. There is some evidence that plants make low clicking sounds as their cells elongate; it’s possible that they can sense the reflection of those sound waves bouncing off the metal pole.
Equally sophisticated are plants' chemical communication systems:
Since the early nineteen-eighties, it has been known that when a plant’s leaves are infected or chewed by insects they emit volatile chemicals that signal other leaves to mount a defense. Sometimes this warning signal contains information about the identity of the insect, gleaned from the taste of its saliva. Depending on the plant and the attacker, the defense might involve altering the leaf’s flavor or texture, or producing toxins or other compounds that render the plant’s flesh less digestible to herbivores. ... Several species, including corn and lima beans, emit a chemical distress call when attacked by caterpillars. Parasitic wasps some distance away lock in on that scent, follow it to the afflicted plant, and proceed to slowly destroy the caterpillars.
I can't help tying these capabilities to the Next Big Thing in computing: the Internet of Things, more properly thought of as mass deployment of sensors. In many ways, that's a capability in search of an application. It's easy to wire your house so the network knows what room you're in, but really, why bother? In contrast, sensors that can eavesdrop on plant communications could have lots of applications. Farmers can wait to apply pesticides until their crop tells them which pests are attacking which plants. Hunters can ask the forest where deer congregate to do their browsing. Maybe the grass in minefields is already broadcasting the location of the explosives its roots are avoiding.
Lots of these capabilities could be built into smart phones, perhaps with sensor attachments. Even more sophisticated work could be done with special-purpose devices mounted on drones or just on the Google Street View car. It's nice to have pictures of houses along the road, but imagine Google Plant View, a map of everything the plants know about a neighborhood: soil types and pH, homes with toxic molds, the progress of invasive insects, herbivores, and plants.
Of course we'd have to be able to translate plant volatiles into English. Or maybe Italian, since the "poet-philosopher" of the field is an Italian researcher by the name of Stefano Mancuso; and he has already begun to assemble a dictionary:
His somewhat grandly named International Laboratory of Plant Neurobiology, a few miles outside Florence, occupies a modest suite of labs and offices in a low-slung modern building. ... Giving a tour of the labs, he showed me ... a chamber in which a ptr-tof machine—an advanced kind of mass spectrometer—continuously read all the volatiles emitted by a succession of plants, from poplars and tobacco plants to peppers and olive trees. “We are making a dictionary of each species’ entire chemical vocabulary,” he explained. He estimates that a plant has three thousand chemicals in its vocabulary, while, he said with a smile, “the average student has only seven hundred words.”
The dubious achievement awards, also known as the Privies, were dominated by officials of the Obama Administration.
The awards are a light-hearted way of expressing skepticism about the effort to write evolving notions of privacy into law. Because concepts of what is private change rapidly while laws remain on the books for decades, unintended consequences are common. Outmoded privacy laws are often misused to protect the powerful or are invoked hypocritically to achieve other ends, and judicial applications of privacy statutes often make no sense to ordinary people, whose concepts of privacy have evolved faster than the law.
The winners of the 2014 Privies exemplify all of these flaws.
Health and Human Services Secretary Kathleen Sebelius was voted Privacy Hypocrite of the Year for imposing harsh penalties on private companies whose systems for handling personal health data had security weaknesses -- the same kind of weaknesses that HHS ignored when it rolled out the deeply flawed healthcare.gov site.
Agriculture Secretary Thomas Vilsack, meanwhile, won the prize for Worst Use of Privacy Law to Protect Power and Privilege. Vilsack's Agriculture Department invoked privacy law to prevent the New York Times from checking the names and addresses of people who made questionable claims for federal funds in the “Pigford” scandal. Since media attention to fraud in the program would have cast doubt on the department's stewardship of taxpayer funds, most voters thought the government was actually applying a common government understanding of privacy: "Privacy Law Protects You From Anything That Might Embarrass Me."
Finally, in the one category where no executive branch candidates were nominated, the award for Dumbest Privacy Case of the Year went to U.S. District Court Judge Lucy Koh for her opinion opening the door to claims that all 425 million users of Gmail are victims of wiretapping by Google (and quite possibly are themselves aiding and abetting wiretapping when they send mail to others). The decision also hints that spam filters are themselves a form of wiretapping in the absence of detailed consent procedures. One decision, three remarkably dumb results.
The Privies are based on the votes of privacy professionals, who know the dirty secrets of privacy law better than most, but the general public was also invited to vote for a People's Choice award in the same categories. Kathleen Sebelius and Tom Vilsack dominated the voting among both privacy professionals and the general public. Judge Koh was the clear favorite of privacy professionals, but she was edged out in popular voting by the Boston Police Department, which invoked wiretapping law to threaten a citizen who recorded his conversation with a department press spokesman and posted it on the Internet.
The full slate of nominations can be found here. The final results of balloting are listed below.
Privacy Hypocrite of the Year
Kathleen Sebelius, US Secretary of Health and Human Services
Viviane Reding, European Commissioner for Justice, Fundamental Rights, and Citizenship
Francois Hollande, President of France
James Sensenbrenner, U.S. House of Representatives
Angela Merkel, Chancellor of Germany
Worst Use of Privacy Law to Protect Power and Privilege
Tom Vilsack, Secretary of Agriculture
China's Privacy Law
Max Mosley, former president of the Fédération Internationale de l'Automobile
Spain’s Data Protection Agency (Agencia Española de Protección de Datos)
Dumbest Privacy Case of the Year
Gmail Wiretapping Claims (Hon. Lucy Koh)
FTC v. LabMD (Federal Trade Commission)
Joffe v. Google (Hon. Jay Bybee, Ninth Circuit)
Boston Police Department Wiretap Prosecution (Commissioner William Evans)
I'm shocked to discover that the august Ninth Circuit has been tampering with the balloting for the Privies, perhaps hoping to save its own Judge Bybee from winning the award for "Dumbest Privacy Case" of 2014. The nomination was for a decision that exposed Google to liability for gathering wi-fi signals while driving by on the street.
As we noted in the nomination, "the law exempts the capturing of radio broadcasts and publicly accessible communications; there's not much doubt that wi-fi uses radio waves and can be accessed by the public if it's not secured. But Judge Bybee of the Ninth Circuit wasn't deterred by either of the barriers to holding Google liable. He decided that radio communications are only those things we hear on the AM-FM dial. As for being publicly accessible, he writes, why that's ridiculous: if you listened to wi-fi signals on an AM radio, 'they would sound indistinguishable from random noise.'"
Now Judge Bybee seems ready to admit that he didn't really think that whole "how would the signals sound on an AM radio?" thing through. Responding to the imminent threat of a Privy Award (and Google's rehearing petition), the panel has modified the opinion to make it less ridiculous. It has granted rehearing and dropped the entire discussion about what is and is not publicly accessible, leaving the definition of "publicly accessible" to be argued before the district court in the first instance.
There are still some tight races, whether in voting by the public or by privacy professionals. But there are differences between the two groups. The most interesting difference concerns the crucial vote for "Privacy Hypocrite of the Year." Among the public, the top two contenders are Rep. James Sensenbrenner, for deliberately skipping classified briefings and then complaining that he wasn't told about NSA's classified program, and Sec. Kathleen Sebelius, for launching healthcare.gov without any of the security features her Department has penalized private health companies for failing to implement.
But among privacy professionals, the race for top honors is between Secretary Sebelius and a little-known Brussels bureaucrat, European Commissioner (and Vice President) Viviane Reding, who is notorious for trying to regulate US intelligence activities while admitting that she has no authority to regulate European intelligence agencies.
The votes of privacy professionals are weighted more heavily precisely to give obscure but outrageous abusers of privacy law a fair shot at winning, so privacy professionals with strong views on whether Commissioner Reding deserves the prize need to weigh in now.
You have only 24 hours to make your vote count.
Quick reactions to a couple of books I had a chance to read over the Christmas break.
I can highly recommend Company Man by John Rizzo. Rizzo was one of the first lawyers at the CIA, and he recounts a thirty-year career there with grace and a remarkable absence of rancor, even though he was denied the ultimate promotion -- to General Counsel -- after a highly politicized confirmation hearing. (His offense was asking the Justice Department whether certain harsh interrogation techniques were legal, and not selling out the CIA officers who relied on Justice's advice by disavowing it when he got to the hearing.)
Rizzo had a ringside seat at all the most dramatic political events involving the CIA from the 1970s to the Obama Administration. He brings self-deprecating wit and a lot of human insight to his portrayal of these events and the CIA directors he helped guide through them. It's available on January 5, 2014. (Disclosure: I got an early copy because John and I have been friends and colleagues for a long time. But in the interest of full disclosure, I have no incentive to overpraise his book, since I'm afraid it's actually better than mine.)
In contrast, The Frackers by Gregory Zuckerman was a disappointment. The book is getting praise from the right blogosphere because it tells the story of fracking straight, with only occasional flaming faucets and with considerable attention to the remarkable contribution that the frackers have made to the nation's energy independence. I tend to agree that that's the right take on the industry, but as a read, the book is benefiting from conservative affirmative action. It's long, dense, and full of characters whose stories are admirable but pretty much indistinguishable. Wait, which founder nearly went bankrupt and which one was fired after hitting a slump? Which one bet big on shale in Texas? North Dakota? Pennsylvania? Who ended up making his wife a very rich divorcee and whose son developed a drug problem? And why do I care? The book would have been better with fewer stories and a bit more differentiation among them.
Voting for the 2014 Dubious Achievements in Privacy Law is almost done, and the race is heating up. Who used privacy law most egregiously to serve power and privilege? There are plenty of candidates, but two lead the pack this year: On the one hand, the Chinese government, which adopted a privacy law and promptly brought criminal privacy charges against a Western investigator examining corporate misdeeds. And on the other, the Obama administration's Agriculture Department, which cited privacy grounds in refusing to name any of the beneficiaries of the notoriously fraud-ridden "Pigford" settlement.
But if your favorite was a man who could afford both a naked five-hour, five-hooker sadomasochistic orgy and a litigation campaign to clear his name by proving that it was not a naked five-hour, five-hooker sadomasochistic orgy with a Nazi theme, well, Max Mosley isn't quite out of the running yet. With a surge of support, his privacy law campaign to force the Internet to forget pictures of his naked five-hour etcetera still could qualify as the worst use of privacy law to protect the privileged.
If you're sure you know which of the candidates is abusing privacy law most egregiously to serve the powerful, and you haven't already voted, now is the time to review the candidates and then to cast your ballot.
Usually it takes a couple of stories. First, foreign officials condemn reports that NSA has gathered intelligence on their government. Then, later, they have to admit that, well, yes, they too sometimes spy on the United States.
But Israel has taken chutzpah to new heights -- simultaneously demanding that the United States stop spying on Israel and that it release the guy caught spying on the United States for Israel:
Senior Israeli officials on Sunday demanded an end to U.S. spying on Israel, following revelations that the National Security Agency intercepted emails from the offices of the country’s top former leaders.
It was the first time that Israeli officials have expressed anger since details of U.S. spying on Israel began to trickle out in documents leaked by former NSA contractor Edward Snowden. The scandal also spurred renewed calls for the release of Jonathan Pollard, a former American intelligence analyst who has been imprisoned in the U.S. for nearly three decades for spying on behalf of Israel.
“This thing is not legitimate,” Israeli Intelligence Minister Yuval Steinitz told Israel Radio. He called for both countries to enter an agreement regarding espionage.
“It’s quite embarrassing between countries who are allies,” Tourism Minister Uzi Landau said. “It’s this moment more than any other moment that Jonathan Pollard (should) be released.”
Unfortunately, while voting for the 2014 Privacy Hypocrite of the Year is still open, it is too late for Israel to overcome the lead of nominees like Kathleen Sebelius, Jim Sensenbrenner, and Francois Hollande.