Thursday, April 9, 2015

Why Don't Surveillance State Defenders Seem To Care That The Programs They Love Don't Work?

As the pressure finally mounts over renewing Section 215 of the PATRIOT Act (and the mass surveillance programs enabled by the law), some interesting questions are being raised, such as why the intelligence community doesn't seem to care whether its programs actually work. That link takes you to a great article by former FBI agent (and now big-time defender of civil liberties) Michael German, investigating the issue in the context of cybersecurity legislation. Here's just a snippet, in which he notes that basically everyone agrees these programs won't help at all, and yet some are still pushing for them:

There is a strong argument for ending these programs on the basis of their high cost and lack of effectiveness alone. But they actually do damage to our society. TSA agents participating in the behavioral detection program have claimed the program promotes racial profiling, and at least one inspector general report confirmed it. Victims unfairly caught up in the broader suspicious activity reporting programs have sued over the violations of their privacy. The Privacy and Civil Liberties Oversight Board concluded the telephone metadata program violated the Electronic Communications Privacy Act and raised serious constitutional concerns.

The Cybersecurity Information Sharing Act passed by Senate Intelligence Committee last week is yet another example of this phenomenon. Experts agree that the bill would do little, if anything, to reduce the large data breaches we’ve seen in recent years, which have been caused by bad cyber security practices rather than a lack of information about threats. If passed by the full Congress, it would further weaken electronic privacy laws and ultimately put our data at greater risk. The bill would add another layer of government surveillance on a U.S. tech industry that is already facing financial losses estimated at $180 billion as a result of the exposure of NSA’s aggressive collection programs.

He also details some of the over-inflated claims made about other surveillance programs in the past -- all of which were later shown to be false. But the article doesn't really attempt to answer the question; it just raises it. In the past, we've noticed that the entire concept of a cost-benefit analysis seems antithetical to the way the surveillance state does business. But why is that?

There are a few theories. The most obvious is the one put forth by the ACLU's Kade Crockford a few months ago, highlighting a statement by former FBI assistant director Thomas Fuentes in The Newburgh Sting, a documentary about the FBI's fake plots, in which he basically admits that keeping the public scared is how you get your budgets approved:
If you’re submitting budget proposals for a law enforcement agency, for an intelligence agency, you’re not going to submit the proposal that ‘We won the war on terror and everything’s great,’ cuz the first thing that’s gonna happen is your budget’s gonna be cut in half. You know, it’s my opposite of Jesse Jackson’s ‘Keep Hope Alive’—it’s ‘Keep Fear Alive.’ Keep it alive.
In other words, bureaucratic momentum leads the surveillance state to just keep pushing the "fear" story, and nobody inside it wants anyone to look at whether that story is true or whether the costs that come with it make sense. That's certainly supported by the fact that many of the earliest hypers of "cybersecurity" were those who stood to profit handsomely from it (and have done so).

In our recent podcast with Barry Eisler (himself a former CIA agent), he suggested a similar but slightly different rationale, pointing to the "streetlight effect" -- named for the old joke about a drunk searching for his lost keys under a streetlight while admitting he actually lost them somewhere else. Asked why he's looking there, the drunk answers that "that's where the light is." In other words, the surveillance state collects all this useless data because they can -- and the costs associated with it (not just the direct costs, but all the damage done to US companies, trust in government and more...) don't really matter.

There's probably a combination of both of those factors at work here, but I'll toss another one onto the list that may be at work as well: the CYA theory. That is, most of the people in the surveillance state know pretty damn well that these programs are useless. But they don't want to be the ones left holding the bag when the music stops on the next big attack, and the press and politicians are pointing at them and asking why they didn't do "X" to prevent whatever horrible thing just happened. So those officials need to "cover their ass" by being able to claim that they did everything possible -- and that always means more surveillance, because they don't want to be told that they could have gotten some information but didn't (even if having more information actually obscures finding the important information).

In other words, many of those involved are doing a cost-benefit analysis, not for the safety of the country or national security, but for their own reputations. And that's how bad policy gets made. They don't do the right thing because no one wants to stand up after some sort of attack or problem and say, "Well, we didn't know those bad people were doing this because we didn't want to violate everyone's rights." That just doesn't play well, unfortunately.

That's why the point Bruce Schneier has been trying to make for years is so important: we need to bring society back to a place where people accept that there's some risk involved in everything. That's the nature of being alive. If we can rationally come to terms with that fact, people won't need to freak out so much. But, unfortunately, it doesn't seem like that societal shift is going to happen any time soon.
