#PrivacySeppuku: game-theoretic exploration of the asymmetric power of "no" in ephemeral, privacy-centric markets

As we were part of the development of the original "corporate seppuku" pledge that Cryptocloud incorporated into their privacy policy, back in 2008, we've a good understanding of what the motivation was behind the pledge. Actually, it's more than that. We pushed strongly for the inclusion of this language in their founding framework - now known as the
Privacy Seppuku pledge, as others have adopted it - because we think it's a high-leverage, low-cost way for the entire community to create a resilient, reliable bulwark against certain forms of mass surveillance. The really creepy, destructive, trust-negating sorts.
Since then, we've always intended to more fully expand our thinking behind the issue, because on the surface it seems either trite, or dumb, or perhaps both: shut the whole damned company down? What possible good could that do? I mean, sure you might stop the goons from getting at some certain individuals - this time. But now you're "out of the game," and you've just removed an otherwise-useful service from the market - taking it away from everyone else out there, too.
This is an understandable criticism, but it's totally wrong.
We say that because we've always envisioned the Privacy Seppuku issue as being of use
only when it gains broader acceptance and visibility. As a one-off for one company - Cryptocloud - it's at best some marketing polish that nobody's likely to notice unless it's actually needed in a realtime shutdown. Then, as we've since seen, people do notice. They notice very well. And, with Lavabit's leadership coming to the fore, now the topic is back on the move and it's time to get this "essay" done. Alas, this won't be a nicely-footnoted, well-polished, academic document - we're pushing to get it out in a timely way, and perhaps someday (ha) we'll have one of our folks find space in her life to publish the "proper" version. The perfect is the enemy of the complete, and we choose completion over perfection. Apologies for the rough edges involved - particularly to academic readers, who will find our assertions lacking in citations and references to the wider literature(s) relevant to this topic.
Game theory involves analytic tools that embrace dynamic, multi-party interactions that are temporally fluid. That is, game theory is used to model things that involve a bunch of agents interacting over a period of time. A big part of game theory's strength is that it doesn't give needless primacy to any one participant's actions: all actions impact all other interactions, and the way the whole thing flows depends on the sum total of the interactions, not merely one decision by one agent. In more expanded versions, each agent can - and does - make predictions about the likely decisions of
other agents in the event they themselves make certain choices: "if I do this, she'll do that, and then I'll have to do that other thing... which I don't want to do - so I'll do something else instead, so she'll probably respond by doing that other thing, and then I get what I want, yay!"
Chess is game theory in motion, of course: a pawn is sacrificed to get the opponent to think an effort to castle is in process... only it's not, it's just bait to open up a different attack. And so on. This stuff is part of our everyday lives, as social mammals. Game theory simply wraps a nice, powerful mathematical framework around it and allows it to be deployed systematically and across lots of data sets.
Game theory is often used for interactions that take place over time ("n-iteration scenarios," for the buzzword-hungry). A does this, B reacts, then A reacts, then B reacts... and in parallel channels, A is predicting all those future B reactions, as is B of A. I know you know I know that you know that I know - that sort of thing. This can get iterative, and recursive, very quickly; it can also eat computational cycles, and turn into NP-complete, intractable problems with little warning.
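That lookahead chain ("if I do this, she'll do that...") is easy to make concrete. Here's a toy one-step-lookahead game in Python - the action names and payoff numbers are invented purely for illustration, not drawn from any real scenario:

```python
# Toy two-player, one-step-lookahead game. All action names and
# payoff numbers here are invented for illustration.

PAYOFFS = {
    # (A's action, B's action): (payoff to A, payoff to B)
    ("press", "comply"):   (3, -2),
    ("press", "shutdown"): (-1, 1),
    ("wait",  "comply"):   (0, 0),
    ("wait",  "shutdown"): (0, 0),
}

def best_response_b(a_action):
    """B's predicted reply: whichever move maximizes B's own payoff."""
    return max(("comply", "shutdown"),
               key=lambda b: PAYOFFS[(a_action, b)][1])

def choose_a():
    """A evaluates each of its moves against B's predicted best
    response ("if I do this, she'll do that") and keeps the best one."""
    return max(("press", "wait"),
               key=lambda a: PAYOFFS[(a, best_response_b(a))][0])

a = choose_a()
b = best_response_b(a)
print(a, b, PAYOFFS[(a, b)])  # → wait comply (0, 0)
```

With these made-up payoffs, A's lookahead shows that pressing triggers a shutdown that leaves A worse off than doing nothing at all - so A waits. One ply of lookahead; real analyses recurse this many levels deep, which is exactly where the computational blowup comes from.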
The PS Pledge (for short) takes place in an n-iteration world, where there's a whole series of interactions between "them" and "us." That's obviously a false dichotomy, but it's a starting point: there's the spies, and those of us who want to remain secure against spying. We interact. What happens?
Here's where things stand, right now:
The spies have compelled companies not only to give up historical data on customers, but also to provide ongoing, realtime, continuous tracking of current activities. Single-target examples, like Hushmail's injection of bad javascript into web-based encryption at the DEA's insistence (Hushmail is Canadian, btw - this isn't a uniquely American issue, despite hopeful fantasies on the part of some non-US citizens), have been supplanted by broader deployments of realtime surveillance against mass populations: every indication is that major US telco and internet companies have become, in practice, realtime spyware machines, feeding current data into the NSA's massive databases, across the board, with no disclosure to customers. Indeed, many have lied bald-faced about the alleged "security" of their #snitchware - see Microsoft lying about Skype, or Apple and the iMessage disinformation scheme run alongside the NSA.
As a result, users of network services now have a reasonable concern that they are being spied on by their tech tools - not only the ones already "outed" as snitchware, but also those claiming vehemently not to be such. Worse, because the court orders compelling these activities are themselves secret and require their targets to remain secret or face contempt of court charges (possible federal felonies, in the U.S.), silence is not good news. Not at all. We're all sort of cringing and cowering, unsure who to trust - or whether to trust anyone at all. But, per Schneier, trust is the foundation of all digital security - indeed, of all security... and of all societies, as well. We trust nobody, and we're hamstrung in the process.
For the surveillance overlords - "them" - this is an excellent outcome. Everyone is afraid they're being spied on, all the time. It's Bentham's Panopticon, made real. Worldwide. Without confidence that privacy tools can actually keep them private - how can we be sure? - many people just give up trying, and use stuff that they know is #snitchware, but at least it's shiny and pretty and makes nice TeeVee ads for us to watch, or whatever. Dissidents and activists are hampered in communicating securely; when they do, they're still holding back, because... you never know. Everyone self-censors. Nobody wants to criticize the madmen in power - publicly or privately - because we all know we might get a late-nite knock on the door. Or a SWAT team smashing the door in, guns drawn, shooting our dogs and vandalizing our homes. Or, a trip via extraordinary rendition to a far-off place where torture is on the daily menu. Or Guantanamo.
Now, let's play out a scenario with Privacy Seppuku mixed in:
Folks worried about their security shift their support over to companies that have made a public statement in support of privacy seppuku: they'll shut down before becoming ongoing, dragnet, secret components of Alexander the Geek's surveillance regime. The spies - "they" - see this, and now they have to make some choices...
They go after a company that seems juicy and ripe for plucking: let's say it's Lavabit. Secret order from a secret court, compelling secrecy from the target. Lavabit says (we're paraphrasing a bit here): go fuck yourselves, and shuts down. Servers wiped. Code deleted (or archived in encrypted offsite containers). Here's what you get, NSA: you get nothing. Oh, and we're going public with our shutdown. No, we're not going to disclose your double-dog-secret NSA super-secret court order from your secret rubber-stamp court. Nope. We're just shutting down, so we don't have to betray our customers. Read into it what you will. Want to charge me with contempt of court? Go for it: we'll go in front of a real judge, not some Republican clown on a kangaroo court packed with John Roberts' fascist friends. We'll get worldwide coverage of the whole thing. Not very secret, is it?
Yikes. That didn't go over so well.
So maybe "they" decide, what the hell: we'll make an example out of one of these crazy crypto-hippie do-gooder troublemakers. We'll make his life hell, and that'll teach 'em. Well, to do so they'll have to dig deep - fake up something, a bit of extra-legal harassment on the side. All along, you're going to have the press - the real press, not the usual neutered lapdogs, and also the people's press (twitter and reddit and whatnot) - following along. Maybe his car has an "accident" all by itself, goes up in a ball of fire. Poof. There, you hippie scum - try that again. But wait... now we've kicked a hornet's nest. Congress is investigating. Front-page stories that even Fox News can't ignore. Pressure. Heat. Maybe someone cracks, and leaks the facts to a Glenn Greenwald... and maybe, even, some of us end up in prison for our crimes.
And even if we beat the shit out of that one guy, what happens if there's a dozen more? A hundred? A thousand? Can we have them all get into inexplicable "car accidents?" Not really practical. Will smashing one really stop everyone else? No. In fact...
That one that went thru with the seppuku? She'll likely have a new service up and running in a few days or weeks. The customers who got dinged by the shutdown? They'll all get up and running on her new service. This is all 1s and 0s, remember? You don't have to demolish a car manufacturing plant, after all - you're just wiping some VMs and reincorporating elsewhere. Lease new machines. Call it "lavabutt" on the new corporate docs, in Andorra. Sign on to the Privacy Seppuku pledge, as lavabutt, again. Off you go. Do you think it'll be hard to get customers - old ones migrated over, and new ones alike? Think on that: a privacy company that shut down rather than be #snitchware... do you trust them, now?
Yep, we do. Actions speak louder than words.
Because, we forget: a company is just people (and not the Soylent Green kind). These teams, they're (mostly) small. Facebook is a behemoth, but let's not kid ourselves: we won't see Facebook on this list. Most of our teams are a handful of folks - people, with names and email addresses and twitter handles and stuff. When we "shut down" a company, we're still alive! This isn't real seppuku, the kind where you eviscerate yourself (read: slit your stomach with a sharp sword, cut your abdominal muscles, and watch your intestines fall out and splatter across the floor in front of you). These are just damned companies: pieces of paper, words. It's not even the code: the code can go where it wants, particularly if it's opensourced.
In the second scenario, what have "they" learned? That shutdown just made a (temporary) martyr of someone - or a team - and that team's now earned serious credibility to start up elsewhere. Maybe the same service category, maybe something new. Whatever the case,
they're not vanishing into thin air - and now they've got a pretty big gold star in terms of their credibility in standing up against surveillance pressure.
Whack-a-mole, on steroids... because even the moles you whack come back - smarter, stronger, higher visibility.
Spooks aren't dumb - far from it. They do this kind of analysis - hell, they hire some of the best game-theoretic minds in the world, and always have. Local cops might be power-drunk and unable to see how their actions play out over time; the NSA isn't any of that. They have whole buildings full of very smart people paid good money to think about this stuff. They won't get it wrong.
And the outcome is simple: if the Privacy Seppuku concept spreads, it becomes
useless to target companies on the pledge list! You won't get what you want, you'll make some heroes who go out and do bigger stuff next, you'll out yourselves as dangerous thugs, your "secrecy" is shot to hell, and after all the effort involved you end up backwards from where you were before. That's the scenario, it's how it plays out. There's really no alternative scenario.
No, it doesn't just drive everyone to companies that don't support the pledge: that would assume that companies who do seppuku themselves into an early grave don't simply get reborn elsewhere... and that's wrong. They will get reborn - same team or new team, it matters not. They'll be reborn smarter, hardened against surveillance pressure in every way they can manage. And they'll be on the pledge list, again - with a bullet. They'll get
waaaay more publicity, too - and they'll draw more people to use them. Multiply that by a few hundred, and you're just breeding smarter privacy tech projects.
As to the impact on those who do shut down? C'mon, this isn't Bradley Manning being tortured in the desert for a year by vengeance-maddened military goons. Tech project teams are fluid anyhow; we come and go, and today's new is tomorrow's old-and-boring. The value of the gold star - "I shut down rather than become #snitchware, and I'll do it again if I need to" - is almost impossible to overstate. And it's for life.
Cycle the domain names, spin up new server instances... hell, rewrite the codebase from scratch, it always did need a bit more OO-ish love. Eat some ramen, if need be - none of us are going to starve to death. And come back with one that does it better, bigger, broader, more badass (or bradass). Take that, "them."
[align=center]- - -[/align]
This is applied game theory. Model your opponent's reactions, and build those projected reactions into your decision process. Embrace the fluidity of events - these are n-iteration games... they go on, and on, and on. One round passes - Lavabit shuts down - but there's a bunch more rounds to come. Look at the totality of interactions, and the scenarios in this case become pretty clear.
This is asymmetric power: a diverse community of folks engaged in privacy-centric services can, collectively, protect themselves against a vastly more powerful adversary by using that adversary's very power against it - judo for the private soul. It's low-cost, it's legal, and it's (predicted to be) powerfully effective. But it's also, in a sense, counterintuitive: how can
shutting down be a powerful act? It isn't - it's the larger context, the public pledge to shut down, that has the real power.
There are classic example scenarios in game theory that work like this: one party has something the other party wants. The wanting party has an incentive to force the owner to give it up. But if the owner can
convince the wanter that pressure will just result in the destruction of the desired object, the wanter doesn't waste time on pressure - it won't get her what she wants. Simple model, but the logic is relevant.
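The arithmetic behind that model is almost embarrassingly small. Here's a minimal sketch in Python - the variable names and numbers are our own invention for illustration, not part of any formal treatment:

```python
# Minimal deterrence arithmetic for the scenario above: the "wanter"
# applies pressure only when its expected gain is positive. A credible
# destruction pledge zeroes out what pressure can seize. The numbers
# below are invented purely for illustration.

def wanter_gain(object_value, pressure_cost, owner_destroys):
    """Net gain to the wanter from applying pressure."""
    seized = 0 if owner_destroys else object_value
    return seized - pressure_cost

def will_pressure(object_value, pressure_cost, owner_destroys):
    """Pressure happens only if it pays."""
    return wanter_gain(object_value, pressure_cost, owner_destroys) > 0

# No pledge: pressure nets 10 - 1 = 9, so it happens.
print(will_pressure(10, 1, owner_destroys=False))  # → True
# Credible pledge: pressure nets 0 - 1 = -1, so it never starts.
print(will_pressure(10, 1, owner_destroys=True))   # → False
```

Note that the pledge never has to be executed to do its work: it only has to be believed, which flips the wanter's expected gain negative before any pressure is applied.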
The Privacy Seppuku version is really interesting because we're considering
digital goods and corporations - both of which are ephemeral, non-physical, intangible. "Destroy" a company and it can pop back into existence as the proverbial Newco, overnight. Delete the code? There's backups. Or it's opensourced, anyway: just go pull the src from github, eh?
That's the thinking behind the Privacy Seppuku pledge. It's what we always meant to say about it, but never got around to doing. That time has finally come 'round, and now we've done our best to document the bigger framework for others to consider.
With respect,
Baneki Privacy Labs