github repository: github.com/cryptostorm/KeyChain
Late last week, I made use of the opportunity to lay out some of the ground-level work we as a team have been doing since last fall, via a post at our crypto.cricket blog. As I was "volunteered" for this duty by the rest of the team, I wrote long... as I tend to do. For some, that sort of writing is painfully boring to read. I concur, for the most part. However, our decision as a team was to allow my worst long-form impulses to assert themselves, in order to provide some framework for the real heart of the story.
The heart of the story is delivering network security - real, reliable, consistent, comprehensive network security - to our members. So, today, my job is to share that part of our work in as concise a form as I can. To make that possible, here are several citations of previous items published by our team on related topics; those who want to know why we're doing what we're doing are encouraged to start with these. Those who prefer to 'cut to the chase' and see the plan as it turns tangible can simply continue forth from here.
- superfish & 'ssl kneecapping' - a summary of misadventure, and distilling what it means for cryptostorm
- deepDNS.dk: our mesh-based system of recursive, blockchain-mapped, dnschain-enabled Domain Name System resolvers
- Cthulu & the dark side of DNS - enumerating DNS mysteries, and what they mean for network security service overall
- cleanVPN.org - searching for reliably 'clean' (opensource, no malware, no fakeware clients) VPN service providers, via community review
- DNS hijacking, in the wild - an unexpectedly close to home demonstration of core weakness in DNS integrity
- In search of curve 25519 - hardening torstorm.org via precise cryptographic parameter selection & deployment
- browser privacy & fingerprinting 101 - the risks your browser poses to online security, & how to mitigate them
- STUNnion - a demonstration of IP address exposure during Tor .onion site visits, via webRTC calls
- webRTC leakblocks and cryptostorm - tactical notes from the front lines of a harbinger of problems to come
- Decentralised Attestation - DA & the #CAfree future of network crypto
- Hostname Assignment Framework - redundant, resilient, decentralised network resource administration
- network access tokens - cryptostorm's approach to privacy-enabled secure network auth
Yesterday, in Part 1 of this piece, I laid out a series of flexibly-connected, global-scale, complex, systems-level threats that, when seen in full perspective, constitute an ontological threat to the security of cryptostorm's network and its members. Actually, I left quite a few pieces out - hardware-based attacks, client-side rootkits, side-channel weakness ubiquity, and so on - as the laundry list can start to seem overwhelming in full roar. But I hope the point has been made: big issues are relevant and require attention.
Today, we're tactical. And a tactical example helps set the stage for what can go really wrong in the "think local" side of things. This little vignette begins with an exchange that took place recently on twitter:
We at cryptostorm - and I personally - aren't interested in overstating this issue, but there's no nicer way to say it than this: this is what happens when bad crypto meets low external review. I promised to keep this reply short, and despite the temptation to veer astray, I'll stick it out. Besides, Moxie's essay on this topic is, in a word, brilliant - there's nothing I can say that'd improve on his explanation, and it's best I just point those curious about the deeper details over there.
As HMA says in their reply to our criticism:
"The PSK is for authentication, not encryption or decryption. It's used as an alternative to certificates."
It sounds reasonable to say that the "pre-shared key" is only for authentication, not for actual encryption... and that if we don't care about that weirdly abstract authentication business, we can just cut to the chase and do the encrypting that counts. And actually, it is possible to do that... but only if you basically do authentication anyway and call it something else - same difference. Or, of course, you can use pre-shared keys... which must remain private and secure to be of any use.
In this case, HideMyAss is doing neither. They've published their PSK on the internet, where anyone can find it - and it's so low-entropy in any case that a ten-year-old could guess it in a few minutes. That means that, for the specific underlying protocol on which they've based this "encryption" service, MS-CHAP is what's available (v2... as well as v1, the latter being beyond broken and into satire):
Long story short, these network sessions are functionally plaintext for anyone who goes to the trouble of gathering them up - or storing them for later review. And while it seems like authentication can be carved off from crypto, in reality that's not how things work.
Incidentally, if there's any question regarding the decrypt, we'd gladly receive captured traffic on a session or two, and we'll turn around plaintext from them. This is hardly a challenge, given that Moxie automated the entire process (and much of that automation exists to figure out the PSK or equivalent passphrase - which isn't needed here, since it's published). This really is as simple a case of useless crypto as can be imagined. Or as Moxie put it a few years ago...
"In many cases, larger enterprises have opted to use IPSEC-PSK over PPTP. While PPTP is now clearly broken, IPSEC-PSK is arguably worse than PPTP ever was for a dictionary-based attack vector. PPTP at least requires an attacker to obtain an active network capture in order to employ an offline dictionary attack, while IPSEC-PSK VPNs in aggressive mode will actually hand out hashes to any connecting attacker."
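Moxie's point about aggressive mode can be sketched in a few lines. This is a deliberately simplified stand-in (the real IKEv1 PRF transform is more involved, and all names here are illustrative): model the PSK-keyed hash the responder hands out as an HMAC over the exchanged nonces, and watch a low-entropy PSK fall to an offline dictionary run.

```python
import hmac, hashlib, os

def psk_auth_hash(psk: bytes, nonces: bytes) -> bytes:
    # Stand-in for the PSK-keyed hash an aggressive-mode responder
    # sends to any connecting initiator (NOT the real IKEv1 construction).
    return hmac.new(psk, nonces, hashlib.sha1).digest()

# The "server" uses a weak, guessable PSK -- and in the HMA case,
# the PSK is simply published, so even this much work is unnecessary.
weak_psk = b"12345678"
nonces = os.urandom(32)                   # observed on the wire
leaked_hash = psk_auth_hash(weak_psk, nonces)

# Offline dictionary attack: no further network interaction needed.
dictionary = [b"password", b"letmein", b"12345678", b"hidemyass"]
recovered = next((c for c in dictionary
                  if hmac.compare_digest(psk_auth_hash(c, nonces), leaked_hash)),
                 None)
print(recovered)  # b'12345678' -- the whole "secret" falls to a word list
```

The point being: once the hash is handed out, the attacker's cost is bounded by the entropy of the PSK, and a published PSK has zero.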
tl;dr - authentication matters
☯ ☯ ☯
Which is something of a problem, because authentication for effectively all network cryptography in the civilian world is structurally crippled. That's the bad news.
The good news is that the underlying mathematics - the cryptographic primitives - behind real auth models works just fine across untrusted network pathways. I'd say they're genuine marvels, these asymmetric key exchange algorithms - close on to magic. The failure (intentional or not) lies in the way they're put into practice, out in the world.
Out in the world, authentication hinges on chains of trust - only, these chains aren't "trust" as most of us would use the term. Instead, they're rigidly hierarchical - anyone at "root" trust level can vouch for anyone else in the entire system, including themselves. It's rigid hierarchy taken to the point of parody, and unsurprisingly it shows all the failures such rigid models are known to produce.
In practical terms, when you decide you want to create a secure communications channel with a particular news website, for example, the way you know the website you're visiting is really the website you want to visit is that you are implicitly trusting the entire, broken, dysfunctional edifice of Certification Authority-based session verification to make sure you're pointed right. Further, the same question of knowing who you're connecting to is at the root of protecting against Man-in-The-Middle attacks - so that's a fail, as well.
There's a whole giant edifice that's grown up around this admittedly broken way of making sure our network traffic is secure. And there's enormous effort expended by smart, talented, motivated folks to fix the CA system so we can make the internet secure again. It's all a hopeless waste, and worse it's all completely unnecessary.
Cryptostorm has found a path out of this mess that doesn't require "fixing" an un-fixable system. Nor does it envision throwing that entire system out and starting tabula rasa, with some idealised new system of perfection made from whole cloth. Instead, we've gone through the convoluted inner workings of the existing CA model, highlighted the bits that actually do a decent job of their tasks, pushed aside the unnecessary complexity and baroque filigree of silly absurdity, stayed firmly based on proven cryptographic primitives, and sought out iterative steps to deploy instead of big-bang, all-at-once pipe dreams.
We call it Decentralised Attestation. And it works.
We'll show you, right now.
☯ ☯ ☯
Asymmetric key exchange, the way we get a secure channel going across an insecure internet, has a couple of absolute requirements in order for it to work. Basically, each party in the discussion needs to have a public key they've verifiably received from the other. Those keys aren't secret - unlike HideMyAss's PSK, these keys are meant to be public. If the two sides can get their hands on known-good public keys for each other, we're off to the races.
We get a lot closer to good answers when we remind ourselves that "certificates" are just public keys with some extra "I vouch for you" stuff tacked on by Certification Authorities (and not wrapped well, or securely, at all). The public key is right there, in the certificate. As a thought experiment, take the certificate fluff out and you've got two parties, trying to communicate securely, who need to be sure they've got each other's public keys. That's the crux of the whole thing.
Which can be seen in either of two ways, basically. On the one hand, we might say "sure, no problem - they send each other their public keys and that's that." Or we might say "there's no way to do that across an insecure network that isn't itself subject to MiTM, and thus already a failure." Both are wrong. Yes, the public keys can't just be emailed back and forth before the secure channel exists: obviously, anyone who wants to can grab the keys off the wire, switch them for different keys, and generally muddy the channel so badly that there's no way to even get started.
On the other hand, the assumption that it's impossible to get keys into the right hands - provably do this - is also wrong. It seems like it might be pessimistically true... but that's only if we don't notice all the useful new techniques & technologies around that can crack that nut.
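Both intuitions can be tested in miniature. Below is a textbook-RSA toy (tiny primes, nowhere near real parameters - never use this for actual crypto) plus an out-of-band fingerprint "pin": the key exchange itself works fine, and if a man-in-the-middle swaps the public key in flight, the independently-published pin exposes it. All names are illustrative.

```python
import hashlib

# Textbook RSA with toy primes -- illustration only, never real crypto.
p, q, e = 61, 53, 17
n = p * q                            # public modulus
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def fingerprint(pubkey: tuple) -> str:
    # The out-of-band "pin": a hash of the public key, published somewhere
    # the attacker can't rewrite (keybase, a blockchain, a forum post...).
    return hashlib.sha256(repr(pubkey).encode()).hexdigest()

server_pub = (n, e)
pinned = fingerprint(server_pub)     # published out of band, in advance

# Honest exchange: key arrives intact, pin matches, crypto works.
assert fingerprint(server_pub) == pinned
m = 42
c = pow(m, e, n)                     # encrypt with the public key
assert pow(c, d, n) == m             # only the private key recovers it

# MiTM: an attacker substitutes their own public key on the wire...
attacker_pub = (89 * 97, 17)
# ...but the out-of-band pin immediately exposes the swap.
assert fingerprint(attacker_pub) != pinned
```

The hard part isn't the math; it's making sure `pinned` reached you through a channel the attacker doesn't control - which is exactly the problem the rest of this piece is about.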
Incidentally, the way this is done for https "secure" web sessions is a mix; certificates do get sent back and forth plaintext, but they also have this shambling "trust store" idea that at core tries to get certs to end-users without going across insecure networks. This is done by having trusted companies like Comodo and DigiCert "vouch" for certs... which in practice often means checking if certs are legit by plaintext network sessions - exactly what we know is not going to be secure. Total failure, fractally broken top to bottom. Let's just put that aside.
What we have is the need to get public keys into the hands of the people who need them, in a way that's reliable and robust. We also need to be sure that, when a public key is no longer controlled by the person who used to control it, the people relying on it can find out it's been "revoked." Two sides of the same coin.
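The two-sides-of-one-coin observation can be sketched as an append-only record stream (a toy model, with invented names - not our actual deployment): publication and revocation are just two entry types in the same log, and relying parties replay the whole log, so a revocation can't be quietly un-written.

```python
import time

# A toy append-only key directory: publication and revocation are
# two entry types in one record stream.
log = []   # entries: (timestamp, action, key_id)

def publish(key_id):  log.append((time.time(), "publish", key_id))
def revoke(key_id):   log.append((time.time(), "revoke", key_id))

def is_valid(key_id) -> bool:
    # The latest entry for a key wins: a later "revoke" beats any
    # earlier "publish". Relying parties replay the full log, so a
    # revocation, once written, can't be silently disappeared.
    state = None
    for _, action, kid in log:
        if kid == key_id:
            state = action
    return state == "publish"

publish("0x2743eca1c44c1379")
assert is_valid("0x2743eca1c44c1379")
revoke("0x2743eca1c44c1379")
assert not is_valid("0x2743eca1c44c1379")
```

Contrast this with CRLs and OCSP, where the revocation channel is separate from the publication channel - and, as we'll see below, fails open.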
This isn't just https web browsing, either. Cryptostorm makes use of asymmetric authentication to create our secure cryptographic connections with our members. And, yes, that system has lots of places it can fail - not as many as CA-based https fortunately... but still too many to stay as it is.
We're deploying DA-based authentication for all cryptostorm sessions - that's already in the works - but meanwhile we thought a tangible example, a proof of concept (PoC) in security tech speak, would help cut through the sea of my boring words, and demonstrate what's actually happening.
☯ ☯ ☯
We have made our main websites - cryptostorm.is & cryptostorm.org - available as .onion Tor hidden services sites since last fall. Mostly this helps us understand the tech required to do this, and keeps us current in such matters. Given torstorm's role in our team's work, and cross-network access to .onion and .i2p sites we already provide for on-storm members, it's important that we're hands-on with these tools.
Naturally, once we'd made those sites available, we wondered about https versions. Not because we're fanatics about https; if anything we're deep into pessimistic terrain (in that, we're hardly alone). But rather, it's an obvious question for anyone of a crypto-tech frame of mind. Yes, facebook did it (<fake cheer>), and more recently blockchain.info (their shallot-forced address is vastly cooler, we think). How, specifically? We read all the reports in the press, but that's not brass-tacks.
So we pulled the server-issued certs themselves to take a look at them firsthand, and we de-PEM'd them. Here they are:
Skipping over details, we tried replicating their approach and got shot down by the CAs. Apparently there's a lot of begging involved if you want an "official" ssl cert for an onion site. Some on our team were keen to find a workaround - I'm sure they're possible; cut me loose with some outside-UTF-8 glyphs and it's a simple spoof, I suspect - but as we got more clever about it, eventually we realised we were off the path entirely.
Because, wtf? I'm sorry, but CAs "vouching" for hidden service websites in Tor is just an astonishingly, brazenly, sneeringly horrific plan. And it's already well along the way to becoming real - they want to bring "trust" to .onion hosted content... with CA-based session validation!! Specifically, here's the rationalization offered...
– Powerful web platform features are restricted to secure origins, which are currently not available to onion names (in part, because of the lack of IANA registration). Permitting EV certs for onion names will help provide a secure origin for the service, moving onion towards use of powerful web platform features.
- Currently, access to .onion names over https from a standard browser results in the standard existing ‘Invalid Certificate’ warning. Training users to click through security warnings lowers the value of these warnings and will cause users to miss important security information. Removing these warnings for the user, through use of a digital certificate, will help users recognize and avoid real MITM attacks.
– The public needs attribution of ownership of the .onion address to differentiate onion services, including potential phishing services. Because onion names are not easily recognizable strings, providing the public with additional information about the operator has significant security improvements, especially in regions where use of the incorrect name could have lethal consequences.
I boldfaced the particularly cringe-inducing parts. Because anyone who can sign a document claiming that .onion sites need CA-controlled https in order to "help users recognize and avoid real MITM attacks" really does have a possible second career in acting. Impressively done! Painting CAs as harbingers of improved security, stable attribution, and generally well-run secure network sessions really is over the top, however.
So why do we need CAs to help make sure .onion-land enjoys all the putative benefits of crippled, CA-controlled validation as it's witnessed on the conventional web already?
Simple answer: we don't.
There's no improvement, indeed an anti-benefit, to be found in having CAs involved in onion session security or onion identity authentication. This is utterly the case, since .onion websites are routed via the coordinates that actually make up their address. How do you know an .onion site is what it says it is? Send packets to it - definitionally, they get to the address that is encoded in the address itself. This is really basic stuff.
Rewind back to the public and private keys. I already know an onion site is who it "says" it is, because its address is its identity. This is basic ontology. However, as we debated this issue in team meetings over the last couple of months, we eventually came to something of a consensus that an additional layer within the Tor model, when visiting onion sites, has essentially no drawback and in some cases can add genuine security benefits. We're fans of layered topologies - tunnelled tunnels - on the team, so it's our default assumption that they're worth considering. Technically, they work fine - https is all TCP by definition, so there are no UDP issues coming across Tor.
Indeed, technically this proved easy to do: it's only getting a "real" certificate that's a bottleneck.
A real certificate? We decided to demonstrate what that means...
☯ ☯ ☯
Take 100 megabytes of quantum-generated, high-entropy, almost-not-pseudo "random" source material. Mix in some customisation of the obscure parameters involved in generating RSA-based keypairs, pull out all the useless crap of corrupt CA "validation," and this is what you get...
Code: Select all
Version: 1 (0x0)
Serial Number: 17006882260368345458 (0xec04954f3a25c972)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=IS, ST=H\xC3\x83\xC2\xB6fu\xC3\x83\xC2\xB0borgarsv\xC3\x83\xC2\xA6\xC3\x83\xC2\xB0i, L=Reykjavik, O=\xC3\x83\xC2\xA7r\xC3\x83\xC2\xBF\xC3\x83\xC2\xBEt\xC3\x83\xC2\xB8st\xC3\x83\xC2\xB6rm\xC3\x83\xC2\xB0\xC3\x83\xC2\xA5rk\xC3\x85\xC2\x8B\xC3\x83\xC2\xAAt, OU=decentralised_attribution, CN=\xC3\x83\xC2\xB0\xC3\x83\xC2\xB8rk\xC3\x83\xC2\x9F\xC3\x83\xC2\xB6t/emailAddress=DAkeypair-onionHTTPS@cryptostorm.is
Not Before: Mar 22 09:56:13 2015 GMT
Not After : Mar 19 09:56:13 2025 GMT
Subject: C=IS, ST=H\xC3\x83\xC2\xB6fu\xC3\x83\xC2\xB0borgarsv\xC3\x83\xC2\xA6\xC3\x83\xC2\xB0i, L=Reykjavik, O=\xC3\x83\xC2\xA7r\xC3\x83\xC2\xBF\xC3\x83\xC2\xBEt\xC3\x83\xC2\xB8st\xC3\x83\xC2\xB6rm\xC3\x83\xC2\xB0\xC3\x83\xC2\xA5rk\xC3\x85\xC2\x8B\xC3\x83\xC2\xAAt, OU=decentralised_attribution, CN=\xC3\x83\xC2\xB0\xC3\x83\xC2\xB8rk\xC3\x83\xC2\x9F\xC3\x83\xC2\xB6t/emailAddress=DAkeypair-onionHTTPS@cryptostorm.is
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Exponent: 65537 (0x10001)
Signature Algorithm: sha1WithRSAEncryption
There's still some fiddling we have to do with it yet. For instance, we wanted to experiment firsthand with the encoding & parsing of extended Unicode that x.509 actually produces in the wild, so we fed these parameters to the GPG certificate signing request (CSR) daemon:
- Organization Name: çrÿþtøstörmðårkŋêt
State or Province Name: Höfuðborgarsvæði
Common Name: Reykjavik
Email Address: DAkeypair-onionHTTPS@cryptostorm.is
Comment: “May we be fortunate in our endeavours and able to say looking back on these times that we did, in fact, succeed in doing it right ~ みんな ~ çrÿþtøstörmðårkŋêt.xyx”
...as you'll see below, we did bend the bidirectional encoding transform pretty well out of shape - which is what we expected. Whether one can use this to inject unintended behaviours into the entire process, I leave it to curious readers to confirm for themselves.
So yes, it's a bit of a tweaked-out certificate since we got up to our usual cryptostorm unicode silliness with it.... but it'll do for now - and a far sight better than the crap handed out by CAs, to be blunt.
What purpose does this certificate serve? In plain language, it can be used to encrypt stuff (with the public key part of it) that only the server holding the corresponding private key can decrypt. That feature is almost always used to share some initial data that, in turn, primes the pump of the rest of the crypto process for the session. So, to do that well and reliably, this cert (i.e. key) has to have genuinely ergodic ("random"), high-entropy foundations. It needs to be a nice long key so it's really hard to brute force break, and it needs to have a good algorithm used to create it.
We've got all that, in spades.
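That pump-priming step - the public key encrypting a dollop of startup entropy that only the private-key holder can recover - can be sketched with the same sort of toy RSA (tiny illustrative parameters, not a real TLS handshake, and not our production code):

```python
import hashlib, os

# Toy RSA server keypair -- illustrative parameters only.
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# Client side: generate fresh session entropy, encrypt it to the
# server's public key (n, e). Only the private-key holder can open it.
seed = int.from_bytes(os.urandom(2), "big")   # toy-sized "dollop of entropy"
ciphertext = pow(seed, e, n)

# Server side: decrypt the seed with the private key.
recovered = pow(ciphertext, d, n)

# Both sides now derive the same symmetric session key from the seed --
# this is what "primes the pump" for the rest of the session's crypto.
client_key = hashlib.sha256(seed.to_bytes(4, "big")).digest()
server_key = hashlib.sha256(recovered.to_bytes(4, "big")).digest()
assert client_key == server_key
```

Which is also why the entropy and key-length requirements above matter: the whole session hangs off that one asymmetric operation.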
So... what doesn't it have? It doesn't have a "chain of trust" in the form of a bunch of extra certificates all signing for the one down the chain, confirming that they are... um, that they are valid? That's an open issue, sort of. Anyway, it has an "issuer" listed on it - cryptostorm_darknet, Decentralised_Attribution department. That's us. Because we issued it. We made the keypair, on one of our servers. We then signed the public key with the private key, which is how you create a certificate ("and when two keys love each other very much, honey, sometimes they come together and their love makes something beautiful: a certificate!").
We'd just as soon skip the certificate mumbo-jumbo and work with public keys. That's what we do for PGP-encrypted email, after all. Publish the public keys at MIT, or keybase, or onename, or wherever. But web browsers want certificates because... blah. Just because. So feed them certificates. However, they only accept as "legitimate" certificates that are pre-loaded in their "trust store" already. Why, and who decides? Don't get me started. Trust me, you would regret it.
So here's where things start getting fun...
Visit the .onion version of our main site over https - https://stormgm7blbk7odd.onion/ - and sure enough your browser will throw a scary warning at you: not trusted! invalid certificate! However, if you pay DigiCert a whack of money, they'll issue you a much less cryptographically robust certificate... and they'll "vouch" for it, which means they have a root cert in the browsers, so the browsers will show a green lock. That's about it.
Certificate Revocation Lists, OCSP, and other mechanisms to revoke "bad" certificates? No root certificate has ever been revoked. None. CRLs are now effectively abandonware, repurposed over the years as malware depots, spooky spy dead-drops, or whatever else - who knows? OCSP fails open: block the responder query and the cert validates anyway. On and on...
Ok, so how can you know that our certificate is "legitimate" and not, umm, like those fake Microsoft certs you still see around, years after they were supposedly "cancelled?" Here's where our first principles of crypto come in, and here's where we start building a DA-based validation system everyone can use.
☯ ☯ ☯
If you check over on this keybase page, you'll see the following blob of text come up:
Code: Select all
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
-----END PGP PUBLIC KEY BLOCK-----
(yes, that keybase account is named 'superfish' - an oblique homage to the piscine progenitor of much of our work on DA-validated network sessions)
That's the public key of the keypair that sits behind the certificate presented to visitors to https://stormgm7blbk7odd.onion/. Put another way, anyone who encrypts (or signs) a message with that public key can be very, very confident that only the holder of the corresponding private key of the pair will be able to decrypt it (or verify the signature).
We've also posted that public key over at [url=https://pgp.mit.edu/pks/lookup?op=vindex&fingerprint=on&search=0x2743ECA1C44C1379]MIT's old-school PGP keyserver[/url]. It's known as key "0x2743eca1c44c1379" over there, and is mapped to our .onion site's identity as embedded in the key's outer wrapper. Of course, that public key is now posted here, in this thread - which is another place to verify it. We can post it a dozen more places - on a different .onion site, accessible via torstorm, and hosted inside an i2p-housed 'eepsite,' and all over whatever cryptocoin blockchain we want to use. Pretty much, it's everywhere we want it to be. For someone to either remove or subtly alter all of those public key copies, so that we don't realise they've been changed, would be extremely difficult to accomplish.
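The many-copies idea reduces to a simple consistency check. A hypothetical sketch (source names and values invented for illustration): collect the key's fingerprint as each independent channel reports it, and treat any disagreement at all as a red flag.

```python
import hashlib
from collections import Counter

def fp(key_bytes: bytes) -> str:
    # Short fingerprint of a published key blob.
    return hashlib.sha256(key_bytes).hexdigest()[:16]

# Fingerprints of the copies as fetched from independent channels
# (all values stand-ins for illustration).
real_key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- ..."
copies = {
    "keybase":        fp(real_key),
    "pgp.mit.edu":    fp(real_key),
    "forum post":     fp(real_key),
    "blockchain txn": fp(real_key),
}

def consensus(copies):
    # Every channel must agree; an attacker would have to rewrite
    # every copy, everywhere, simultaneously, without being noticed.
    values = Counter(copies.values())
    return None if len(values) > 1 else next(iter(values))

assert consensus(copies) == fp(real_key)   # all copies agree

# An attacker manages to swap ONE copy...
copies["keybase"] = fp(b"NSA key")
assert consensus(copies) is None           # ...and the mismatch shows
```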
This is a unification of the signing keypair with the authentication keypair - which traditionally are not only created separately and stored separately but, despite using the exact same underlying cryptographic primitives (RSA, SHA1, etc.), are encoded and presented in formats just different enough to make them fail at interoperation (for reasons we're not sure we can articulate effectively, because we're not sure they make any sense when you look closely). This sectarian divide was created out of thin air, and can be de-created the same way. We're not the first to recognise this, obviously - others pointed out the value of asymmetric key unification, and developed nice toolsets for it, years ago... but prior to public blockchains, we'd conclude they were a bit ahead of their times, and ahead of the internet foundations required to really make the concept bloom.
Anyone who has ever, in a flash of obvious-in-hindsight intuition, realised that the keypairs underlying https and ssl are the same sort of crypto gadgets as the keypairs used in PGP email or SSH logins - and then decided to confirm that via experimentation - can speak to the astonishingly frustrating, opaque, bizarre parsing netherworld that exists between the two. Despite a few tutorials written by patient, brilliant, calm-minded pioneers on how to convert one format of a key into another format of the same key, at some point in that process mere mortals will find themselves cursing the fates for having allowed such a horror to be born.
I'm not actually exaggerating. Try it. You'll end up there - that cursing place - if you do. Fair warning.
But that's just procedural nonsense. We've automated it via a series of open scripts, enabling fluid, bidirectional mappings of keysets and certificates without needing to fight through the dead zone manually. Of course, anyone can confirm the code is doing what it says; we'll have it up in the Decentralised Attribution repository shortly. However, that procedural complexity shouldn't pull us astray from the core value of asymmetric keypairs: one key proves the other key (private to public), whereas going in the "other direction" (public to private) gets you nothing.
In other words, that public key "vouches" for the identity of the holder of the private key... which is the exact same private key that spawned the ssl certificate being presented by our website. That public key is basically a blob of high-entropy ("random") data - mated with the private key on the server, it's the best validator of identity that human beings have been able to develop thus far.
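A sketch of why that format wasteland is purely procedural (the DER blob below is a stand-in, not a real key): PEM armor and line-wrapping are presentation, and once stripped, the underlying key bytes - and therefore the fingerprint - are identical no matter which wrapper they arrived in.

```python
import base64, hashlib, textwrap

def der_bytes(pem: str) -> bytes:
    # Strip the armor lines and un-base64 what's left: the wrapper
    # (headers, footers, line width) is presentation, not key material.
    body = "".join(l for l in pem.splitlines() if not l.startswith("-----"))
    return base64.b64decode(body)

raw = b"\x30\x0d...same key material..."   # stand-in for a DER key blob
b64 = base64.b64encode(raw).decode()

# The same bytes, dressed up in two different PEM costumes.
cert_style = ("-----BEGIN CERTIFICATE-----\n"
              + "\n".join(textwrap.wrap(b64, 64))
              + "\n-----END CERTIFICATE-----")
key_style  = ("-----BEGIN PUBLIC KEY-----\n"
              + "\n".join(textwrap.wrap(b64, 48))
              + "\n-----END PUBLIC KEY-----")

# Different wrappers, different line widths -- identical underlying bytes,
# therefore one fingerprint for both.
assert der_bytes(cert_style) == der_bytes(key_style) == raw
print(hashlib.sha256(raw).hexdigest()[:16])
```

(Real certificates carry the key inside an ASN.1 structure with metadata around it, so extraction takes one more parsing step - but the principle holds: same primitives, same key material, cosmetic differences.)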
☯ ☯ ☯
So here's where all the words and digressions start clearing away, and the core of the system itself comes forth. The ssl certificate presented by our onion site is simply the public key - the same public key as posted on keybase - paired with a private key we control. When our server presents that public key (in the form of a certificate), and a browser uses it to encrypt a bit of startup entropy for a crypto session with our server, and that dollop of encrypted entropy comes back to us, we know - in formal mathematical terms - that the encrypted dollop reached us without any edits being made to it, and without it being decrypted along the way.
Concomitantly the web browser knows that only the holder of the corresponding private key can receive and decrypt that dollop of entropy - entropy that kicks off the entire cascade of ephemeral cryptographic wonder that keeps all the rest of our session secure. Given that, what's the reason identity verification has proved so troubling for https, thus far?
Simple: how do we know that public key that shows up in our web browser when we head to https://stormgm7blbk7odd.onion/ is the public part of a keypair that's really controlled by cryptostorm? Couldn't the NSA just sit on the wire (holding aside Tor, for argument's sake) and feed us their own public key? Our visitors would think they're coming to cryptostorm, but in fact it's the NSA - the classical MiTM attack.
But we've really taken a good step towards making that attack unproductive, because now all we have to do is compare the public key that comes from the server with the public key posted over at keybase. They should be - they must be - identical. If someone swaps out the https-site public key on the fly, the keybase copy won't match any more, and we know something bad is going on.
Seems simple, doesn't it? Too simple to really work, perhaps. Well, there's more to the full model - parallel redundancy, write-outs to other public authenticators, and so on. But for a start, keybase does pretty well. And no, it's not that we "trust" keybase not to swap out our public key for one of the NSA's if the pressure gets too heavy... they write out records of key-posting transactions to the bitcoin blockchain (thus leveraging Merkle-root validation), which makes it very hard for someone to go back and "un-write" them secretly. Once that key is up there, it's up there - verifiably so.
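In code, the comparison itself is a few lines. A sketch only (the pin value and helper names are invented; `ssl.get_server_certificate` is the Python stdlib call for fetching a peer's PEM certificate, though plain stdlib sockets won't reach a .onion address without a Tor proxy in front):

```python
import hashlib, ssl

def cert_fingerprint(pem: str) -> str:
    # Hash of the PEM as presented -- what we'd publish on keybase.
    return hashlib.sha256(pem.encode()).hexdigest()

def check_pin(pem: str, pinned: str) -> bool:
    # If a MiTM swaps the cert in flight, the out-of-band pin won't match.
    return cert_fingerprint(pem) == pinned

# Against a clearnet host, this would look like (not executed here):
#   pem = ssl.get_server_certificate(("cryptostorm.is", 443))
#   ok  = check_pin(pem, pinned_value_from_keybase)

# Offline demonstration with stand-in values:
pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----"
pinned = cert_fingerprint(pem)                 # what keybase would publish
assert check_pin(pem, pinned)                  # honest session
assert not check_pin("swapped cert", pinned)   # substituted key detected
```

In a fuller deployment you'd hash the extracted public key rather than the whole PEM (so cosmetic re-encoding doesn't trip the alarm), and check against several published copies, not one.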
Of course, there are second-order attacks possible. We've got mitigations. The crux of the DA solution - as the label decentralised emphasises - is resilience through distributed redundancy. Rather than one monolithic, centralised, hierarchical mechanism of enforcement and control - the CA model, in short - DA spreads attestation duties out, across a diverse spectrum of channels and technologies. With that diversity, an attacker would find herself facing a complex and resource-intensive task if she wants to intercept that session verification successfully: instead of one CA to subvert, she's got a welter of weird-tech parallel, decentralised mechanisms going off all at once. Not so simple, that.
No Certification Authorities. No trust chains. No complex, bug-prone layers of parsing syntax. A private key. A public key. The math that binds them. That's the core, and as we move forward we'll winnow down to more and more of a minimal-complexity deployment of DA auth. To start, we're piggybacking on the existing browser-certificate foundation... even though it's buggy and sad. And even though we really have no use for certs - we're doing keypairs, and that's it.
We do that because we can leverage what's there, iteratively improve it, build momentum, and deliver genuine security benefits right away rather than in some utopian future with a New System that never arrives.
☯ ☯ ☯
Brass tacks, how does it work?
Crawl, walk, run.
To begin, verifying the certificate in the browser is simple enough to do manually. Click on it and - stripping out the fluff and the fractal complexity of encoding formats - look at the mathematical guts of the embedded public key. Look over at keybase, and confirm the guts of the public key posted over there match (we're setting lots of weird format-spanning curlicues aside in this explanation, because it's easy enough to script them into submission via web-based widgets, right off the bat). If they don't, walk away from the session... or route it along another path to see if you can get a clean line of sight to connect.
Or, in order to authenticate our .onion server's DA-cert authenticity, take the cert as it shows up in your browser ("PEM encoded"), and use any standard unpacking tool to expand it into readable form. You'll see a chunk of text, up near the top, labelled as the "modulus" of the key (in RSA's case, the product of the two big primes used to generate the keypair). In the case of our DA-cert, the full modulus is...
Code: Select all
Now, take a look at that public key posted over at keybase, and MIT, and everywhere else. It's also encoded, so what we want to do is run it through a process to extract its modulus value as well. The quick way to do so at the command line reads like this...
Code: Select all
# openssl rsa -in cstorm_onion_private.key -noout -modulus > modulus.txt
# (to extract the modulus from a public key file instead, add -pubin;
#  from a certificate, use: openssl x509 -in cert.pem -noout -modulus)
The content of the file that command produces is...
Code: Select all
There they are. The same public key sits behind both... and anyone can verify that, with any open tool, and know that the website is legitimately the one being published by cryptostorm... the same cryptostorm who has control over that public key posted hither and yon across the internet. If those two moduli don't match, something's wrong - and depending on circumstances, that might mean anything from not visiting the website at all, to trying a few different avenues of transit to get there without being ambushed by digital MiTM banditry along the way.
Of course, some automation will go a long way towards making things easy and low-friction; doing this manually for every website and every visit would get really distracting, really quickly. How about an opensource browser extension that does the check for you - shows you the two thumbprints, and ok's the session if they match? Those exist already, for different projects - simple to do, not a major security risk, and a big step forward.
Better yet, enable client-side logic to do the test transparently for any local application needing a cert check (openssl has hooks to enable such things, after a bit of dusting off from disuse)... similar in concept to the way DNSCrypt carries resolver queries up to the nameservers securely via its own special channel. Make it a simple API, a framework callable by anything that needs to confirm a fingerprint comes up clean.
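At the shell level, such a callable check might look like the sketch below. The function name and files are hypothetical, and the "published" fingerprint is computed locally here so the demo runs offline - in the real model it would be fetched from keybase or the blockchain:

```shell
#!/bin/sh
# Hypothetical helper: compare a cert's SHA-256 fingerprint against an
# expected value obtained out-of-band. Returns 0 (success) on a match.
cert_fingerprint_ok() {
    cert_file=$1
    expected=$2
    actual=$(openssl x509 -in "$cert_file" -noout -fingerprint -sha256 \
             | cut -d= -f2)
    [ "$actual" = "$expected" ]
}

# Demo against a locally generated throwaway cert, so it works offline.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$workdir/k.pem" \
    -out "$workdir/c.pem" -days 1 -subj "/CN=demo" 2>/dev/null

# Stand-in for the published fingerprint a client would fetch remotely.
published=$(openssl x509 -in "$workdir/c.pem" -noout -fingerprint -sha256 \
            | cut -d= -f2)

if cert_fingerprint_ok "$workdir/c.pem" "$published"; then
    echo "fingerprint ok"
fi
```

Anything local - a browser, a mail client, a VPN client - could call into one shared check like this instead of each reinventing its own validation path.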
Note that there's no "trust cryptostorm" in this, generally speaking - the validation happens between the member, keybase, and the blockchain. We route packets, but we don't control that process. Decentralised.
In fact, there's no trust in this model (in any meaningful sense of the word). We trust that the math works, and we trust that we can create enough redundant paths to posted public keys that we won't get tricked by a few forgeries. But we don't need to trust Certificate Authorities, or the government, or keybase, or cryptostorm, or anyone else (apart from compiler designers and hardware chip-fabs and other important issues we're not going to bog down discussing here). There's no risk of someone being bribed into subverting this system, as there's nobody with the power to unilaterally make it misfire and spit out corrupted authentications.
That's kind of a big deal, because it means we can start building trust back up in HTTPS and SSL and everything that depends on them: we have a firm foundation from which to do that rebuilding, having stripped out all the booby-trapped snarls of ostentatious complexity that were clogging the view of the core machinery of public key crypto.
And, obviously, beyond this little proof of concept the immediate prize is doing exactly this sort of authentication for cryptostorm network sessions. We don't use CAs in our PKI model and never have. Indeed, our auth is one-way: clients verify server identity with asymmetric RSA crypto, but the server does no such check on clients. We already carved out a bit of useless complexity, years ago.
If we want to up the crypto engine to start hardening our security models against quantum crypto, we can swap in new-age asymmetric algorithms from (non-NIST) ECC through lattice crypto and everything else. It's modular: the structure embraces whatever sub-tools make the most sense and have the most going for them. We're not tied into specific bits of code that we'd just as soon get away from.
But we can do more, and we can bind cstorm network sessions more cleanly to the guts of key-based session initiation - toolkits like DJB's NaCl (which, famously, fits into 100 tweets) do asymmetric crypto tight, clean and fast... enabling us to strip out the entire goofy apparatus of x.509 itself. Replicate that improvement across the network, as we drop cruft and open the way for substantially more focus on the parts that matter - and matter a great deal, indeed.
Key-validation lookups routed through Tor, touching namecoind instances running on hidden-service .onion servers. Validation queries pushed out via clever ICMP-encapsulated proto-tunnels... invisibly blending into the pingstream. Etc. and so on. Each one has weak spots, exposed attack surfaces, opportunities for DoS-based blocks. But, instantiated in parallel, they prove to be a willowy, flexible, evasive bundle of slippery threads to grab all at once.
That's the ideal.
And yes, website admins will need to take the trouble of posting a public key in places where it can be reached by DA-based agents, to verify its thumbprint. This is not an unreasonable hope, given that - for onion sites in particular - the alternative is hegemonic despotism enforced by the same CAs that broke security on the conventional internet. Also: DA costs nothing. So there's that.
We've done a DA-based PoC with our onion site as a demonstration: rough-cut, manually deployed, still a bit loose, and evolving toward something usable in daily production. It's a pressing need - proper https for onion sites, without becoming infected with CA-ware misery along the way - and we're happy to do it ourselves, be our own crash-test dummies. Our .onion site isn't high-security in any case: it's brochureware. Gorgeous, unique, and utterly fascinating... but nobody's going to be tortured to death if it's found out they've visited our glitched-out little gem.
We'll continue to publish functional components of the DA auth system in our github repository, and we'll continue to develop the details of its evolution here in-forum, with the community. This is most certainly an emergent structure: we've not planned it down to the tiny details, as we know it will benefit from some space to find its feet, and from the contributions of the community along the way.
Good tech evolves and moves with the flow of the times. Bad tech digs in, and demands the world bend to its rigid needs. We like good tech, fwiw.
☯ ☯ ☯
This isn't "sexy" stuff, is it? It's all a bit... conceptual, hardening against attacks that are rarely seen overtly but instead are inferred from data points scattered worldwide. We'd sell more tokens if we just followed the latest trend - lately it appears to be "warrant canaries," whatever that means in the context of tech-challenged VPN services. Or maybe this quarter it's... honestly, I don't have any idea. We just don't stay very current on the shenanigans of the VPN industry, as a team. Too much other constructive work to do, frankly.
But this stuff actually matters. Ssl kneecapping, MiTM hijacks, DNS poisoning, and general ssl-based fuckery form a constant, droning background noise on the internet today. It's not even possible to keep track of it all. The funny thing about it is that it leaves little evidence it's happening, for the most part: network sessions get a bit flaky, packet retransmission rates tick up. But all those thoughts and dreams and desires and fears are being archived off to Bluffdale, or wherever else spy shops get their dirty little fingers in everyone else's little pies. And, a day or a week or a decade later, politics change and those data re-emerge from the crypt, ready to wreak havoc on the living who assumed they'd long since returned to the source...
It puts us in the role of taking forceful, focussed steps to protect members from a threat they will rarely see manifest in plain sight - and it means that, the more we succeed in doing so, the less likely the threat will ever become visible at all! That's a bit of a jape the universe seems to play on us, perhaps - do things well, and what we do looks easy and perhaps needlessly paranoid. Fail, and suddenly everyone wants "protection" and tokens fly out the door.
Bah, to hell with all that
One thing we've learned in the last few years is that our members have good intuition. Often they tell us as a team - and me, in particular - that my seemingly-inchoate rambling about this or that esoteric attack model is to them impenetrable gibberish. Often it's boring, unless one is deeply engaged with the subject. And we - me, in particular - don't do so well in converting raw tech insight into communications useful for real people who have real lives outside cryptographic minutiae. So why are we honoured with the member support that's always been the core of the project?
Folks know we're doing this for real, not just playing along to the laugh track. We're not pretending - we've poured ourselves into making this service the best it can be... and then making it better from there. We make mistakes, we get distracted, we have off days and some of our projects stall or mutate eternally... all true. That's also all part of putting oneself deeply into a process - of being present in the singularity of the now.
Whatever else good or bad can (and will, and has) be said about us, one thing is clear: we mean it, we really do.
And with that, I'll step back from my terrible habit of pontification to the point of absurdity. This is the model - this is DA auth. It's a good thing. We've already begun building it. We'll keep at that, shifting back to a more intensive deploy schedule and out of something of an introspective "what next" phase, this winter. We, as a team, are ready for that shift - I think it's in the air, and we're keen to see it play out.
We're also rock-solid in our conclusion that this DA-based transition is needed, and needed immediately. It's not window dressing; it's the core of our service and the benefits we offer our members. We do this right, or we've no claim to be in the business at all.
May we be fortunate in our endeavours and able to say looking back on these times that we did, in fact, succeed in doing it right.
- ~ pj (aka ðørkßöt), writing on behalf of cryptostorm's team, core & extended