
Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security

To stay ahead of new and evolving threats, cryptostorm has always looked out past standard network security tools. Here, we discuss and fine-tune our work in bringing newly-created capabilities and newly-discovered knowledge to bear as we keep cryptostorm in the forefront of tomorrow's network security landscape.

Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security

Postby Pattern_Juggled » Thu Mar 05, 2015 9:02 pm

{direct link: cryptostorm.org/cafree}


edit: framework name revised from 'root2root' to 'Decentralised Attestation' because, well, DA sucks a lot less :-)


"There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, "Morning, boys, how's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, "What the hell is water?" "

~ David Foster Wallace


In the 18 months since the first round of Snowden's disclosures began, it's been my pleasure to watch from the inside as cryptostorm has evolved from a starry-eyed vision of a "post-Snowden" VPN service into a globally deployed, well-administered, high-profile leader in the network security service market. That kind of transition in such a short period of time can leave one with a sort of future-shock: the phases blur until the only real phase is one of transition. It's exciting, challenging, exhausting, exhilarating, and fascinating all at once.

One faces the very real risk of myopic blindness, a loss of situational awareness, when one becomes accustomed to living inside such a red-shifted existence: the world outside the bubble can come to seem distant, slow, and less relevant to the local frame of reference every day. That can make for exceptional focus on tactical obligations - I think cryptostorm's excellent record in deploying innovative tools quickly and consistently speaks to the value such a focus can bring - but it can also lead to a form of brittle ignorance of the flow of macro events.

In the past month or so, largely because my operational duties on the team are relatively small (hence the luxury I enjoy of being able to post here in the forum more than almost anyone else on the team), I've been able to step back from some of the red-shifted intensity of cryptostorm's internal ecosystem, and consider not only the trajectory we've been on since Snowden but also the trajectory leading forward from here.

Which all sounds awfully boring and maudlin, admittedly, so let's move along to the interesting stuff, eh?

In summing up what we do, I'd say the core of cryptostorm's mission is providing a layer of genuine security around the data our members send back and forth to online resources. That layer isn't end to end, but it does protect against local-zone snooping and it provides a ubiquitous level of identity decoupling - "anonymity" - for nearly all routine online activities. That, in turn, frees our members from a constant fear of having the ugly snarling bits of the internet come back down the pipeline and appear at their physical front door (amongst other fears allayed). And although there are unquestionably areas in that remit where we can continue to improve - and must continue to improve - in general I'd say (with all humility) that we're pretty good at that job. That's a good thing to say; it reflects quite a bit of wisdom, experience, expertise, creativity, and bloody hard work on the part of the whole team... plus enormous support from our close colleagues and the larger community along the way.

So: yay.

But: now what?

Do we continue to iteratively improve our core "data in transit" remit as we move forward, keeping that as our unitary focus? Or... is there something else sitting on the edge of our peripheral vision, only waiting for us to recognise it? Yes, the latter.

There's no need to bore you with the etiological summary of how these obvious-in-hindsight revelations have come to us as a team in recent months (there are equal bits of webRTC, torstorm, deepDNS, komodia, fishycerts, torsploit, superfish, and more fishycerts mixed in with who knows how much else); let's simply lay out some facts we've been fortunate enough to see staring us in the face as a result:

    1. Data-in-transit is one part of a larger challenge our members face in staying safe and secure online, in general

    2. Doing our small part of that work really well is helpful and important... but leaves many other areas uncomfortably exposed

    3. Most of those areas are not part of our core expertise as a team... but a few, somewhat obviously, are.

    4. Of all the areas of uncomfortable exposure beyond the confines of cryptostorm's network edges, "secure" web browsing via https is unquestionably the most badly broken, most widely used, and most complex-to-mitigate security problem our members face online in their day-to-day activities.

Simply put, https is badly broken. Or, no, that's not quite right... how about this: the cryptographic foundations of https (which are, after all, TLS) are reasonably strong and reasonably reliable even in the face of strong attack vectors. But the model of ensuring integrity of identity upon which https is built is designed to be unreliable, opaque, inconsistently secure, and open to whole classes of successful exploitation and attack. That design means that the cryptographic solidity of https is essentially fully undermined by the horrific insecurity of centralised identity verification that exists in the form of the "CA model" (as it's generally known). Certificate Authorities - CAs - act as "root" guarantors of identity within the CA model, and these CAs (in theory) are the foundation on which confidence in network session integrity is built.

Only that's not how any of it actually works.

I'm not even going to attempt to summarise how this all came to be, nor how it actually plays out at a systems-theoretical or technological level. Many brilliant people have written on those subjects far more effectively than I ever will, and I encourage anyone interested in these matters to read those writings rather than wasting time reading any attempt of mine. But, while I may not have the ability to articulate the CA model in all its gruesomely convoluted, counter-intuitive, opaque hideousness... I do know how it works as an insider and a specialist in this field. I know it from years of frontline engagement, elbows-deep in x.509 syntax & CRL policies & countless complex details stacked in teetering layers.

I also know it as someone whose professional obligation is ensuring that our members are secure in the context of this insecure CA model... which is to say, as someone who is tasked with making something work that's designed not to work. Because, yes, the CA model is designed to be insecure, and unreliable, and opaque, and subject to many methods of subversion. This is intrinsic in its centralised structure; indeed, it's the raison d'être of that structure itself. What exists today is a system that guarantees the identity of both sides of an "end to end" https network connection... except when the system decides to bait-and-switch one side out for an attacker - if that attacker has the leverage, resources, or connections to gain access to that capability.

The CA model also puts browser projects - Chromium, Mozilla, etc. - in the role of guardians of identity integrity, through their control over who gets in (and stays in) the "trust store" of root certs held (or recognised) by the client's browser. But of course browser vendors are in fact advertising businesses; they make their daily bread on the basis of broad coverage, broad usage, and no ruffled feathers... they are the last entities in the world with any incentive to shut down root certs if a CA is compromised in a way that can't be easily swept under the rug. So the browser vendors loathe the role of CRL guardians, and basically don't perform it. Which means every root cert out there today is going to stay "trusted" in browsers, more or less, irrespective of whether there's any actual trust in the integrity of their vouching, or not.

Editing in [6 March] a relevant summation of this dynamic from Dr. Green. Here, he's speaking in reference to Superfish - an acknowledged distribution of a badly-broken (unauthorised, although the question of in what context a root cert counts as "authorised" quickly becomes one of ontology) root certificate and private key in the wild:

The obvious solution to fixing things at the Browser level is to have Chrome and/or Mozilla push out an update to their browsers that simply revokes the Superfish certificate. There's plenty of precedent for that, and since the private key is now out in the world, anyone can use it to build their own interception proxy. Sadly, this won't work! If Google does this, they'll instantly break every Lenovo laptop with Superfish still installed and running. That's not nice, or smart business for Google.


Not smart business for Google, indeed. This makes a mockery of the entire concept of "revocation lists" - which actually become "lists of stuff Google et al may or may not revoke, depending on their own business interests at the time... and any political pressure they receive behind the scenes" rather than any kind of objective process (not picking on Google here; indeed all appearances are that they're the least-bad of the lot).

One more aside on CRLs: they're served over plaintext http for just about every root certificate I've ever looked at myself. Let me repeat that, in boldface: the certificate revocation lists used to revoke bunk certificates are sent out via plaintext http sessions, and are accessed via plaintext per the URLs hard-coded into the certificates themselves. Really. They are.

Here is a specific example, from the cert we all love to hate, namely StartCom's 30-year, 4096-bit SHA1 '3e2b' root:

X509v3 CRL Distribution Points:

Full Name:
URI:http://cert.startcom.org/sfsca-crl.crl

Full Name:
URI:http://crl.startcom.org/sfsca-crl.crl


Can't imagine any problems with that, can you? I'm hardly the first person to notice this as "an issue," nor will I be the last - it's another example of structural weakness that enables those with central hegemonic authority to bend the system arbitrarily as they desire in the short term, while retaining the appearance of a "secure" infrastructure in the public mind.
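For the curious, here's one way to see this for yourself: a minimal sketch, assuming a recent version of the Python cryptography library and a local copy of the root cert saved as startcom_root.pem (the filename is just a placeholder). It pulls the CRL distribution points out of the certificate and prints the URLs - which, as above, turn out to be plain http://.

    # Minimal sketch: list the CRL distribution point URLs embedded in a certificate.
    # Assumes the Python 'cryptography' package; "startcom_root.pem" is a placeholder path.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    with open("startcom_root.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    ext = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
    for point in ext.value:
        for name in point.full_name:
            print(name.value)   # e.g. http://crl.startcom.org/sfsca-crl.crl - plaintext http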

After some posts about this in our twitter feed recently, @stribika let us know that this is an intentional design decision:

Publishing them over HTTPS wouldn't fix it because the cert is assumed to be good on CRL download failure.


Good point, but one can see how this spins quickly into a recursive pantomime of any legitimate sort of CRL-based assurance of root cert integrity.
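To make @stribika's point concrete, here's a toy sketch of that soft-fail behaviour: if the CRL can't be fetched from its plaintext http:// URL, the typical client response is to carry on as though the cert were fine. This is not any particular browser's code, just an illustration of the logic (using the Python cryptography package to parse the CRL):

    # Toy illustration of "soft fail" revocation checking - not any real client's code.
    import urllib.request
    from cryptography import x509

    def is_revoked(serial_number, crl_url):
        try:
            der = urllib.request.urlopen(crl_url, timeout=5).read()
        except Exception:
            return False   # CRL fetch blocked or failed -> cert is simply assumed good
        crl = x509.load_der_x509_crl(der)
        return crl.get_revoked_certificate_by_serial_number(serial_number) is not None

So an attacker who can block or tamper with that single plaintext fetch gets the "assumed good" branch for free.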

~ ~ ~

From here, I could launch into a foam-speckled summation of the DigiNotar hack of 2011 to illustrate all these points. But I won't... or if I do, I'll do it in a separate thread so the foam doesn't speckle over this one too much. But, yes... DigiNotar. One image to emphasise:

[image: NSAdiginotar.png]


The CA model serves the purpose (in structural-functionalist terms) of giving the appearance of reliable identity validation to the majority of nontechnical users who see the green padlock in their web browser and think "secure," while simultaneously ensuring that the door to subversion of that security is always and forever available to those with enough access to central political power to make use of it. So: if you're Microsoft and you really want to, of course you can break any https session, because you can sign root certs - short term - that browsers will swallow whole-cloth, and MiTM your way to plaintext. Same for the NSA and other such spooky entities, of course. If you do it too much, too broadly, someone might notice (certificate transparency at least might do this, sometimes, maybe)... but if they do, what of it? There will be a half-baked story about a "hacker" with a ski mask on, etc... no root certs pulled from trust stores, no big heat, really not much hassle at all. Give it a bit to die down, and come right back to the trough.

[image: IMG_20141124_190757.jpg]


This is not a "failed CA model." These are exactly the requirements the CA model is meant to fill. Those who seek to "fix" the CA model are trying to fix something that's doing exactly what it's supposed to do for those who make the macro decisions about how it will be managed. To say such efforts are hopeless is actually giving them more chance of success than they have. They are sink-holes for naive enthusiasm, able to sop up technological radicalism in unlimited volumes... eating entire professional lives of smart and eager activists, leaving nothing behind but impenetrable whitepapers and increasing intake of alcohol over time.

But I digress.

This all became crystal clear to many people - and was re-emphasised for those of us who already knew - via the Superfish debacle. And, personally, as I dug into that research topic, I started seeing more and more evidence of how deeply subverted the CA model is - and is designed to be. I could send many bits of foam flying talking about bunk certs and hijacked hostnames and DNS caching evils, and on and on...

I could also spend months or years documenting all that, and eventually add that pile of documentation to the mountains already in existence - more landfill fodder. But, to be blunt, I'm interested in addressing the issue - not in writing about it. I know enough firsthand to know without a quantum of uncertainty that https is unreliable as a secure transport mechanism today. That's enough - it's enough for me to move forward, knowing the facts on the ground as they exist today.

It'd be easy to say that https isn't cryptostorm's job. And it'd be basically true, in historical terms. We route packets, and if those packets carry https sessions that are themselves subverted by cert fuckery... well that's not our problem. Members should be more careful (how?), and besides we can't fix it anyhow. Well, we've debated this as a team quite a bit in recent months. I can't say we have complete consensus, to be honest... but I do feel we've got a preponderance of support for the effort I'm describing here.

Simply put, we're expanding the protection offered to on-cstorm members: we're tackling the problem of broken https at the cryptostorm level, and while we won't be able to nullify that attack surface in one step, we're already able to narrow it considerably, and our mitigation from there has ongoing room to move asymptotically towards zero viable attacks on https identity. We've started calling this mechanism for credible identity validation for https sessions "root-to-root" identity authority, as opposed to the Certificate Authority model out there today. Root-to-root doesn't replace the CA model, nor is it in a "battle" with it; it subsumes it, in a sense, in a simpler wrapping of non-mediated identity validation.

In short, we're shifting the Authority of the Certificate Authority model back to individual network members... they're the real "root authorities" in a non-compromised model, and thus root-to-root sessions are the way to ensure the model meets their needs.

~ ~ ~

Implementing r2r for on-cstorm sessions requires us to be clear about what problem we're seeking to solve. That problem - verifying identity online - is actually composed of two distinct, but deeply intertwined, sub-problems. Those problems, or questions, are...

    1. How can I be sure that an entity I already know is the same entity, over time, and not some other entity pretending to be them in order to gain access to communications intended for the real one?

    2. How can I be sure that when I engage in network-routed communications with a particular entity, those discussions go to that entity rather than being surreptitiously redirected through a fake transit point masquerading as that entity?

The second of these problems we usually refer to as MiTM, and the first is why we have things like PGP key signing parties. In technical terms, the first one has been considerably narrowed through the unreasonable effectiveness of public-key cryptography. It is still, however, plagued by the problem of in-band "oracular router" subversion of public key identity validators. Simply put, if an attacker can undermine the ability of those communicating to have confidence in getting public keys from each other, the effectiveness of asymmetric crypto technology drops to near zero in practical terms.
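To illustrate why the first problem is "narrowed" rather than solved: once you genuinely hold Janet's public key, checking that a message came from her is the easy part; what remains hard is obtaining that key without an in-band attacker swapping it out. A minimal sketch using PyNaCl (ed25519) - keys and message are generated locally, purely for illustration:

    # Sketch only: signature verification is trivial *given* an authentic public key.
    from nacl.signing import SigningKey

    janet_signing_key = SigningKey.generate()          # Janet's side
    janet_public_key = janet_signing_key.verify_key    # the thing we must obtain reliably

    signed = janet_signing_key.sign(b"latest experimental data")

    # Our side: verify() returns the message, or raises BadSignatureError if tampered with.
    print(janet_public_key.verify(signed))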

The second problem - "how can I have confidence that the network entity I am talking to is the same as the 'real' entity I want to talk to?" - is presently tackled by a mongrel mix of DNS and CA-model centralisation... which is to say, it's got two famously complex and insecure systems entwined in an ugly fail-dance, ensuring that there's no way in hell anyone can be 100% sure - or even 95% sure - that the two systems together give a reliable answer to the question of whether I'm sending packets to "Janet" at a network (logical) address that is actually controlled by Janet. Usually, my packets will get to Janet... except when they don't. And I'll most likely never know if they don't get there, because an attacker with access to the skeleton keys of DNS and/or CA credentials can do so invisibly. I never know when I'm being screwed, nor does Janet. This uncertainty serves central power just fine.

The second problem emerges from the ontological roots of routed networking: the divergence between physical and logical network topology, as well as the distribution and dynamic evolution of "connectome"-level entity-relationship information embedded in those model layers. The first problem, in contrast, is simply a by-product of remote communications for a species of mammal evolved to know each other in physical terms, not as amorphous, disembodied conceptual categories.

Both problems must be solved, concurrently and robustly, if we are to have easy and consistent confidence that when we visit https://janetphysicsconsulting.org we are sending our latest experimental data to "the real Janet" rather than someone pretending to be Janet, and that those data are being routed to an endpoint controlled by Janet rather than some sneaky GiTM along the way...

[image: IMG_20141107_083232.jpg]


Currently, to send those data to Janet's website with confidence they'll arrive unmolested in Janet's custody, we have to have confidence both that the hostname "janetphysicsconsulting.org" will translate into instructions for our data to go to Janet's computer (DNS resolution and routing table integrity), and that janetphysicsconsulting.org is actually controlled by Janet and not some imposter pretending to be Janet (the TLD registrar system of authoritative nameservers, etc.). If either - or both - of those assurances fail, then no amount of clever crypto will prevent our data from getting fondled in a most unseemly way.

That's the problem, in a nutshell.

The solution, most emphatically, is not to continue to incrementally refine the CA model, or (merely) encrypt DNS queries. Each of those has its uses (indeed, we're supporters of DNS query security ourselves), but they cannot act as a substitute for systems-level alternative mechanisms for solving this problem. I'm repeating this point over and over, because until we accept that reality, we're self-precluded from ever seeing our way forward. Like the fish in the sea who never imagined the concept of "sea," we're swimming in waters of which we remain pathetically unaware.

We're in the water, all of us. We must see that, before we can even talk about what that means.

~ ~ ~

Oh, right, I'd mentioned something about cryptostorm solving these intertwined problems of network identity for folks on-cstorm, hadn't I? A quick sketch, so as to leave room for more technical exposition once we've rolled out a tangible proof-of-concept in the form of r2r-verified connections to cryptostorm itself (which should be done in a day or so... we'd scheduled it earlier, but pushed STUNnion up the queue given its serious opsec implications).

There are two main components to our r2r framework: one addresses routing, and one addresses public fingerprint verification. Fortunately, both problems have already been essentially solved (in technical terms) via creative, vibrant technologies that were all but nonexistent a decade ago.

Verification of the integrity of publicly-published data is a problem fundamentally solved by blockchains. Consensus validation of chain-posted data works, and has proved robust against very strong attacks thus far. It is not perfectly implemented yet, and there are still hard problems to be tackled along the way. That said, if cryptostorm wants to post something publicly in a way that anyone can access and have extremely high confidence both that it was posted by cryptostorm, and that it has not been modified since, blockchains work. Whether with pleasant frontends such as those offered by keybase.io or onename.io (as a class), or direct to blockchain commit, this approach gets data pushed into a place from which it is nearly impossible to censor, and in which it is nearly impossible to modify ex post. This works.
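As a concrete (and heavily simplified) sketch of what a "direct to blockchain commit" lookup can look like: querying a local namecoind for the record a site operator has published under a name. The RPC endpoint and credentials below are placeholders, and the record layout (a "fingerprint" field) is an assumption for illustration - not a published cryptostorm format.

    # Sketch: read an operator-published record out of the namecoin blockchain via a
    # local namecoind's JSON-RPC interface. Endpoint, credentials, the name queried and
    # the 'fingerprint' field are all illustrative placeholders.
    import json
    import requests

    RPC_URL = "http://127.0.0.1:8336"        # assumed local namecoind RPC endpoint
    RPC_AUTH = ("rpcuser", "rpcpassword")    # placeholder credentials

    def name_show(name):
        payload = {"jsonrpc": "1.0", "id": "r2r", "method": "name_show", "params": [name]}
        r = requests.post(RPC_URL, auth=RPC_AUTH, data=json.dumps(payload))
        r.raise_for_status()
        value = r.json()["result"]["value"]  # the raw string the name's owner published
        return json.loads(value)             # assumed here to be a small JSON record

    record = name_show("d/janetphysicsconsulting")
    print(record.get("fingerprint"))         # e.g. a SHA-256 fingerprint of Janet's cert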

Successful routing of data across an unfriendly network substrate - with exceedingly high confidence that those data are not being topologically hijacked mid-stream, and that the endpoint to which the data were directed at the initiation of route setup is in fact the endpoint at which they arrive (and the reverse) - has been solved by the meta-network technologies of Tor and i2p (in a form of convergent evolution of disparate architectures). Both mate packet transit with asymmetric cryptographic verification of bit-level data and route trajectory, and both work. An oracular attacker sitting on mid-route infrastructure can of course kill routing entirely by downing the network itself, but no practical or theoretical attacks enable such an attacker to enjoy oracular route-determination control over such sessions. These tools, also, work.

With those two technical primitives in place, the challenge of enabling confidence in our visit to Janet's website is fundamentally met. We can verify that Janet is Janet, publicly and reliably, via blockchain commit... and we can ensure that the essential components of this process are routed reliably through the use of meta-topological tools from either Tor or i2p. Simply put, we can do blockchain lookups via topologically hardened Tor/i2p routing constructs that allow us to establish reliably secure connectivity with Janet's website. Once we have that session instantiated, in cryptographic terms we are in good shape: TLS works, once it's up and running, and we need not try to restructure TLS to fix the problem of route/identity validation and integrity assurance.
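Tying the two primitives together, the check itself is short: compare the certificate a server actually presents against the fingerprint the operator has published on-chain (fetched, ideally, over a Tor/i2p circuit as described above). A minimal sketch - the hostname and expected value are placeholders, not real published data:

    # Sketch: does the cert served on this connection match the operator's published one?
    import hashlib
    import ssl

    def served_fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))   # cert actually presented to us
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    host = "janetphysicsconsulting.org"
    expected = "placeholder-fingerprint-from-blockchain-lookup"

    if served_fingerprint(host) == expected:
        print("served cert matches the operator's published fingerprint")
    else:
        print("mismatch: possible MiTM, or a legitimate but unannounced cert rotation")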

Rather, we graft exogenous tools - themselves well-proven in the field, but somewhat at a remove from "mainstream" https currently - atop the existing strengths of https. Further, this approach generalises to non-https network encryption. Once the extra superstructure is in place to bulwark against the structurally implicit weaknesses of the CA, DNS, and TLD-nameserver systems, there are no intrinsic bounds on how far it can be extended from there.

~ ~ ~

We're making no fundamentally new tech, at cryptostorm, in order to bring r2r to life. The tools are there, because creative and dedicated individuals and teams have invested their passion and wisdom in bringing them to life. We're using components from the DNSchain team, from the entirety of the Tor Project's work, from Drs. Bernstein & Lange's c25519 breakthroughs, and from dozens of other brilliant technologists. We're just stacking those wonderful building blocks up in a way that enables something really useful for folks seeking secure network access, via cryptostorm.

The final piece of the puzzle is our deepDNS resolver/reply system, which has emerged from our earlier work to ensure integrity of DNS queries in the micro sense. With deepDNS, we are able to deploy "active masking" at the cstorm-network level - ensuring that privacy-relevant attack surfaces are minimised for folks on-cstorm.

Once we recognised the implicit capabilities of deepDNS - once we noticed that we're swimming in the water, as it were - the jump to r2r was all but inevitable. We are able to provide robust assurances of both data-in-transit integrity and routing-trajectory integrity for the on-cstorm leg of member network sessions... and that bootstraps all the rest. It's a layered, fluid, topologically heterogeneous meta-system that makes r2r possible. And it works.

So that's that. Despite this "too many words" essay, the deploy is somewhat trivial in practice. Once we've got tangible examples of this methodology in the field, we expect to find improvements, refinements, and extensions currently not obvious to us. And we hope others will take what we're able to do, and build in turn new capabilities and technologies we don't yet imagine ourselves.

Here's to those of us who are brash enough to worship at the altar of the cult of the done...

Cheers,

    ~ ðørkßöt


ps: down with fishycerts! :-P


Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby parityboy » Thu Mar 05, 2015 9:54 pm

@OP

Building on this requirement of authenticity, I remember something I'd read about .onion sites being spoofed. This worked because .onion addresses look like l0t50fg00bl3dyg00k.onion; many times users won't remember what the real .onion address is for the Silk Road (for example), and so will assume that the site they are using is the real one.

I'm of the opinion that there is a market for a (perhaps) distributed, blockchain-based registry of .onion addresses which are signed by site owners, and (again perhaps) agreed upon by some kind of consensus. This could be surfaced at user level by some kind of browser extension which could either provide a searchable list of sites, or simply verify that what is in the address bar is genuine.

I don't know enough about blockchains or sidechains to be able to implement this to any degree of robustness (if at all), but I thought I'd put it out there as an idea to try to address the "at-a-glance" trust given to .onion addresses by users.
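The signature-checking piece of that idea (the part that doesn't need any blockchain at all) is at least straightforward to sketch: a record mapping a human-readable label to a .onion address, signed with the site owner's key, which an extension or lookup tool could verify before trusting the address bar. Purely an illustration using PyNaCl - all names, keys, and addresses here are made up:

    # Sketch of the idea above: owner-signed label -> .onion mapping, checked by the client.
    from nacl.signing import SigningKey, VerifyKey

    # Site owner's side: sign the mapping, publish record + public key in the registry.
    owner_key = SigningKey.generate()
    record = b"examplemarket:l0t50fg00bl3dyg00kexample.onion"
    signed = owner_key.sign(record)
    published_pubkey = bytes(owner_key.verify_key)

    # Client side: verify() returns the record, or raises BadSignatureError if altered.
    print(VerifyKey(published_pubkey).verify(signed))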



Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby Guest » Fri Mar 06, 2015 2:33 am

Here's a sloppily edited block of condensed factual info I pasted together to try and understand how this works.

Root-to-root doesn't replace the CA model, nor is it in a "battle" with it; it subsumes it, in a sense, in a simpler wrapping of non-mediated identity validation.
In short, we're shifting the Authority of the Certificate Authority model back to individual network members... There are two main components to our r2r framework: one addresses routing, and one addresses public fingerprint verification. Verification of the integrity of publicly-published data is a problem fundamentally solved by blockchains. Consensus validation of chain-posted data works, and has proved robust against very strong attacks thus far. Successful routing of data across an unfriendly network substrate... has been solved by the meta-network technologies of Tor and i2p. Both mate packet transit with asymmetric cryptographic verification of bit-level data and route trajectory. With those two technical primitives in place, the challenge of enabling confidence in our visit to Janet's website is fundamentally met. We can verify that Janet is Janet, publicly and reliably, via blockchain commit... and we can ensure that the essential components of this process are routed reliably through the use of meta-topological tools from either Tor or i2p. Simply put, we can do blockchain lookups via topologically hardened Tor/i2p routing constructs that allow us to establish reliably secure connectivity with Janet's website.



So - if I'm getting this correctly...
DNSChain/okTurtles verifies the IP/DNS match through network consensus and blockchain commit.
...and after that I'm lost. Something i2p, something something Tor, blockchain, fingerprint, CA techno voodoo magic...

Something about consensus and the blockchain also being used to verify 'fingerprints', which I assume means certs? Or is it more than that? How can the client utilise this? Or is this somehow all done server-side? How?

How can topological routing be verified via tor/i2p PKI unless 'janet' is running on tor/i2p? As I understand it, tor/i2p PKI only verifies/validates routing within tor/i2p - once traffic exits to the clearnet it's back to square one, vulnerability-wise. Or do you mean just the cert (err, fingerprint?) to janet is validated via tor/i2p/blockchain somehow, and checked for consensus? Wouldn't Tor/i2p exit nodes be a prime candidate for exactly the kind of interception you're trying to avoid - i.e., if most tor/i2p nodes are targeted for interception then the consensus itself might be wrong? In any case, what happens when there's a legitimate cert change - how is that transition handled?

You've done a great job explaining all the issues surrounding these standard, outdated, clusterfuck "security" systems - it's just that when it comes to the proposed solution/implementation and the nuts and bolts of how it works, I'm lost; I'm either too ignorant of the underlying tech, and/or there's not enough info here to understand what you're implementing. Could you please explain the fundamentals of how this new system works more clearly?

User avatar

Topic Author
Pattern_Juggled
Posts: 1492
Joined: Sun Dec 16, 2012 6:34 am
Contact:

Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby Pattern_Juggled » Fri Mar 06, 2015 3:24 am

Guest wrote:How can topological routing be verified via tor/i2p PKI unless 'janet' is running on tor/i2p? As I understand it, tor/i2p PKI only verifies/validates routing within tor/i2p - once traffic exits to the clearnet it's back to square one, vulnerability-wise. Or do you mean just the cert (err, fingerprint?) to janet is validated via tor/i2p/blockchain somehow, and checked for consensus? Wouldn't Tor/i2p exit nodes be a prime candidate for exactly the kind of interception you're trying to avoid - i.e., if most tor/i2p nodes are targeted for interception then the consensus itself might be wrong? In any case, what happens when there's a legitimate cert change - how is that transition handled?


Short reply now; more later. If one does the namecoind query inside one of the meta-networks, it becomes exponentially difficult to reliably inject altered results into the process. The blockchain doesn't have a routing address - it replicates everywhere and can be written to or read from anywhere - so pinging it (or "it," as it's a bunch of copies of itself) can be done anywhere inside those networks.
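A rough sketch of what "doing the query inside one of the meta-networks" can look like in practice: the same sort of name_show call sketched earlier in the thread, but routed through a local Tor SOCKS proxy to a resolver reachable as a hidden service. The .onion address, port, and credentials are placeholders (not real endpoints), and requests needs PySocks installed for the socks5h proxy scheme:

    # Sketch: a namecoin name_show lookup routed over Tor. Endpoint and credentials are
    # placeholders; requires requests + PySocks for the socks5h proxy scheme.
    import json
    import requests

    PROXIES = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}
    RPC_URL = "http://exampleresolverxyz.onion:8336"   # placeholder hidden-service resolver

    def name_show_over_tor(name):
        payload = {"jsonrpc": "1.0", "id": "r2r", "method": "name_show", "params": [name]}
        r = requests.post(RPC_URL, data=json.dumps(payload),
                          auth=("rpcuser", "rpcpassword"), proxies=PROXIES)
        r.raise_for_status()
        return r.json()["result"]

    print(name_show_over_tor("d/janetphysicsconsulting"))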

You've done a great job explaining all the issues surrounding these standard, outdated, clusterfuck "security" systems - it's just that when it comes to the proposed solution/implementation and the nuts and bolts of how it works, I'm lost; I'm either too ignorant of the underlying tech, and/or there's not enough info here to understand what you're implementing. Could you please explain the fundamentals of how this new system works more clearly?


The fault is in my very skim-level presentation of the structure of a technical plan for how to do this. It felt like the essay would become even more bloated were that grafted on, so I've spawned that off to handle separately - not as a "here's the answer" but rather a "here's my model, let's kick it around & refine it down to the most elegant version of itself."

There are some things that won't translate well - although they are generally the things that are least in need of additional bulwark, in my way of seeing things. Ephemeral, multi-layered, centrally-administered CDNs don't (initially at least) translate well. Same goes for fast-flux-ish iterative domain::IP mappings - that stuff is designed to be fleeting and easy to change, and that's not really the core of what is most broken.

What is broken tends to be the "there's a server, it's got fairly stable IPs associated with it, I need to know I can spin up a good https session with it and not have a bunch of nasties bum-rushing the process every hop along the way" scenario.

That problem can be solved.

Cheers,

~ pj


Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby Pattern_Juggled » Fri Mar 06, 2015 3:28 am

One more quick little note-let...

This can work, and work with minimal drama. I know this is true because my PoC for it has been a manual process of doing gut checks of connections to websites for the last month or so. One can often, after a bit of practice, spot problems as they happen - and with access to cryptostorm, as one example, one can often simply redirect sessions down a different pathway to avoid the badness.

If that can be done with meatspace implements, it can be done better and more efficiently with a bit of scripting and the benefits of the blockchain & meta-networks. That's the ground-up approach I've taken to proofing the implementation capability. The rest is simply fine-tuning and improving efficiency...
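For what it's worth, the "bit of scripting" version of that gut check can be as simple as fetching the certificate a site presents over two independent paths and comparing fingerprints. A sketch, assuming PySocks is installed and a Tor SOCKS proxy listens on 127.0.0.1:9050; the hostname is illustrative:

    # Sketch: compare the cert seen over a direct connection vs. over a Tor circuit.
    import hashlib
    import ssl
    import socks  # PySocks

    def fingerprint_direct(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def fingerprint_via_tor(host, port=443):
        sock = socks.create_connection((host, port), proxy_type=socks.SOCKS5,
                                       proxy_addr="127.0.0.1", proxy_port=9050)
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE        # we only want the raw presented cert
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    host = "janetphysicsconsulting.org"
    print(fingerprint_direct(host) == fingerprint_via_tor(host))   # False is worth a closer look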

Cheers,

~ pj



Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby Guest » Fri Mar 06, 2015 12:07 pm

If one does the namecoind query inside one of the meta-networks, it becomes exponentially difficult to reliably inject altered results into the process.


If I understand correctly, this is the DNS aspect - it's done by the specifically configured DNS server, which retrieves blockchain records placed by numerous DNS servers of what IP www.janet.com resolves to, and forms a consensus of results that will make bad resolves stick out and be rejected. Presumably it would also briefly make legitimate IP changes appear illegitimate until the consensus balance adjusted to the change. This effectively means that a website is much, much harder (impossible?) to spoof, and can't be spoofed to just a single user or server solely via DNS records.
This part I think I get. This is dnschain/okTurtles - the deepDNS stuff, right? ...or does the blockchain only work for .bit addresses? I've not been clear on that, tbh.

So that leaves:
1. The TLD registrar aspect of "does janet actually own www.janet.com?"
2. The routing aspect of "is IP xxx.xxx.xxx.xxx really xxx.xxx.xxx.xxx?"
3. And the Cert aspect of "is the janet.crt that the client received from janet.com the correct .crt for janet.com?"

So:
1. I'm completely clueless how the first can be addressed without site operator implementation of namecoin .bit registration or similar tech. Janet has to wake up and fix this- no?
2. Perhaps the second could be addressed by some sort of minimal site local area routing topology included in blockchain consensus? "site janet.com originates from route A and then through B or C..etc" -but how would that be implemented?
3. Similarly, I can imagine a blockchain consensus of "janet.com gives cert janet.crt", but again - how is that implemented? Especially without any of this requiring client-side software? A namecoin blockchain entry can host 512 bits, iirc - that would work for a few 32-bit topology entries, but it's not nearly enough for a cert? Multiple entries? I've not read of this being done with it - and it would still need to transition from server-side scripting to useful client-side action. How?

Maybe you could strip the client cert stores and have a client-side CS root CA, with everything else done server-side?? ...there's a controversial thought - doesn't seem like your style at all though. It's good enough for HMA - lol! Course I didn't get the sense that was at all what they were doing with it though. Presumably you'd be much, much more trustworthy. Gah - I'm rambling; surely that can't be it... I'm probably too noob to understand what you're cooking here. Hope my confusion is at least more defined now and that this post isn't unwelcome. Working on it is more important than explaining it to my dumb ass. Really looking forward to an explanation when you get the time - if you could do so in this format, explaining how the implementation addresses each of these aspects, that would be awesome. I find this stuff fascinating and enjoy learning about it.


Dr Green: "tunnel traffic through some alternative (secure) protocol..."

Postby Pattern_Juggled » Fri Mar 06, 2015 2:01 pm

Following up on this comment from yesterday:

Pattern_Juggled wrote:...with access to cryptostorm, as one example, one can often simply redirect sessions down a different pathway to avoid the badness.


I ran into a convergent explanation of this solution path from Dr. Green this morning:

One option for Google is to find a way to deal with these issues systemically -- that is, provide an option for their browser to tunnel traffic through some alternative (secure) protocol to a proxy, where it can then go securely to its location without being molested by Superfish attackers of any flavor. This would obviously require consent by the user -- nobody wants their traffic being routed through Google otherwise. But it's at least technically feasible.


This works much better via cryptostorm than with Google attempting to do browser-based encapsulation - we don't need to move up to those OSI layers to address the problem; rather, we are continuously moving HTTPS traffic through cryptostorm's network transit fabric the entire time.

(note that, yes, this doesn't solve the problem of hideously-subverted browsers or rootkits on member computers... I do not think there's any network-level mechanism that can do much to help in the event a member is pwned at root on their local machine)

As usual, Dr. Green's writing is much better than mine!

Cheers,

~ pj



Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby Guest » Sat Mar 07, 2015 5:39 am

I just watched the vid I'm posting below, which is quite informative and on topic. It seems I didn't actually even understand the DNS aspects of this... So - dumbass indeed; it's good to be humbled when you're wrong, as much as I hate being 'that guy'. I'm not even going to mention how I think this works now, as it's quite possible I still don't understand it; but a couple of things that seem clear are: blockchain entries ARE only .bit addresses, and the consensus is NOT an average of samples taken by multiple observers of multiple sources, but rather a consensus on the existence and content of a single crypto-signed authoritative blockchain entry by the owner of the .bit address.

Securing communications with blockchains
http://www.youtube.com/watch?v=wAbvN_PoSrs


Re: root-2-root: cryptostorm's roadmap to a simplified, decentralised, credible future of secure web browsing

Postby parityboy » Sun Mar 08, 2015 4:10 am

@Guest

Yeah, I watched that video as well. At a certain point the speaker says that there is a way to get .com addresses into the blockchain (at time index 12:33), but does not elaborate further - I assume that was beyond the scope of the presentation. If this is indeed true then, as I see it, once the correct tools are in place it would be a case of promoting the use of these tools so that domain owners take advantage of them.



Re: Decentralised Attestation: cryptostorm's #CAfree framework for legitimate cert-based https & tls security

Postby JakeMan » Mon Mar 28, 2016 6:00 am

So - I get the blockchain is a great replacement for the CA. But - currently DNSchain and namecoin have mainly .bit domains. How would you be using the blockchain for the wider internet? (At least in the near future?)

Also - the security of the blockchain hinges quite a lot on the checkpoints that client-program installer files are supplied with. Procurement of the client program becomes a problem (risk) if it is not out-of-band.

