github repository: #skRATched
Those who follow such things may have noticed that a good chunk of the cryptostorm team has been, not to put too fine a point on it, a little bit distracted in recent days. That's not to say that we've not been covering our duties, nor making steady progress on network improvements and new capabilities for cryptostorm members. Rather, it's that we've been ever less good (read: bad) at marketing lately. We've deployed really interesting and useful new features and barely mentioned them. We put our new website into production in a final push of production-code fine-tuning that culminated in us playing a bit of a prank on the internet. And also deepDNS.
It's not that we're dismissive of the value of such things - not at all. It's been something else entirely: #superfish. For those who stay a healthy distance from the front lines of security tech, this was the moment the world suddenly realised that Lenovo had been pre-installing some really noxious adware on its laptops: adware from venture-funded superfish that bundles into its installer (or pre-loaded image, in the case of those laptops) a self-signed, low-rent root ssl certificate enabling comprehensive man-in-the-middle (MiTM) plaintext visibility into all the "secure" https browsing people with infected machines might do.
There's been oceans (sorry) written about superfish since then, and about #komodia, the provider of the creepy TLS-hijacking network kit behind its most frightening characteristics. We did our part to help dig through the initial waves of public relations chaff and focus attention on the horrific side-effects such MiTM attacks have on the reliability of https as a secure transit medium, in general. This is important work, and we were happy to help out as we could. By the time Filippo released the first in a series of iteratively-improving vuln-testing tools, we'd been pulled deep into the superfishy waters.
To see why, it helps to step back and remember the part cryptostorm plays in good security practice. We're essentially generalised transit-layer security, what we sometimes call "encrypted packet routing" (not a very sexy tagline, admittedly). The idea is that we wrap all data traffic to and from our network members in an encrypted, encapsulated wrapper as it travels to and from the plaintext internet. We work hard to ensure stuff doesn't "leak" out of that wrapper, and our model involves constant vigilance on such matters. We know that terrain; we know what we do well and where we're still pinning down loose bits. It's our home turf.
But then there's web browsers.
Ideally this wouldn't matter to cryptostorm: after all, we wrap https inside our own layer of in-transit crypto, so if https is broken then it's just a good thing we're there as fallback. Only with MiTM attack models, it's not so simple; sure, from the member to the cryptostorm exitnode, we can cover that channel securely (and we do)... but those https sessions eventually exit cryptostorm onto the plaintext internet (unless they're .onion or .i2p traffic heading into those subnetworks via our deepDNS service). When they do, they're vulnerable to interception.
They're also vulnerable right at the member computer - the "endpoint" - and superfish showed just how easy it is for that vulnerability to become real. After all, if ssl kneecappers like komodia's interceptor jack data traffic right on the member PC, having cryptostorm secure the already hijacked data doesn't help our members at all.
In one sense, this isn't our problem: we route packets. Only... no. That's not how things are evolving. Our real role, we understand, is to help ensure our members are secure online. Really secure, not fake "secure" (see below... trust me). So when browser vulns get critical and members are leaking data right through the middle of our secure tunnel, we don't just shrug, declare it "not our problem," and look the other way. That said, we're not browser specialists and we don't run client-side kernels... but somewhere in the middle, there's important work for us in keeping our members secure.
Superfish - and the tragic insecurities of the CA model - falls right in that terrain. That terrain is also where we've enabled support for Namecoin-blockchain-based DNSChain... to help members avoid reliance on centralised DNS lookups, for example. Thus superfishing wasn't just a distraction from our real work: it's part of our job. Ok, also it's fascinating, and working with smart colleagues to untangle the ball of sad this whole thing represents is exhilarating and entirely fulfilling (if you're a geek, right?). Many an hour was not slept in the past week, as our team has cycled in and out of superfishy forensics.
But a funny thing happened on the way to the superfishing: we started getting tips from friends and colleagues, out in the etherverse, that they'd seen similar cert-based, ssl-hijacking, proxied weirdness well beyond superfish. Some of that's simply komodia's kneecapper-ware spreading like some nasty infection... but not all. Soon enough we were chasing down injection-based subversions of ssl security on a range of platforms, thanks to data provided by the community. (We owe apologies to some who have waited patiently all week while we got pulled into... well, you'll see below.) The rot runs deep - we've only scratched the surface, and my own pastime of collecting "fishycerts" in the wild will likely carry me through many a year of my peaceful retirement.
But here's where things get weird.
We got our first tip on a "VPN service" that might be superfishy late last week. Our reply perhaps speaks for itself...
That project is still a work in progress (again, see below for an explanation of the delay in publishing); ongoing data are available publicly for other researchers to review and expand on via our github repository. Are we just being, well, assholes in our specific interest in VPN services that might be superfishing their customers? Actually, no - we hope not.
Rather, we're strong proponents of operational and technical transparency when it comes to customer-focussed security tech. The most obvious part of that is opensource code, of course. And yet, the "VPN industry" is filled with closed-source "mystery binaries" (as Graze calls them)... blobs of who-knows-what downloaded and installed by people trusting they will "stay secure" and "have total privacy" if they just pony up their creditcard every month.
It's very time-consuming to analyse binaries, as compared to source code: possible, but something of a fine art... and it's not our fine art. We run a great network security service; we don't pretend to be world-class .exe-reversing, deobfuscation specialists in our spare time. In a pinch, we've got team resources with strong skills to cross into that space - but it's not our core zone, and we're slower and less competent at it than the many folks who do it full-time and do it very well indeed.
But that shouldn't be necessary for privacy tech, ffs!
There's just no legitimate excuse for closed-source code getting pushed around in this space. Stick a humble little repository on github (like our quite-humble widget repo), and dump the pre-compilation source code into it. Sure, it'd be great to do full reproducible-build recipes & the other high-powered stuff (we're working on that as a medium-term goal), but even a simple repo and some independently-available hashes go a long way towards ensuring totally arbitrary, dodgy code isn't being superfished all up in customers' grills, you know? This is not so much to ask.
So that's our axe to grind, our idée fixe as it were. With open code, iterative improvement is just so much more structurally inevitable. Someone sees something derpy in the code, they raise a fuss, derp is lessened, code recompiled... the world is a better place. It's a collaborative process, and the entire "industry" improves over time. That's the idea, anyway.
In the "VPN industry," unfortunately, it doesn't work like that at all, just yet. There's some exceptions - we've gone back and forth with Mullvad, to be blunt, on some issues publicly... and despite the friction such can entail, it's been a net benefit for everyone. We (genuinely) appreciate the eyes on our code they provide, and we hope our chivvying (and occasional trollolol-y good-natured tomfoolery) might just be of benefit to them. Anyway, that's how it's supposed to work: it's not perfect, and it's not always pretty... but it delivers better tech over time, mostly. So that's good.
In the "VPN industry," it's all snake-oil all the time: every me-too "VPN service" is the "world's fastest" and blah blah blah. No metrics, no public view into the code. We've found instances of companies faking their 'server stats' pages with php scripts, we've watched as identical server certs showed up across different VPN companies (happens when the "DO NOT USE THESE TEST CERTS IN PRODUCTION" certs included in the openvpn installers get, yep, used in production)... there's more. A lot more. We write up perhaps 5% of what we find, time being limited. Graze refers to it as the "backlog of weirdness" in our files, and that's an apt phrase indeed.
So when PIA claims to be safe against 'advanced alien technology,' it's not an aberration. And when we point out how silly that is, we're accustomed to being quite well savaged in reply. Trolls, smears, (very poor quality) SQLi pokes at our machines... it's temper-tantrum-level stuff, and it's how the "VPN industry" responds to public discussion & debate of security tech.
That has to change.
Hence we do try to shine a bit of light on less-than-optimal issues in this little corner of the world - as we encourage others to do with us, as well - so that this corner can come in line with every other part of the field in recognising that open code is the foundation of the vast majority of solid security tech.
It does make us seem a little bit growl-y and suspicious, there's no doubt. That's a shame, and I hope over time we come to more of an equilibrium with the "VPN industry" in such matters. But that'll happen when things move into the best-practice model of open, public code... not by us compromising our position on the question. Just to be clear. We do think it'll happen, and if we get tarred a bit as the bête noire of the "VPN industry" meanwhile, so be it. We'll live - and our members are no less secure for it, that's for certain.
As we were unpacking and running forensic analytics on the VPN software installer that bundles not only the fiddler testing package (which does ssl kneecapping very well, indeed) but also trusted cert-key pairs that remain in the Windows trust store even after uninstall, something came our way. It came from... a friend with some very precise information. And as such tends to be, it was short and to the point: cyberghost is far from the worst; take a look at this binary (loaded from this page)... you'll see if you look hard enough.
Oh. Ok then.
Here's the installer pulled from that URL. It has a SHA256 hash value of: ef2af4837195f85335d5434d198e75b07246d3e79b942b2a542e96c90a5aac59.
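Anyone can check a published digest like that independently before touching the binary. Here's a minimal sketch (the local path is whatever you saved the download as - the filename below is hypothetical):

```python
import hashlib

# Published SHA-256 of the HMA installer, quoted from above.
EXPECTED = "ef2af4837195f85335d5434d198e75b07246d3e79b942b2a542e96c90a5aac59"

def matches_published_hash(path, expected=EXPECTED):
    """Stream the file through SHA-256 and compare to the published digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            h.update(block)
    return h.hexdigest() == expected.lower()
```

If this returns False, the bits on disk are not the bits that were analysed - full stop.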
Here is the malwr.com report.
(reports also from virustotal and deepviz.com, and herdprotect.com)
For whatever value such things add, here's the chain of trust underlying the code signing certificate on the binaries:
edited to add: it's been pointed out to me that, per data provided via the deepviz.com scanner, the HMA installer doesn't pass the "PE check," or code signing cryptographic verification. I've not done these calculations directly, so I can't say whether this PE-fail is spurious or not - but here's the report as it is presented:
Those red flags raised by some of the scanners are, as friends in the malware-analysis world have reminded us (and as we know firsthand), most likely false positives rather than anything damning. At that level, nothing to see here folks - move right along. It's closed-source code, so we can't see a lot of what's going on... but what we can see isn't overtly evil. Fair enough.
Dig a little deeper, and you see that the installer makes an http call to this URL:
GET /download/0/6/1/061F001C-8752-4600-A198-53214C69B51F/dotnetfx35setup.exe HTTP/1.1
User-Agent: NSIS_Inetc (Mozilla)
Cookie: MC1=V=4&GUID=ff8156e0b7c7464e9c5ca694a076d81e&HASH=&LV=&LU=1412024236610; optimizelySegments=%7B%22223040836%22%3A%22direct%22%2C%22244338170%22%3A%22none%22%2C%22223033821%22%3A%22false%22%2C%22223082014%22%3A%22ie%22%7D; optimizelyEndUserId=oeu1411991834643r0.7160220457384006; optimizelyBuckets=%7B%7D; MUID=121AAB31755E6EFF1B43ADF5715E6C85; A=I&I=AxUFAAAAAADvBgAAYeNKI3hJ0WqAQsafK8vAFw!!&V=4
This download actually routes through akamai, via this trajectory at present:
[root@fenrir ~]# wget http://download.microsoft.com/download/ ... 5setup.exe
--2015-02-26 16:42:29-- http://download.microsoft.com/download/ ... 5setup.exe
Resolving download.microsoft.com... 18.104.22.168, 22.214.171.124, 2a02:26f0:71::5c7b:4826, ...
Connecting to download.microsoft.com|126.96.36.199|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2959376 (2.8M) [application/octet-stream]
Saving to: `dotnetfx35setup.exe'
100%[===================================================================================================================================>] 2,959,376 651K/s in 4.5s
2015-02-26 16:42:34 (642 KB/s) - `dotnetfx35setup.exe' saved [2959376/2959376]
Here is the file that results:
As the wget results show as well, it's exactly 2,959,376 bytes long. The file hashes as follows:
Full-payload pcaps of the HMA installer making the calls to this file, via IP address 188.8.131.52, are here:
This file that has been pulled from that address, via those packets, at the request of the HMA installer, purports to be an incremental update to Microsoft's .net framework. Specifically (from microsoft.com):
Microsoft .NET Framework 3.5 contains many new features building incrementally upon .NET Framework 2.0 and 3.0, and includes .NET Framework 2.0 service pack 1 and .NET Framework 3.0 service pack 1.
File details, from the microsoft site:
File Name: dotNetFx35setup.exe
Date Published: 11/20/2007
File Size: 2.7 MB
Hmmm... 2.7 megabytes? Let's download that one directly from microsoft (by clicking the "download" button on that page, and navigating around their effort to trick the unwary into letting some Bing search crap come along for the ride - stay classy, M$!) and see what we get. I've added "_microsoft" to this filename - "dotNetFx35setup_microsoft.exe" - to keep the two apart; I did not change the capitalisation, which is how the file arrived and matches the spelling of the text title given by the Microsoft page, above.
Here's the trajectory the download request took:
[root@fenrir ~]# wget http://download.microsoft.com/download/ ... 5setup.exe
--2015-02-26 17:21:06-- http://download.microsoft.com/download/ ... 5setup.exe
Resolving download.microsoft.com... 184.108.40.206, 220.127.116.11, 2a02:26f0:71::5c7b:4896, ...
Connecting to download.microsoft.com|18.104.22.168|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2869264 (2.7M) [application/octet-stream]
Saving to: `dotNetFx35setup.exe'
100%[===================================================================================================================================>] 2,869,264 645K/s in 4.4s
2015-02-26 17:21:11 (641 KB/s) - `dotNetFx35setup.exe' saved [2869264/2869264]
How does this file hash out?
So, ok, the files aren't bit-identical. Indeed, we can see that the file from microsoft.com is smaller by exactly 90,112 bytes. Not a lot, but not zero either.
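A cheap first step when two binaries like these differ, before reaching for heavier tooling, is locating the byte offset where they first diverge. A sketch (both paths are hypothetical names for the downloads above):

```python
def first_divergence(path_a, path_b, chunk=1 << 16):
    """Return the byte offset where two files first differ, or None if they
    are identical (a pure size difference reports the shorter file's length)."""
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if a == b:
                if not a:                    # both exhausted: files identical
                    return None
                offset += len(a)
                continue
            for i, (x, y) in enumerate(zip(a, b)):
                if x != y:
                    return offset + i        # first differing byte
            # chunks match as far as the shorter one goes: size divergence
            return offset + min(len(a), len(b))
```

Whether the divergence starts up in the headers or deep inside a section already narrows down where those extra 90,112 bytes went.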
Let's scan the two files....
Here's virustotal's summary stats and metadata on dotnetfx35setup.exe (the one from the HMA installer), and here's the same virustotal metadata summary on dotNetFx35setup_microsoft.exe, the link directly from microsoft.com. Side by side, the two pages look like this:
edited to add: reading back over this, one can see that this "side by side" graphic I did is pretty much useless. Not my strong suit, sadly. I'm leaving it as-is, but adding in these two additional side-by-side comparisons in the hopes they're a bit more helpful.
Here are the microsoft ("legit") and HMA ("corrupted") virustotal detail screenshots, the former atop the latter:
And from the deepviz.com scanner, here's the microsoft ("legit") as compared to the corrupt ("HMA") versions' respective static sections analysis, the former atop the latter:
Only a few divergences, actually. Of course the hashes differ - no way to fudge that, since a hash is driven by every bit of the file.
The microsoft file shows version "3.5.21022.08" and the HMA one "3.5.30729.01", which makes sense because the microsoft binary was code-signed at 4:43 AM on 11/8/2007 whereas the HMA one wasn't code-signed until 8:24 AM on 7/30/2008... nearly a year later. So maybe the HMA file is some later build, right?
Except the compilation timestamp for both files is identical: 2005-06-01 16:46:51
As is the EXIF timestamp: 2005:06:01 17:46:51+01:00
Reviewing the PE Sections data, some interesting things stand out (this is beyond my personal level of Windows expertise, so I'm making note solely in hopes others will see and offer explanation). Virtual addresses are identical, as is virtual size. Raw size lists identical for .text and .data... but not for .rsrc; microsoft's version lists as 2827264, whereas the HMA version is 2917376. A size difference of exactly 90,112 bytes. Naturally, the hashes for the .rsrc sections diverge... but it's interesting to note that the hashes of the .data sections also diverge. That suggests that some of the payload is actually in .data, although the metadata weren't updated to reflect that?
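Those reported figures are at least internally consistent; a quick arithmetic check (the byte counts are transcribed from the wget output and scanner data above):

```python
# File sizes reported by wget, and .rsrc raw sizes reported by the scanners.
MS_FILE, HMA_FILE = 2869264, 2959376   # microsoft.com vs HMA-fetched binary
MS_RSRC, HMA_RSRC = 2827264, 2917376   # .rsrc raw sizes, per the PE data

file_delta = HMA_FILE - MS_FILE        # total growth of the file
rsrc_delta = HMA_RSRC - MS_RSRC        # growth of the .rsrc section alone

# The whole file-size difference is accounted for by the enlarged .rsrc:
# 90,112 bytes, i.e. 0x16000 - an even multiple of common section alignment.
print(file_delta, rsrc_delta, file_delta == rsrc_delta)
```

In other words, whatever was added, the section table says it was parked in .rsrc - which makes the diverging .data hashes all the stranger.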
When we flip over to the comparative data offered by malwr.com a similar small but nonzero divergence is once again apparent. The microsoft version shows a reasonably mundane dynamic analysis snapshot (again, the specifics of this are past my expertise level; I'm sharing the data as they appear):
The HMA/mystery version, in contrast, is quite a bit different in that regard:
A similar pattern shows in most all other areas: much more activity in the HMA version, as compared to the legitimate one.
Perhaps most striking is the network behaviour divergence. Here's the pcaps pulled from the microsoft version's installation process:
Here's the mystery/HMA pcaps:
9902433cc3ad6e00ecc3e321e9d68c83e12fcd2991327a405d4b8ad22525fbb2.pcap 55.6 MB
(that's a mega.co.nz link because the HMA version generated 55 megabytes of traffic during that short time-window on the VM testbed; in contrast, the microsoft version's pcaps clock in at... 8kB)
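That 55 MB vs 8 kB gap is easy to reproduce from the published captures: the classic libpcap file format is simple enough to tally with nothing but the standard library. A sketch (classic pcap only, not pcapng):

```python
import struct

def pcap_total_bytes(path):
    """Sum on-the-wire packet lengths in a classic libpcap capture file.
    Handles little- and big-endian captures; pcapng is out of scope."""
    with open(path, "rb") as f:
        header = f.read(24)                     # 24-byte pcap global header
        if header[:4] == b"\xd4\xc3\xb2\xa1":
            endian = "<"                        # little-endian capture
        elif header[:4] == b"\xa1\xb2\xc3\xd4":
            endian = ">"                        # big-endian capture
        else:
            raise ValueError("not a classic pcap file")
        packets = total = 0
        while (rec := f.read(16)) and len(rec) == 16:   # per-packet record header
            _sec, _usec, incl_len, orig_len = struct.unpack(endian + "IIII", rec)
            f.seek(incl_len, 1)                 # skip the stored packet bytes
            packets += 1
            total += orig_len                   # length as seen on the wire
        return packets, total
```

Run against each capture, the packet count and total on-the-wire bytes make the behavioural gap between the two installers concrete.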
There's a great deal more data to look through in the sandboxed VM runs of the initial HMA installer and then of the dotnetfx version it downloads. Some of it, after looking closely, researching carefully, and asking around, we're still not quite clear on. For those in this field, it's likely obvious how it all fits together.
We did run some local test installs of the HMA application itself, and there's network data gathered from that. We don't have those data analytics complete, and it felt inappropriate to stall publication of what we've worked on thus far whilst waiting on that. Most of all, we wanted to put our raw data into view of others who do this sort of thing in a dedicated manner. We've brought our best work to it, but at this point review is the most productive next step and it is for that reason we've brought these data forward.
What's in the payload of this modified binary? We haven't looked inside and let it run on a local machine long enough to answer that question. But, really... there shouldn't be a question.
As an aside, I believe what's happened here - and this will be obvious to many in the field, I'm sure - is that the legitimate microsoft SP1 upgrader has roughly 90kB of extra payload stuffed into it by whoever did these mods. What's in that payload? The strings give some hints, as do the registry edits... but that's something others can dig into with the data published here. Pains were taken to make the tweaked version look very similar... it took me a few days to finally see through that. Once you see it, of course, it's obvious.
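For anyone who wants to do that digging, pulling printable strings out of the binary is the obvious opening move; here's a minimal stand-in for the Unix strings(1) utility:

```python
import re

def ascii_strings(data, min_len=6):
    """Return runs of printable ASCII at least min_len bytes long - the same
    heuristic the classic strings(1) utility applies to a binary blob."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Usage sketch (filename hypothetical - point it at whichever binary you pulled):
# with open("dotnetfx35setup.exe", "rb") as f:
#     for s in ascii_strings(f.read()):
#         print(s)
```

Diffing the string lists of the two binaries is a quick way to surface URLs, registry paths, and module names present only in the modified version.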
Did the developers who made this HMA VPN application intend to bring in this apparently malicious payload, or was it some sort of serious security error? This I cannot say... it's an open question.
- - -
Is there a short-form, tl;dr summary of this work as it has developed so far? For us, we're confident in saying that shipping closed-source, mystery binaries as applications - with no validation mechanism possible, and no ability for outside researchers to effectively review code integrity - is a really bad idea. These are VPN software clients - this isn't heavily complex, super-secret technology... there's no reason the code isn't publishable, and publishing it carries no real cost in competitiveness.
Absent that open code standard, we're left with a situation such as we've uncovered with the HMA binaries. There they are. They are installing things that are not very friendly-looking, and all the signs are that it's a bad state of affairs. Or is it? Maybe there's a totally legit, obvious-in-hindsight explanation for why this flows the way it does... we're not saying that's impossible, and indeed if the developers of this application would step through the design, code, compile process it would be instructive and valuable for everyone involved.
As to what exactly these mystery binaries are doing - and, in terms of providing network security, not doing - we leave up to professional malware investigators to determine. From our end, we'd not distribute these binaries to our members. Nope, never. Perhaps we're just too cautious, or too old-fashioned... or have seen too much to be so carefree.
And superfish, the thing that got this going in the first place? One very tangible lesson we're all learning from that debacle is that the ease with which the current CA system can be subverted is not a theoretical matter, nor one specific to nation-state-level actors. Rather, it's being exploited widely, for all sorts of shady or outright nefarious reasons. It's particularly pernicious in that these injected trust creds tend to stick around for the long term, and thus open new attack surfaces almost impossible to fully enumerate. It's really a mess, and it's going to take some real work to get past this.
Now, having at the least scoped the extent of the problem, I feel we're in a much better place to map what we can do about it. And for that, I'd like to thank the cryptostorm team, who have patiently covered for me in so many ways in the last couple weeks as I had my "gone fishin'" sign posted out front of my office. I knew this stuff mattered, and I knew this stuff was chunky enough to take some sustained work. I didn't know it'd be weeks of fugue-state intensity... but such is life. So it goes.
Finally, a note of thanks to Filippo and the rest of the crew that has pitched in on this - publicly and privately - both in loose coordination with cryptostorm and via independent tracks. Many have helped enormously from behind the curtains, and some have offered amazingly valuable public expertise only through cloaked channels. Filippo, by stepping forward to be a face and a personality raising the alarm about these issues, has paved the way for so many of the rest of us to contribute most effectively. This is a newly-evolving model of emergent threat-analysis responsiveness, I believe, and it's been an honour to take part in it... despite the need to juggle other tasks in order to clear calendar time for superfishing.
We've so many more interesting things to pursue in this line of research; Graze's "backlog of weirdness" is now a growing deluge-in-waiting. The team here is hoping we can create syncretic working relationships with proper, card-carrying security researchers so that these items don't go stale under the weight of our myriad production obligations within cryptostorm.
Meanwhile, there you go. This is my idea of a short research note, although I apologise for the verbosity. A risk in this line of work, unfortunately.
ps: the one man who knows the true path forward, lest we forget, is Mike Espresso! Here's to #test2, w00d!