
On Honeypotting


Topic Author
cryptostorm_team
ForumHelper
Posts: 159
Joined: Sat Mar 02, 2013 12:12 am

On Honeypotting

Postby cryptostorm_team » Sat Nov 29, 2014 10:40 am

{direct link: honeypots.cryptostorm.org}


This is an article about honeypot awareness.

"Weird heroes and mould-breaking champions exist as living proof to those who need it that the tyranny of the 'rat race' is not yet final."

~ Hunter S. Thompson


What is a honeypot? A honeypot is a security resource set up specifically to draw in unsuspecting visitors and thereby compromise their security. The classic honeypot example in the "VPN industry" is that run by CumbaJohnny. This "VPN service" was set up, operated, and designed solely to gather information on 'carders' using it, and thereby prosecute them. There are others, although none as well-documented publicly and as bright-line in their goals as that one (we have found a honeypot VPN service run by a... an entity, and have been tracking it for more than a year - that story will continue elsewhere).

But apart from pure-form honeypots there's a sliding scale down from there. How about the "VPN service" that uses such badly designed encryption that it is effectively useless against even the most minimal surveillance effort? Customers pay to use it, largely unaware that this "private network" is cryptographically useless. The classic case of this was iPredator in its early years. As a reseller of Relakks' PPTP-based "VPN service," iPredator was offering a security tool that was functionally useless in securing anything. Eventually, when confronted, Peter Sunde admitted the tool was useful only to make a "political statement" and not as a security technique. Today, iPredator offers more competent service - although at last check, they still supported PPTP, as well.
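
For the technically curious, the PPTP problem isn't subtle. Here's a toy sketch - assuming the well-documented MS-CHAPv2 handshake layout, with a dummy stand-in hash rather than any real credential - of why the math collapses:

    # Why MS-CHAPv2 (the authentication inside most PPTP deployments) is
    # cryptographically useless: the 16-byte NT hash gets split into three
    # DES keys of 7 + 7 + 2 bytes. Toy illustration, not an attack tool.

    nt_hash = bytes(range(16))           # dummy stand-in for MD4(password)

    key1 = nt_hash[0:7]                  # 7 bytes -> one full DES key
    key2 = nt_hash[7:14]                 # 7 bytes -> one full DES key
    key3 = nt_hash[14:16] + b"\x00" * 5  # only 2 real bytes, zero-padded

    # key3 has just 2**16 possible values, so it falls instantly; that in
    # turn reduces the whole exchange to a single 2**56 DES search, which
    # rented hardware chews through in hours.
    print("key3 search space :", 2 ** 16)
    print("remaining effort  :", 2 ** 56, "DES operations, worst case")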

Is that old-form iPredator a honeypot? The word honeypot usually suggests some sense of intentional setup: a trap. In that sense, it's a bad match. All evidence suggests iPredator offered only PPTP because it was cheap, easy to deploy, built in to most client OSes, and what Relakks had already used. That's poor security practice, but not a honeypot.

How about a "VPN service" that advertises itself aggressively as "secure" and "private" with "no logs," but privately acknowledges that it hands over dockets on its customers to LEO dozens of times a month, on an "unofficial" basis - no warrants required? In this case, the company is actively marketing itself as "secure" and its marketing emphasizes the "no logging" element - which, of course, is total nonsense. This is much closer to a honeypot: there's an intentional misrepresentation, an effort to make visitors feel safe while knowing they're anything but.

So how do you know if a security service you are using is a honeypot?

We get asked this question a lot. Often, it's from "trolls" or paid shills for other "VPN services" who are looking to limit competent competition. It's the nature of the business that such things happen; we take it in stride. Sometimes, smart members or prospective members ask about honeypot concerns, and we point them to that CumbaJohnny article and encourage them to do their own digging and research, and make their own decisions. That's all well and good, but there's not much out there worth reading on this subject, unfortunately. What is out there usually offers that linear, overly-simplistic "don't get caught in honeypots!!" sort of advice that has nothing to say in terms of specifics.

So we've been kicking around our in-house views and advice on this topic. This article summarises what we know.

First off, talking about - and asking about - honeypots, honeypotting, and trust in technology generally is good. Without discussion and questions being asked, this whole topic gets shrouded in FUD and whispered nonsense - and that doesn't improve security or keep people safe. On the flipside, talking about honeypotting and honeypot awareness will inevitably result in more accusations of being a honeypot - once folks realise this is an important topic with few cut-and-dried answers, they start to see honeypots everywhere. The pendulum swings. We are used to this, and on balance it works out ok.

But the real question is: how can someone determine whether a given service is a honeypot, or not? What's the punchlist to make that determination with confidence? And, in short, there is no such punchlist and no way to answer definitively. Sorry. That's reality.

So the first lesson is this: anyone who tells you they can prove they aren't a honeypot is, at best, not credible in their expertise and, at worst, has something they are actively trying to hide (like being a honeypot). There are things we might feel help build confidence in a resource, but nothing that proves it's solid.

The flipside is that it is possible - on very rare occasions - to prove that something is a honeypot. Listen to those warnings, if they come! In the CumbaJohnny case, one researcher (Max Vision) noted that the server sometimes leaked IP addresses directly tied to the FBI. That's pretty much solid proof, by anyone's definition. He was largely ignored, under the assumption he was just a jealous admin of a competing site (which was true) and that his evidence was faked (which was not true - it was real). "I heard from this guy who heard from this guy" isn't hard evidence; nor, sadly, is the much-loved screenshot. It is very easy to fake most any screenshot - sorry, but true. If you are experienced enough to understand the details of this sort of thing, you'll know if a service leaks a proof-positive instance of honeypotting. Watch for those, although they're quite rare.
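
What does a proof-positive leak look like in practice? In the CumbaJohnny case it was server addresses tracing back to the Bureau. A minimal sketch of that sort of check (the address below is a documentation-range placeholder, not a real lead):

    import socket

    # Addresses observed in a service's traffic or headers; placeholder only.
    suspect_ips = ["198.51.100.23"]

    for ip in suspect_ips:
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)   # reverse DNS
            print(ip, "->", hostname)  # rDNS under a .gov would be damning
        except socket.herror:
            print(ip, "-> no reverse DNS record; try WHOIS next")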

That leaves us with a very large middle ground - not proven honeypots, but also no way to prove them 100% secure. Here are our rules of thumb, ones we as a team use personally and have developed over decades of life out in the digital wilds...

    1. Something looks too good to be true? That's a concern. The honeypot service we mentioned, the one we've been tracking for a while, gives its service away for free. That makes you wonder, doesn't it? Note: this is not to say any free service is a honeypot, so just stop ok? We're saying that too-good-to-be-true is a possible red flag. Same goes for ridiculous claims of magical crypto or whatnot: if it sounds fishy, check deeper.

    2. If it's too shiny and perfect, think twice. This one is really subjective, but we stand by it. Real life has bumps and scratches and bits of chuff sitting around. That's life. Real teams, working hard and under pressure and tight on cash, miss stuff like that sometimes - it happens. Broken link on a website, etc. In contrast, things that are so perfect they just sparkle make us nervous. There's a certain twitter account, and "he" seems to be available 24/7. Every link is perfect. Every page has Excellent Graphics(tm). Each post is typo-free, every blog entry formatted to spec. This is totally, wildly unrealistic for anyone who actually lives in tech. For the general public, it's an image they love: the Super Hacker Elite, no mistakes. Hoo-rah! But in reality, that's not how it is. What such perfection suggests is a big, well-staffed operation: a PR flack, a few interns, steely-eyed ops managers, etc. Think of those rooms of spooky-good workers in the Bourne movies. They don't have many typos, their blog posts go out on time, they don't drunk-tweet. Watch for a drunk-tweet now and again... a sign of mortal humans, not honeypots run by efficient LEO.

    3. If the people associated with the project do strange and organic things, that's a good sign. Some projects have had coders who embodied nasty, ugly ideologies regarding racial topics. That's sad... but also not likely to happen in a well-run honeypot, is it? Not impossible, of course - an existing project that gets "turned" could have all these little rough corners... but on balance, strangely incongruous stuff helps build confidence. No governmental agency or competent LEO operation is going to put a hard-drinking racist slob in charge of the servers... even if she's the best in the world at that particular job. Too risky, not their style.

    4. Idiosyncratic tech choices can go either way. LEO and honeypots in general are going to go for low-risk, boring tools with licensing agreements and sales reps, on average. Wild-eyed, loopy, big-dreaming crypto tech teams usually have at least one erratic tech choice in the mix, and often more than one. We only communicate by... Pond! Or: our servers all run... Whonix! You get the point. Real technologists develop fetish-like obsessions with weird areas of tech, often somewhat impractical and difficult to understand from the outside. This is a badge of authenticity, however frustrating it may be otherwise. A tech team with no such fascinations? Again, just a bit too white-gloved and perhaps worth a second look.

    5. Backstory. This is a big one, a very big one. Every tech team - every man or woman in the security tech world - has a backstory. Some really don't want to share those backstories, for any of a host of reasons. Fair enough. Some want to splash their PR pictures all over the website. Again: fair enough. Some are shy, some introverted, some loud-mouth braggarts. They're all, unquestionably, people: human people, with flaws and history and strange quirks and dark corners and, as often as not, more than a few scuffed spots somewhere in the past. Few will post all that on the project blog... not unheard-of, but rare. More commonly, folks will suggest that they're "known in the community." A bit of asking around, someone who knows someone who knows someone... and there's likely someone who got drunk with that person at some con a decade ago and ended up in a public restroom singing marching tunes. Or whatever. Point being: this is a very, very small world - the security tech world. Everyone knows everyone, everyone has history... and if people show up (or personas, really) that nobody knows firsthand? That's odd. No dirty stories of old days gone bad? A little odd, unless they're academics, who tend to be more white-gloved (not always!). No fingerprints left anywhere in terms of past projects, failed startups, burned colleagues, jilted lovers, embarrassing rap sheets? Suspicious as fuck. Sorry, that's how we call it.

    6. Shifty about discussing honeypots, snitching, LEO, and questions of trust in general? That's a concern, right there. Some folks get furious when accused of snitchy honeypotting. Some ignore it as beneath them, snooty and contemptuous. Some try to argue the trolls to a standstill when such questions come up, and some fume and vow revenge. All are, in a sense, valid replies - human replies. Honeypots seem to float above this fray, often as not. A shiny, teflon veneer. No response if questioned about such things: no emotion. This isn't a 100% rule, and indeed no honeypot rules are (see above). Some honeypots in other areas have been super-aggressive in attacking anyone who questioned them. That can be a red flag, too. Responding with indignation is one thing; going all red-hot-vengeance is sort of over the top. Mostly, look for a human, imperfect, varied, erratic, slightly ragged response... that's how reality often is. Good days, bad days... variation. And variation among the team. Some might be dismissive, some steely-eyed angry. Variation makes sense, for a real team.

    7. Weird, unexplained absences that never really get folded into the narrative are a huuuuuge red flag. LEO seems to do these sorts of days-long absences - for training, for meetings, for whatever - far more than do real tech ops teams. Real teams are used to being paged 24/7, pinged by phones, tweeted at in the shower, called, jabbered, IM'd... the works. We might vanish for a few days due to exhaustion, personal crusades, whatever - but usually these vanishments fit in somehow, even if in only a jagged and weird way for folks watching from a distance. But the LEO vanishments, they seem to happen unannounced - and remain unexplained later. A drunk reply from an overworked sysadmin is really human and not totally uncommon, nor is a frazzled tech support staffer being needlessly crabby. Robotic drones that vanish for days, and then show back up as if nothing happened? That's a big red flag. Unless they were in jail, in which case... well, could go either way tbh. :-)

    8. Finally, and somewhat in summary, watch for gloves too white. This is security tech. It's not golf course management. That's not to say everyone in the infosec world is secretly a black hat rooting servers at night - obviously that's both silly and disrespectful, and we make no such intimation. However, really... if someone is so spooked by any rub-up with the seedier elements, that's a bit of a flag. Yes, the outfits selling 0days to spooky govs are less likely to be mixing with the hacker rabble... but not really, in fact. Where do the 0days come from? Where do they hire their analysts? Even those shops rarely have pure-white gloves. So if you see shiny-white gloves, what's that about?

Security tech and trust are intertwined. They always will be. Inside this little bubble of reality, many such decisions are made based on personal relationships and personal trust. We know someone, who knows someone, who has known someone for a very long time and trusts them - technically, personally, whatever. And we do make decisions about tech like this, often. Why use that OS, or that tool, or that parameter set? We know this gal, she's best-in-class. She is the uber-expert on that particular thing. And she says it's the bee's knees. She will talk your ear off for hours explaining why, and likely has. That - that matters. Listen to that, in our world.

Same goes for honeypot awareness. The deep tech people, the ones with roots and history and old feuds and scars and blurred memories and stories they'd rather not tell about mistakes they wish they didn't make? Ask them. They may have an ugly feud with a team or a person... but they'll likely know if that's a legit project or not. If nobody knows a team, nobody can say good or bad? That's a big flag.

To wrap it up: scars and rough edges prove a real existence. Nobody gets far in this space without racking up a good bit of both. Enemies, failures, embarrassing episodes. Broken tech. Also, of course, smashing victories, brilliant code, vibrant GitHub portfolios... it's all part of being genuine, the good and the bad. If you winnow out anyone and any project with any "bad," what you're doing is ensuring that real teams are out of the running - for any real team has scars as well as plaudits. If you do that, what's left is basically the fake - and a good-sized chunk of those are honeypots.

That's our view of the terrain, take from it what you will.

With respect,

    cryptostorm_team


Pattern_Juggled
Posts: 1492
Joined: Sun Dec 16, 2012 6:34 am

& other stuff

Postby Pattern_Juggled » Sun Nov 30, 2014 5:44 am

This subject didn't really fit into the honeypot post compiled from team contributions, above, and since it got vetoed (properly so, imho) from that post, I'm putting it here as an addendum...

Those who follow our twitter feed have approached us to express their concern that, quite often, things are posted there that seem to suggest a lack of confidence in the strength of our operational security model.

Stuff like this...
insecurity1.png

Or this...
insecurity2.png

Or this...
insecurity3.png

Or this...
insecurity4.png

Or this one (h/t @dakami):
insecurity5.png

Or this:
insecurity6.png


...well, you probably get the point. And yes, if it's existential tech angst that's being spread via our twitter feed, most likely I'm the one doing it (we rotate twitter-manning duties amongst the team). It's just one of my things, as it were.

So how do you square that with the kind words shared by long-experienced folks, such as these snippets...
conesec.png

dipshits_cropped.png

(heh, yep - we're those dipshits)

lucyreco.png


Also this, which flags the honeypot issue... and got a favourite from our idol!
w00t.png


Anyway, yes: if we're so uncertain and insecure about our model, our technology, and life in general... why do people trust us? Why do we feel confidence in what we're doing, if we're not confident in general? Shouldn't we be telling people "trust us, our magical technology is magical and will save you from all dangerous things... including 'advanced alien technology' {wtf?}... so don't worry and keep paying!"?

No. First, because it's bullshit. Second, because it's terrible security practice.

Technology is imperfect. Security technology is imperfect. Making security tech systems in deployed, real-life context is a hard problem and is certainly not getting any easier now that we know just how well-resourced, aggressive, and legally unconstrained are adversaries such as the NSA and cohorts. There are trade-offs, decisions about what to prioritise, assumptions regarding attacks and the risk of attack and the cost of breaches: is a 1 in a billion risk of a serious breach worth more time and resource investment than a 1 in 100 risk of a mild breach? These are real questions that a real project team faces - and answers - day after day. We answer these questions by our actions and our choices, even if we don't verbally articulate those replies.
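
To make that concrete, the back-of-envelope expected-loss arithmetic looks something like this - every probability and dollar figure below is invented purely for illustration:

    # Expected loss = probability x cost. All numbers are made up.
    risks = {
        "serious breach": (1e-9, 50_000_000),   # 1-in-a-billion, catastrophic
        "mild breach":    (1e-2, 20_000),       # 1-in-100, embarrassing
    }

    for name, (p, cost) in risks.items():
        print(f"{name}: expected loss = {p * cost:,.2f}")

    # serious: 0.05 vs mild: 200.00 - under these toy numbers the boring
    # 1-in-100 risk deserves roughly 4000x the attention of the dramatic one.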

Cryptography in its modern form has some truly powerful variants. Some flavours leverage deep structural components of the nature of reality in order to do things that seem, on the surface, impossible. Those who work with such tools daily, as we do, are constantly in awe at what they are able to do; if we're not in awe, then we're either ignorant of what they're really doing, inured to the power as a result of exhaustion or burnout, or we aren't understanding the crypto properly. I'm speaking here, of course, primarily of public-key crypto and all its variants. But also, some components of ECC-based ciphers and ring-key crypto have those sparks of unreality about them.
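
If you want to feel one of those sparks of unreality firsthand, a toy Diffie-Hellman exchange fits in a dozen lines of Python. To be clear, this is a classroom sketch: the prime is toy-sized and the parameters are nothing you'd deploy (real systems use vetted groups, e.g. the RFC 3526 MODP primes):

    import secrets

    p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, prime; toy-sized, NOT secure
    g = 5

    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

    A = pow(g, a, p)   # Alice -> Bob, sent entirely in the clear
    B = pow(g, b, p)   # Bob -> Alice, sent entirely in the clear

    # Both sides now compute the same secret, though an eavesdropper who
    # saw p, g, A, and B learns (computationally) nothing useful.
    assert pow(B, a, p) == pow(A, b, p)
    print("shared secret:", hex(pow(B, a, p)))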

For all that, cryptographic engineering - putting crypto into practice - is fucking brutally hard to get right. Saying so doesn't mean I'm pulling a Barbie-style "math is hard" sad face. Rather, it's an objective reality. Schneier:
...cryptographic systems are broken all the time: well before the heat death of the universe. They're broken because of software mistakes in coding the algorithms. They're broken because the computer’s memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They're broken because of buffer overflows and other security flaws. They're broken by side-channel attacks. They're broken because of bad user interfaces, or insecure user practices.
...
The world needs security engineers even more than it needs cryptographers. We're great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.
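
That "stray copy of the key" failure is easy to demonstrate in any garbage-collected language. A small Python sketch of the problem:

    key = b"\x13" * 32           # immutable bytes: cannot be zeroed in place

    scrubbable = bytearray(key)  # a mutable copy we *can* overwrite
    # ... key gets used here ...
    for i in range(len(scrubbable)):
        scrubbable[i] = 0        # best-effort zeroization

    # Even this is best-effort: the original `key` object still sits in
    # memory until collected, the interpreter may have made more copies,
    # and the OS may already have paged one out to disk - exactly the
    # failure mode Schneier describes.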


And Dr. Green, right after the Snowden documents first started coming out in 2013:
Readers of this blog should know that there are basically three ways to break a cryptographic system. In no particular order, they are:

    1. Attack the cryptography. This is difficult and unlikely to work against the standard algorithms we use (though there are exceptions like RC4.) However there are many complex protocols in cryptography, and sometimes they are vulnerable.
    2. Go after the implementation. Cryptography is almost always implemented in software -- and software is a disaster. Hardware isn't that much better. Unfortunately active software exploits only work if you have a target in mind. If your goal is mass surveillance, you need to build insecurity in from the start. That means working with vendors to add backdoors.
    3. Access the human side. Why hack someone's computer if you can get them to give you the key?

Bruce Schneier, who has seen the documents, says that 'math is good', but that 'code has been subverted'. He also says that the NSA is 'cheating'. Which, assuming we can trust these documents, is a huge sigh of relief. But it also means we're seeing a lot of (2) and (3) here.


Secure systems are (mostly) broken because of bad implementation (i.e. engineering), or because of human flaws intrinsic to bad organisational structures. Good crypto - properly chosen, properly parameterised, properly combined if needed - rarely gets broken in actual practice... and when it does, there's usually a lead-up, warnings in the theoretical literature that something's starting to look shaky, that sort of thing.

What's far, far more common is stuff like this 0day pop of the i2p system underlying Tails and other au courant privacy tech, by Exodus:

This permissions configuration allows us to craft a payload, execute it under the i2psvc user, and phone back to a server of our choosing. Once the plugin is loaded the code will execute in the background with no further user interaction required. The user will only see that they were redirected back to their I2P console. Once the i2psvc user executes our payload it will display the IP address in which the user is connecting to the I2P network.

As previously stated the I2P plugins are similar to Firefox plugins and are written in Java. For our demo we had the I2P user phone back to our server. We have many other options for crafting our payload. For example, the i2psvc user is allowed R/W/X access to the /tmp directory. Knowing this we could write a backdoor to the /tmp directory and execute it under the i2psvc user. Other options would be further data exfiltration allowing us to grab the users MAC address, files on the system, or routing tables.
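
The pivot in that write-up is an over-permissive execution context. A crude audit in the same spirit - walk a tree, flag anything world-writable - might look like this (Unix-only sketch; the path is an arbitrary example, not where i2p actually lives on any given box):

    import os
    import stat

    def world_writable(path):
        """Yield entries under `path` that anyone on the box can write to."""
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                full = os.path.join(root, name)
                try:
                    mode = os.lstat(full).st_mode
                except OSError:
                    continue
                if mode & stat.S_IWOTH and not mode & stat.S_ISVTX:
                    yield full   # writable by anyone, no sticky bit

    for hit in world_writable("/var/lib/i2p"):   # example path, adjust
        print("world-writable:", hit)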


If you read that blog post and don't fucking wince, then... beh, I dunno. You should wince. Not because "i2p is pwned" or there's some secret conspiracy or because the developers who wrote the code exploited in this attack were dumb n00bs, or any of that shite. Just stop already.

Rather because this attack is not exceptional, does not require astonishing breaks of cryptographic primitives, and isn't the result of dumb coders writing crap code. Rather, it's a nice little elegant pop of a system that's received lots of attention and contributions of expertise from lots of smart folks... a system designed precisely and explicitly to prevent this exact sort of attack!

That's the point.

Note that the i2p folks quickly patched these bugs, and this isn't even a little bit a schadenfreude party at their expense. Quite the reverse: but for the grace of God...

Oh, also: Exodus - and many other such exploit dev shops - sell their 0days privately, almost always to governmental entities. For vast sums of money. So the fact that this one exploit ended up being blogged is sort of the black swan amidst vast oceans of white swans. How many undisclosed 0days are held by TAO, GCHQ, and hundreds of other gov spook shops worldwide? It's simply impossible to even estimate... but the numbers are high.

None of this touches the intentional backdoors in many production systems. We tell ourselves we can dodge those with open code - and that's mostly true. Mostly. What about the Cisco vulns, with our packets running across Cisco fabric so often? How about hardware backdoors stuck in (intentionally or otherwise) by chip manufacturers - thus subverting the machines on which our fancy crypto runs? Transient BIOS injection attacks, anyone?

Don't get me started. I'm far from the sharpest knife in the toolbox of security tech, but even I know enough to know that the rabbit hole runs arbitrarily deep. Sure, some of these attacks are esoteric and expensive and (probably) rare. Probably. And some of them are. But not all of them - no way.

To acknowledge these realities is to take the first step towards responsibly confronting them. Can we plug all the holes, all the time? Ha, yeah right. That's not the universe we live in. Can we prioritise the right holes, and put the right defensive tools in place at the right time to lower risks as much as we can, with as little risk of opening up new holes in doing so? Yes, I think we can.

That's what we do, on this team. It's messy and it's imperfect and it's not worthy of press releases usually. When it's done right? When it's done right nothing happens. Nothing. Attacks don't succeed. Data don't leak. Members stay secure. Nothing happens. That's what our job is: to make nothing happen.

We have some fundamental technical approaches we tend to take on this team. We don't build new if there are existing tools that are well-tested and well-explored. We don't assume default settings, default configurations, or default procedures are ok just because they're default - or because everyone else uses them. We seek out new tools that are promising, solve real problems, and aren't overly ambitious. We revisit past choices we've made, based on new info and new wisdom. And we seek - actively - to have smart people poke holes in what we do, so we can plug the holes and make things better.
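
As one small example of the don't-trust-defaults habit, here's a quick-and-dirty linter for OpenVPN client configs. The directive names are real OpenVPN options; the opinion that their absence is a red flag is mine, and the list is nowhere near exhaustive:

    import sys

    # Directives whose absence means you're riding on defaults.
    WANT = {
        "remote-cert-tls": "rejects non-server certs during the handshake",
        "tls-version-min": "refuses downgrades to ancient TLS versions",
        "cipher":          "explicit data-channel cipher, not the default",
        "auth":            "explicit HMAC digest, not the default",
    }

    def lint(path):
        seen = set()
        with open(path) as f:
            for line in f:
                words = line.split("#")[0].split(";")[0].split()
                if words:
                    seen.add(words[0])
        for directive, why in WANT.items():
            if directive not in seen:
                print(f"missing '{directive}' - {why}")

    lint(sys.argv[1])   # usage: python lint_ovpn.py client.ovpn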

We also do our best to stay abreast of new research, new exploits, new attacks and new thinking. Sometimes that can seem like useless noodling around - don't ask me how many hours I've spent with Pond, please, as it's a sensitive topic around here - but that's also a part of what we are paid to do. If we aren't in the waters, swimming neck-deep in the weird oceans of the infotech world, we're creating a bubble of ignorance around what we do.

And, yes, to swim in those waters is to be aware of how deep they go. When we - well, when I - talk about the depth-vertigo I feel from seeing how deep it all goes, that's not an admission of defeat. It's a symptom of facing the reality of what we do head-on, rather than sidling around and acting as if it's all simple and neat.

We do good work, and I'm proud of our team and the service we deliver. I'm also aware, every waking moment (and some sleeping), of the limitations, weak spots, needed upgrades, and structural constraints of the work we do. I'm not shy in sharing those truths, too, because our team - and myself, personally - long since passed through the childish stage wherein not talking about the monster under the bed is the best defence we could muster.

Turn the lights on, see the monster, and react accordingly.

Cheers,
    ~ pj


parityboy
Site Admin
Posts: 1092
Joined: Wed Feb 05, 2014 3:47 am

Re: On Honeypotting

Postby parityboy » Sun Nov 30, 2014 4:20 pm

@PJ

Well said. There's no substitute for confronting reality head-on, since you have to face it anyway, eventually (or, worse yet, face the consequences of ignoring it). I'm going to go out on a limb, however, and disagree with Dr. Green: the impact of flawed hardware crypto is way worse than flawed software, in my opinion. Why? Because software can be patched. Sell a million units of a USB crypto key with a broken implementation, and all of those million installs are screwed.

At least with software, it can be patched within 1-24 hours of a flaw being announced; I'm not saying that such updates are guaranteed, but they will happen way quicker than any hardware upgrade. Which brings me on to another question: has the AES engine in AMD and Intel CPUs been externally tested and validated for vulnerabilities, or is it "dumb" enough for such concerns to not apply?
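
For what it's worth, you can at least confirm the CPU advertises the instructions - a Linux-only sketch; whether the silicon's implementation is actually sound is the much harder question:

    def has_aes_ni():
        """Check /proc/cpuinfo for the 'aes' CPU flag (Linux only)."""
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split()
        return False

    print("CPU advertises AES-NI:", has_aes_ni())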


@OP

As readers of this forum may well remember, when CryptoStorm Free was announced I very loudly said people would see a free VPN as a honeypot, simply because of the economics: bandwidth costs money and has to be paid for somehow - and no, ad money won't cut it. That obviously doesn't guarantee that paid-for services are not honeypots - a few are well known and have been highlighted numerous times here and in other places.

So I'm interested in the views expressed towards the team both here on the forums and from others in the team's sphere of activity. What's been the general reaction?


onyx
Posts: 5
Joined: Fri Jan 02, 2015 5:48 pm

Re: On Honeypotting

Postby onyx » Sun Jan 04, 2015 11:23 am

What a great read... I loved it. Thanks!

And thanks for all the hard work you folks do... much respect. All the best in 2015 <3


justaguy

Re: On Honeypotting

Postby justaguy » Sun Apr 05, 2015 9:34 am

Great read. Informative as well as reassuring. Thanks for the great product guys, keep up the great work :ugeek:


