DesuStrike wrote: I'm interested in what is bad about lzo compression and thus why it is a goal to disable it.
I honestly have no idea what this does... :\
Quick reply, for now: I'm the one who neutered lzo in the production confs, & my thinking on that is bare-bones simple...
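(For the curious: in OpenVPN terms, the "neutering" comes down to the comp-lzo directive. What follows is an illustrative conf excerpt, not the literal production file:)

```
# illustrative excerpt - not the actual production conf
# comp-lzo          <- deliberately absent: no compression, no compression framing
# n.b. "comp-lzo no" is NOT the same as omitting the directive; that form still
#      reserves the compression framing byte in each packet, it just doesn't
#      compress by default
cipher AES-256-CBC  # the crypto stays put; only the compression layer goes
auth SHA512
```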
Mixing compression with real crypto has proved to be a Bad Idea, repeatedly. In theoretical terms, this is no surprise: think about what compression is doing in the context of packetized (and, in practice, block-ciphered) data, and at least a few of your hairs will turn white as you see the attack vectors this inevitably opens up.
Basically, all manner of padding-based attacks become far more feasible. HMACs get less useful/reliable (in general, although not universally). Timing attacks become a lot more juicy (unless you take clever steps to prevent them, each of which is itself another potential attack surface). Traffic analysis might well become more fruitful - feed patterned data into the tunnel, watch what happens to the packet size differentials, and profit...
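To make the packet-size point concrete, here's a minimal Python sketch - zlib standing in for lzo, and the secret/guess values purely hypothetical - of why ciphertext length turns into an oracle once you compress before encrypting. A stream or CTR-mode cipher passes the compressed length straight through; block modes merely coarsen it to block granularity.

```python
# Minimal sketch: compress-then-encrypt turns ciphertext length into an oracle.
# zlib stands in for lzo; SECRET and the guesses are hypothetical values.
import zlib

SECRET = b"authtoken=hunter2"  # something the tunnel carries that you don't control

def observed_packet_len(attacker_data: bytes) -> int:
    # The tunnel compresses attacker-influenced data alongside the secret,
    # then encrypts. The cipher hides *content*, but the (compressed) *length*
    # survives - exactly for stream/CTR modes, block-granular for CBC & friends.
    return len(zlib.compress(SECRET + attacker_data))

wrong = observed_packet_len(b"authtoken=XXXXXXX")
right = observed_packet_len(b"authtoken=hunter2")
print(wrong, right)  # the matching guess compresses better -> a shorter packet
```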
Or: if I feed a never-ending series of 1s into the tunnel, I don't want anything characteristic in the encrypted packet traffic tipping observers off that something like that is going on, do I? If that happens, it's a side channel leak. You can say it's not, but it really is - that's just a semantic quibble.
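Same leak from the traffic-analysis angle, sketched below with zlib again as the stand-in compressor: patterned input versus incompressible input produce starkly different on-wire sizes, and no cipher downstream will paper that over.

```python
# Sketch of the "never-ending series of 1s" leak: with compression enabled,
# on-wire packet size tracks plaintext redundancy. zlib stands in for lzo.
import os
import zlib

patterned = b"\x01" * 1400    # one MTU's worth of that stream of 1s
randomish = os.urandom(1400)  # typical already-compressed/encrypted payload

print(len(zlib.compress(patterned)))  # collapses to a handful of bytes
print(len(zlib.compress(randomish))) # stays ~1400 (incompressible data even expands slightly)
# A passive observer sees two very different packet sizes on the wire,
# even though the cipher itself leaked nothing about the bytes inside.
```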
Now, I'll be the first to say that I have confidence a clever cryptographer could deploy compression within heavily-encrypted channels with no loss in security - indeed, I know it's been done, and is done in the wild. But I don't assume we're clever - I assume we're mortals, and mortals make mistakes. If we can remove the risk of a mistake from the model, ceteris paribus, we do. One less possible fuck-up.
(I'm also troubled, at an ontological level, by the functional congruence between real crypto and lossless compression - being two sides of one coin, fundamentally - and the convergence of these two functions just doesn't make me feel like I want to try to do both at once, from two directions... not something I could defend in a dissertation presentation, but nevertheless a horse-sense feeling I am not willing to abandon just yet)
The "benefit" of lzo, best-case, is a few percent uptick in theoretical maximum tunnel capacity... with all sorts of footnotes limning that benefit further (since so many OSI layers do some form of compression, nowadays - does anything really stream a bunch of 1s off a physical NIC, in this day and age?). Fuck that. I'd much rather throw exitnode capacity at things, than play games with security parameters to try to squeeze a few more micro-clicks out of the model. It's just got bad idea written all over it.
There's all sorts of interesting theoretical literature to bring to bear on this question - compressed encryption, etc. - and perhaps some quiet snowy weekend I'll trawl my academic archives and pull the more accessible papers for addition to this thread. Or someone else is free to stick a few googled results into the discussion here. Start with the SSL vulns exposed via compression of HTTPS session streams - that's as obvious a place as any, and a damned good object lesson in the risks.
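(For a taste of what that literature formalizes: the CRIME-class attacks against TLS-level compression recover secrets byte-by-byte from nothing but compressed lengths. Below is a toy Python reconstruction of the core loop - zlib as the stand-in compressor, all names hypothetical, and the tie-breaking machinery a real attack needs waved away:)

```python
# Toy CRIME-style loop: recover a secret one byte at a time by watching how
# well attacker-chosen guesses compress next to it. zlib stands in for the
# TLS-level DEFLATE; SECRET and the known prefix are hypothetical.
import string
import zlib

SECRET = b"session=s3cr3t"

def length_oracle(attacker_prefix: bytes) -> int:
    # Attacker controls part of the plaintext (think: a request path) that is
    # compressed together with the secret cookie, then encrypted; only the
    # resulting length is observable on the wire.
    return len(zlib.compress(attacker_prefix + SECRET))

recovered = b"session="  # assume the fixed prefix is known, as in real CRIME
while len(recovered) < len(SECRET):
    # The candidate that truly extends the secret yields the longest
    # back-reference, hence (usually) the shortest output. Real attacks need
    # extra tricks to break byte-boundary ties; this sketch just hopes.
    guesses = string.printable.encode()
    best = min(guesses, key=lambda c: length_oracle(recovered + bytes([c])))
    recovered += bytes([best])

print(recovered)  # b'session=s3cr3t' (when the ties cooperate)
```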