Within the next couple of days, the cryptostorm admin team is finishing deployment of an upgrade/migration of the core nodes within the Montreal exitnode cluster, and we wanted to give folks some background information and a place to discuss things as the process unfolds. So... here goes.
Since our alpha launch last year, we've had our core Montreal cluster hardware provisioned via iWeb. That's been a mutually beneficial relationship, and we do appreciate everything iWeb has done over the past year for the cryptostorm project. During that time, we've gone from an early-alpha concept with a few dozen testers using the service, to one that nowadays has members connecting from all around the globe in ever-growing volume. Whereas we used to watch aggregate traffic for gigabit-level perturbations, nowadays the network transits terabits of traffic per day... and that's on the slow days.
In other words, much has changed during that time; it's been quite a ride. And, sadly, now it's time for us to part ways with our colleagues at iWeb.
Back in the day, our traffic volumes fit well into their business model (that's what we've been told, anyway - they can of course speak for themselves!). But since then, iWeb has been sold, cryptostorm has grown like the proverbial weed, and now we find ourselves often disagreeing about billing issues. That's usually a good sign that it's time to part company, while retaining a collegial and mutually respectful relationship along the way. It's a small world, after all, and surely our paths will cross again.
As a result, in the next few days/weeks, you'll notice - if you're the sort who watches such things closely - that IP resolutions for various cluster- & loadbalancer-based HAF entries will stop pointing at iWeb-based IPs, and start rolling over to new infrastructure providers. Mostly, we wanted to let folks know that panic is not required: this isn't the NSA or someone else pwning our infrastructure, or domain resolution functionality, or anything else scary. Rather, it's just us growing and expanding and deploying infrastructure that best fits the project's ever-expanding ability to serve more members, more effectively, around the world.
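For the curious who want to watch the rollover themselves: resolving the relevant HAF hostname and noting when the answers change is enough. Here's a minimal Python sketch along those lines - the hostname below is just a placeholder, so substitute whichever HAF entry your own config actually uses:

[code]
import socket

# Placeholder hostname -- substitute the HAF entry from your own config.
HOSTNAME = "your-haf-entry.example.org"

# getaddrinfo returns every A/AAAA record the resolver hands back;
# once the migration lands, these addresses will stop being iWeb IPs.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, None)})
for addr in addresses:
    print(addr)
[/code]

Run it before and after the cut-over and you'll see the old addresses drop out and the new ones appear. Nothing fancier than that is required.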
We don't expect any actual "hiccups" in the process; it should all be more or less transparent to members. You might, if you're a denizen of the Montreal cluster, see a network session drop and reconnect at some point (if you stay connected continuously for days at a time, this is more likely - not that there's anything wrong with that, just to be clear!). Otherwise, the next time you reconnect to Montreal, that new connection will auto-magically resolve to the new instance IPs... and away you go. No muss, no fuss.
There will also be a migration of our much-beloved tokenizer (i.e. cryptostorm.nu) - which borrows infrastructure resources mostly from Montreal, for historical reasons too boring to explain here. Again, that should be a fairly seamless cut-over: just a divergence in IPs to which the host record resolves. But, in the event there's weirdness in some local DNS lookup caching on the part of members, we wanted to let everyone know so there's no unnecessary panic.
Also: if you do notice weirdness, we suggest you take whatever steps are needed to flush your local DNS cache. The process for doing so varies quite a bit between OSes and whatnot; if you're not sure how to do that, post a note in this thread & surely someone will share the needed info. Mostly, this isn't necessary... but sometimes it is. We thought you should know, anyhow.
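For reference, here's a rough Python sketch of the per-OS flush commands we have in mind. These are the common defaults only: the macOS lines assume a reasonably recent release, and the Linux line assumes systemd-resolved (a local dnsmasq or unbound cache would need its own flush command):

[code]
import platform
import subprocess

# Common per-OS DNS cache-flush commands. These are the usual
# defaults; your particular setup may well differ.
FLUSH_COMMANDS = {
    "Windows": [["ipconfig", "/flushdns"]],
    "Darwin": [["sudo", "dscacheutil", "-flushcache"],
               ["sudo", "killall", "-HUP", "mDNSResponder"]],
    "Linux": [["resolvectl", "flush-caches"]],  # assumes systemd-resolved
}

for cmd in FLUSH_COMMANDS.get(platform.system(), []):
    subprocess.run(cmd, check=True)
[/code]

After flushing, a fresh lookup of cryptostorm.nu (or whichever HAF entry you use) should come back with the post-migration addresses.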
In terms of improvements, we're cautiously optimistic that this infrastructure upgrade will result in consistently better throughput for all of our Montreal-based nodes. Historically, one of our machines in the cluster (dear old, beloved Bruno...) had some issues with a less-than-modern NIC; folks might remember that, last winter, we worked hard to upgrade the drivers for several of the Montreal NICs. Mostly that was successful, and we didn't migrate back then as a result. But there have been transient performance hiccups ever since.
Basically, we were driving those machines harder than they're really able to support. In the "old days" of last fall, that wasn't a big deal. Nowadays, with so many members consistently pushing 20+ megabit sessions up & down, that older-era hardware struggles to keep up (the NICs are usually the bottleneck, even more so than processor or memory). So upgrading is required.
tl;dr is that it's a good thing, and shouldn't be a big hassle - but we wanted everyone to know, so there's no worry about Bad Things having taken place mysteriously.
By all means, feel free to post questions and comments and suggestions here. As always. In the meantime, if there are any major info updates, we'll post 'em here as well. If you don't hear anything like that, it means the migration/upgrade has gone off more or less smoothly. No news, in this case, is actually good news.
~ cryptostorm_admin