defied wrote:
lamont wrote:NetApps are awesome, but 'spensive.
One guy on IRC commented once that EMC shops are full of people who are broke and constantly responding to pagers; shops with NetApps are equally broke, but they get to sleep all night long.
They're the only piece of equipment I've come across that I think is really four 9's, maybe five...
I can agree to this. Great equipment, not the most profitable business.
Actually I think that most storage like this is about to get severely disrupted. Google, Amazon, and Yahoo all built their storage on GoogleFS-like clusters of cheap commodity servers that push the $/GB down to about 2x the cost of picking up the drives themselves at Fry's (4x the cost of spindles at Fry's if you duplicate the data). EMC and NetApp can't compete with that kind of cost structure.
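The rough arithmetic behind that 2x/4x claim can be sketched in a few lines. The retail drive price here is a made-up number purely for illustration, not a quote:

```python
# Back-of-envelope $/GB for GoogleFS-style replicated commodity storage.
# All prices are illustrative assumptions, not real quotes.

RETAIL_PER_GB = 0.10    # assumed Fry's-style retail price per raw GB
CLUSTER_OVERHEAD = 2.0  # servers, power, network: ~2x the bare drive cost
REPLICATION = 2         # keep two copies of every block for durability

def usable_cost_per_gb(retail=RETAIL_PER_GB,
                       overhead=CLUSTER_OVERHEAD,
                       copies=REPLICATION):
    """Cost per *usable* GB once cluster overhead and replication are in."""
    return retail * overhead * copies

# 2x overhead with 2 copies lands at 4x the raw retail spindle price,
# which is the "4x the cost of spindles at Fry's" figure above.
print(usable_cost_per_gb())
```

The point of the sketch is that even with replication, the total stays within a small constant multiple of the raw drive price, which is a very different cost curve from proprietary storage arrays.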
It's just a matter of time before everyone is either using Amazon/Google for file storage, or else "building their own cloud" with open source products like Hadoop on commodity hardware, and EMC/NetApp and other expensive storage vendors become marginalized.
The only problem right now is that people almost universally don't know what the hell they're talking about when it comes to "the cloud" -- the one thing that is universal is that IT managers know they want it.
Just like Linux killed off all the Big Iron proprietary Unixes, Hadoop and S3 are about to at least severely punish EMC.
I'd say a Netscaler 17000 instead of BigIP.
How dare you...
The Netscalers are actually really good machines, and they are our biggest competitor (Juniper was.... was...).
We have them owned in the platform market, but if I recall, their pricing is better. They are, however, heading down the same business-model path as many other shops, where they lowball as much as they can and end up losing the developers who want the big bucks, because they can no longer afford raises. But, like I said, they are great competition, and are very good at what they do.
Ah, I didn't realize that's where you worked.
If the Netscaler guys let their talent cash out and leave, they could fall behind. So far I haven't seen that. The VPX virtualized load balancer images are also really useful to play with, since I can set one up in a lab environment on a XenServer host and test things out without having to play games with our one pre-production load balancer. Being a systems engineer and not a network engineer, I have zero budget for pre-prod load balancers just to mess around with, but I do have a host or two running XenServer.
And that's where I think that both you and VMware may have some serious problems, since Citrix isn't a one-trick shop.
This I STRONGLY agree with. The cost of running this system on EC2 with their new database application would be comparable to what they are paying now for this lousy service. I too believe I could build a compression scheme that would allow better performance from my home server than what Dreamhost is giving these guys now.
Yeah, I just responded to the Ivar's thread and it's still running slow, and it's just this site (well, and scubaboard, but that is always slow too).
You may have met your match, my friend.... Well... I usually try not to spout off about what I know, so maybe not. And I'm more network engineering, and kernel-level platform integration....
On the networking front, once you get into BGP I'm lost. I've got a reasonably strong RFC-level knowledge of OSPF, but no actual keyboard-level knowledge of it. When it comes to edge-networking of servers, though, I become much more of an expert -- I debugged things like the way full-NAT loadbalancers violate the PAWS/TCP Timestamp RFCs, and what that can do to very high TPS servers as you start reusing TIME_WAIT sockets, by spelunking in the kernel code looking for where the Linux stack will send a RST in response to an initial SYN.

I also did most of the server-level security at Amazon, since the security team there was mostly SDEs and didn't like doing SA work (I had an open invite to join them, but never did, since most of their day-to-day crap was just phishing site takedowns -- which got outsourced after I left). I also did most of the global server configuration management while I was there -- which went from 400 machines when I was hired to config pushes to 30,000 servers when I left. So I've updated something like 15,000 servers running RH7.2 and old ssh 1.2.27 sshd binaries to openssh-4.0p1, along with building most of the gold standard image Amazon was using (patching the config management infrastructure they had when I got there to support the concept of a global image, then doing 80 out of about 100 of the commits to that global image by the time I left). On top of that was handling all the edge-condition escalations for things that wouldn't happen once in 10 years at a site with 200 servers, but which happen once a month when you've got 30,000 servers.
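The PAWS failure mode described above can be sketched as a toy check. This is a deliberate simplification of the RFC 1323/7323 logic (the real comparison is done modulo 2^31, and the kernel's handling is more involved), not actual stack code:

```python
# Toy illustration of the PAWS (Protection Against Wrapped Sequences) idea
# from RFC 1323/7323, heavily simplified.  A socket lingering in TIME_WAIT
# remembers the last TCP timestamp it saw; a new SYN reusing the same
# 4-tuple is only trusted if its timestamp has moved forward.  A full-NAT
# load balancer funnels many clients -- each with an unrelated timestamp
# clock -- onto one source IP, which breaks that monotonicity assumption.

def paws_accepts_syn(last_ts_seen: int, syn_ts: int) -> bool:
    """Simplified PAWS check: accept only if the timestamp advanced.

    Real stacks compare modulo 2^31 and apply extra TIME_WAIT rules;
    this just shows why a lower timestamp from a different client loses.
    """
    return syn_ts >= last_ts_seen

# Client A's connection ended with its timestamp clock around 500,000.
last_ts = 500_000

# Client B, NAT'd onto the same 4-tuple, has a much lower timestamp clock:
print(paws_accepts_syn(last_ts, 12_345))    # rejected -> connect stalls
print(paws_accepts_syn(last_ts, 500_100))   # accepted -> handshake proceeds
```

At high transaction rates the 4-tuple space recycles fast enough that this stops being a once-in-a-blue-moon event and becomes a steady trickle of stalled connections, which matches the "very high TPS servers reusing TIME_WAIT sockets" symptom above.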
Oh yeah, I helped port NMAP to a half dozen different platforms back after Fyodor first posted it on Phrack (an interesting project in discovering all the undocumented bullshit in the different RAW socket implementations on Solaris, Digital Unix, Irix, etc.), and wrote my own Digital Unix shellcode for buffer overflows back when nobody thought you could buffer-overflow Digital Unix (R.I.P. DEC Alpha).
What we really need to do, though, is get a bunch of us AlphaGeeks together in a startup and make some serious DiveUnits... I just don't have any useful ideas for a business plan...