The idea of corporate responsibility for bettering society continues to pick up steam. We hear about how businesses should proactively address their manufacturing and business practices to encourage ethical treatment of employees; we hear about ‘green’ companies who only use suppliers with resource-sustainable manufacturing processes; and so on. How many of you think about disaster recovery as a social responsibility? I maintain it may be one of the most important responsibilities organizations have.
It seems as though I spend more and more time poring over VMware’s Capacity Planner reports, and I am sure you have seen a chart not too far off the one below:
This is nothing if not the poster child for the opportunity to virtualize with ferocity.
Recently, I have also been spending an increasing amount of time speaking with the owners of said performance metrics. Our main topic of discussion is those last few emotional attachments to the physical world before finally taking the plunge into the brave new world of virtualization. These discussions are met with less and less resistance.
Then there is the next discussion: How low can we go?
While I am not an advocate for clusters of fewer than three machines, I find that many IT Directors and administrators seem to have a mental/emotional resistance to consolidation ratios greater than 10:1. The thought of taking their fifty or so machines down to fewer than five hosts seems to take a datacenter operator to a dark place.
There are precious few workloads which require more than a couple of processors and 8 GB of RAM. Now that machines can scale to dozens of cores and hundreds of gigabytes of RAM, there are very few environments that cannot easily get to 20:1 or even 50:1 ratios. Possibly, taking the beloved “data center” down to such a small fraction of its former glorious footprint is just too emasculating to take all at once.
I thought it would still be worth throwing out a few more practical reasons to consider going ultra-dense:
- The “Pooling” Factor – Any well designed virtual infrastructure is designed to accommodate its intended guest machines during normal business hours with an eye to the occasional spike. Despite the most thorough of studies, applications occasionally misbehave or see an unexpected utilization spike. In environments where we allow ourselves 20% or 30% overhead on smaller physical hardware, these spikes can have a profound effect on the aberrant guest’s host-mates. Using much larger machines for greater consolidation ratios creates a side benefit: if a guest runs away, it has two to five times as much headroom to work with before affecting other servers. In many cases, this gives administrators time to make the appropriate adjustments to the guest’s layout before it affects the end users’ experience. Pooling also allows for a more gradual impact to the host as more guests are added over time. If a server is designed to hold only ten guests, that eleventh guest will consume, on average, another 10% of the machine’s resources. This could make the machine dangerously susceptible to performance spikes. In machines designed to accommodate thirty or fifty guests, a new project that requires a new guest will only increase the impact on the server by 2% to 3%. This percentage is much more tolerable and will yield a more predictable response from the environment as a whole.
- The Network Effect – Obviously all these guests are going to drive a lot of network traffic. The good news is that if the guests have a tendency to talk to one another, and they sit on the same host, this network traffic never leaves the host. That’s efficient, and efficient is fast. Having more machines on the same host provides a greater opportunity for this inter-machine communication to stay local to the host.
- The License Effect – RAM is getting cheaper and cores are multiplying like rabbits on the processors. Looking into my (cloudy) crystal ball, I suspect it will not be too long before all the server vendors “basic” configurations look like the current monster box configurations. The question becomes whether it is worth buying virtualization licenses today for servers that you may not need in the future due to the inevitable trend in hardware performance growth.
- Virtual Appliances – New tools and functionality are being released regularly to make our virtual hosts more flexible and manageable. However, these tools require their own resources and often must run on every host in the environment. Having local resources to spare on the host for these virtual firewalls, WAN accelerators, virtual storage arrays, etc. is increasingly important in order to take full advantage of what virtualization has to offer. Waiting for an undersized server to depreciate, or having to buy more hardware because these tools bite into the existing infrastructure’s ability to keep up, will be a bitter realization.
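The pooling arithmetic above is easy to sketch in a few lines. This is a minimal illustration only; the guest counts are just the examples from the text, not a sizing tool:

```python
def incremental_impact(guests_per_host: int) -> float:
    """Average share of a host's resources that one additional guest consumes,
    assuming guests draw roughly equal average load."""
    return 1.0 / guests_per_host

# A host sized for 10 guests: the 11th guest adds ~10% average load.
# A host sized for 50 guests: the next guest adds only ~2%.
for n in (10, 30, 50):
    pct = incremental_impact(n)
    print(f"{n:>2} guests/host -> next guest adds ~{pct:.0%} average load")
```

The same reciprocal relationship is why a runaway guest on a densely consolidated host has proportionally more headroom to burn before its host-mates feel the pain.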
In the end, there is no wrong decision here. However, while the manufacturers are making these super servers more and more cost effective, it may be time to embrace that less (hardware, heat, power, footprint), once again, may be more (performance, scalability, resilience, cost savings) for IT.
Brian Burch, HP’s Director of SMB Marketing, had an article in InformationWeek recently that I found quite interesting. He discusses what he sees as the top misconceptions many SMBs have about their technology needs. I encourage people to read the article. It’s very well thought out and, while I think it misses the mark slightly in some areas, I agree with a lot of what Brian has to say.
The fact that the article was written is a testament to the issue we at IDS face every day: how do smaller organizations address technology needs they may not even realize they have? We regularly consult with small to mid-size organizations facing the pains of growth and the uncertainty of whether a given technology will aid them in doing the one thing they have to do in order to survive: drive more revenue at profitable margins.
What I found most interesting about the article, however, is how different the reaction to this issue is from a manufacturer’s point of view. I see a dichotomy between Brian’s first misconception, “I don’t need an IT strategy,” and how frequently his discussion of technology turned to laptop and desktop PCs. I couldn’t agree more with the fundamental point itself; I personally run into it all the time.
Many smaller organizations have no strategy when it comes to IT. They buy pieces of technology as they are struck by some pain point, often with no aid from a trusted advisor to offer up the experiences of other similarly situated companies. Brian says:
“[A] well developed and executed IT plan will ensure the right technology is implemented, improving customer satisfaction results and meeting company needs. Without an explicit plan, technologies are introduced into the company without focus and can diminish employee productivity and customer service.”
Spot on. In other words, Prior Planning Prevents Pretty Poor Performance!!! What I routinely hear out in the marketplace are things like “a plan requires money to develop,” or “the more I put into the design, the more money a solution will cost,” and things of that nature. It’s a misconception in and of itself that doing things the right way requires significantly more money than doing them ad hoc, with no thought to long-term consequences.
When I read through the rest of the article, I started seeing a potential reason why this misconception exists. For example, in discussing misconception three, “I Don’t Need More IT”, Brian makes the argument that by upgrading to new desktops or laptops, “SMB’s can enjoy tremendous cost and technology benefits…”. Upgrading PCs is an IT strategy now? The reason so many SMBs look at IT not as a help but as a necessary evil is this kind of thinking, IMHO. The “strategy” most vendors put in front of SMBs is “buy this new gadget” or “great trade-in values for old PCs”. The strategy has little or nothing to do with HOW a company uses the technology, only with “what are you buying from me today”.
In fairness, part of the reason for that strategy is that you can’t, from a pure COGS perspective, put enough salespeople in the field to truly consult with the sheer number of technology buyers in the marketplace. You have to market to them via web sites, mailers, e-commerce ordering, phone banks, etc. The only way to be profitable with the products being sold (from a manufacturer’s perspective, anyway) is the shotgun approach. This in turn feeds into, and maybe breeds, the problem where SMB customers, who perhaps don’t have a dedicated IT staff, are simply inundated with emails about the deal of the day, webinars, webcasts, cold calls, faxes, trade magazines, etc., all telling them to buy this appliance, refresh this piece of technology, or say ‘to hell with it all, move to the cloud’!!! This is a strategy we’re promoting? Let’s get real.
At IDS we deal with this every day. A significant portion of our customers fall into the SMB space, and we see the challenges Brian mentions all the time. The reality for most SMB organizations is that they need people with enterprise experience to expose them to the successful patterns other organizations have used repeatedly to solve problems. In other words, they need engineered solutions, not products thrown at them. Sometimes they need help just understanding what the problem is, or that there even is a problem. What they don’t need is people treating them like they’re idiots and telling them that the best way to save money is to spend a bunch of money buying new PCs. They need people to show them how technology will help their business specifically, not a generic Gartner document or whitepaper theorizing how low-power processors will save them money over the next 12 years. They need help designing a strategy that is as elegant as they need without sucking down all the profit they’re projecting for the next several years.
They ultimately need, I suppose, people in the IT industry to stop perpetuating these misconceptions and actually help them use IT as a strategic asset.
Of how CIOs fail.
DISCLAIMER: This is a little bit of a rant because I’m more than a little peeved I STILL can’t get my preorder of the iPhone 4 through either Apple or AT&T for delivery. Standing in line for a full day at an Apple store next week (or whenever they can get stock now) is simply not an option. I’m not THAT much of a gadget geek.
I don’t mean to be harsh, but this was a colossal failure on the part of both AT&T’s and Apple’s IT planning, change control, capacity planning, and incident response teams, IMHO. Since both companies obviously have separate teams to accomplish these tasks, I have to place responsibility at their CIOs’ feet.
If the stories going around the ’Net are accurate, neither organization was prepared for the onslaught that fell on them on the 15th. Both Apple’s and AT&T’s web sites had minor issues early in the day, but the sites themselves stayed up (at least for me) all day long. What never worked (again, at least for me, and reportedly for a WHOLE LOT of other people) was the promised ability to order an iPhone 4 for delivery to my home on June 24. That was the promise, and so that was the expectation set. It wouldn’t work from the Apple store or the AT&T store. Why not?
From what I can gather from the reports, the issue wasn’t load on the individual sites. While there were a few issues throughout the day getting to the sites, those seemed to be very short-lived. The issue was the integration between the sites and all the people hitting the back-end systems. Since Apple won’t allow a US customer to buy an iPhone without it being tied to an AT&T contract, the order process has to interface back into AT&T’s systems to determine eligibility, which in turn dictates pricing, yadda, yadda, yadda; you put a phone in the cart, agree to extend your indentured servitude to AT&T for two more years, bada bing, bada boom, you get a shiny new iPhone 4 on June 24. At least that’s what was promised by Steve Jobs at WWDC, and on every banner ad AT&T and Apple could find to buy between WWDC and June 15.
The first thing that comes to mind, while Apple is busy touting how wonderful they are for selling 600,000 iPhones (apparently NOT through the AT&T web site), is how many more they would have sold if the CIOs had mandated some true load testing and planning between their systems (I know at least two would have been sold, for my wife and myself!). Could it have been the first device to sell a million units on launch day? Who knows. We’re in the Tootsie Pop commercial, my friends: the world may never know, because IT just makes the servers run for the business; it isn’t part of the business.
The next thing that comes to mind is, “how in the world does this happen?” Could these IT departments be ANY MORE disconnected from their businesses? This was only the single most over-hyped and talked-about device of the last decade, from the Gizmodo unveiling, to the heavy-handed revenge tactics Apple used, to the exceedingly hyperbolic descriptions of how the phone is going to change everything (I’m pretty sure there’ll be an app for cleaning up the Gulf oil spill soon; it’s all dependent on that A4 chip, so it’s not backwards compatible with the 3GS or 3G, sorry), to everyone knowing that this is the only device AT&T is not allowed to cripple with their own branding and application placement (take a look at this InformationWeek piece to understand what I’m talking about if you don’t already). How do these guys miss this? It’s not like they’ve never been through an Apple device launch before. They have no excuses here! To say that they couldn’t have predicted this is naive and untrue. You spend marketing dollars for a reason, and given Apple’s track record of product launches in recent years, this was completely predictable. Obviously not to a specific number, but it would not have surprised me at all for them to have sold 1 million units — if the systems had allowed it.
I’ve seen some arguments made in AT&T’s defense that you can’t build out systems to handle theoretically unlimited transactions. Really? If you want to turn around the perception that you’re a completely inept partner to the “stellar” Apple, you do! It’s called capacity planning. It was not an impossible task to make some very intelligent predictions about what was going to happen here. They should have looked at past device launches and seen that 1 million units was not out of the realm of possibility, especially given the (relatively) lackluster response the 3GS got. There are a lot of 2G and 3G users out there who have been waiting for this device, and plenty of users who had resisted coming over and are doing so now because some of the shortcomings of earlier generations are being addressed. How do you NOT predict this? And if you overshoot on your capacity planning, building out your systems to provide a stellar experience for 1 million preorders but only taking 600,000, guess what? YOU HAVE 600,000 HAPPY CUSTOMERS!!!! Instead, you have a muted response from those who were able to get the phone ordered, because even for them it was a pain in the backside, and you have countless others questioning whether they want to do business with you. What a no-brainer.
Of course, then I’m scratching my head over the apparent lack of intelligent change control. What in the world was the AT&T CIO thinking when the core customer database was (reportedly) allowed to undergo significant changes, untested under real load, the weekend before a massive, exclusive device launch? I mean, really, didn’t anyone besides the WHOLE WORLD know this launch was coming? Weren’t vacations cancelled at AT&T so all hands could be on deck? Seriously? Does anyone believe in change control anymore? In my IT past, we went into change lockdown before new launches, all end-of-year events, and just about any event that would drive significant orders, until the event was past. It’s not a difficult concept, and it’s curious why it isn’t followed at AT&T, and why Apple wouldn’t demand something like it from its only partner in the iPhone business so neither of them got black eyes.
This is precisely why CIOs shouldn’t be providers to the business, they should be PART OF the business. They should be sitting in these discussions making sure these things are thought of, taking a global view of these various integration points, and speaking up when there are issues. I am assuming, of course, that they didn’t do any of that, and if they didn’t, they should probably be fired. If they did, and were ignored, then they should probably quit, because they clearly aren’t seen as a valuable and strategic member of the management team.
To keep it all in perspective, it is just a stupid gadget. But this is such a clear case of IT not serving the business well that it screams for a detailed case study of what not to do if you want to serve your customers well.
Does CIO mean Chief Information Officer? If I asked this question of most CIOs I know, and they answered me honestly, they would have to answer ‘no.’ Why? Because most CIOs are not the chief of information; they would more accurately be described as the chief of infrastructure.
These are very exciting times for Integrated Data Storage, and by extension for our current and future customers too. We are growing, as our listing on the 2009 Crain’s Chicago Fast 50 demonstrates. Even in the economic valley we’ve been in over these trying months, our revenues have continued to grow. We believe this is due to our unique value proposition in the marketplace.
On behalf of myself and the dedicated employees of the company, I’d like to say that Integrated Data Storage is honored to be named to Crain’s 2009 Fast Fifty list. We see it as a noteworthy accomplishment.
Managing the growth hasn’t always been easy, but we’re truly grateful for the company’s success—for the benefit to the bottom line, yes, but more so because it has enabled us to establish long-term relationships with more clients, partnering with them to deliver integrated solutions that meet their complex data management needs.
It’s somewhat ironic that the propeller of our growth and widening market reach (we recently expanded to three new markets, Seattle and Kansas City among them) has been our continued narrow focus on a smaller set of products, each best of breed. This absolute focus has allowed our staff of engineers and salespeople to build an immense depth of knowledge in the design and delivery of our products, along with an ability to expertly fine tune them to create fully integrated and customized solutions for our clients.
In 2002, when our company was founded, the scope of our service was the Chicago area. Last year, for a client with a multinational presence, we successfully rolled out a globe-spanning product implementation of EMC’s Avamar that reached across thirty-nine countries. Thanks to the hard work and commitment of our team, like the other businesses on the list, we’ve come a long way in a short time. Now, instead of taking a breath, we look forward to the challenge of continuing the pace—we hope for a long time to come.
Why am I occupying real estate in cyberspace?
This is the question I’ll try to answer in this, my first blog post.
Why, in the name of all that is good and holy, am I writing a blog? There are probably billions of people in the world that know more than I do about any one of the topics I might post about. So why would I do this? Well, the fact of the matter is, my boss wants me to.
I asked him not to make me do it, so if you end up getting upset over something I write, take it up with him. :-] When I asked him why he wanted me to do this, his answer was pretty simple. I tell people what I think, based upon reason and facts, and I communicate what I think with passion based on those reasons. He didn’t put it QUITE that way, but that’s what he meant. I think.
I’m not sure how that qualifies me to be a blogger. I really don’t know. I regularly read Chad Sakac’s blog over at VirtualGeek, Chuck Hollis’ blog, StorageZilla, Storage Anarchist, Barry Whyte, and a lot of other guys and gals out there, and never once put myself in the category of people that should regularly be read by a card-carrying, dyed-in-the-wool, proud-to-be geek. Not once. Still don’t.
That said, I’m supposed to write. What can you expect from me? You can expect a lot of opinion. I’m probably not going to tell you how to configure your Avamar system for maximum throughput of de-duplicated bits across a converged network. I’m probably not going to give you great bits of Linux tweaking advice or try to be crowned the ESX guru of the century with sage advice on how not to tune all the kernel parameters. I’ll tell you what I think of things going on in the technology world. I’ll tell you how I think they really matter, or really don’t matter, to real customers, with real data centers, and real jobs they’re trying to keep. I’ll try to make sense out of the hype, the homerism, and the marketing stuff and boil it down. If I do that, then maybe people will read. If I miss the mark, probably not.
Who am I?
I’m the newest member of IDS’s engineering team. I come with almost 8 years of experience at EMC. While there, I supported Sprint PCS, the Channel, the Proven Professional program (yes, you can blame me for a lot of the questions on the E20-521 exam and a few others), and the Commercial Division. I was a Systems Engineer, Technical Consultant, District Channel Manager, Technology Solutions Manager, and a Global Solutions Architect. I did a lot of things in 8 years. Before that, I spent several years in the telco space, with a CLEC and an (at the time) upstart wireless carrier.
Why does any of that matter? To you, it might not. To customers, it means that I’ve seen a lot of different things. I’ve been involved in complete project failures, and partial project successes. I’ve helped customers who have ‘data centers’ the size of a small broom cabinet to customers with data centers the size of a shopping mall do one thing: meld the technical and the business.
That’s why I do what I do (besides the fact it feeds my seven kids — do you KNOW how much food seven kids can eat? That’s another blog for another day :-]). Anyway, it’s what I do. I knew from the time I took apart an Apple IIe in 5th grade and made it work with the LOGO turtle robot, that making computers do things for people in ways that mattered was what I wanted to do.
So, that’s what I try to do.
I’m not always right (mostly not), but I always have a reason for what I think. I can always tell you why I think what I think and endeavor to do so in terms that make sense TO YOU. You don’t have to agree with me. You don’t have to like me. I honestly don’t care either way. I’m going to make more than a few of you upset, I’m sure. I’ll probably start more than a few flame wars, and I’m okay with that. As long as things stay civil, we’ll all have some fun with spirited debate (assuming anyone actually reads my drivel). I have always thought that Chuck Hollis’ approach to things was the right way to go — let comments come and allow them to be seen even when they disagree with you, but I won’t allow this blog to become a sounding board of derogatory remarks or silliness. Comments will be moderated.
So, without further ado…
Why the V-Max was cool, but not all that exciting to me.
Now before all you EMCers go off telling me I’ve betrayed the family and how could I defame the good name Symmetrix, hear me out. And anti-EMCers, back off. There’s no blood in the water here or reason for you to take a swipe. I could fill reams of paper with the utter ho-hum nature of your last 3 or 4 product "updates".
I’m very impressed with the technology. I’m floored by the technology, actually. I think it’s quite amazing what’s been done to take DMX concepts into a virtualized hardware layer that then (without too much hyperbole) can infinitely scale (there’s probably still some hyperbole there, because nothing scales infinitely outside of Time, but still). The scale we’re talking about here is truly astounding for a storage subsystem.
The fact that manageability has taken a quantum leap forward for the Symmetrix is an indication that the engineering sea change that was supposed to have happened several years ago may have actually begun to occur, and Symmetrix Engineering may have begun to realize that 1) not all customers are idiots who can’t manage complex technology, but 2) not all customers want to obtain a PhD in how to manage storage technology because some of them have lives. That’s a whole lot of goodness as far as I’m concerned.
The geek in me looks at this thing and says ‘holy cow, how’d they really do that’? The fact that this technology enables VMware to truly scale and become serious about commoditizing the compute layer is utterly awesome. This, I think, along with a truly modularized server layer and the converged network, is what VMware needs in order to become the de facto data center OS.
I’m glad they moved to commodity processors. I see a ton of potential in management and down-market mobility afforded by moving to a commodity processor platform. I think this is an infinitely cool machine.
The business guy in me says ‘great, another device on the market that is supposed to save the world, but only plays to the top 5%’ (or less). I’m not sure how else to see this. Part of it is that I play in a space, and have for several years now, where acquiring a system like the DMX or V-Max costs more than a customer’s entire infrastructure: plumbing, electrical, the parking lot, the soda machine, oh, and IT. I get that my perspective is a little skewed toward the middle to lower end of the market. I’m happy to play here because, in my experience, these customers listen more, are more receptive to new ideas, and generally have more passion about what they do than most people working in large IT shops. They’re not a cog in the wheel; they ARE the wheel, so to speak.
So what set me off about the whole V-Max thing is how this is ‘the vision of the future data center’. That’s great. 150 of the data centers of the world will be able to take advantage of these ground-breaking features, functions, and benefits.
Okay, I’m being hyperbolic now, but you get the point. There’s no down-stream message. If you don’t play with V-Max, you aren’t doing anything worth talking about. If you’re not going to deploy something that can support 8 bazillion drives, you don’t need the uber-integration with VMware (or perhaps aren’t worthy of it).
Once again, EMC has basically laid it out that you aren’t going to be included in the wave of data center efficiency if your data center consists of 3 or 4 ESX hosts running less than 100 VMs. Apparently it’s okay for you to still do everything manually, with no end to end visibility and management capability.
Okay, now I’m being harsh. But again, you get the point.
Don’t get me wrong, I love CLARiiON. I’ve sold and architected a LOT of it. It’s superior to everything in its space, hands down, bar none: NAS, SAN, whatever. What I guess I’m upset about is that EMC thinks the only place to revolutionize the industry is in the top 5%. And no, quite frankly, I do NOT consider the CX4 revolutionary. Adding modular connectivity was a must to stay ahead, not a revolutionary idea (servers have had PCIe slots you can put different multi-port cards into for years). But it goes deeper than that. Go back and compare the CX4 launch to the V-Max launch. Not even close. The CX4 launch was a blip on the sub-radar of the V-Max. There was no passion to it, no earth-shattering feel; just another generation of CLARiiON. ‘Yay.’
I understand how much revenue comes from that top space, but how much GROWTH? Not much, if any. I feel like EMC missed the opportunity to REALLY turn the tables upside down. Why not go completely downstream (not to the AX4 or CX4-120 space, but the CX4-240 or 480 level and up) with this massively scalable virtualized engine and introduce GREATER than five 9s to the mid-market? Scalable doesn’t only mean bigger. It means it can grow and contract. It means it can start out small, and small means fewer than 96 drives at a price that is affordable.
This architecture should support that concept. These are Intel processors now, with even more commodity parts, right? Then bring the joy of mass production TO the masses, guys. Scale this baby down and turn the storage world upside down. How could HDS possibly compete with that? NTAP, are you kidding me? They couldn’t touch it. Don’t even get me started on Compellent, Xiotech, or 3Par. And Equallogic, errr, Dell? Right.
I’m sold on the V-Max conceptually. It’s the reality I’m left to deal with, though, and I’m not feeling it. At least not yet.
What is an EMC/VMware Center of Excellence?
For several years, EMC and VMware have been working hand-in-hand to provide customers with solutions to drive costs out of the Data Center and provide the highest levels of performance and availability. Taking the next step to demonstrate their solutions for the virtual Data Center, EMC and VMware have teamed up with strategic Integrators to create eight Centers of Excellence across the United States. Integrated Data Storage is proud to be chosen as one of these Centers of Excellence and the first in the Midwest.
The IDS Center of Excellence comprises a Demo Center, which showcases EMC and VMware’s leading technology solutions, and a staff of highly trained Engineers with field experience in the integrated solution offerings. The combination of VMware’s virtualization technology, EMC’s information infrastructure solutions, and IDS’s expertise in architecture and implementation creates a unique opportunity for customers to see working implementations of the solutions they can use to increase the return on technology investments.
The Demo Center contains a full lab of EMC and VMware technologies including:
- VMware Virtual Infrastructure with VMotion, DRS, and VM HA
- EMC Celerra Unified Storage platform
- EMC Celerra Virtual Storage Appliance (VSA)
- EMC Replication Manager for application integrated snaps and clones
- Ontrack Power Controls for Exchange single-message restore
- VMware backup with EMC Avamar for source-based de-duplicated backups
- VMware backup with EMC Networker and Disk Library 3D 1500 for target-based de-duplicated backups
- Disaster Recovery using EMC Celerra Replicator and VMware Site Recovery Manager
- Virtual Desktop Infrastructure with VMware View
- EMC Storage Viewer plug-in to view your EMC Storage through vCenter
With all of these technologies on display, customers can come in and witness how the solutions work in person. Customers evaluating solutions for their virtual Data Center are encouraged to come in and test drive the technologies with the help of an IDS Engineer and see the EMC and VMware integration first-hand.