Sneakernet vs. WAN: When Moving Data With Your Feet Beats Using The Network


Andrew S. Tanenbaum was quoted in 1981 as saying “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

The story behind the quote is filed in the non-fiction section of Wikipedia. It comes from NASA's Deep Space Network tracking station at Goldstone, CA and the Jet Propulsion Laboratory about 180 miles away. As common then as it is today, a backhoe took out the 2,400 bps circuit between the two locations. The estimate to fix it was about one full day. So, they loaded a car with 9-track magnetic tapes and drove it 3-4 hours from one location to the other to get the data there six times faster than over the wire.

“So, they loaded a car with 9-track magnetic tapes and drove it 3-4 hours from one location to the other to get the data there six times faster than over the wire.”

That got me to thinking about IT and business projects that require pre-staging data. Normally, we IT folks get wind of a project weeks or months in advance. With such ample notice, how much data can we pre-stage in that amount of time?

With a simple 100Mbit connection between locations, and using a conservative compression ratio, we can move nearly 1TB of data in a day. That's plenty of time to move source installation files, ISOs, and even large databases. Remembering that our most precious resource is time, anything a script or computer can do instead of us is worth careful consideration.

Below is a chart listing out common bandwidth options and the time to complete a data transfer.

[Chart 1: common bandwidth options and the time to complete a data transfer]
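
If you'd rather compute these figures than read them off a chart, here's a minimal Python sketch. It assumes ideal sustained throughput, decimal units, and no compression or protocol overhead; the link speeds listed are just illustrative.

```python
# Rough transfer-time estimates for 1 TB over common link speeds.
# Assumes ideal sustained throughput, no protocol overhead or compression.

LINKS_MBPS = {
    "T1 (1.5 Mbps)": 1.5,
    "10 Mbps": 10,
    "100 Mbps": 100,
    "1 Gbps": 1_000,
    "10 Gbps": 10_000,
}

def hours_to_transfer(data_tb: float, link_mbps: float) -> float:
    """Hours to move data_tb terabytes over a link of link_mbps megabits/sec."""
    bits = data_tb * 1e12 * 8           # terabytes -> bits (decimal TB)
    seconds = bits / (link_mbps * 1e6)  # megabits/sec -> bits/sec
    return seconds / 3600

for name, mbps in LINKS_MBPS.items():
    print(f"{name:>15}: {hours_to_transfer(1.0, mbps):10.1f} hours per TB")
```

At 100 Mbps the answer works out to roughly 22 hours per TB, which is where the "nearly 1TB in a day" figure comes from.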

The above example is not so much about data center RPOs and RTOs as it is about simply moving data from one location to another. For DR objectives, we need to size our circuit so that we never fall below the required minimums during critical times.

For example, if we have two data center locations with a circuit in between, and the daily change rate on 100TB of data is 3%, we still need to find the peak change-rate window before we can size the circuit properly.

[Chart 2]

If 50% of the data change occurs between 9am and 3pm, then we need a circuit that can sustain 250GB per hour. A dedicated gigabit circuit can handle this traffic, but only if it's a low-latency connection (the locations are relatively close to one another). If there's latency, we will almost certainly need a WAN optimization product in between. But in the event of a full re-sync of data, it would take 9-10 days to move all of it over the wire, plus the daily change rate on top. So unless we have RPOs and RTOs measured in weeks, or weeks to ramp up to a DR project, we will have a tough time during a full re-sync and won't be able to rely on DR during that window.
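
Here's a quick back-of-the-envelope check of those numbers, using the assumptions from the example above (100TB dataset, 3% daily change, half of it landing in a six-hour window, a dedicated 1 Gbps circuit):

```python
# Circuit sizing for the example above: 100 TB dataset, 3% daily change,
# 50% of the change concentrated in the 9am-3pm window.

DATASET_TB = 100
DAILY_CHANGE = 0.03          # 3% change rate
PEAK_FRACTION = 0.50         # half of the change in the peak window
PEAK_WINDOW_HOURS = 6        # 9am to 3pm

daily_change_tb = DATASET_TB * DAILY_CHANGE                  # 3 TB/day
peak_gb_per_hour = daily_change_tb * 1000 * PEAK_FRACTION / PEAK_WINDOW_HOURS
peak_mbps = peak_gb_per_hour * 8e9 / 3600 / 1e6              # GB/hr -> Mbps

# Full re-sync of the entire 100 TB over a dedicated 1 Gbps circuit
resync_days = DATASET_TB * 8e12 / 1e9 / 86400

print(f"Peak sustained rate: {peak_gb_per_hour:.0f} GB/hour (~{peak_mbps:.0f} Mbps)")
print(f"Full re-sync over 1 Gbps: ~{resync_days:.1f} days, before daily changes")
```

The peak window needs about 250 GB/hour (roughly 556 Mbps sustained), and a full re-sync runs a bit over nine days even before you account for the ongoing change rate.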

So, that might be a case where it makes sense to sneakernet the data from one location to the other.

Photo credits via Flickr: Nora Kuby

Why Thinking Like A Child Matters In Business And IT


They say that youth is wasted on the young. Children often have the luxury of acting without thinking, doing and then failing, and then just getting back up again with little more cost than a scraped knee and a bruised ego.

When I became old enough to drive, I would explore every road I could. I would venture far out onto unknown country back roads, not knowing or caring much whether my old pickup truck would break down or I would run out of gas, both of which happened so often that I became practiced at parking on hills so I could roll-start the engine or coast to a gas station. Tools and tow ropes were my friends in a world where teenagers didn't really have cell phones.

There's something to be said for that same spirit and childlike attitude in the adult world of business and IT: exploring new technology could be the road to success in a time of fierce competition, when technology moves at 100 miles per hour and we can barely keep up.

The problem, though, is not just having time. It’s risk. If sticking your neck out is the road to advancement, and the key to unlock your Porsche is the execution of an important project, then preparation is the airbag that will save you when a car pops out of nowhere.

Mark Horstman from Manager-Tools once said:

Managers should not try to reduce risk in business because risk is constant and cannot be reduced. Instead, we can educate ourselves to better understand, quantify, and prepare for risk so that we can make higher quality decisions to achieve the best possible outcome, while at the same time choosing a path or solution that has the best risk/reward outcome.

In other words, take the “no pain, no gain” concept and factor in what happens if you push too far, get really hurt, and have a major setback. What’s the sweet spot of pushing hard but not too far? The answer may lie in the insights of the Marshmallow Challenge, which talks of executives and kindergartners.

In business and IT, we need to take chances on newer technology, because it's the only way to move forward.

But as adults in positions of power, our failures can affect hundreds or thousands of people's lives. So preparation is the key to a successful road trip. Having a plan B isn't enough when Murphy's Law is at hand. Have three or four backup plans. Before departing, check the oil and the air in the tires, especially the spare. Have an emergency kit with food, water, a knife, a map, and duct tape. It costs like twenty bucks. Save the phone number for AAA towing.

Research and planning are necessary in any endeavor, but so is talking with others who have traveled the same roads or been to the same destinations. And for God's sake, have fun. My most rewarding vacations included experts who knew how to rock-climb at Joshua Tree, land a helicopter on top of a Hawaiian waterfall, or negotiate a class 5 rapid in West Virginia. So use an IT partner who knows the ropes and has done this before.

One last question: where do we want to go?

Photo credits via Flickr: Seema Krishnakumar

Insights From The Pony Express: Why The Five-Year Tech Refresh May End Up Costing You More


Prior to 1860, it could take several weeks to months for correspondence from the east coast to reach the Pacific states. As settlement of the West exploded due to the promise of free land from the Homestead Act and the discovery of gold, a faster line of communication was needed. Especially with war looming, the ability to facilitate faster mail delivery became critical to every facet of society, enterprise, and government.

To undertake this monumental feat (remember, this was before the advent of the transcontinental railroad), William Russell, Alexander Majors, and William Waddell—who already ran successful freight and drayage businesses—joined forces to devise a network of stations roughly 10 miles apart that stretched from St. Joseph, MO to San Francisco, CA.

Riders had to be lightweight, travel light, and run their specially selected smaller horses at a gallop between stations, often covering around 100 miles per day. (The horses were burdened with only 165 pounds on their backs, including the rider.)

With this relay method of riding fast, light, and taking on fresh horses at every stop, a letter could make it across the country in 10 days, an accomplishment that was previously thought impossible.

“A letter could make it across the country in 10 days, an accomplishment that was previously thought impossible.”

So let's summarize the technology and methodology of the day, and what made delivery of mail to the West Coast in 10 days possible:

Tech

  • Small riders, often children (orphans were preferred because of the occupational hazards that came with the rider's job)
  • Small horses (the happy medium between speed and overhead; larger horses eat more and can’t run as fast at distance)
  • The mochila pouch (fit over the saddle and carried locked mail compartments, water, and a Bible)

Method

Stations 10 miles apart helped overcome:

  • Finite running capacity of the horses
  • Weather
  • Terrain
  • Danger from crossing into Indian lands that were hostile

Operating for a period of just 18 months, the Pony Express was one of the most ambitious endeavors in American history. By October 1861, it was over, displaced by the more cost-effective technological advancement of the telegraph.

Let's take a look at the known costs of running the Pony Express from a CAPEX and OPEX standpoint. We don't have historical records on all the details, so I will take some liberties and use a bit of guesstimation (a quick tally of the arithmetic follows the figures below).

CAPEX

  • 400 horses at $200 each
  • 184 stations at $600
  • Tack (saddles, horseshoes, mochila) at $20 per horse

Total: approximately $200K
Value today: approximately $76 Million

OPEX

  • 80 riders at $100 per month
  • 400 staff at $20 per month
  • Feed at $5 per horse per month

Total: approximately $18K per month
Value today: approximately $7 Million
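
For what it's worth, here's the simple arithmetic behind those rounded totals, using the per-unit guesstimates listed above:

```python
# Back-of-the-envelope totals for the Pony Express figures above (1860s dollars).
horses, stations, riders, staff = 400, 184, 80, 400

capex = horses * 200 + stations * 600 + horses * 20       # horses, stations, tack
opex_per_month = riders * 100 + staff * 20 + horses * 5   # wages and feed

print(f"CAPEX: ${capex:,}")                 # ~$198,400, roughly $200K
print(f"OPEX:  ${opex_per_month:,}/month")  # $18,000/month
```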

[Photo: old telegraph key]

The reason I use this example is to show that disruptive technologies throughout history have led to monumental changes in the way we conduct commerce. A letter traveling with a Pony Express rider across the country in 10 days, once thought impossible, was eclipsed by telegraph cable and Morse code. The telegraph rendered the Pony Express not only irrelevant but cost-prohibitive, especially at $1 per ½ ounce.

It would have taken years to recoup the startup capital and turn a profit. We can also look to the telegraph as seeding other life-changing technologies, such as the telephone and the transcontinental railroad (which followed the telegraph lines across known safe passages into the West).

“Do we really want to put ourselves in a position that stifles our ability to ride the ebb and flow of advancement … because we are locked into rigid maintenance contracts with original equipment manufacturers?”

I guess the point I'm trying to articulate is this: the technology that supports and drives business today can change even faster than it did in 1861. Do we really want to put ourselves in a position that stifles our ability to ride the ebb and flow of advancement, and blocks our ability to drive up efficiency and drive down cost, because we are locked into rigid maintenance contracts with original equipment manufacturers?

I say no.

Keep your business agile. Be prepared for the next big thing that brings relevant change in information technology. Keep your maintenance contracts to three years. Don't let history repeat itself at the expense of you and your enterprise.

Photo credits via Flickr, in order of appearance: bombeador; digitaltrails.

Spending Money To Make Money: An IT Strategy That Really Works?


Tell me if you've heard this one before: “Look, if I spend more than $50 at this store, I get 20% off!” Or how about adding an extra item to your Amazon cart to qualify for free shipping? You have to wonder whether these tactics are win/win, or whether they just make us think we're saving money while we're actually being nudged into spending more. I think the answer only leads to more questions: what are you buying, how often, and so on.

But I admit that sometimes it really does make sense to spend more money to save in the long run—you just have to put some work into finding out how. I think there are certain strategies that work better than others. Consider the difference in cost between hardware and software.

Game Console Comparison

An Xbox or PlayStation might sell for $400 (the hardware), but what is the cost of all the games you might purchase over the lifetime of owning it (the software)? Assuming the average owner buys somewhere between 5 and 10 games at roughly $60 each, it doesn't take long before the software cost exceeds the hardware cost.

If your plan is to play as many games as possible, a good strategy would be to trade, buy used, or use a rental service. However, if you only plan to own a few favorite games, the better move is to wait until console-and-game bundles become available with a one-time coupon or sale.
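
As a trivial illustration of the break-even math, here's a quick sketch using the $400 console and $60-per-game figures above:

```python
# Hardware vs. software spend for a game console (prices from the example above).
CONSOLE_PRICE = 400   # the hardware
GAME_PRICE = 60       # per title (the software)

breakeven = CONSOLE_PRICE // GAME_PRICE + 1   # the 7th game tips software past hardware
print(f"Software overtakes the ${CONSOLE_PRICE} console after {breakeven} games")
for games in (5, 10):
    print(f"{games} games: ${games * GAME_PRICE} in software")
```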

Having a strategy for enterprise hardware and software makes sense in a similar fashion. Let's use the example of software licensing from Microsoft and VMware, as opposed to the hardware (the server and Intel CPU) that runs that software. Which is more expensive? What's the TCO of each? And where do you get the most value for your enterprise?

Scenario

Here’s a scenario for you: suppose you have blade servers in your environment that are approximately 2.5 years old, and your budget includes money to potentially refresh those servers at year 3. Like any good custodian of your company’s money, you want to get the best price with the best solution.

In some cases, the budget gets taken by other projects. So while you might benefit from newer servers, it can be tough to justify to the business spending money on new hardware when the current hardware is “just fine.” You also might have a Microsoft or VMware renewal coming, which typically also runs on a 3-year cycle.

So what’s the best way to maximize your current budget and get the most value for you and the organization? Let’s run the numbers and see what happens.

The table below illustrates your current scenario. Let's assume you bought the original blades for $10,000 each, using the Intel Xeon X5690 6-core CPU at a list price of $1,663 per socket, and loaded them up with RAM. If you have even older 4-core CPUs in your blades, you stand to gain even more; but let's look at the 6-core CPUs for now, since they are still in service in a lot of places and will likely run for a long time without issues.

Scenario A

Scenario A | Software | Hardware (6-core) | Total $ | $ per VM
Windows 2012 Std | $882 | $8k server + $1,663 CPU | $10,545 | $5,272 (2 VMs)
Windows 2012 DTC | $4,809 | $8k server + $1,663 CPU | $14,472 | $1,608 (9 VMs)
vSphere Enterprise Plus | $3,495 | $8k server + $1,663 CPU | $17,967 | $1,633 (11 VMs)

Microsoft will allow you to run Windows 2012 Standard and virtualize up to 2 instances per license. That's not a bad deal at around $440 per OS instance. The Intel CPU itself is $1,663 list, nearly four times that per-instance cost, and if the server was $8,000, the true cost to own each instance is around $5,300. Again, not bad from a value perspective, especially if you are running your critical business on those two instances (let's leave high availability, backup, and DR out of this for now to keep it simple).

But most companies have a lot more servers than that. If you had, let’s say, 30 servers at $10k apiece, you are looking at $300,000 to run this environment.

So, in the case where you have hundreds of VMs, it may make sense to consolidate into denser workloads to save time, cooling, space, and last but not least … cost.

Using the same CPU model and running Windows Datacenter and VMware Enterprise Plus editions, the software cost is going to be significantly more expensive than a single processor. And while you are getting a lot more features using more advanced software, the added cost can be offset by increasing your density. Suppose that you can run up to 9 VMs using Hyper-V (11 on VMware taking advantage of SIOC, NIOC, SDRS, and other enterprise features) on each of those $10,000 blades. The cost per VM goes way down, from $5,300 to around $1,600.
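
The arithmetic behind those per-VM numbers is straightforward. Here's a small sketch using the list prices from the Scenario A table; note that the table's $17,967 figure appears to stack the vSphere license on top of the Datacenter licensing, which is the assumption made below:

```python
# Cost-per-VM arithmetic for Scenario A (list prices from the table above).
SERVER = 8_000
CPU_6CORE = 1_663
WIN_STD, WIN_DTC, VSPHERE_ENT_PLUS = 882, 4_809, 3_495

std_total = SERVER + CPU_6CORE + WIN_STD              # $10,545
dtc_total = SERVER + CPU_6CORE + WIN_DTC              # $14,472
vsp_total = dtc_total + VSPHERE_ENT_PLUS              # $17,967 (Datacenter + vSphere)

print(f"Std:     ${std_total:,} / 2 VMs  = ${std_total / 2:,.0f} per VM")
print(f"DTC:     ${dtc_total:,} / 9 VMs  = ${dtc_total / 9:,.0f} per VM")
print(f"vSphere: ${vsp_total:,} / 11 VMs = ${vsp_total / 11:,.0f} per VM")
```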

This is really nothing new and is one big reason why virtualization has gotten traction over the years. Yes, $3,500 per CPU for virtualization is a big chunk of change, but I will gladly pay that over the old way of building servers for all the other benefits, including cost.

Now, let's run the numbers for buying all brand-new servers and pushing for even higher density. The new Intel E5-2697 v2 12-core CPUs are out in the marketplace, and they list for more money: $2,618 to be exact, as of May 2014. I could have used the 15-core CPUs, but those start at $6,400+, and I doubt the roughly 2.5x cost increase would be worth the modest performance improvement.

Scenario B

Scenario B | Software | Hardware (12-core) | Total $ | $ per VM
Windows 2012 Std | $882 | $8k server + $2,618 CPU | $11,500 | $5,750 (2 VMs)
Windows 2012 DTC | $4,809 | $8k server + $2,618 CPU | $15,427 | $857 (18 VMs)
vSphere Enterprise Plus | $3,495 | $8k server + $2,618 CPU | $18,922 | $860 (22 VMs)

Using this scenario, we drop the cost per VM from around $1,600 to around $860, roughly 47% cheaper. Wow, that's a big difference: if your server budget is $300,000, you can free up about $141,000 for other projects. Granted, budgets don't always work that way; sometimes if you don't use it, you lose it. If that's the case, you can still spend the full $300k, using the extra $140k to buy more servers, add more RAM, or install more IO cards, graphics cards, and so on.
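
Here's the same arithmetic for Scenario B, a minimal sketch built from the component list prices, with Datacenter and vSphere stacked the same way as before:

```python
# Cost per VM with the 12-core E5-2697 v2 (Scenario B list prices).
SERVER, CPU_12CORE = 8_000, 2_618
WIN_DTC, VSPHERE_ENT_PLUS = 4_809, 3_495

new_total = SERVER + CPU_12CORE + WIN_DTC + VSPHERE_ENT_PLUS   # $18,922
per_vm_new = new_total / 22          # ~22 VMs per blade with the denser CPU
per_vm_old = 17_967 / 11             # the Scenario A vSphere figure

print(f"New: ${per_vm_new:,.0f}/VM vs old: ${per_vm_old:,.0f}/VM "
      f"({1 - per_vm_new / per_vm_old:.0%} cheaper)")
```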

But hold on a second. Is this really a fair comparison? Whatever happened to the servers I already bought 2.5 years ago? Isn't Scenario A one where the $10,000 servers and CPUs are already a sunk cost? Why not treat the hardware cost as zero in that case and compare against a new server? Well, if we did that, we'd have to include some other costs, such as hardware and software maintenance, and the results are surprising:

Scenario C

Scenario C | Legacy 6-core server (HW & SW TCO) | New 12-core server (HW & SW TCO)
Hardware | $0 | $10,618
Software (Windows & VMware) | $8,304/CPU | $8,304/CPU
50 VMs | $49,824 (3 double-CPU blades) | $24,912 (3 single-CPU blades)
200 VMs | $149,472 (9 double-CPU blades) | $66,432 (4 double-CPU blades)
600 VMs | $448,416 (27 double-CPU blades) | $199,296 (12 double-CPU blades)
1200 VMs | $913,440 (55 double-CPU blades) | $381,984 (23 double-CPU blades)
HW maintenance, 3 yr | $6,000/chassis + $900/server | $6,000/chassis + $900/server
SW maintenance, 3 yr | 20%/yr of cost ($8,304 × 0.2 × CPUs × 3) | 20%/yr of cost ($8,304 × 0.2 × CPUs × 3)
50 VMs, maintenance only | $8,700 HW + $24,912 SW | $8,700 HW + $14,946 SW
200 VMs, maintenance only | $11,400 HW + $94,665 SW | $9,600 HW + $39,859 SW
600 VMs, maintenance only | $28,200 HW + $274,032 SW | $16,800 HW + $119,577 SW
1200 VMs, maintenance only | $73,500 HW + $543,081 SW | $32,700 HW + $229,190 SW
50 VMs TCO | $83,846 | $48,558
200 VMs TCO | $255,537 | $115,891
600 VMs TCO | $751,528 | $335,673
1200 VMs TCO | $1,530,021 | $643,874

Wow! Those are considerable savings if you include maintenance and software costs.

I calculated these costs by assuming that blades require hardware and software support on the chassis and the blades themselves. I also added in the Microsoft and VMware maintenance costs, which I’ve estimated to be roughly 20% per year.

Finally, the performance difference between the 6-core CPUs and the 12-core CPU is more than 2x (it's not just double the core count). There's probably anywhere from a 10% to 60% increase in performance from all the other improvements. So instead of taking 11 VMs per socket and simply doubling it for the extra cores, I assumed a conservative additional 20% increase in density for the new CPUs, for a total of 26 VMs per CPU socket. Note that the 50-VM example uses 3 nodes for redundancy, even though 2 nodes would have covered the capacity.
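
For anyone who wants to poke at the model, here's a simplified reconstruction of the Scenario C math under the assumptions just described: $8,304 of software per CPU socket, software maintenance at 20% per year for 3 years, hardware maintenance at $6,000 per blade chassis plus $900 per blade, 11 VMs per legacy 6-core socket and roughly 26 per new 12-core socket. The 16-blade chassis size is my own assumption, and, like the table, the totals cover software plus maintenance with the purchase price of the new blades shown separately; rounding means it lands near the table's figures rather than exactly on them.

```python
# A simplified reconstruction of the Scenario C three-year cost model.
import math

SOFTWARE_PER_SOCKET = 8_304         # Windows 2012 DTC + vSphere Ent Plus, list
SW_MAINT_3YR = 0.20 * 3             # 20% of software list per year, for 3 years
CHASSIS_MAINT, BLADE_MAINT = 6_000, 900
BLADES_PER_CHASSIS = 16             # assumption
NEW_BLADE_COST = 8_000 + 2 * 2_618  # dual 12-core E5-2697 v2 blade

def three_year_costs(vms, vms_per_socket, sockets_per_blade=2):
    """Return (blades needed, 3-year software + maintenance cost)."""
    blades = max(1, round(vms / (vms_per_socket * sockets_per_blade)))
    sockets = blades * sockets_per_blade
    chassis = math.ceil(blades / BLADES_PER_CHASSIS)
    software = sockets * SOFTWARE_PER_SOCKET
    hw_maint = chassis * CHASSIS_MAINT + blades * BLADE_MAINT
    sw_maint = software * SW_MAINT_3YR
    return blades, software + hw_maint + sw_maint

for vms in (200, 600, 1200):
    _, legacy = three_year_costs(vms, vms_per_socket=11)    # sunk hardware
    blades, new = three_year_costs(vms, vms_per_socket=26)  # denser 12-core blades
    print(f"{vms:>5} VMs: legacy ~ ${legacy:,.0f}, "
          f"new ~ ${new:,.0f} (+${blades * NEW_BLADE_COST:,} for the new blades)")
```

Even after adding the purchase price of the new blades back in, the new configuration still comes out well ahead of the legacy one at the 600- and 1200-VM marks.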

Based on Scenario C, it may seem surprising that you can toss your old servers, buy new ones, and still save money, but the numbers work out.

Of course, there is a significant time investment in doing this as well; however, even if you paid someone to do it, you'd still likely save money.

To think of this another way: 80% of the cost of running your “free” (already paid-for) servers is the hardware/software maintenance and licensing that goes along with them. So when you are facing a software licensing decision, where the choice is between adding capacity to your VMware farm or simply renewing software licenses, it almost always makes sense to buy new hardware along with the software.

Here's another scenario: let's say that instead of the 6-core CPUs, you have 2-year-old 8-core CPUs and are curious when it would be a good time to refresh. If you do the math, you find that you would only pay about 20% more to replace these perfectly good servers with new ones. This is one reason a lot of companies aim for a 3-year refresh cycle in IT, even though servers can and do run a lot longer than that.

One final scenario: consider the case where you have a standard server, but a new more powerful CPU is released, and you need to add capacity to your existing farm due to business growth. As much as it makes great sense to standardize on one platform and keep buying the older model, consider the cost implications of doing so.

I’d argue that it might be more important to standardize on the hypervisor or management suite rather than the server itself.

Consider just moving to a different CPU model and leaving everything else in the server the same. The cost savings of adding 16-core CPUs when they become available, even if you already have 12-core CPUs, can be hard to deny. VMware's Enhanced vMotion Compatibility (EVC) feature, for example, lets you mix CPU generations from the same manufacturer in one cluster while keeping vMotion working across them.

Granted, there are scenarios where this strategy doesn't make sense. If you are running low-cost software, or you choose not to renew software maintenance, or you have a hardware model that lets you run without a support contract, then scaling out wide might make sense. Then again, we didn't even talk about the power and cooling costs that come with larger numbers of servers.

Conclusion

The shift in thinking about recycling more often is not about disposing of perfectly good hardware. It’s about looking at the total cost of ownership, the software and maintenance costs especially, and building a strategy around where you want to spend your money.

In today's world of rapidly evolving hardware and software, hardware is still considerably cheaper than most software. As long as that's the case, I prefer to have the best hardware available and focus on keeping my software costs under control.

This strategy is just the tip of the iceberg. When it comes to really expensive software, such as SQL Server or Oracle, the software can be 10x the cost of the hardware. Look to SQL consolidation projects to bring your overall costs down while acquiring new hardware at the same time. And remember, the proof of the pudding is in the eating, as they say, so always “do the math” to make sure the dollars add up.

Photo credit: Craig Piersma via Flickr
